UBC Theses and Dissertations
Multi-user interface for group ranking: a user-centered approach Luk, Wai-Lan 1994


Full Text

MULTI-USER INTERFACE FOR GROUP RANKING: A USER-CENTERED APPROACH

By Wai-Lan Luk
B.S., California State University, Los Angeles

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (MANAGEMENT INFORMATION SYSTEM) FACULTY OF COMMERCE AND BUSINESS ADMINISTRATION

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
July 1994
© Wai-Lan Luk, 1994

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

(Signature)

Department of

The University of British Columbia
Vancouver, Canada

Date

ABSTRACT

The proliferation of collaborative computer applications in the past decade has resulted in a corresponding increase in the need for multi-user interfaces. The current research seeks to contribute to the design of a user-centered multi-user interface for a group ranking task. User requirements were identified by observing groups perform the ranking task in a non-computer environment. A design was proposed based on these identified requirements. The user-centered design was compared to preliminary designs based on the intuitions of programmers. The conclusions indicate that an analysis of observations in the non-computer environment does yield insight beyond the initial intuition of programmers. A prototype based on the user-centered design was implemented.
Informal user evaluation was performed by observing users working with the prototype and obtaining verbal feedback both on the ease of use of the system and on possible improvements. The informal user evaluation provides evidence for the usefulness of user-centered design. The evaluation also suggests that not all features identified were found useful and not all features necessary were identified.

TABLE OF CONTENTS

Abstract  ii
List of Figures  v
Acknowledgments  vi

1  Introduction  1
   1.1  Background  1
   1.2  Approaches to Interface Design  3
   1.3  Objectives of Research  4
   1.4  Outline of the Thesis  4

2  Related Work and Literature Survey  6
   2.1  User Interface Design  6
        2.1.1  Single-user Interface Design Approaches  7
        2.1.2  Multi-user Interface Design Approaches  10
   2.2  Group Processes  13

3  User-Centered Analysis and Design  15
   3.1  Observation Procedure  15
        3.1.1  Subjects  16
        3.1.2  Materials  16
        3.1.3  Procedures  16
   3.2  User-centered Analysis  17
        3.2.1  Activities in the Group Ranking Process  19
        3.2.2  Major Factors in Multi-user Interface for Group Ranking  20
        3.2.3  User Requirements  22
        3.2.4  Design Solutions  24
   3.3  Implementation of the Prototype  29
        3.3.1  Description of the Prototype  29
        3.3.2  Limitations of the Prototype  34
   3.4  Limitations of the Design Process  35
   3.5  Summary  36

4  Comparison with Alternative Designs  37
   4.1  Procedures  37
   4.2  Suggested Designs  38
   4.3  Comparison of Designs  41
   4.4  Limitations of Analysis  43

5  Prototype Evaluation  46
   5.1  User Testing of Prototype  46
   5.2  Results and Analysis  48
        5.2.1  General Issues  49
        5.2.2  Specific Feedback  50
        5.2.3  Overall Feedback  50
        5.2.4  Summary Comparison of System Usage with Requirements Identified in User-centered Analysis  51

6  Concluding Remarks and Future Directions  53
   6.1  Contributions and Limitations  53
   6.2  Future Directions  54

References  57
Appendix-1  61
Appendix-2a  62
Appendix-2b  63
Appendix-3a  64
Appendix-3b  65
Appendix-4a  66
Appendix-4b  67

LIST OF FIGURES

3.1   A Structure of the Analysis Procedures  17
3.2   Sequence of Ranking Process Common to All Groups  18
3.3a  Summary of the Analysis (part 1 of 4)  26
3.3b  Summary of the Analysis (part 2 of 4)  26
3.3c  Summary of the Analysis (part 3 of 4)  27
3.3d  Summary of the Analysis (part 4 of 4)  27
3.4   Main Window in Multi-user Ranking Program  30
3.5   Multi-user Ranking Program in “Tidy On” mode  31
3.6   Registration Panel in Multi-user Ranking Program  32
3.7   Ranking Information Panel in Multi-user Ranking Program  33
4.1   Programmer-Suggested Interface for Multi-user Ranking Program (1 of 3)  38
4.2   Programmer-Suggested Interface for Multi-user Ranking Program (2 of 3)  39
4.3   Programmer-Suggested Interface for Multi-user Ranking Program (3 of 3)  40
4.4   A Comparison of Programmer Designs to Requirements from User-centered Analysis  42
5.1   Single-user Ranking Program from GDSS Research at UBC  48
5.2   Comparison of User Requirements to Usefulness During Evaluation  51

ACKNOWLEDGMENTS

I gratefully acknowledge the support from the Natural Sciences and Engineering Research Council of Canada (NSERC), which made this research possible through an operating grant to Professor V. Srinivasan Rao.
I would like to express my greatest appreciation to Professor V. Srinivasan Rao, my thesis supervisor, for his time and patience in directing my research. Without his supervision and encouragement, this thesis would not have been completed. I would also like to thank the other members of my committee, Professor Kelly Booth and Professor Carson Woo, for their helpful criticism and comments. Special thanks go to my parents, whose understanding and support have stayed with me and kept me going. I am also very grateful to my sisters, Alice, May and Debbie, who kept me company as I worked through the late nights. Finally, I would like to thank my heavenly Father for giving me the opportunity, ability, and stamina to finish this thesis. To my God and Father be glory forever.

Chapter 1

Introduction

1.1  Background

The importance of human-computer interfaces has long been recognized by computer scientists [Mantei 92; Greenberg, Roseman, and Webster 92; Gould and Lewis 85; Wasserman, Pircher, Shewmake and Kersten 87]. Researchers note that the user interface is often the principal determinant of system success (for example, Wasserman et al. 87), especially for those interactive systems where usage is discretionary. Indeed, Greenberg, Roseman and Webster [1992] suggest that the usability of an application will be seriously limited if the human factors are ignored. In short, a good user interface is important to both the success and the usability of an application.

Most of the early research on interfaces focused on single-user interfaces. The late 80s and early 90s, however, have seen an increase in attention on group applications. Group applications are software designed to facilitate collaboration, coordination, and group decision-making. Many observers predict an explosion in group applications in the coming years (for example, Mantei, 1992).
Mantei suggests that if the 1980s was the decade of personal computers, then the 2000s will be the decade of computer-supported cooperative work (CSCW). Thus, the anticipated growth in CSCW points to a need for studies in interfaces for group applications.

Group support systems (GSS) [Dennis, George, Jessup, Nunamaker, and Vogel 1988] constitute one important segment of group applications. A group support system is defined as an interactive computer system which facilitates the solving of unstructured or semi-structured problems by a group of decision makers [DeSanctis and Gallupe 87]. Existing GSS provide support for activities such as brainstorming, weighting of ideas, rating of ideas, ranking, voting, stakeholder analysis and so on. Such group activities can be performed in one of two ways. First, individuals can express preferences, which can then be aggregated as group preferences. Second, individuals can be allowed to interact dynamically with each other to arrive at group preferences.

Most current implementations of GSS, such as GroupSystems [Dennis et al. 88], SAMM [Dickson, Poole and DeSanctis 90], SAGE [Wei, Tan and Raman 92], and Claremont GDSS [Mandviwalla, Gray, Olfman and Satzinger 91], provide support for the aggregation of individual preferences. Such a process is advantageous in that it reflects in the final evaluation the opinion of each participant. However, it suffers from the shortcoming that it does not permit participants to be aware of each other's relative preferences during the process of arriving at the group preference, and thus may lead to an aggregated group preference that is unacceptable to all participants. In contrast, a dynamically interactive process allows group members to be aware of each other's preferences and thus may enable the group to arrive at more acceptable compromises. The dynamic model is not without faults, however.
For instance, some individuals may dominate the process to an unacceptable level. The relative merits of aggregating individual preferences versus interacting dynamically will have to be determined empirically. Such empirical comparisons can be made in a computer or non-computer environment, but a computer environment may be preferable for two reasons. First, the process of aggregating preferences in the non-computer mode is more cumbersome. Second, it will be easier to extend the computer-based multi-user design to the distributed mode. Achieving the ability to perform most group activities in the distributed mode is one of the goals of CSCW. Empirical comparison of the computer-based ranking systems is possible only after the multi-user interfaces have been designed and implemented. As a first step towards understanding the merits of multi-user interfaces, a prototype multi-user interface is implemented in this project.

The focus of the project is on identifying the requirements for a multi-user interface for one activity in GSS, implementing the interface, and obtaining user feedback on it. The activity of "ranking" was chosen. Ranking is defined as placing a list of alternatives in ascending or descending order based on various criteria. Ranking can be performed by individuals or groups. Group ranking is a process adopted to arrive at the group's opinion of the order, based on mutually-agreed-upon criteria, of the list of alternatives. Ostensibly, the goal of group ranking is to arrive at a consensus. In reality, however, either the group or individual members may have other goals. Therefore, the objective of the study is to build a multi-user interface that is independent of the goal of group ranking, i.e., the group should continue to have the flexibility to set its own goals, and not be constrained by the technological features of the system.
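The aggregation model of group ranking described above can be illustrated with a small sketch. The rule below is a Borda-style sum of rank positions, chosen purely for illustration; the thesis does not specify which aggregation rule existing GSS use, and the function name and data layout are hypothetical.

```python
# Hypothetical sketch of the aggregation model of group ranking:
# each member submits a full ranking (most preferred first), and the
# rank positions are summed across members (Borda-style) to produce
# a single group ordering.

def aggregate_rankings(rankings):
    """rankings: list of rankings, each a permutation of the alternatives."""
    scores = {}
    for ranking in rankings:
        for position, alternative in enumerate(ranking):
            scores[alternative] = scores.get(alternative, 0) + position
    # A lower total rank position means the alternative is, on the
    # whole, more preferred by the group.
    return sorted(scores, key=lambda alt: scores[alt])

group = aggregate_rankings([
    ["A", "B", "C"],   # member 1
    ["B", "A", "C"],   # member 2
    ["A", "C", "B"],   # member 3
])
print(group)  # ['A', 'B', 'C']
```

Note how the sketch exhibits exactly the shortcoming discussed above: members never see each other's rankings while forming their own, so the summed result may satisfy no one.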
1.2  Approaches to Interface Design

Three principles are fundamental to the design of single-user interfaces: an early focus on users and tasks, empirical measurement, and iterative design [Gould and Lewis 85]. Apart from the above three principles of design, many researchers studying the issue of cognitive compatibility in the field of human-computer interaction [Streitz 87] have recommended the use of knowledge from cognitive psychology in single-user interface design [Jameson 88; Eberts and Eberts 89; Young, Green, and Simon 89; Olson and Olson 90]. One way to map users' cognitive model of an activity is first to observe how people perform the task naturally, and then to develop the cognitive model of users at a concrete and operational level by describing their behavior [Baron, Kruser, and Huey 90]. This user-centered approach has been employed by interface designers to build single-user interfaces [for example, Sellen and Nicol 91].

Multi-user interface design, however, requires knowledge of not only the cognitive models of users but also the patterns of interaction among users. Research in the area of multi-user interfaces has focused on identifying primitives [Dewan and Choudhary 91b; Ishii and Arita 91; Bier and Freeman 91], building toolkits [Roseman and Greenberg 92], or building interfaces based on intuitive conceptual models [Mantei 88] or on user-centered models elicited from observations of users [Sellen and Nicol 90; Baecker, Nastos, Posner, and Mawby 93; Lu and Mantei 91]. The identification of primitives and the building of toolkits focus on providing the underlying technology to meet the requirements that surface from modeling of user behaviors. In user-centered analyses, observations of user behaviors are utilized to identify patterns of interaction, which are then used as a basis for multi-user interface design. In the current study, a user-centered approach is adopted to identify the requirements for a multi-user interface.
A further categorization of the approaches to user interface design is based on the level of analysis, i.e., microscopic or macroscopic. The microscopic approach is concerned with specific interface units and a detailed consideration of the behavior of these units. An example of the microscopic approach is research on menu design [Souza and Bevan 90; Gillan, Holden, Adam, Rudisill, and Magee 90; Boritz, Booth, and Cowan 91]. The macroscopic approach to designing user interfaces is concerned with the application as an aggregate, a collection of specific units of the user interface which are treated as if they were one unit. An example of the macroscopic approach is research on shared drawing tools [Lu and Mantei 91]. In that research, a macroscopic view is employed to design a multi-user interface.

1.3  Objectives of Research

The overall objective of the project is to contribute to the implementation of a multi-user interface for group ranking. The sub-objectives are as follows. The first is to identify the requirements of a multi-user interface and propose a design for group ranking by analyzing observations of groups performing the ranking task in a non-computer environment. The second objective is to compare the proposed design to the intuitive designs of programmers. The third and last objective is to implement the proposed design and perform an informal evaluation. This informal evaluation is to include observations of users working with the system and user feedback, such as general statements on the ease of use of the interface and specific suggestions about features to include or eliminate.

1.4  Outline of the Thesis

The thesis is structured as follows. Chapter 2 reviews the literature on approaches to human-computer interface design and presents relevant information about group support systems.
Chapter 3 describes the observation procedure, the results of observation, recommendations for a user-centered design and the implementation of the design. Chapter 4 provides a comparison of the user-centered design to the designs offered by three programmers. Chapter 5 outlines observations based on user evaluation of the prototype. Chapter 6 states the conclusion of the study, along with its limitations, and possible future research directions.

Chapter 2

Related Work and Literature Survey

In this chapter, relevant literature on human-computer interfaces and group support systems is discussed. Problems related to human-computer interface design have been addressed by researchers in the fields of computer science, psychology, and behavioral science. This is because designing human-computer interfaces requires a blend of knowledge of the task, the users, and computer systems. Various design approaches have been suggested, but it is doubtful that any of them have been validated adequately enough to be termed the best approach. Nonetheless, many useful principles have been articulated to guide the design process.

The discussion of design approaches is segmented into design approaches for single-user interfaces and design approaches for multi-user interfaces. The approaches discussed under each category are not unique to it, because many of the principles suggested apply equally to the design of both types of interface. The approaches are classified based on the literature from which they are cited. The approaches discussed under single-user interfaces include the empirical approach, the predictive modeling approach, the anthropomorphic approach, and the cognitive approach. Approaches discussed under multi-user interfaces include the identification of primitives, the building of toolkits and the use of conceptual models as bases for designing interfaces.
The last section of this chapter provides an overview of group support systems and a description of human-computer interface design in existing group support systems.

2.1  User Interface Design

Research on single-user interface design has been continuing for over 30 years [Grudin 90]. Many of the discussed approaches and methodologies of interface design originated with single-user interface design. Eberts and Eberts [1989] classified the approaches into four categories: the empirical approach, the predictive modeling approach, the anthropomorphic approach, and the cognitive approach. Research on design of multi-user interfaces has received attention only in the past decade. A review of the literature in this area suggests that the focus is on approaches based on either the identification of primitives or the use of conceptual models. The conceptual models are derived intuitively or arrived at following user-centered analysis. As noted, the approaches mentioned are neither exhaustive nor mutually exclusive.

2.1.1  Single-user Interface Design Approaches

Eberts and Eberts [1989] identify four approaches to designing single-user interfaces: the empirical approach, the predictive modeling approach, the anthropomorphic approach, and the cognitive approach.

The empirical approach uses the results of experimentation to design human-computer interfaces. Using various theories, researchers identify variables of interest and then perform controlled studies. This approach has been used to provide support for the design of menus [Souza and Bevan 90; Gillan, et al. 90; Boritz, Booth, and Cowan 91]. For example, in an experiment to study the effect of different types of menus on user performance, the independent variable is menu type and the results can be measured by the number of errors made or by the time needed to complete a menu selection.
According to a study by Callahan et al., pie menus gain over traditional linear menus by reducing target-seek time and lowering error rates [Callahan, Hopkins, Weiser and Shneiderman 88]. Such studies often focus on individual components of the user interface. The results offer useful guidelines to the designer, but it is sometimes difficult to aggregate the results into a user interface.

In the predictive modeling approach, human-computer interaction is predicted based on cognitive or physical models. Designers using this approach try to predict the best design before building prototypes. One example of the predictive modeling approach is the Goals, Operators, Methods, and Selection of rules model, "GOMS" [Card, et al. 80b]. The GOMS model predicts the user behavior sequence, the time required to do a particular unit of work, and also the error rate that may occur. In the GOMS model, users are assumed to first specify the goal of the work involved, and then expand the goal into subgoals and eventually into sequences of operations to achieve the goal. The user will have to choose the sequence of operations to achieve the original goal. In the example of a text editor [Card, et al. 80a], the general goal of DO-UNIT-TASK (for example, CORRECT-SPELLING) may have a sequence of two subgoals, i.e., LOCATE-LINE and MODIFY-TEXT. For each subgoal, there may be a sequence of possible operations. For example, the subgoal LOCATE-LINE may have the operations USE-S-COMMAND or USE-M-COMMAND. The user will have to select the operation that he/she is going to use to achieve the goal. The amount of time required for a task can be predicted by breaking the task into its components and summing the time it takes to perform each component task [Eberts and Eberts 89]. A GOMS model allows developers to predict accurately the time and error rate of a work-unit, which is useful in the design process.
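The additive time prediction at the heart of GOMS can be sketched in a few lines. The operator times below are illustrative placeholders, not the empirically measured values reported by Card et al., and the operator sequence for LOCATE-LINE is likewise invented for the example.

```python
# Sketch of GOMS-style time prediction: total task time is the sum of
# the times of the primitive operators in the chosen method.
# The per-operator times below are hypothetical, not Card et al.'s values.

OPERATOR_TIME = {          # seconds per operator (illustrative)
    "K": 0.2,              # press a key
    "P": 1.1,              # point with a pointing device
    "H": 0.4,              # home hands on a device
    "M": 1.35,             # mental preparation
}

def predict_time(operator_sequence):
    """Predicted task time is the sum of component operator times."""
    return sum(OPERATOR_TIME[op] for op in operator_sequence)

# e.g. LOCATE-LINE via a search command: mentally prepare, home hands
# on the keyboard, then type a four-character search string.
print(round(predict_time(["M", "H", "K", "K", "K", "K"]), 2))  # 2.55
```

The same additivity is what makes data gathering hard in practice: the prediction is only as good as the operator inventory and timings behind it, which is the difficulty the text turns to next.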
However, it is difficult to gather data for prediction. Problems arise in defining goals, sub-goals, and operators for the model. One way to obtain those data is verbal protocol analysis of people performing the task.

The anthropomorphic approach makes use of the human-human communication model to design human-to-computer communication. Advocates of this approach believe that human-human communication is effective, or at least adequate, and somewhat well understood. If there is any ineffectiveness in human-computer communication, the problem is assumed to originate in the design incorporated in the computer. So, having the computer behave similarly to a human is believed to solve problems in human-computer interaction and facilitate the communication between human and computer [Eberts and Eberts 89].

The example of a natural language processor illustrates designers' attempts to improve the computer's behavior in human-computer interaction. Based on conceptual dependency theory, pragmatic rules (rules to categorize the words as actors or objects) are developed. Conceptual dependency (CD) theory explains how people understand sentences. In CD, the computer parses a sentence by trying to fill in slots which correspond to four items: the ACTOR (the person performing the actions), the ACTION (what the actor does), an OBJECT (what the action is performed upon), and DIRECTION (the orientation of the action, composed of a TO and a FROM component). With the pragmatic rules, the computer is able to parse a natural language sentence and may be able to understand the input [Schank and Abelson 77]. But the understanding is limited by the role of context in attributing meaning to words. A simple computerized dictionary containing the meanings of words is not enough to understand natural language. Moreover, the anthropomorphic approach is overly dependent on future advances in technology to be effective.
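The slot-filling idea behind conceptual dependency can be caricatured in a toy sketch. This is a hypothetical illustration that only handles a rigid "ACTOR ACTION OBJECT [from X] [to Y]" word order; real CD parsing as described by Schank and Abelson is far more involved.

```python
# Toy illustration of CD-style slot filling: assign the words of a
# rigidly ordered sentence to the ACTOR / ACTION / OBJECT / DIRECTION
# slots. Purely illustrative -- not a real conceptual-dependency parser.

def fill_slots(sentence):
    words = sentence.split()
    frame = {
        "ACTOR": words[0],
        "ACTION": words[1],
        "OBJECT": words[2],
        "DIRECTION": {"FROM": None, "TO": None},
    }
    # The DIRECTION slot is filled from "from"/"to" prepositions, if any.
    if "from" in words:
        frame["DIRECTION"]["FROM"] = words[words.index("from") + 1]
    if "to" in words:
        frame["DIRECTION"]["TO"] = words[words.index("to") + 1]
    return frame

frame = fill_slots("John gave book to Mary")
print(frame["ACTOR"], frame["DIRECTION"]["TO"])  # John Mary
```

Even this caricature shows why a word-level dictionary is not enough: the sketch breaks as soon as word order varies or a word's role depends on context.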
With the present technology, it is possible to implement natural language processing only in highly restricted domains. For example, a natural language processing system, called "SAM" [Cullingford 81], is proficient at understanding news stories appearing in newspapers only, and is not applicable to other domains.

The cognitive approach applies theories in cognitive science and cognitive psychology to the design of human-computer interfaces in order to make the processing of information by both the human and the computer easier and more efficient [Eberts and Eberts 89]. The cognitive approach assumes that users are flexible, adaptive information processors who are actively trying to solve problems when using computers. The goal of the cognitive approach is to choose the display representation of data so that the mental model of the user closely corresponds to the conceptual model of information processing in computers.

One example of the use of cognitive theories is the use of analogical reasoning theory in the application of icons [Bewley, Roberts, Schroit, and Verplank 83]. In an interface based on this theory, a waste paper basket is shown on the screen. If the user wants to delete a file, the user can point to the waste paper basket, as deleting a file is similar to discarding trash in a waste paper basket. However, problems arise because the extent to which the analogy holds good is not well-defined. For example, in the original implementation of the waste basket analogy, the discarded item would be permanently deleted. Some users over-extended the waste paper basket analogy and thought that the discarded files could be retrieved before the end of the day. Thus, if analogies are used, the designer will have to make sure that the interface corresponds closely with the analogy, or ensure that the extent to which the analogy holds good is made clear [Rumelhart and Norman 83].
In the example of the waste paper basket, the implementations have been modified to allow the user to recover discarded files until the end of the day.

As noted, the approaches described above are not mutually exclusive. For example, the Goals, Operators, Methods, and Selection of rules model can be shown to have a basis in both the predictive modeling approach and the cognitive approach. In other words, it is likely a designer will draw upon more than one approach to design a human-computer interface.

2.1.2  Multi-user Interface Design Approaches

In this subsection, guidelines and approaches for multi-user interface design are reviewed. Tang [1989] has suggested a list of design criteria to guide multi-user interface designs. Dewan and Choudhary [Dewan and Choudhary 91b] have proposed primitives for programming multi-user interfaces. Roseman and Greenberg [Roseman and Greenberg 92] describe a toolkit to build real-time conferencing applications. Conceptual models have been identified intuitively or through user-centered analyses.

Guidelines: Basing his criteria on observations of groups sharing a work surface in a non-computer environment, Tang [1989] suggested that:

(1) A shared workspace should provide a way of conveying and supporting gestural communication. Gestures should be clearly visible, and should maintain their relation with objects within the work surface and their relation with voice communication.

(2) A shared workspace should convey the process of creating artifacts to express ideas.

(3) A shared workspace should allow seamless intermixing of work surface actions and functions.

(4) A shared workspace should enable all participants to share a common view of the work surface while providing simultaneous access and a sense of close proximity to it.

(5) A shared workspace should facilitate the participants' natural abilities to coordinate their collaborations.
Tang's design criteria for shared workspaces have been used in designing group drawing programs (GroupSketch and GroupDraw [Greenberg, et al. 92]) and video-based programs (VideoDraw [Tang and Minneman 90] and TeamWorkStation [Ishii 90]). These examples are evidence of the usefulness of the design principles. Such usefulness has been demonstrated mostly in shared drawing programs. However, it can be readily seen that the design guidelines would apply to other shared applications requiring multi-user interfaces.
For example, Ishii and Arita applied the cognitive theory of selective looking to the design of a multi-user interface for TeamWorkStation [Ishii and Arita 91]. Bier and Freeman attempted to provide an architecture for multi-user application design [Bier and Freeman 91]. Toolkits: Multi-user applications have some common underlying needs. Roseman  and Greenberg [Roseman and Greenberg 92] have constructed toolkits to assist in the construction of real-time work surfaces. Their philosophy has been to draw on work in the human factors area and the technical innovations in computer-supported cooperative work (CSCW) to list requirements for a groupware toolkit. Their kit includes three strategies to construct the necessary components. First, an extensible, object-oriented run time architecture supports the management of distributed processes. Second, transparent overlays allow the addition of general components to various groupware applications. Last, open protocols allow the creation of a wide range of interface and interaction policies. They have demonstrated the usefulness of their kit, GroupKit, by building several applications, such as GroupSketch and GroupDraw [Greenberg, et al. 92].  11  Chapter 2. Related Work and Literature Survey  Conceptual Models: The second approach to multi-user interface design is the use of conceptual models of the interaction. Such models can be intuitively derived [for example, Mantei 1988] or based on user-centered analyses [for example, Baecker et al. 93; Lu and Mantei, 91]. An example of the intuitive approach to conceptual models is the Capture Lab design [Mantei 1988]. The Capture Lab user interface was designed to support existing meeting protocols. The conceptual model used distinguishes between private and shared resources, simple access protocol, and simple information transfer protocol. 
From the experience of Capture Lab, using a conceptual model reduces the amount of training for new users and enables users to use the facilities effectively. Conceptual models can also be derived from user-centered analyses. Many researchers have recommended the use of cognitive psychology research to improve human-computer interface design [Jameson 88; Eberts and Eberts 89; Olson and Olson 90]. User-centered approaches offer a way to identify the cognitive models. The user-centered process involves observing users perform a task without the aid of computers, and then designing the interface based on the observations and making changes iteratively to the interface based on user feedback [Sellen and Nicol 91]. Using an iterative user-centered approach, Baecker et al.[1993] designed the collaborative writing software SASE (Synchronous Asynchronous Structured Editor) and SASSE (Synchronous Asynchronous Structured Shared Editor). They interviewed writers about the writers’ experiences in collaborative writing. Then they conducted a controlled laboratory study in which group interactions were observed. The groups were provided with two computers, and the participants were free to choose any writing approach. The observations were analyzed, and Baecker et al. found that individual differences in group behavior dominated the results and that the personalities of the participants had significant effects on the writing approach chosen by the group. Also, the type of communication between participants varied with the ease with which group members could see each other’s work. These findings were then used in an iterative design process (i.e., cycles of design, implementation, user feedback) to come up with the fmal interface.  12  Chapter 2. Related Work and Literature Survey  Lu and Mantei also used the user-centered approach to design a shared drawing tool [Lu and Mantei 91]. 
They videotaped groups performing drawing activities, and by studying the videotapes, Lu and Mantei developed a taxonomy of group idea-management processes. Detailed design requirements were then derived from the result.

Researchers in the area of multi-user interfaces have advocated consistency between design and existing user activities. Olson et al. suggested that, in collaborative writing, command structure should be consistent and resemble users' thinking about task goals [Olson et al. 90]. Posner and Baecker also suggested support for transitions between activities, support for several writers in collaborative writing, and so on [Posner and Baecker 92]. Obviously, the design of the multi-user interface must take into consideration not only the user cognitive model, as in the design of single-user interfaces, but also the patterns of interaction among users. The cognitive models and the patterns of interaction can be inferred either by using designer intuition or by analyzing observations.

2.2 Group Processes

The motivation for this research project stems from the need to design multi-user interfaces for group support systems (GSS). Group decision-making is a complex, difficult, and dynamic process. A GSS is an interactive computer-supported system which facilitates the solving of unstructured or semi-structured problems by a group of decision-makers [DeSanctis and Gallupe 87]. Existing GSS support functions such as idea gathering/brainstorming, weighting, rating, ranking, voting, performing stakeholder analysis, allocating models, making paired comparisons, connecting/linking ideas, and grouping [Dennis et al. 88; Dickson et al. 90].

There are two general ways to design group applications: the aggregation model and the dynamic model of interaction. In the aggregation model, individual decisions (such as voting, ranking, and rating) are aggregated to produce a group decision. Existing groupware, such as SAMM, is based on the aggregation model of interaction.
This model consists of three steps: first, the preferences of individuals are captured; second, the individual preferences are aggregated; third, the aggregated result is displayed on a public screen, where the group may discuss it in order to reach a final decision.

Another way to reach group decisions is to let the group interact dynamically to discuss a decision. In this case, the decision is made through the use of a multi-user application. In a technical sense, multi-user applications must perform collaboration tasks such as dynamically making and breaking connections with users, gathering data and displaying output from multiple users, and providing concurrency and access control [Ellis, Gibbs, and Rein 91; Sarin and Greif 85; Dewan and Choudhary 91a]. A multi-user interface can be implemented with a single cursor which users have to take turns controlling. Alternately, a multi-user interface can be implemented using multiple cursors, in which case simultaneous interaction among users is supported.

In this chapter, approaches to human-computer interface design for both single-user interfaces and multi-user interfaces have been reviewed broadly. The group support system literature was briefly touched upon to indicate that existing systems do not provide multi-user interfaces.

Chapter 3

User-Centered Analysis and Design

The goal of our project was to perform a user-centered analysis to identify requirements for a multi-user interface for group ranking. The requirements were used to make design recommendations. The user-centered design process and the implementation of the design are described in this chapter. In subsequent chapters, the user-centered design is compared to three designs based on the intuitions of programmers, and user feedback is discussed.
The objective of the design process was to identify the key aspects of both the user models of the ranking task and the patterns of interaction among the group members. The first section of this chapter gives a detailed description of the observation process used to gather the data for user-centered analysis. It includes information on the subjects, materials, and the procedures followed. The second section describes the analytical process followed to produce recommendations. Limitations of the user-centered design process as used in this study are also discussed. The third section describes the prototype interface implemented.

3.1 Observation Procedure

In the study, groups of three to six persons were videotaped while performing a group ranking task in a non-computer environment. A rectangular piece of paper was taped onto a table surface to represent the working area. The items to be ranked were printed on small cards, referred to as item cards. A pilot study with three single users and three groups of two persons was conducted. The pilot sessions were intended to finalize the setup of video-cameras, the position of the table, and other operational details. To conserve our subject pool, pilot sessions with only one or two persons were conducted. Some adjustments in the setup were made in the final study. The size of the item card was changed from 3 cm by 9 cm to 3.5 cm by 9 cm, and the font size was enlarged from 12 points to 14 points to improve legibility. Also, fluorescent yellow cards were used to enhance readability under the video camera. Ambient lighting in the laboratory was kept low to facilitate video-taping.

3.1.1 Subjects: A total of forty-two volunteers participated in the study. All volunteers were students and staff of the University of British Columbia. They ranged in age from 18 to 53. The group was 52% female and 48% male.
Of the 42 participants, 33 were students and 9 were university staff. Participants were paid $12 each for the roughly one-and-a-half-hour task. The volunteers were divided into eleven groups: six groups of four persons, four groups of three persons, and one group of six persons. Group size was varied to observe whether it affected interaction patterns. All groups were ad hoc groups.

3.1.2 Materials: All groups were given twenty-five items to rank. The twenty-five-item list was taken from a study on social status ranking for occupations [Thomas and O'Brien 84]. The list of 25 items is shown in Appendix 1.

3.1.3 Procedures: The study was carried out in a temporary laboratory in the Faculty of Commerce at the University of British Columbia. A piece of paper 42 cm by 164 cm was taped onto a table to represent the working space. A video-camera focusing on the working surface was mounted about one metre above the table. All groups were asked to rank the list of twenty-five occupations in order of importance to society. The participants had no time limit to complete the task, which they were instructed to perform as a group.

At the beginning of each session, each group was allowed five to ten minutes for mutual introductions. Each member in the group briefly stated his/her name and gave some information on his/her background. Then, each group was given a set of instructions. Briefly, the members were told to keep the cards within the working area on the table, told to take as much time as they needed to complete the task, and encouraged to interact verbally when necessary during the session. The stack of cards given to the group was in random order. The deck of cards was placed in the middle of the working area on the table, and each member of the group had physical access to the complete working area. The group then gathered around the table and started the ranking task in the designated working area.
Throughout each session, the working surface was video-taped. The conversations among the group members were also recorded. The video captured only the movement of the items (cards) and the hands of the participants. No other parts of the body were video-taped.

3.2 User-Centered Analysis

The videotapes were analyzed using procedures similar to those used by Lu and Mantei [Lu and Mantei 91]. First, a list of activities in the group ranking process was identified. Second, the activities were clustered under factors to facilitate cogent discussion of related activities. Third, user requirements for the task were deduced from the activities. Last, design solution recommendations were derived based on the user requirements [Fig 3.1].

Fig 3.1 A Structure of the Analysis Procedures (Taxonomy of Activities, Factors, User Requirements, Design Solution)

Groups generally followed similar sequences of steps when performing the task [see Fig 3.2]. The cards were placed in one stack by the experimenter. The groups first spread the cards so that they could see all the items. Then, they divided the set of cards into different subsets. Lastly, they ranked each subset of cards and merged the subsets into one set as the final group result.

Fig 3.2 Sequence of Ranking Process Common to All Groups (Step 1: starting point, all cards in one stack; Step 2: cards spread out for easy visualization; Step 3: sub-grouping of cards; Step 4: final ranking, top to bottom, left to right)

3.2.1 Activities in the Group Ranking Process: A list of activities in the group ranking process was identified by studying the videotapes.
The list of fifteen activities covers the major behaviors that were observed, but is not necessarily exhaustive.

List of Activities

1. Spreading out items
At the start of the ranking session, the cards were given to users in one stack. Users spread out all items in the working area for easy visualization. During the ranking process, they would often stack subsets that had been ranked to create space to work with the other items. Such subsets were also spread out at times for further review.

2. Agreeing on a suggested position
When group members agreed on a specific rank/position for a card, they would place the card at the agreed-upon position.

3. Agreeing on the grouping of items
In the initial stages, the group often divided items into coarse categories, for example, important, not-important, or don't-agree-on-importance. When group members agreed on the category that an item belonged to, they would place the item in that group.

4. Individual ranking of items in subgroups
The group would divide the set of cards into subgroups and agree to let individual members rank one subgroup each. The individual member would then select a subgroup of cards and rank the items in that subgroup.

5. Modifying the grouping of an item
The group would divide the set of cards into subgroups. Suggestions would be put forth to change the categorization of a card. The group would agree on the suggestions and change the grouping of cards.

6. Modifying a suggested position
The group would put the set of cards in an initial order; subsequently, a suggestion to change the position of one or more cards would be put forward. If the group agreed on the suggestion, the position(s) of the card(s) would be changed.

7. Postponing decisions
The group would disagree on the position of a card and put the card aside.

8. Deciding not to rank
The group would not be able to agree on the position of a card.
The card would not be put in the ranking list.

9. Aligning items
The group would align the cards for better visualization and presentation.

10. Stacking items
The group would stack the cards, i.e., put more than one card into one deck in order to make room for other items.

11. Recalling and clarifying the task objective
While performing the task, members in the groups would discuss and clarify the goal of the task.

12. Recording ideas
While performing the task, members would make suggestions on the sub-grouping, elaborate on the reason(s) for assigning a particular rank to an item or for placing an item in a particular group, and agree on procedural issues.

13. Item control
During the process of ranking, members would gain control of an item by picking it up.

14. Item identification
During the ranking process, members would identify items during discussion by pointing at the items.

15. Consolidating groups into one final list
Group members would divide the set of cards into subgroups (activity 3). They would perform the ranking task within each of the subgroups (activity 4). Finally, the group members would put all the subgroups together in one list.

3.2.2 Major Factors in Multi-user Interfaces for Group Ranking: The activities can be clustered under four major factors. The factors pull together related activities, thus facilitating further discussion in the thesis. Once again, the factors are not exhaustive but cover the major activities that were observed in the study. They are:

A. Screen Real Estate Management
This factor concerns the freedom and ability of users to manage the cards within the designated area. In the non-computer environment, the paper on the table defines the area in which they have to work. This corresponds to the screen (or window) in the computer environment. The activities have to be performed within the screen real estate available.
In other words, the screen real estate has to be organized and managed to facilitate the performance of the activities. The activities that affect screen real estate management include: spreading out items, agreeing on the categorization of items, modifying suggested categorizations, postponing decisions, deciding not to rank, stacking items, and consolidating the subgroups into one final list.

The pattern that surfaces is that some activities are limited by the available screen real estate, while others are made necessary as a result of the limited screen real estate. For instance, the ability to spread the cards or show all the cards in the final display of ranked items may be limited by the available screen real estate; however, items will be spread out no matter how much screen real estate is available. On the other hand, stacking or overlapping items is made necessary by the fact that there is not sufficient screen real estate; that is, items will be overlapped or stacked mostly when the available screen real estate is insufficient.

B. Matrix Mode
This factor deals with the activities that help define the design of the matrix mode to facilitate ranking of items. In one sense, the factor could be considered a subset of the screen real estate management factor, but it is separated here because ranking is the primary task in the study. The activities include: agreeing on the rank of a card, individuals ranking the items in a subgroup, modifying the rank of a card, and aligning items. All these activities have to be considered in deciding how a matrix to display the ranked cards is going to be created.

C. Auxiliary Working Space
The auxiliary working space factor concerns activities that play a supporting role in the group ranking task. The activities include: individual group members ranking items in a subgroup, the recall and clarification of the task objective, and the recording of ideas.
In effect, there is an area for the primary task of ranking the items, and an auxiliary area for secondary activities.

D. Concurrency Control and Coordination
This factor pertains both to the coordination of group member activities and control of the cards during the ranking process, and to how conflicts for control are resolved. It includes the activities of item control and item identification.

3.2.3 User Requirements: Based upon the activities listed under the four factors, a list of user requirements was developed. Each of the requirements and its role in assisting with the activities observed during the ranking of items in the non-computer-supported environment are discussed.

Screen Real Estate Management: The screen real estate management factor includes the following activities: spreading out items, agreeing on the grouping of items, modifying the grouping of an item, postponing decisions, deciding not to rank, stacking items, and consolidating the subgroups into one final list. Based on the observations, three requirements can be identified that would be beneficial to the users. First, the icons denoting the cards should be able to move freely on the working surface, i.e., the screen. This will allow the cards to be spread out. Second, it must be possible to let the cards overlap. When cards are allowed to overlap, it will be possible to stack them, if necessary. Third, there is a further need to indicate the existence of sub-categories of cards. This can be done either by creating formal boundaries or by implying boundaries by putting each sub-category's cards in clusters in different parts of the working space.

In the computer environment, it is easier to manipulate color, card size, card shape, and so on. The observation procedures used in this study do not identify requirements related to these factors. Furthermore, in the computer environment, scrolling of windows is possible.
This is probably a desirable characteristic when screen real estate is limited. Hence, scrolling of windows will be incorporated in the design if appropriate. Matrix Mode: The matrix mode factor includes the following activities: agreeing on the rank of a card, individuals ranking the items in a subgroup, modifying a suggested position of an item, and aligning items. When the rank of a card is agreed upon, a designated space should be available for the card. Presumably the cards will be placed in ascending or descending orders of importance. In the observation phase, it was noticed that all groups ranked in columns. Also, since one column may not be adequate to accommodate all the cards, it should be possible to arrange the cards in multiple columns. These are not rigid requirements, but can be implemented as observed. The insertion of an item between two adjacently ranked cards can benefit from functions that are feasible in the computer environment, but not in the non-computer environment (for example, automatic alignment of cards). The insertion of the card creates the requirement that all subsequent cards have to be moved to accommodate the new card, but without disturbing the existing logical sequence of cards. In the non-computer environment, users made these adjustments manually. This manual operation was simple when the physical location of only a few items had to be changed. When many items had to be moved, the manual operation was more cumbersome. In the computer environment, the items that have been ranked can be moved to accommodate the new card automatically, that is, all ranked cards re-align themselves automatically. The ranked cards cannot be locked into position, since it must be possible to modify the ranking. In the non-computer environment, the position of several cards were adjusted in one smooth motion using two or more fingers. 
This requirement to move multiple cards independently in one motion may be implementable in a touch-screen system, but is not feasible in a mouse-based system.

Auxiliary Working Space: The activities pertaining to the auxiliary working space are individual group members ranking items in a subgroup, recall and clarification of the task objective, and recording of ideas. In the non-computer environment, individual group members ranked subgroups of items in separate areas of the working space. This indicates that individual members need a separate space for the activity. Such ranking of subgroups of items can be done in one portion of the main window or in a separate window. In the observations, group members kept track of the goals of the task and other ideas mentally, occasionally verbalizing such mental notes. The verbalization presumably serves two purposes. One is to recall the information for use in the process of ranking. The other is to test and verify that other members are also using the same criteria and working towards the same goal. The process of recall can be supported (a) by having an information panel listing the topic and objective of the task, and (b) by providing a private window for maintaining ideas. In the computer environment, the windows providing the auxiliary space can be closed. This would ensure that available screen real estate is not wasted on activities that are not being performed.

Concurrency Control and Coordination: The concurrency control and coordination factor is mainly concerned with the activities of identifying items and resolving conflicts for control of cards. In the non-computer environment, users identify items by pointing to, but not touching, cards while discussing an item. The pointing can be specific because the manual operation is taking place in a three-dimensional space; that is, the movement of the finger is not confined to the plane of the working surface.
In the computer environment, the cursor serves as the pointing device. As the cursor is in the same plane as the items, the pointing is non-specific. Hence, specificity in pointing to an item under discussion will have to be achieved by highlighting the item when the cursor is close to or on it. In the non-computer environment, more than one member can point to the same item. Therefore, users should have the means to point to the same item at the same time. However, only one person can pick up an item to rank it. Hence, in the computer environment only one individual must be able to move the item at a given time. This can be accomplished by providing a locking feature to ensure that the item is controlled by only one of the participants.

3.2.4 Design Solutions: A design solution is proposed based on the requirements identified. Admittedly, however, alternate solutions are possible.

The proposed design offers a working space in which the icons representing the cards (henceforth referred to as items) can move freely, overlap if necessary, and be stacked. This feature meets the requirement to spread the items and allow them to overlap. The design does not include formal boundaries for the different categories that are created during the ranking process. However, the visual segmentation necessary between the different sub-categories is achieved by clustering items in user-defined areas of the screen. As there are no formal boundaries, these informal areas can enlarge or shrink to accommodate the number of items placed in each category. Alternately, segmentation can be represented with color or formal boundaries. In either case, additional steps to define the colors or the boundaries will be necessary. In the case of boundaries, users would need to know the number of categories in advance.
Both color and formal boundaries have the advantage of adding to the clarity of the visual segmentation, but at the expense of additional operations to be performed by the users. The observations in the non-computer environment suggest that the visual segmentation offered by clustering items in informally defined areas was adequate for this task.

The free working space is convenient for moving the items without restrictions and allowing items to overlap as necessary. However, it does not accommodate the requirement that ranked items adjust automatically when new items are inserted between adjacent items. The automatic adjustment requires a matrix to be defined, which can anchor the location of each card. The free working space and the defined matrix are mutually exclusive modes. A toggle switch is thus necessary to switch between the two. The items are aligned in columns with no overlapping when the screen is in the matrix mode. If the number of items exceeds the space available on the screen, the excess items will not be visible. However, since scrolling is an option in the computer environment, the design allows the screen to be scrolled to see such excess items.

An information panel facilitates the recall of the purpose of the task. Individual participants can open private windows to create space for taking notes. Multiple cursors are provided, one for each participant. Each cursor is a different color. Pointing at or identifying an item during discussion can be accomplished by moving the cursor close to the item of interest. The item can then be highlighted. A user can obtain control of the item
by pointing to the item and holding the mouse button down. Control is surrendered when the user releases the mouse button.

Fig 3.3a Summary of the Analysis: Screen Real Estate Management (see also Fig 3.3b, 3.3c, 3.3d). Activities (spreading out items; agreeing on the grouping of items; modifying the grouping of an item; postponing decisions; deciding not to rank; stacking items; consolidating subgroups into one final list) map to the user requirements to move items freely, group items, and overlap items, and to the design solution of a free working space.

Fig 3.3b Summary of the Analysis: Matrix Mode (see also Fig 3.3a, 3.3c, and 3.3d). Activities (agreeing on the rank of an item; individual ranking of items in subgroups; modifying a suggested position of an item; aligning items) map to the user requirements to put items into a list, arrange items into multiple columns, move items in the list, and align items, and to the design solutions of a grid with multiple columns and automatic alignment of items.

Fig 3.3c Summary of the Analysis: Auxiliary Working Space (see also Fig 3.3a, 3.3b, and 3.3d). Activities (recalling and clarifying task objectives; individual ranking of items in subgroups; recording ideas) map to the user requirements to recall topics of concern and to have an auxiliary private working space, and to the design solution of an information panel and private windows.

Fig 3.3d Summary of the Analysis: Concurrency Control and Coordination (see also Fig 3.3a, 3.3b, and 3.3c). Activities (item identification; item control) map to the user requirements that users be able to point at the same item and that only one user can move an item at one time, and to the design solution of multiple cursors with different colors.

The mapping of the taxonomy of activities to the user requirements and the mapping of the user requirements to the design solutions are summarized in Figure 3.3a, Figure 3.3b, Figure 3.3c, and Figure 3.3d.
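The item-control scheme in the design solution, where control is acquired on mouse-down and surrendered on mouse-up, amounts to a per-item lock. The following is a minimal sketch of that idea, not the prototype's actual code; the class and method names are hypothetical stand-ins for whatever event handlers a real implementation would use.

```python
import threading

class SharedItem:
    """One card in the shared working area; at most one user may move it at a time."""

    def __init__(self, label):
        self.label = label
        self._lock = threading.Lock()
        self.owner = None  # user currently dragging the item, if any

    def mouse_down(self, user):
        """Try to obtain control; on success the item would be highlighted for `user`."""
        if self._lock.acquire(blocking=False):
            self.owner = user
            return True
        return False  # another participant already controls the item

    def mouse_up(self, user):
        """Surrender control when the controlling user releases the mouse button."""
        if self.owner == user:
            self.owner = None
            self._lock.release()

item = SharedItem("doctor")
assert item.mouse_down("Ann")       # Ann obtains control of the item
assert not item.mouse_down("Ben")   # Ben cannot move it in the meantime
item.mouse_up("Ann")                # control surrendered on mouse-up
assert item.mouse_down("Ben")       # the item is free again
```

The non-blocking acquire mirrors the observed behavior: a second participant's attempt to grab an item simply fails rather than waiting for the first to finish.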
It should be noted that the proposed design of the multi-user interface for the group ranking task meets, in a limited sense, the design criteria suggested by Tang [Tang 89]. Tang recommended that a shared workspace should (1) convey and support gestural communication, (2) convey the process of creating artifacts, (3) allow seamless intermixing of work surface actions and functions, (4) enable participants to share a common view, and (5) facilitate participants' natural ability to coordinate collaborations.

The multi-user interface suggested for group ranking provides a way of facilitating and supporting gestural communication in a limited sense. As the users are talking, they can point at the items with colored cursors of their own. In addition, users can move items as they are expressing their opinion of the rank position. The designed interface also supports the process of creating artifacts to express ideas, as the users can actually move the items and create new rankings of the items and new subgroups. The working area in the multi-user interface not only provides a shared workspace that enables all participants to share a common view of the work surface, but also allows seamless intermixing of work surface actions and functions. When a user moves his/her cursor on the work surface, all participants will be able to view the movement of that cursor. Moreover, if an item is moved by a user, the program will automatically update the view on all screens at the same time. The common-view requirement is relaxed partly in instances when the space required by the number of items exceeds the space available in the window, and two or more participants are working on items on opposite edges of the working area. Lastly, with the implementation of multiple cursors and a free working area, participants in a ranking session can coordinate their collaborations naturally, as if they were working in a non-computer environment.
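The strict shared-view behavior described above, where every item move is immediately reflected on every participant's screen, is essentially a broadcast of state changes to all connected views. The sketch below illustrates the idea under assumed names; it is not the prototype's actual networking code, and the class names are hypothetical.

```python
class ClientView:
    """One participant's screen: remembers the last drawn position of each item."""
    def __init__(self, name):
        self.name = name
        self.positions = {}  # item label -> (x, y)

    def refresh(self, label, position):
        self.positions[label] = position

class RankingSession:
    """Replicates every item move to all connected screens (strict WYSIWIS)."""
    def __init__(self):
        self.views = []
        self.items = {}  # authoritative item positions

    def join(self, view):
        self.views.append(view)
        for label, pos in self.items.items():  # late joiners see the current state
            view.refresh(label, pos)

    def move_item(self, label, position):
        self.items[label] = position
        for view in self.views:  # update the view on all screens at the same time
            view.refresh(label, position)

session = RankingSession()
ann, ben = ClientView("Ann"), ClientView("Ben")
session.join(ann)
session.join(ben)
session.move_item("doctor", (40, 120))
assert ann.positions["doctor"] == ben.positions["doctor"] == (40, 120)
```

Keeping one authoritative copy of the item positions in the session object is what lets a participant who joins late still see the same view as everyone else.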
3.3 Implementation of the Prototype

The recommendations from the user-centered analysis were used to construct a prototype of the multi-user ranking program. The prototype was built in the NeXTStep environment at the Group Decision Support System (GDSS) lab at the University of British Columbia. This prototype was designed to support interactive, synchronous, face-to-face multi-user ranking.

3.3.1 Description of the Prototype: The prototype of a multi-user ranking program allows one to four people to perform a ranking task synchronously while working at their own NeXT workstations in the same room. The current implementation assumes that the users will be able to communicate orally, if necessary. The actions of all users are immediately transmitted to the other workstations, so users are able to see and discuss each participant's ranking preferences as he/she performs an action.

The main window of the multi-user ranking program [Fig 3.4] is composed of two parts: the working area where the actual ranking task is done, and a control section that contains five icons to activate different functions. The working area covers about 90% of the space in the main window. Items to be ranked are represented as card-type icons in the working area [Fig 3.4]. The items are loaded from a text file at present, but they could be loaded from a brainstorming session in a group support system. These items can be moved around in the working area by clicking on them and dragging them to the desired position. When a user clicks on an item, the border of the item (box) will be highlighted to show that the user has exclusive control of the item. An item can be moved by only one user at any time. Control of the item is relinquished when the user releases the mouse button.

A control section is placed at the top of the main window of the program. There are four buttons and one Edit-Box (a standard Windows control) in the control section.
The former are labeled "Information", "Tidy", "Scratch Pad", and "Quit"; the latter is labeled "Num of rankers".

Fig 3.4 Main Window in Multi-user Ranking Program

The "Information" button calls up a panel which displays information and/or ranking criteria that are specified at the beginning of the ranking session by the session initiator.

The working area in the ranking program can be set to two different modes of display: the Tidy-ON mode and the Tidy-OFF mode. In the Tidy-OFF mode, the working area serves as a free working space in which users can move the items freely or stack up the items to save space. In the Tidy-ON mode, the structure of a matrix is imposed on the working area, and thus all items will be aligned in a matrix according to their relative positions before the tidy mode was turned on. When the tidy mode is ON [Fig 3.5], items in the ranker can be moved only into preset positions defined by the matrix; the items in the ranker will be aligned at all times. The "Tidy ON/OFF" button shows the current mode of the ranking program, and toggling the button will cause the working area to alternate between Tidy ON and Tidy OFF. It is assumed that the size of the working area in the main window will not be large enough to display all items in it without overlapping.

When the Tidy button is turned on, the working area is divided into columns. The coordinates of the centers of all the items are recorded. All the items are then sorted by the coordinates of their centers within each column. The display of the working area is then refreshed with all items aligned in the matrix. If the coordinates of any two items are the same, the relative ranking of the two items is decided randomly.
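The Tidy-ON alignment just described can be sketched as a small sorting routine. This is a minimal illustration under stated assumptions, not the prototype's actual NeXTStep code: the function name and parameters are hypothetical, items are assigned to columns by the x coordinate of their centers, and ties are broken with a random key, as in the description above.

```python
import random

def tidy(items, area_width, column_width):
    """Snap free-floating items into an aligned matrix (sketch of Tidy-ON mode).

    `items` maps each label to the (x, y) coordinates of the item's center;
    the result maps each label to its (column, row) cell in the matrix.
    """
    ncols = max(1, area_width // column_width)
    columns = [[] for _ in range(ncols)]
    for label, (x, y) in items.items():
        col = min(int(x // column_width), ncols - 1)      # column from center x
        columns[col].append((y, random.random(), label))  # random key breaks ties
    placed = {}
    for col, members in enumerate(columns):
        # within a column, items are ordered by the y coordinates of their centers
        for row, (_, _, label) in enumerate(sorted(members)):
            placed[label] = (col, row)
    return placed

# Items with distinct centers keep their relative order after tidying:
cells = tidy({"doctor": (10, 50), "lawyer": (20, 10), "banker": (150, 30)},
             area_width=300, column_width=100)
assert cells == {"lawyer": (0, 0), "doctor": (0, 1), "banker": (1, 0)}
```

Because the random value is only compared when two y coordinates are equal, the result is deterministic for items with distinct centers and randomized only for exact ties, matching the behavior described in the text.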
Fig 3.5 Multi-User Ranking Program in "Tidy On" Mode

The "Scratch Pad" button activates the scratch pad, which provides a writing area in which users can take notes during the ranking session. The "Quit" button allows the user to quit the ranking program. Participants in the ranking session are allowed to quit at any time. However, the ranking session initiator will be able to quit the ranking program only when all other users have quit. The "Num of rankers" edit box shows the number of participants in the ranking session. This information is updated as participants log on to or log out of the program.

Two graduate students provided feedback on the first implementation of the prototype. Minor modifications to the size of a window, the size of the items to be ranked, and the size of the font were made on the basis of their feedback. The idea of highlighting items to indicate that they were under the control of a user was also suggested during this preliminary testing.

Fig 3.6 Registration Panel in Multi-User Ranking Program

Here, it must be reiterated that the goal was to construct a prototype to gather user feedback on the interface. In consequence, many issues that will be significant in a final implementation have not been addressed, or if addressed, may have been addressed in one way only. For instance, ranking sessions have to be initiated and closed. In the prototype, the first user to activate the ranking program becomes the session initiator, and will have to perform a set of activities attendant to the role of the initiator. The final implementation may or may not require a session initiator.
If a session initiator is required, the method of selection may be different. The initiation of a session is not the focus of the study, and so an essentially arbitrary method was chosen to assign the role of the initiator. Some of the other features are described below.

Fig 3.7 Ranking Information Panel in Multi-User Ranking Program

The sequence of steps to use the software is as follows. When the multi-user ranking program is executed, a registration panel is displayed on the screen, which allows a user to choose a colored cursor [Fig 3.6]. Users can enter their names to identify their cursors, or they can choose not to enter their names for an anonymous ranking session. The first user who starts the multi-user ranking program is the ranking session initiator, the chairperson of the group. A "ranking information" panel [Fig 3.7] will appear on the screen for the ranking session initiator, who enters the topic of the particular ranking session along with any other relevant information or ranking criteria that need to be communicated to the other participants. Then, the initiator starts the ranking session by using a file browser to choose the file that contains the items to be ranked. All users will be able to see all the different colored cursors on their screens, and any movement of any cursor will be displayed concurrently. The session initiator will be the last person to log out. Currently, intermediate positions of the ranking cannot be saved (i.e., logging is not possible), but the final positions can be.

3.3.2 Limitations of the Prototype:

There are four limitations of this prototype: performance degradation with increasing numbers of users, the need to relax what-you-see-is-what-I-see (WYSIWIS), the non-interruptability of sessions, and the absence of a voice channel.
This prototype of the multi-user ranking program was implemented in the NeXTStep environment. The NeXT workstations are connected in a network, and communication among machines is provided by Distributed Objects, a class supported by the NeXT library. Calls and messages are sent among different workstations to update the representation of the movement of items and cursors by different users. The amount of communication required to keep the screen displays of the workstations synchronized is considerably high. Thus, in order to provide reasonable efficiency in the network, the number of users for this prototype is restricted to a maximum of four. However, it would be possible to build a set of communication protocols at the machine language level to permit faster transmission of information among different workstations, and thus increase the number of concurrent users beyond four. This was not done because, once again, the goal is to evaluate the user-centered design, which can be done with fewer than four users.

A second limitation is related to maintaining WYSIWIS (what-you-see-is-what-I-see). When the number of items is large and the space required exceeds what is available in the window, the items that are off-screen can be seen by scrolling. However, if one user starts to scroll a window, then under strict WYSIWIS the scrolling will take place on all workstations. This could be problematic if one of the other users is working on an item close to the opposite edge, because those items will be scrolled out of the window. Thus, this prototype relaxes the WYSIWIS requirement in the sense that scrolling at one workstation will not lead to scrolling at other workstations. However, the positions of all items will be updated even if an item is not visible in the currently scrolled section of the screen.
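The two rules above — item positions are replicated on every workstation even when an item is scrolled out of view locally, and only one user may control an item at a time — can be illustrated with a toy replicated model. This Python sketch merely stands in for the prototype's Distributed Objects machinery; the class and method names are invented for illustration, and the replication of ownership state between peers is elided for brevity.

```python
class RankingModel:
    """A minimal stand-in for one workstation's replica of the shared state.

    grab/move/release mirror the mouse-down/drag/mouse-up protocol:
    a move is accepted only from the item's current owner, and every
    accepted move is pushed to all peer replicas regardless of whether
    the item is visible in a peer's scrolled view (relaxed WYSIWIS).
    """
    def __init__(self):
        self.positions = {}   # item -> (x, y), kept identical on all replicas
        self.owner = {}       # item -> user currently dragging it
        self.replicas = []    # peer models to notify of position changes

    def grab(self, user, item):
        # Only one user may control an item at any time.
        if self.owner.get(item) is None:
            self.owner[item] = user
            return True
        return False

    def move(self, user, item, pos):
        if self.owner.get(item) != user:
            return False
        self.positions[item] = pos
        for peer in self.replicas:   # broadcast, as over the network
            peer.positions[item] = pos
        return True

    def release(self, user, item):
        if self.owner.get(item) == user:
            del self.owner[item]
```

A second user's attempt to grab an item already being dragged simply fails, which corresponds to the highlighted border signalling exclusive control in the prototype.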
As currently implemented, the prototype requires all participants in the ranking session to log on to the program before the ranking session starts. Once the ranking session has started, nobody can join the session (i.e., the session cannot be interrupted to allow latecomers to join in). All participants, except the session initiator, may exit early.

The current version of the prototype does not provide an electronic communication channel for text or voice, and hence is suitable for face-to-face interactions only. But the inclusion of such channels in production systems is possible. With the inclusion of a text and/or voice communication channel, the system could be used in the distributed mode.

3.4 Limitations of the Design Process

The limitations of the user-centered approach employed in this study have four sources. First, the way in which the task is framed and the materials provided to the subjects can bias the observations. In this study, the participants were given a stack of cards representing individual items for ranking. An alternate way to frame the task would be to provide the group with the list of items on paper. In such a case, the individual or group may write numerical ranks beside the items. If changes to the numerical ranks are necessary, the numbers will be overwritten or arrows drawn to suggest the logical moving of the item to a different point in the list. Ultimately, the list may be recopied to reflect the final ranks decided upon. It is unlikely that the group will go through the process of cutting up pieces of paper, writing out the items, moving the pieces to arrive at the final ranking, and then recopying the items to reflect the proper sequence. Consequently, providing the initial list on paper may produce a very different design of the interface than when the list of items is provided on individual cards.
In this study, the assumption is made that it is advantageous to have a visual picture of the relative ranks during the ranking process. The use of cards provides this visual picture and naturally suggests a direct-manipulation computer interface.

Second, the computer environment makes easy certain processes that are difficult without the computer. In instances where the computer processes are easier or more intuitive, it may not be wise to reproduce the manual process on the computer. In this study, it was assumed that an icon-based interface would be more appropriate than an interface that required users to write the numeric ranks beside the items. The goal of the study was to examine what additional insight could be gained once this assumption was made. Also, since screen real estate will be limited in the software implementation, the space made available to place the cards in was less than the total space required if all cards had to be visible (i.e., the available working surface was restricted). In this way, it was hoped that results relevant to the existing technology would be produced.

Third, the observation procedure required the group to rank twenty-five occupations in terms of their importance to society. This task, however, is relatively simple, and the resulting rank may not carry any significance for members of the group. Thus, there is little incentive for participants to persist in their opinions to the point of conflict. The absence of major conflicts during the interactions precludes observations on how such conflicts are resolved. Hence, the conflict resolution process in the interface may be relatively simplistic.

Fourth, the process of arriving at the design solution from the user requirements is somewhat subjective. This is so in those instances when actions can be easily performed in the computer environment but not so easily in the non-computer environment.
For example, scrolling of the window was permitted in the prototype to enable the users to see all items. The inclusion of scrolling in the design solution is not a result of the observations, but reflects the subjective belief of the researcher that scrolling is beneficial.

3.5 Summary

In this chapter, the user-centered analysis and design process was described. Subjects were observed performing the task in a non-computer environment. The observations were analyzed to provide a list of activities that had to be supported. These activities were then clustered into sub-categories. User requirements were defined in terms of the activities that had to be supported; subsequently, the requirements were translated into design features. The limitations of the process were discussed. In the next chapter, the design features suggested here are compared to the design features suggested by other computer programmers for the same task.

Chapter 4

Comparison with Alternative Designs

The design solution suggested in the previous chapter is based on a relatively lengthy process of observation. Such a lengthy process can be better justified if it yields information that would not be intuitively apparent. A study was thus conducted to determine if programmers would arrive at the same or a similar design solution intuitively, without the benefit of observing users. Three programmers were asked to design a user interface for a multi-user ranking program. The designs of the programmers were then compared to the design solution arrived at using the user-centered approach. The comparison is limited by the parameters in the data-gathering process (for example, the experience of the programmers and the statement of the problem). These limitations are discussed at the end of this chapter.

4.1 Procedures

Three programmers were invited to complete a design exercise (Appendix 2a and 2b).
A ranking problem was described, and the programmers were asked to design a program for groups to rank items. The programmers were instructed to design a system that would require minimal learning and training for users with limited computer experience. The programmers were given three days to do this paper-and-pencil design of a multi-user ranking program. In other words, the programmers were requested to give a written description of their suggested system, but were not required to implement the design.

4.2 Suggested Designs

All the programmers were experienced (2 to 5 years of programming experience) and familiar with the concepts of command-line and menu-driven interfaces. It is not known whether the programmers had formal training in user interface design or groupware design. Two programmers had written programs with icon-driven interfaces. Only one programmer had experience working on a NeXT machine. They ranged in age from 23 to 26. Two of them are female. A brief description of the program designed by each programmer is provided below.

Fig 4.1 Programmer-suggested interface for multi-user ranking program (Programmer 1)

Programmer 1 designed a one-column ranking program with all the items aligned on the screen. In this design, the user could scroll the window to see all items. Each user would be able to see his/her own set of items only. The items could be moved vertically by dragging them with a cursor. After all users finish ranking the items, the computer would average the results to come up with the final rank [Fig 4.1]. This design is comparable to the implementation of the ranking program in MeetingPlace®, a Group Support System implemented in the NeXTStep environment at the University of British Columbia (the programmer was not associated with the development of MeetingPlace).
This design of the user interface would involve a non-multi-user implementation. All participants in the group would have to work individually and send their results to the ranking session initiator when the task was completed. The session initiator would then send the group rankings to the group.

Fig 4.2 Programmer-suggested interface for multi-user ranking program (Programmer 2)

Programmer 2 designed a ranking program using card-type icons to represent items on the screen. The screen was divided into two columns, which were themselves divided into numbered slots [Fig. 4.2]. The items to be ranked would be presented in the left column in alphabetical order. When the rank of an item was decided upon, users would move the item to the appropriate slot on the right side of the screen. If a particular item were being moved by a user, the item would change to a different color. Items that had been moved to the right side of the screen could be moved into other slots in the column. The items would shift automatically to accommodate the insertion of a new item. The design suggested by the programmer would allow single users only. However, it is possible to envision how it could be implemented in a multi-user design.

Programmer 3 suggested a ranking program analogous to the idea of a token ring. A user of the system would have to request the token in order to get control of an item for a certain period of time. The screen would be divided into several areas, with the items displayed in one area. The ranked items would be shown in a columnar ranking area. An area would be provided for suggestions and another for a forum [Fig. 4.3].
Fig 4.3 Programmer-suggested interface for multi-user ranking program (Programmer 3)

Items would be represented by card-type icons and listed in the item area of the window. The token holder could select an item from the item area and place it in the rank area by clicking on the icons which would specify the position. Users could use the suggestion box to enter suggestions of a possible rank for an item, and other participants could use the forum box to enter their opinions on the suggestion. Clicking on the "suggest" button would activate the suggestion box and let users enter their suggestions. Users would also be free to change their suggestions. Once every user agreed on an item's rank, it would be entered on the rank list when the user pressed the "Save" button. This design would also provide an "undo" function to return to the original state.

4.3 Comparison of Designs

The designs provided by the programmers included the layout of the screen and a brief description. However, the programmers did not always provide specific information on how some issues would be handled. In addition, two of the three programmers designed single-user interfaces rather than multi-user interfaces. In comparing the designs, the focus is on examining whether major differences exist between each of the programmer designs and the user-centered design.

Several differences can be seen. First, the programmers do not seem to have anticipated that groups would first categorize the items into coarse categories. Consequently, whereas the user-centered analysis suggests the need for dual modes (i.e., a free working space and a matrix mode), the programmers have provided only a one-column ranking mode. Second, the user-centered design suggests the need for multiple columns, yet none of the programmers allowed for multiple columns. Admittedly, multiple columns are not essential, but they do provide some advantages.
With the single-column design, when the number of items to be ranked is large, only a limited number of items will be visible at a given time. Observations of subjects in the non-computer mode have shown that subjects like to spread out items so all of them can be seen at the same time. With multiple columns and the ability to overlap items, the number of items that can be managed is much larger than with a single column.

User Requirements                   Programmer 1     Programmer 2     Programmer 3
Free working space                  not available    not available    not available
Stacking/overlapping                not possible     not possible     not possible
Formation of sub-categories         not possible     not possible     not possible
Multiple columns                    not possible     not possible     not possible
Automatic readjustment of ranks     possible         possible         possible
Allow change of rank of items       possible         possible         possible
Space to write private comments     no space         no space         no space; but space was allowed to write in suggestions
Space to work individually          not required     not required     no space
Recall topic                        not possible     not possible     not required
Identifying items                   not applicable   not applicable   possible
Controlling items                   not applicable   not applicable   possible

Fig 4.4 A Comparison of Programmer Designs to Requirements Derived from User-Centered Analysis

Third, user-centered analysis suggested the need for a private working space to allow individuals to write notes and rank sub-sets of items. Private notes may not be an essential feature, but the need for a private area to rank sub-sets of items was identified in the user-centered analysis. The failure of the programmers to provide a private space is attributable to two reasons. First, none of them had anticipated that groups would divide items into sub-categories. Second, Programmers 1 and 2 had designed an interface based on the editing mode (i.e.,
each user would rank the complete set of items individually, and the ranks would be aggregated in the end). In such a design, there is no need for a private space for individual group members to rank. Programmer 3 did not provide a space for individual group members to rank sub-sets.

Fourth, the identification and control issue was addressed only by Programmer 3. This is not surprising. Programmers 1 and 2 based their designs on the editing model, in which each group member has control over his/her own set of items.

Fifth and last, all three programmers incorporated the idea of direct manipulation in the interface. Programmers, in general, accept that direct manipulation interfaces require less learning and mental effort, and thus would be more appropriate for personnel with limited computer experience.

In summary, it is hypothesized that programmers assume a linear sequence of activities when groups rank items (i.e., the rank of each item is decided when the item is first considered). In reality, however, both groups and individuals (individuals were observed in pilot studies) tend to divide the overall set of items into coarse sub-sets and set aside items that they are unsure of. The sub-sets of items are ranked relative to each other, and then all the sub-sets are collapsed into one large set. The items set aside are then considered and inserted into appropriate positions. The sequence followed by the groups is not supported by any of the programmer designs. Based on conversations with programmers, it would appear that programmers tend to think from the perspective of the programming effort required and the feasibility of solutions in specific computer environments, rather than designing the interface that would be most intuitive for users.

4.4 Limitations of Analysis

The differences observed between the programmer designs and the user-centered design must be interpreted with caution. Several caveats are in order.
These include the statement of the problem, the background of the programmers, and the preliminary nature of the designs. Each of these issues is discussed in greater detail below.

First, the statement of the problem presented to the computer programmers raises some questions. A high-level statement was written with the intent of not biasing the programmers towards any particular design. However, for purposes of observation in the non-computer environment, the framing of the problem made some assumptions. These include the beliefs that icon-based designs would be preferable and that screen real estate would not be adequate to accommodate all the items. Hence, the criticism can be made that the starting point in the observation process was different from the starting point of the programmers, and so it is not surprising that the designs are different. The defense against this criticism is that while the assumptions made in framing the problem for observation were not made clear to the programmers, the programmers made similar assumptions on their own. Each of the programmers used the icon concept in his/her design. Two of the three programmers explicitly allowed for scrolling, which indicates that they anticipated more items than could be displayed on the screen at one time.

A second criticism of the problem statement could be that it was not made clear to the programmers that the users would be working concurrently. This led two of the three programmers to suggest designs that used single-user interfaces with the group ranking being arrived at by aggregation. The criticism is valid. The failure of the programmers to offer a multi-user design has not been listed as a shortcoming of the programmer designs. Instead, the designs offered by the programmers have been extrapolated to reflect a multi-user design, and comparisons have been made on that basis.
The second caveat relates to the background of the programmers. The criticisms here include (a) programmers do not design, systems designers do, (b) the programmers may not have had formal training in user interface design, and (c) the programmers may not have had formal training in groupware design. These are valid observations. The significance of the comparisons is therefore limited by the background of the programmers.

The third caveat is that the programmer designs are initial designs. In practice, it is conceivable that the designs would be tested by users, and modifications made on the basis of user feedback. Hence, the comparison presented suggests that user-centered analysis provides insight over and beyond that provided by the initial designs of programmers. The question of whether user-centered analysis as performed in this research provides greater insight than do programmer designs when the programmer adopts an iterative design-test approach remains to be seen. In the next chapter, an informal evaluation of the implemented prototype is discussed.

Chapter 5

Prototype Evaluation

A prototype of the multi-user interface for group ranking was implemented based on the results of the user-centered analysis. The prototype was then informally evaluated by users. The phrase "informal evaluation" is used to reflect the fact that no controlled study was done. Instead, subjects were observed performing a ranking task on the prototype and then asked to give verbal feedback. User feedback points to some of the problems of separating the evaluation of the interface from the evaluation of the application.

5.1 User Testing of Prototype

The user study was conducted with three teams of three people each. Each group used a single-user ranking program and the multi-user ranking program to perform a group ranking task.
The single-user ranking program is part of the Group Decision Support System Research at the University of British Columbia [Fig. 5.1]. This single-user ranking program (referred to as the SU-ranker) looks very similar to the design suggested by Programmer 1 in Chapter 4. All items to be ranked are arranged in one column, and users can use the mouse to drag and move each item vertically along the SU-ranker. On the left of the items is a list of numbers showing the rank positions of the items. Each participant ranks the items individually. The ranking with the SU-ranker can be done synchronously or asynchronously. In this study, the ranking was done synchronously. When the ranking is completed, the average ranks of the items for the group are reported.

During the evaluation study, all users were seated in the same room. They were allowed to talk during the study. However, users were asked to remain in their seats during the session, and they were not allowed to look at each other's screens.

All the groups were asked to rank twenty occupations in terms of their importance to society. There was a total of forty occupations for this study of the two ranking programs. Twenty-five of them were taken from the set used in the study of manual ranking, and fifteen new ones were added. The list of forty occupations was then randomly divided into two. This topic was chosen for two reasons. First, the topic did not require any specialized knowledge from the users, and each user should thus have had an opinion. Second, the same topic was used in the study of the manual ranking process described in Chapter 3. However, the issue of subject involvement and motivation in the ranking task is still of concern.

Before the study started, each participant was given an information sheet and a printed illustration of the multi-user ranker (MU-ranker, Appendix 3a and 3b).
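The SU-ranker's aggregation step — reporting the group's average rank for each item once every participant has submitted an individual ranking — can be sketched as follows. The exact aggregation rule is not documented here beyond "average ranks", so this Python sketch is a plausible reconstruction rather than the system's actual code.

```python
def group_ranks(individual_rankings):
    """Order items by the average of their positions across rankings.

    Each ranking is a list of items from rank 1 (top) downward; every
    ranking is assumed to contain the same items. Returns the items
    sorted by average rank, best first.
    """
    totals = {}
    for ranking in individual_rankings:
        for position, item in enumerate(ranking, start=1):
            totals[item] = totals.get(item, 0) + position
    n = len(individual_rankings)
    averages = {item: total / n for item, total in totals.items()}
    return sorted(averages, key=lambda item: averages[item])
```

With three rankers, for example, an item placed first by two rankers and second by the third receives an average rank of 4/3.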
Two groups performed the task first with the multi-user ranking program and then with the SU-ranker, while one group used the SU-ranker first. Participants were given a brief sheet of information about the ranking program they were going to use, and then asked to start on their first ranking task. Upon completion of the task, all participants completed two questionnaires (Appendix 4a and 4b). Following a ten-minute break, they proceeded to the second ranking session. All participants were asked to work as a team, and they were also informed that they could communicate orally with each other throughout the ranking session. The experimenter was present in the room to answer questions raised by the participants and to observe their reactions to the systems.

For the multi-user ranking program, one of the participants was chosen to be the ranking session initiator. This ranking session initiator typed in the topic and other relevant information. Then, all other participants logged on to the ranking program and started the ranking session. The researcher noted comments made by the participants during the performance of the task. When the participants finished the two ranking sessions, the author debriefed them and received qualitative comments about the two implementations. At the end of each session, all participants were asked the following for each ranking program: what they liked about the system, what they disliked about the system, and how they would improve the system.

Fig 5.1 Single-User Ranking Program from GDSS Research at UBC

5.2 Results and Analysis

Several issues need to be clarified before discussing the results of the user feedback. First, the MU-ranker is a dual-mode system (i.e., a free-ranking surface mode and a matrix mode), while the SU-ranker has only one mode (ranking in a single column). Second, only a few groups were observed.
Third and last, there may be an effect due to the order of the use of the rankers, which was addressed to a limited extent by having one of the groups use the rankers in the opposite order.

The results are grouped as follows. First, general observations, such as time to completion and sequence of usage, are discussed. Second, specific feedback from the users is discussed. Third, overall feedback is discussed. Last, the usage of the features provided is compared to the requirements identified in the user-centered analysis.

5.2.1 General Issues: Issues in this category include the time to completion and the typical sequence of using the interface.

Time to completion: In general, it took longer to complete the ranking task using the multi-user system (about 32 minutes on average) than when using the SU-ranker (about 18 minutes on average). In fact, one of the groups took about three times as long to complete the task with the multi-user ranking program (about 45 minutes) as it needed with the SU-ranker (about 15 minutes). With the SU-ranker, the subjects were not asked to come to a consensus. The multi-user ranker requires the group to reach consensus, and thus involves more discussion and communication. It is assumed that if groups were required to arrive at a consensus with the SU-ranker, the times to completion would be comparable in the two cases.

Typical sequence of use: For the MU-ranker, the group discussed the topic of ranking before beginning the ranking task. The group would then decide on the arrangement of the ranked items, such as placing the most important item in the upper left-hand corner and then going from the top to the bottom. During the ranking session, eight out of nine users referred to the ranking information panel and reviewed the topic of ranking. However, none of the users used the scratch pad to take notes.
Users ranked the items roughly with the tidy mode OFF, and when they finished the primary ranking, they turned the tidy mode ON. For the SU-ranker, the groups started the ranking process by verbally clarifying the topic of ranking. Then, each participant worked on his/her own list of items without further discussion. During the ranking process, there was no verbal discussion about the ranking task. Only comments and questions about the SU-ranker in general were raised. When they completed the task, the participants sent their results back to the issue initiator.

5.2.2 Specific Feedback: The improvements suggested by the participants mostly concerned low-level issues. Most participants complained that sometimes they could not move an item, especially when the multi-user ranking program was in Tidy-ON mode. This could have been due to two reasons. First, user A may have tried to move an item while user B had control of it. Second, when the multi-user ranking program is in Tidy-ON mode, it updates the position of the moved item and arranges all items again in terms of their relative positions. Messages are then sent across the network to inform the other workstations to update their screens. Thus, the program incurs considerable computing overhead, which delays the response of the mouse and the screen display.

One participant suggested that the slots in the matrix be numbered when the Tidy mode is ON. She found it confusing to figure out how the items were arranged when the Tidy mode was ON. Three users suggested that an on-line help function be added to the program to explain how the ranking program works. One participant suggested that the window of the multi-user ranking program be enlarged so that users would be able to put all twenty items in the window without overlapping.

All these suggestions have a valid basis. The response of the system was known to be slow. The software will have to be re-implemented to address that issue.
The suggestions to number the matrix and to include on-line help are useful feedback. The suggestion to enlarge the window is less useful, as it merely postpones (it eases but does not eliminate) the problem of limited screen real estate.

5.2.3 Overall Feedback: Six out of the nine participants said that they enjoyed using the multi-user ranking program because they had more discussion and they liked the physical movement involved in moving the items on the screen. However, some participants thought that the multi-user ranking program was too cumbersome.

Participants said that they liked the SU-ranker because they had total control of all items and the ultimate right to make the decision. However, when they were presented with the aggregated rankings of the group, they disagreed with several of the ranks.

The key point that surfaces from the verbal feedback is that when individuals evaluate interfaces, issues related to intuitiveness may be secondary to issues related to control and coordination. A bad interface may cause problems, but a good interface does not overcome other problems. Based on observations of the users and the few requests for assistance during the use of the interface, there is support to indicate that the multi-user interface was adequately intuitive.

5.2.4 Summary Comparison of System Usage with Requirements Identified in User-Centered Analysis: User feedback has provided some useful information, but has failed to support the need for some of the features that were included on the basis of the user-centered analysis. New features, such as numbering the slots and providing a help screen, were suggested. A comparison of the features used in the MU-ranker to the requirements initially identified indicates that most of the features were used [Fig 5.2].
REQUIREMENT                                      USEFULNESS DURING MU-RANKER EVALUATION

Be able to move items freely                     Used
Be able to group items                           Used
Be able to put items into a list                 Used
Be able to arrange items into multiple columns   Used
Be able to move items in the list                Used
Be able to align items                           Used
Be able to recall topics of concern              Used
Be able to write down comments or ideas          Not Used
Be able to point at the same item                Used
Only one user can move an item at one time       Used

Fig 5.2  Comparison of User Requirements to Usefulness During Evaluation

The need to have access to a statement of the goals of the task was made clear by the users calling up the information frequently. The usefulness of the dual mode has not been substantiated. A need for a private space was not observed. It is conceivable that the application is too simple for such a need to arise, or that these users were not sufficiently motivated or challenged. In sum, further studies are necessary to establish that the dual mode is necessary and that the private space is needed in simple one-time decisions.

Chapter 6

Concluding Remarks and Future Directions

The primary objective of the current research was to contribute to the design of multi-user interfaces for the ranking activity in group support systems. Three sub-objectives were articulated: (1) to identify user requirements for a multi-user interface and propose a design based on the requirements, (2) to examine if the user-centered process provided insight beyond the preliminary intuitive designs of programmers, and (3) to implement the user-centered design and gather feedback from users. The study makes some useful contributions towards each of the sub-objectives. However, the contributions must be viewed in the light of the methodologies used to gather the information. The study raises many issues that need to be examined further.
The contributions and the associated limitations are discussed first, followed by some of the issues that need further attention in future research programs.

6.1 Contributions and Limitations

The contributions and associated limitations are discussed in relation to the sub-objectives. First, user requirements for a multi-user interface were identified and a design proposed. Some of the needs identified are (a) a dual-mode system (i.e. a free working surface and a matrix mode) with the ability to toggle between the two modes, (b) the ability to overlap items, (c) multiple cursors, and so on. The requirements identified and the resultant design are limited by the user-centered approach used. The first limitation is that the design for the multi-user interface for group ranking is biased by the framing of the initial observed exercise. In this study, it was assumed that an icon-type interface would be most appropriate, and so the subjects were provided the items typed on cards. Framing the exercise in this fashion leads to designs that are icon-based. A second limitation is that the task was relatively simple and did not require any major emotional involvement of the participants. Hence no serious conflict was observed, and consequently the observations did not reflect how serious conflicts would be resolved by groups in a non-computer environment. This resulted in a fairly simple mechanism to control access to items in the prototype implemented. Third, some manipulations are possible with computers that are not possible in a non-computer mode. The user-centered design does not identify features that could be based on such manipulations.

The second contribution is that the comparison of the user-centered design to the intuitive designs of the programmers suggests that user-centered design does yield insight beyond the preliminary intuition of programmers.
For instance, programmers failed to anticipate that groups would cluster items into coarse categories before doing the actual ranking. Findings in this category are limited by the following. First, only three programmers were asked to provide a design. These programmers had limited experience (2 to 5 years) and no special experience or training in designing interfaces or groupware. Second, they were allowed a limited amount of time (3 days) and were not asked to either revise their design or do an iterative design. Last, the statement of the problem may have been inadequate.

The third contribution became evident from observing users work with the prototype interface. This contribution is the realization that (a) not all required features are identified in the initial user-centered analysis (for example, the need to number the slots was not identified in the user-centered analysis), (b) not all features that seemed appropriate during the initial analysis were used during the trials (for example, the space for private notes), and (c) the stresses related to arriving at a consensus may bias the evaluation of the interface. The findings based on user feedback are limited by the following. First, only a few groups used the system. Second, the findings are based on feedback that was provided informally, not on controlled studies.

6.2 Future Directions

The study has made useful contributions, but has also raised many questions which need to be examined further. A few of the questions are discussed here.

The effect of the initial framing of the problem: One of the limitations of the study is that the design features identified are biased by the initial artifacts provided to the subjects performing the ranking task. It would be interesting to conduct similar studies, but with different sets of initial artifacts. The purpose would be twofold.
One purpose would be to examine if there is a possible way to map the initial artifacts to the design features that emerge. The second purpose would be to see if it is possible to triangulate on a core set of design features.

Justifying the lengthy user-centered analysis: In this study, the user-centered design yielded insight beyond the preliminary intuitive designs of programmers. A follow-up study could focus on comparing the user-centered design based on observing subjects in the non-computer environment to an iterative design starting from programmer intuition. It is conceivable that multiple iterations of the design-test cycle would lead to the same or better results than those obtained by analyzing observations in the non-computer environment.

Going beyond observing users: In this study, subjects were observed while ranking in the non-computer environment. Such observations present a pattern of interactions that are based on manipulations efficient in the non-computer environment. Computers allow easy performance of many manipulations which may be cumbersome or impossible to perform in the non-computer environment. So, basing interface design solely on observations in the non-computer environment may lead to sub-optimal designs. The challenge to the researcher is to formulate a methodology to identify and incorporate those manipulations that are easier to perform with the computer and that are beneficial to the users.

Comparing group rankings based on the aggregating model and the dynamic interactive model: Existing GSS aggregate individual rankings to come up with the group ranking (i.e. they use the aggregating model). The current design would enable group members to come up with the group ranking on the basis of a more dynamic interaction. It would be interesting to compare empirically the final group preferences for the rankings in the aggregating model to the group preferences for ranks in the dynamic interaction model.
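As a point of reference, the aggregating model mentioned above can be reduced to a few lines. This is a minimal sketch assuming mean-rank aggregation; the thesis does not state which aggregation rule existing GSS (or the SU-ranker) actually apply, and the function name is invented.

```python
# Minimal sketch of the aggregating model: each member ranks the items
# individually and the group ranking is derived from the mean rank.
# Illustrative only; the actual aggregation rule is not specified here.

from statistics import mean

def aggregate_rankings(rankings):
    """rankings: a list of per-user rankings, each an ordered list of
    items (index 0 = most important). Returns the group ranking."""
    items = rankings[0]
    avg_rank = {it: mean(r.index(it) for r in rankings) for it in items}
    return sorted(items, key=lambda it: avg_rank[it])

# Three members ranking four occupations individually:
group_ranking = aggregate_rankings([
    ["Physician", "Lawyer", "Farmer", "Barber"],
    ["Physician", "Farmer", "Lawyer", "Barber"],
    ["Farmer", "Physician", "Lawyer", "Barber"],
])
# group_ranking == ["Physician", "Farmer", "Lawyer", "Barber"]
```

In the dynamic interaction model, by contrast, no such aggregation step exists: the group negotiates a single shared arrangement directly, so disagreements surface during the ranking rather than after it.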
In short, the field of multi-user interfaces is in its infancy. As systems to support collaborative work proliferate, the need for multi-user interfaces will proliferate correspondingly. Efforts to identify the requirements for multi-user interfaces for specific tasks, and to develop efficient methods to build multi-user interfaces, need to continue to meet the anticipated proliferation of collaborative applications.

REFERENCES

Baecker, Ronald M.; Nastos, Dimitrios; Posner, Ilona R.; and Mawby, Kelly L. "The User-centered Iterative Design of Collaborative Writing Software." Proceedings of the Conference on Human Factors in Computing Systems, INTERACT '93 and CHI '93, 1993, p. 399-405.

Baron, Sheldon; Kruser, Dana S.; and Huey, Beverly Messick (Eds.) Quantitative Modeling of Human Performance in Complex, Dynamic Systems. National Academy Press, Washington, D.C., 1990.

Bier, Eric A.; and Freeman, Steve. "MMM: A User Interface Architecture for Shared Editors on a Single Screen." Proceedings of the Fourth Annual Symposium on User Interface Software and Technology, 1991, p. 79-86.

Bewley, W.L.; Roberts, T.L.; Schroit, D.; and Verplank, W.L. "Human Factors Testing in the Design of Xerox's 'Star' Office Workstation." Human Factors in Computing Systems, A. Janda (Ed.), 1983, p. 72-77.

Boritz, James; Booth, Kellogg S.; and Cowan, William B. "Fitts's Law Studies of Directional Mouse Movement." Proceedings of Graphics Interface, Canadian Information Processing Society, Toronto, 1991, p. 216-223.

Callahan, Jack; Hopkins, Don; Weiser, Mark; and Shneiderman, Ben. "An Empirical Comparison of Pie vs. Linear Menus." Proceedings of Computer-Human Interaction, 1988, p. 95-100.

Card, S.K.; Moran, T.P.; and Newell, A. "Computer Text-Editing: An Information Processing Analysis of a Routine Cognitive Skill." Cognitive Psychology, 12, 1980a, p. 32-74.

Card, S.K.; Moran, T.P.; and Newell, A.
"The Keystroke-Level Model for User Performance Time with Interactive Systems." Communications of the ACM, v.23, 1980b, p. 396-410.

Cullingford, R. "SAM." Inside Computer Understanding, R.C. Schank and C.K. Riesbeck (Eds.), 1981, p. 75-119.

Davis, Fred D. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology." MIS Quarterly, Sept 1989, p. 319-339.

Dennis, Alan R.; George, Joey F.; Jessup, Len M.; Nunamaker, Jay F. Jr.; and Vogel, Douglas R. "Information Technology to Support Electronic Meetings." MIS Quarterly, Dec 1988, p. 591-624.

DeSanctis, G., and Gallupe, R.B. "A Foundation for the Study of Group Decision Support Systems." Management Science 33:5, 1987, p. 589-609.

Dewan, Prasun, and Choudhary, Rajiv. "Experience with the Suite Distributed Object Model." Proceedings of the Second IEEE Workshop on Experimental Distributed Systems, 1990, p. 57-63.

Dewan, Prasun, and Choudhary, Rajiv. "Primitives for Programming Multi-User Interfaces." Proceedings of the 4th ACM SIGGRAPH Conference on User Interface Software and Technology, Nov 1991b, p. 69-78.

Dickson, Gary W.; Poole, Marshall Scott; and DeSanctis, Gerardine. "An Overview of the Minnesota GDSS Research Project and the SAMM System." Computer Augmented Teamwork: A Guided Tour, G.R. Wagner (Ed.), 1990.

Eberts, Ray E., and Eberts, Cindelyn G. Intelligent Interfaces: Theory, Research and Design. Elsevier Science Publishers B.V. (North-Holland), 1989, p. 69-127.

Ellis, Clarence A.; Gibbs, Simon J.; and Rein, Gail L. "Groupware: Some Issues and Experiences." Communications of the ACM 34:1, Jan 1991, p. 38-58.

Elwart-Keys, Mary; Halonen, David; Horton, Marjorie; Kass, Robert; and Scott, Paul. "User Interface Requirements for Face to Face Groupware." Proceedings of Computer-Human Interaction, CHI '90, Apr 1990, p. 295-301.

Gillan, Douglas J.; Holden, Kritina; Adam, Susan; Rudisill, Marianne; and Magee, Laura.
"How Does Fitts' Law Fit Pointing and Dragging?" Proceedings of Computer-Human Interaction, CHI '90, Apr 1990, p. 227-234.

Gould, John D., and Lewis, Clayton. "Key Principles and What Designers Think." Communications of the ACM 28:3, Mar 1985, p. 300-311.

Greenberg, Saul; Roseman, Mark; and Webster, Dave. "Issues and Experiences Designing and Implementing Two Group Drawing Tools." Proceedings of the Twenty-fifth Annual Hawaii Conference on Systems Sciences, 1992, p. 139-150.

Grudin, Jonathan. "The Computer Reaches Out: The Historical Continuity of Interface Design." Proceedings of Computer-Human Interaction, CHI '90, Apr 1990, p. 261-268.

Ishii, H. "TeamWorkStation: Towards a Seamless Shared Space." Proceedings of the Conference on Computer Supported Cooperative Work, CSCW '90, Los Angeles, Oct 1990, p. 13-26.

Ishii, H.; and Arita, K. "ClearFace: Translucent Multiuser Interface for TeamWorkStation." Proceedings of the Second European Conference on Computer-Supported Cooperative Work, Sept 1991, p. 163-174.

Jameson, A. "How is an Experimental Subject Like a Computer User?" Acta Psychologica 69, 1988, p. 279-298.

Lu, Iva M., and Mantei, Marilyn M. "Idea Management in a Shared Drawing Tool." Proceedings of the Second European Conference on Computer-Supported Cooperative Work, 1991, p. 97-112.

Mandviwalla, M.; Gray, P.; Olfman, L.; and Satzinger, J. "The Claremont GDSS Support Environment." Proceedings of HICSS-91, 1991, p. 600-609.

Mantei, Marilyn. "Capturing the Capture Lab Concepts: A Case Study in the Design of Computer Supported Meeting Environments." Proceedings of Computer-Supported Cooperative Work, 1988, p. 257-270.

Mantei, Marilyn M. "CSCW-WCSC: Computer-Supported Cooperative Work - What Changes for the Science of Computing." Proceedings of Graphics Interface '92, 1992, p. 130-139.

Marmolin, Hans; and Sundblad, Yngve.
"An Analysis of Design and Collaboration in a Distributed Environment." Proceedings of the Second European Conference on Computer-Supported Cooperative Work, Sept 1991, p. 147-162.

Olson, J.R., and Olson, G.M. "The Growth of Cognitive Modelling in Human-Computer Interaction Since GOMS." Human-Computer Interaction, 5, 1990, p. 221-265.

Posner, Ilona R., and Baecker, Ronald M. "How People Write Together." Proceedings of the Twenty-fifth Annual Hawaii International Conference on Systems Sciences, 1992, p. 127-138.

Roseman, Mark, and Greenberg, Saul. "GroupKit: A Groupware Toolkit for Building Real-Time Conferencing Applications." Proceedings of Computer-Supported Cooperative Work, CSCW '92, 1992, p. 43-50.

Rumelhart, D.E.; and Norman, D.A. "Accretion, Tuning and Restructuring: Three Modes of Learning." Schooling and the Acquisition of Knowledge, J.W. Cotton and R. Klatzky (Eds.), 1983.

Sarin, Sunil, and Greif, Irene. "Computer-Based Real-Time Conferencing Systems." IEEE Computer 18:10, Oct 1985, p. 33-49.

Schank, R.C., and Abelson, R.P. Scripts, Plans, Goals and Understanding. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1977.

Sellen, A., and Nicol, A. "Building User-Centered On-line Help." The Art of Human-Computer Interface Design, Brenda Laurel (Ed.), Reading, Mass.: Addison-Wesley Publishing Co., 1991, p. 143-153.

Souza, Flavio de, and Bevan, Nigel. "The Use of Guidelines in Menu Interface Design: Evaluation of a Draft Standard." Human-Computer Interaction - INTERACT '90, 1990, p. 435-440.

Streitz, Norbert A. "Cognitive Compatibility as a Central Issue in Human-Computer Interaction: Theoretical Framework and Empirical Findings." Cognitive Engineering in the Design of Human-Computer Interaction and Expert Systems, G. Salvendy (Ed.), Elsevier Science Publishers, 1987.

Tang, J.C.
"Listing, Drawing and Gesturing in Design: A Study of the Use of Shared Workspaces by Design Teams." Research Report SSL-89-3, Xerox Palo Alto Research Center, Palo Alto, CA, 1989.

Tang, J.C., and Minneman, S.L. "VideoDraw: A Video Interface for Collaborative Drawing." Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Apr 1990, p. 313-320.

Thimbleby, Harold. User Interface Design. ACM Press / Addison-Wesley Publishing Company, New York, 1990, p. 18.

Thomas, K., and O'Brien, R. "Occupational Status and Prestige: Perceptions of Business, Education, and Law Students." Vocational Guidance Quarterly, September, 33:1, 1984, p. 70-75.

Wasserman, Anthony I.; Pircher, Peter A.; Shewmake, David T.; and Kersten, Martin L. "Developing Interactive Information Systems with the User Software Engineering Methodology." Readings in Human-Computer Interaction, 1987, p. 508-527.

Wei, K.K.; Tan, B.C.Y.; and Raman, K.S. "SAGE: A HyperCard-Based GDSS." Proceedings of HICSS-92, 1992, p. 14-22.

Young, R.M.; Green, T.R.G.; and Simon, T. "Programmable User Models for Predictive Evaluation of Interface Designs." Proceedings of Computer-Human Interaction, Human Factors in Computing Systems, CHI '89, 1989, p. 15-19.

APPENDIX 1

LIST OF ITEMS USED IN THE MANUAL RANKING SESSION

The following twenty-five occupations were taken from Thomas & O'Brien ['84] and were used as items to be ranked in the manual ranking study.

Army Captain
Banker
Barber
Bricks Carrier
Bus Driver (Motorman)
Carpenter
Civil Engineer
Coal Miner
Ditch Digger
Electrician
Elementary School Teacher
Farmer
Foreign Missionary
Grocer
Insurance Agent
Janitor
Lawyer
Machinist
Mail Carrier (Postman)
Physician
Plumber
Soldier
Superintendent of Schools
Travelling Salesman
Truck Driver

APPENDIX 2a

PROGRAMMER STUDY: BACKGROUND INFORMATION QUESTIONNAIRE

Study of Human-Computer Interaction

ID:_____

1.
Programming experience (you can specify courses you've taken and your working experience [in years]):

2. What kind of computer systems have you worked with?
   Mac___ IBM___ Sun___ NeXT___ other___

3. Do you have experience in application programming? Yes/No
   If yes, please specify:
   Accounting application___ Order processing___ Database system___ others___

4. Have you written programs with the following interfaces?
   command line interface___ menu-driven___ icon-driven___ others___

APPENDIX 2b

PROGRAMMER STUDY: INSTRUCTIONS

Ranking Task

You are asked to write a program which allows users to perform a ranking task. The ranking program will be used by more than one user, and the result captured by the program should be the consolidated result of all users. For instance, a group of 4 users is asked to rank 25 job descriptions in terms of their importance to the human community. Assume that the users of the program are administrative personnel with a minimal amount of computing experience. Thus, the program should be easy to use and require a minimal amount of learning and training. Describe your design of the program, specifically the interface design. For instance, be specific in describing how you would present the 25 items to users and how you would consolidate the results of all users. You don't have to worry about the technical details of the implementation in your design.

APPENDIX 3a

USER FEEDBACK STUDY: INSTRUCTIONS (I)

Study of Human-Computer Interaction
General Information (MG-1)

Introduction

Ranking is the task in which you are given a list of items and are asked to arrange the items in some order depending on a specified criterion.

What will you be asked to do in this study?

You will be performing a ranking task using two different computer programs: System A and System B. Twenty jobs/professions are given to you in each case, and you are asked to rank those 20 items in terms of their importance to society.
System A: In this system, each person in the group will see exactly the same order of items on the screen as the other people see. Any changes made on the screen of one machine will appear on the screens of all the other machines.

You are going to work in a group, and there will be no further instructions given. Please feel free to explore the system; you can have as much time as you want.

APPENDIX 3b

USER FEEDBACK STUDY: INSTRUCTIONS (II)

Study of Human-Computer Interaction
General Information (MG-2)

What will you do in this part?

In this part of the study, you will be using System B to perform the ranking task.

System B: In this system, each person in the group will not see the same order of items as the other people see. Any changes made on one screen will not appear on the screens of the other machines.

You are going to work in a group, and there will be no further instructions given. Please feel free to explore the system; you can have as much time as you want. Again, thanks for participating in this study!

APPENDIX 4a

USER FEEDBACK STUDY: BACKGROUND INFORMATION QUESTIONNAIRE

Study of Human-Computer Interaction

ID:________

Background Information:

1. Are you currently working? Yes___ No___
   If yes, your position is: _______________

2. You are in the age group:
   a. 18-24
   b. 25-34
   c. 35-44
   d. 45-54
   e. above 54

3. Have you worked with a computer before? Yes___ No___
   If yes, what kind of computer have you worked with? (e.g. IBM, Macintosh...)

4. What do you use a computer for? (e.g. word processing, spreadsheet...)

5. Have you had any computer courses before? Yes___ No___
   If yes, what are they? (e.g. word processing, programming...)

APPENDIX 4b

USER FEEDBACK STUDY: QUESTIONNAIRE

Study of Human-Computer Interaction

ID:________
Group:
Ranking Program:

1. What do you like about System A (Multi-user Ranker)?

2. What do you dislike about System A (Multi-user Ranker)?

3. How would you suggest improving System A (Multi-user Ranker)?

4.
What do you like about System B (GDSS Ranker)?

5. What do you dislike about System B (GDSS Ranker)?

6. How would you suggest improving System B (GDSS Ranker)?


Citation Scheme:


Citations by CSL (citeproc-js)

Usage Statistics



Customize your widget with the following options, then copy and paste the code below into the HTML of your page to embed this item in your website.
                            <div id="ubcOpenCollectionsWidgetDisplay">
                            <script id="ubcOpenCollectionsWidget"
                            async >
IIIF logo Our image viewer uses the IIIF 2.0 standard. To load this item in other compatible viewers, use this url:


Related Items