THE EFFECTS OF USING ICONS AND DIRECT MANIPULATION INTERFACES: AN EMPIRICAL STUDY

by

ROBERT KAR LOK LEE

B.Sc., University of Victoria, 1981
M.P.A., University of Victoria, 1985

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE (BUSINESS ADMINISTRATION) in THE FACULTY OF GRADUATE STUDIES, COMMERCE AND BUSINESS ADMINISTRATION

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
December 1991
© Robert Kar Lok Lee, 1991

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

(Signature)
Department of Commerce and Business Administration
The University of British Columbia
Vancouver, Canada
Date: December 30, 1991

Abstract

With the advancements in computing technology, applications have been developed before the true utility of the innovation is known and before appropriate testing can be undertaken to determine the best means of implementing it. Many failures in information systems have been traced to the failure of systems professionals to adequately consider the human component of the system and the relationship between the individual and technology. The purpose of this study was to examine the performance impact of using icons and direct manipulation in human-computer interfaces.

An icon based interface uses pictures or images to represent commands, objects, and system information. These images can be invoked, activated, or moved by the user simply by pointing to the desired image with a device such as a mouse or a light pen. The distinctive way in which graphical information is processed in the mind provides sound theoretical grounds to suggest that icon based interfaces can improve user performance and speed learning when compared to non-icon based interfaces.

A direct manipulation interface refers to an interface with three unique properties: 1) continuous representation of the object of interest; 2) physical actions or "labeled button presses" instead of complex syntax; and 3) rapid, incremental, reversible operations whose impact on the object of interest is immediately visible. The advantage of direct manipulation is that users are in direct control of the model world rather than working through some intermediary such as a command language.

A laboratory experiment was conducted to examine the icon and direct manipulation effects. Four types of interfaces (icon based direct manipulation, non-icon based direct manipulation, icon based non-direct manipulation, and non-icon based non-direct manipulation) were tested using a simple office task. Twelve subjects were randomly assigned to each of the four interface types from the 48 recruits. Each subject was required to complete three experimental sessions in two separate time periods. The dependent variables were total time taken to complete the task, number of actions required to complete the task, number of actions per second, number of errors made in the task, and the percentage of incorrect actions.
Results show that there is not sufficient evidence to conclude that icon based interfaces are better than non-icon based interfaces. However, there is a strong indication that direct manipulation interfaces are superior to non-direct manipulation interfaces in terms of time taken to perform the task and number of actions completed per second. There is also an indication that, over time, subjects could master the interface and perform the task more efficiently.

Several directions for future research emerge from the results of the study. Redesign of the interfaces, consideration of past computer experience, and an extended period for measuring the time effect are suggested for further research on this topic.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgement
Chapter 1: Introduction
  1.1 Definition of Icon Based Interface
  1.2 Examples of Icon Based Interface
  1.3 Definition of Direct Manipulation
  1.4 Examples of Direct Manipulation Systems
  1.5 Organization of the Thesis
Chapter 2: Using Icons in Human-Machine Interfaces
  2.1 A Taxonomy of Icons
  2.2 Advantages of Icon Based Interfaces
  2.3 Disadvantages of Icon Based Interfaces
  2.4 Empirical Research on Icon Based Interfaces
Chapter 3: The Concept of Direct Manipulation
  3.1 Two Aspects of Directness
    3.1.1 Distance
    3.1.2 Direct Engagement
  3.2 Empirical Research on Direct Manipulation Systems
  3.3 Problems with Direct Manipulation
Chapter 4: Research Methodology and Hypotheses
  4.1 Methodology
    4.1.1 Internal Validity
    4.1.2 External Validity
  4.2 Hypotheses Development
    4.2.1 Icon Effect
    4.2.2 Direct Manipulation Effect
    4.2.3 Time Effect
    4.2.4 Interaction Effects
Chapter 5: The Experiment
  5.1 The Interface Design
  5.2 The Experimental Design
  5.3 Subject Selection
  5.4 The Task Environment
  5.5 The Experimental Procedures
Chapter 6: Experimental Results
  6.1 Icon Effect
  6.2 Direct Manipulation Effect
  6.3 Time Effect
  6.4 Interaction Effects
  6.5 General Observation
Chapter 7: Conclusions
  7.1 Discussion of the Study
  7.2 Limitation of the Study
  7.3 Directions for Future Research
References
Appendix A: Screen Displays for the Four Interfaces
Appendix B: Task Descriptions
Appendix C: Computer Printouts of MANOVA Tables

List of Tables

Table 1: Demographic Data of the Subjects
Table 2: Time Taken to Complete the Task by Icon, Manipulation, and Time Effects
Table 3: Number of Actions by Icon, Manipulation, and Time Effects
Table 4: Number of Actions Per Second by Icon, Manipulation, and Time Effects
Table 5: Number of Errors Made in the Task by Icon, Manipulation, and Time Effects
Table 6: Percentage of Errors by Icon, Manipulation, and Time Effects

List of Figures

Figure 1: The Two Gulfs of Directness
Figure 2: The Relationships between Distances and Gulfs
Figure 3: The 2 x 2 x 3 Factors Repeated Measurement Design
Figure 4: The Interaction Effect between Direct Manipulation and Time

Acknowledgement

Many thanks to my thesis supervisor, Professor Izak Benbasat, who gave me his time and ideas during the course of this study and in the preparation of this thesis. I also express my sincere appreciation to my wife Fanny, for her patience and understanding when countless hours of my evenings and weekends were spent working on this thesis, and for her encouragement and support all along.
My special thanks goes to my employer, IBMCanada Ltd., for its generous financial support under the Tuition Refund Program.ixChapter 1: IntroductionComputerized information systems are proliferating in the general business and officeenvironment. A major problem associated with this rapid growth is that office computersystems become more complex, both in terms of usage and in terms of their capabilities.This complexity makes the system difficult to operate by novice users who are unfamiliarwith computers. Furthermore, with the many advancements in computing technology,applications have been developed before the true utility of the innovation is known andbefore appropriate testing can be undertaken to determine the best means of implementingit. Many failures in information systems have been traced to the failure of systemprofessionals to adequately consider the human component of the system, and therelationship between the individual and technology. This aspect is particularly importantin the area of office automation as the individuals utilizing these systems will be casualusers whose primary task is not associated with expertise in computing (Martin, 1973).This class of users is likely to have little patients for coping with the particularidiosyncrasies of a computer based system (Benbasat and Wand, 1984).This study examined the effects of using icons and direct manipulation inhuman-computer interfaces. The study was based on two design principles: icon and directmanipulation. A laboratory experiment was conducted to test and compare four differenttypes of interfaces: icon based direct manipulation, icon based non-direct manipulation,non-icon based direct manipulation, and non-icon based non-direct manipulationinterfaces.1Traditionally, human-computer interface designs have been based on commandlanguages. They provide a precise and powerful means of interaction. However, mostcommand based interfaces require an exact syntactical match. Users must learn, by rote,all of the commands and variations they wish to use. They may also be required to knowthe technical side of the computer system (Kuo, 1988) and the rules for using thecommands (Houghton, 1986). This type of interfaces can be inappropriate for novice orcasual users who often find them difficult to learn, a burden to remember, and inconsistentacross products (Lodding, 1983).A menu-driven interface is an alternate and improved version of command languagebased interface design. The majority of this type of interface are text based interfaces. Textmenus, while not requiring the user to learn specific command syntax, demand that the userlearn a specific nomenclature in order to identify the proper command (Lodding, 1983).Menus sometimes are slow to appear on the screen and therefore can hamper userproductivity (Houghton, 1986). If there are too many levels, users can get lost in thesystem. This type of design is appropriate for novices and users with minimal computertraining, but are generally too slow and contain too much explanatory text for skilledoperators.The increased popularity of computer graphics opens up a new channel forhuman-computer interface design. Many late model computer systems, especially thosedesigned for educational purposes such as the Apple Macintosh, utilize images to representunderlying objects in a computer system. These objects may be processes or data, and therepresentation can indicate their attributes, their association or their states. 
An importantfeature of images is that they may be used to indicate characteristics of these system2objects, even when the end-user is unfamiliar with the image in question (Gittins, 1986).This type of interface is referred to as an icon based interface.1.1 Definition of Icon Based InterfaceLodding (1983) defines icon based communication as the use of images to convey ideasor information in a nonverbal manner. The images are chosen to relate to the idea eitherby resemblance (pictograph), by analogy (symbol), or by being selected from a previousdefined and learned group of arbitrarily designed images (sign). In most icon basedinterfaces, images that represent commands, objects, and system information can beinvoked, activated, moved by the user simply by pointing to the desired image with a devicesuch as a mouse or a light pen.The degree to which users can rely on a particular image to carry a specific message ishard to determine. If an image bears a close resemblance to a particular object, the imagebecomes very clear and easy to recognize. However, symbols and signs, as defined above,will need additional clarification to ensure users can accurately decode the message.Chapter Two will elaborate on the use of icons in human-computer interfaces.1.2 Examples of Icon Based InterfaceIn the mid eighties, icon based interfaces were mostly hardware dependent. They werelimited to a specific item of hardware or a particular product range, and might also belimited to a specific operating system. These icon based interfaces were referred to aspropriety systems (Gittins, 1986). The most popular and generally available proprietysystems included the Apple Macintosh and the Xerox Star. Other propriety systems3include products by the Sun, Apollo, and Digital Research (Gittins, 1986). There were alsoresearch prototypes such as SAPPHIRE (Myers, 1984), and CEDAR (Tietalman, 1985).All of these systems had fast performance and a rich repertoire of graphical elements(Gittins, 1986). During the same period, there were few examples of any relatively portableicon systems -- interfaces that were independent of either operating systems or graphicdevices (Gittins, 1986). One example was the UNICON system (Gittins, Winder, and Bez,1984). The disadvantage of this portable system was its limited graphics productioncapability.Today, the situation has been reversed. The Apple Macintosh is probably the lonesurvivor of propriety systems. On the contrary, portable icon systems have beenproliferating. New technology allows portable icon systems run on almost all computersystems. There are thousands of commercially available portable icon based software.Some are available as utility programs. An example is Disk Operating Systems Version 5.0.Others are application based, for example, Microsoft Windows which is a programmanagement product, Icon Pak which is a collection of icons designed to use inconjunction with Microsoft Windows.1.3 Definition of Direct ManipulationThe term "Direct Manipulation", according to Shneiderman (1982, 1983, 1987), refersto interfaces having these properties:1. Continuous representation of the object of interest.2. Physical actions or "labeled button presses" instead of complex syntax.43. Rapid, incremental, reversible operations whose impact on the object of interest isimmediately visible.The concept of direct manipulation is not new. Sutherland (1963) first proposed agraphical design program called Sketchpad. 
The goal was to devise a program that wouldmake it possible for a person and a computer "to converse rapidly through the medium ofline drawings." Sutherland also discussed the power of graphical interfaces, the conceptionof a display as "sheets of paper", and the use of pointing devices.1.4 Examples of Direct Manipulation SystemsMany contemporary systems utilize the direct manipulation principle. Some of theexamples are display editors, electronic spreadsheets, spatial data management systems,arcade video games, and computer-aided design systems (see Shneiderman, 1982, 1983, and1987 for detailed discussion).Display editors utilize the entire screen area for editing. They are capable of displayinga full 24 to 66 lines of text, showing the document in a final, ready-to-print form, makingthe cursor visible and easy to manipulate by updating the results or accepting a pointingdevice such as a mouse for actions. Users can use a mouse to cut and paste the documentdirectly on the screen.Electronic spreadsheets simulate a matrix (tabular) worksheet and hence make it easyfor novices to comprehend the objects and permissible actions. Similar to the text editors,spreadsheets also show a ready-to-print form of the final calculated result on the screen.Many spreadsheet programs integrate advanced features such as graphics and database.5Some advanced systems even permit the display of data items and graphics on the samescreen and be manipulated directly on the screen.Spatial data management systems give a spatial representation of data in the form of amap that provides a familiar model of reality. These systems are commonly found ingeographic applications. A pointing device is used to traverse information spaces or zoomin on a map and see more details about the object of interest.Contemporary arcade video games provide stimulating entertainment, a challenge fornovices and experts, and many intriguing lessons in the human factors of interface design.What makes a video games so attractive, aside from its entertainment value, is that mostof them are simple to understand since it is an abstraction of reality -- learning is byanalogy. There are no commands to remember. Only physical actions, such as buttonpresses, joystick motions, knob rotations, or hand movement, are used. Results of anaction are shown immediately on the screen.Computer-aided design systems have long adopted the direct manipulation principles.The operator uses light pen to touch the screen in order to move the components to designthe object of interest. When the design is complete, the system can provide valuableinformation about the resulting design and warn the designer about inconsistencies ormanufacturing problems. The facility of using these systems stem from the capacity tomanipulate the object of interest directly and to generate multiple alternatives rapidly.61.5 Organization of the ThesisThis paper consists of two major parts: a review of literature on the underlying theoriesof icons and direct manipulation and a report of the experiment. Chapter Two describesthe theoretical aspects of icons, the advantages and disadvantages of using them incomputer interfaces, and reports on the results of previous empirical studies on icons.Chapter Three describes the theoretical aspects of direct manipulation -- distance anddirect engagement, reports on the results of previous studies on direct manipulation, anddiscusses some problems associated with direct manipulation.The experiment is presented in Chapter Four to Chapter Seven. 
Chapter Four discussesissues of internal and external validity of an experiment, how this study is designed tohandle the problems, and presents the research hypotheses. The purpose of the study is toexamine the performance impact of using icons and direct manipulation inhuman-computer interface design. Relevant performance variables include time, numberof actions, and errors. Chapter Five outlines the experimental design and procedures whileChapter Six reports the fmdings. Chapter Seven presents the conclusions and thelimitation of the study. In addition, suggestions for future research are discussed.7Chapter 2: Using Icons in Human-Machine InterfacesIcon based interfaces may seem easier to users than equivalent text menu or commandline interfaces. They only require users to recognize the images on the screen. However,this recognition can be "fuzzy" because images can be identified even though users couldnot accurately describe the image in detail if asked to do so (Lodding, 1983).The study of icons is fragmented amongst a variety of disciplines. According toLodding (1983), no unique discipline of iconic communication exists. To study theeffectiveness of icon based human-computer interfaces, it requires an understanding andintegration of such areas as art, cognitive and perceptual psychology, learning, linguistics,and computer graphics.2.1 A Taxonomy of IconsA taxonomy of icons can aid in ordering the confusion of images that are used to conveyinformation. This ordering of images permits us to identify fundamental design styles, thetypes of information conveyed, and in what manner that external factors, for example,context and time, influence the interpretation of the icons.Lodding (1983) presents a taxonomy of icons based on the function supported byimages. He uses Arnheim's (1969) definition of image functions: picture, symbol, and sign,to classify icons into three design styles: representational, abstract, and arbitrary, with eachdesign style being associated with each image function.8A representational image is one that serves as an example for a general class of objects.Examples are gas pump and accommodation icons near highway exits. These icons do notinclude detail components of the real world artifacts that they are representing, but onlycarry the relevant quantities such as the shape of the basic structure. Typical icon basedbusiness system interfaces such as those found on the Apple Macintosh computer andMicrosoft Window software utilize representational icons. The icons are intended to berepresentative of a set of objects found in a typical office environment, for example, desks,garbage cans, file cabinets, in and out baskets. The icons which were used in this studybelong to this type.Abstract icons are used to present a concept to the viewers that is apart from theconcrete images. These icons are intended to convey a certain concept rather than torepresent the objects. An example is the fragile symbol. In an abstract design the image isreduced to its essential elements in an attempt to focus upon that property of the imagethat carries the intended concept.Finally, arbitrary icons are those with special meaning assigned. This form of design isemployed when the purpose of the icon is not one ofportrayal. The meaning of these iconsmust be learned. They cannot be easily recognized from the context displayed. Arbitraryicons are generally used where there is no means of tying the intended message to an objector quality of an object. 
An example is the radiation warning sign.2.2 Advantages of Icon Based InterfacesPeople have always found it natural to communicate with images. This stems from thefact that visual information and language are processed quite differently in the human9brain. Robey and Taggart (1982) suggest that in most people the right hemisphereprocesses visual/spatial information while the left controls verbally based analytic tasks.In addition to this, the two types of information are processed quite differently. The lefthemisphere functions in a serial mode, processing information one piece at a time. Theright hemisphere functions in a parallel mode, taking a large amounts of information all atonce.Essentially, verbal information such as text and numerals is "read" in a sequentialmanner, with the information being buffered in a short-term verbal memory. The capacityof this short-term verbal memory is restricted (Miller, 1956). The information is seriallyprocessed in the short-term verbal memory and then transferred to the long-term memory.It is believed that there is no upper limit to the long-term memory capacity. When theinformation is recalled, the process is reverse: sequential process starts from the long-termmemory and then moves to the short-term verbal memory (Kent, 1981).Images are processed quite differently. An image is captured as a whole, processed in aparallel manner. The semantics of the image are then transferred directly into thelong-term memory from the sensory memory or via a short-term visual memory analogousto that of the verbal memory. As with the verbal long-term memory, there seems to be noupper bound to the visual memory capacity (Miller, 1956).The superiority of visual information processing over verbal information processinggives icon based interfaces a supposedly advantage over non-icon based interfaces(Shepard, 1967). It is thought to be easier for individuals to recognize and recall imagesthan it is to recall verbally based stimulus such as those found in traditional commandbased interfaces. In fact, people have almost perfect recognition for objects they have seen1 0previously (Paivio, 1971). This may relate to Miller's (1956) notion of chunking wherebyvisual symbols carry a much denser information load than verbal information, thus moreinformation can be conveyed in a more compact form.Icon based interfaces could affect learning. This is based on the assumption that peoplerely on their prior knowledge at the beginning of a learning curve. When mastering a newcomputer system, users make comparison of the new system to what they already knowfrom another system. Even if they have never used a computer before, they can still use thismethod to form an initial model in their mind. This initial model is often called mentalmodel of the system. Van der Veer (1989) considers this mental model to be the source ofthe expectation the user has about the effects of actions towards the system. It will guidehis or her planning of the interaction, and it will help the interpretation of the system'sreactions. Icon based interfaces contain representations of objects which enable users tomake direct comparison between the icon representations and the real world objects. Thiscomparison helps the user to develop a mental model which could be used to understandthe system. Carroll and Thomas (1982) call this process the Metaphor Principle. 
Themental model is often called the analogical model (Halasz and Moran, 1981) because of itsone-to-one mapping between the known source system and the unknown target system.Schild, Power, and Karnaugh (1980) also see the advantages of pictorial interfaces asbeing their ability to speed learning. In addition, they see that users of pictorial interfacesrequire only low initial cognitive effort to utilize the system. This is attributed to the factthat typical icon based interfaces deal with a physical object, the characteristics of whichthe user is already familiar with. Furthermore, they point out that image processing doesnot interrupt other cognitive activities, a difficulty faced when commands are verballyprocessed. This would be a definite advantage of icon based interfaces as they would11reduce the cognitive strain that has been postulated as a major impediment to successfulinteraction with computer-based systems (Keen, 1979). The problem of cognitive strainlikely stems from the requirement that individuals are required to focus their attention attwo levels, firstly on the primary task of "problem solving," and secondly on the interactionwith the system to achieve that primary end. Interface designs have been demonstrated toimpede problem solving ability if they require constant movement between the task at handand the requirements of dealing with the supporting interface (Sears, 1982).Icons also possess considerable carry-over value in the learning process (Hemenway,1981). Since many elements are shared amongst icons, once the user has correctlyunderstood the function or properties of one icon it is relatively easy to transfer thatknowledge to others. This would especially facilitate the performance ofnaive users. Sucha characteristic allows the user to infer or predict the action of an unfamiliar icon based onthe properties it shares with one already known. Thus a well designed set of icons shouldgreatly facilitate learning.Icon based interface is often viewed as a "desk top" simulation in office automationsystem (Witten and Greenberg, 1983). This means commands such as FILE and ERASEare represented by graphic images on the screen. This form of display is considered to be ahighly effective means of communicating with users. Designers of the Star interface(Smith, Irby, Kimball, Verplank, and Harslem, 1982, p.260) state that:"A subtle thing happens when everything is visible: the displaybecomes reality. The user model becomes identical with that is on thescreen. Objects can be understood purely in terms of their visiblecharacteristic."Thus, icon based systems are clearly viewed as being conducive to communication; this isresponsible for their reputed "impact" on performance.122.3 Disadvantages of Icon Based InterfacesIcon based systems also have potential disadvantages. The difficulty with icon basedsystems lies in structuring a set of pictorial representations that convey appropriatemeaning to the user without invoking other undesirable properties or connotations.Shneiderman (1987) warns that a user may rapidly grasp the analogical representation butthen make incorrect conclusions about permissible actions. Ives (1982, p.37) states that:"Selecting an appropriate icon is a challenging task. It is frequentlynoted in articles written by proponents of graphics that a picture isworth a thousand words. Unfortunately, these are not always theparticular thousand words we had in mind. 
An icon which seemsmost appropriate to the designer may have quite a different meaningto the ultimate user."Learning the meaning of the graphic representation could jeopardize the acclaimedperformance advantage of icon based interfaces (Lodding, 1983). There is no generallyaccepted universal set of icons, they are usually created for each new setting or application.Consequently, an icon may be meaningful to only the designer. For a novice user of anicon based system, learning the meaning of the graphic representation may require as muchor more time than that of words.Lodding (1983) also points out several difficulties in designing appropriate groups oficons. Excessive detail in an icon may impede recognition, interpretation, and processing.On the other hand, if too clean a stylized design is used there may be several meaningfulinterpretations. Consistency of icons across settings is another problem. If an iconconveys one meaning in a specific context and causes another action when the contextchanges, problems in use may arise. This is a common problem in command based13interfaces in which knowledge applies across all but a few program modules and theexceptions must be learned through rote memorization. Such occurrences place excessivecognitive strain on users and can be highly demoralizing for novices (Gaines, 1981). In thisregard, there is little to distinguish icon based interfaces from standard system interfaces.Consistency is a critical element to success. According to the designers of the Xerox Star(Smith et al., 1982, p.268):"Everyone agrees that consistency is an admirable goal. However itis perhaps the single hardest characteristic of all to achieve in acomputer system."2.4 Empirical Research on Icon Based InterfacesMuch has been made of the potential for icon based interfaces to positively affect userlearning and performance (Shepard, 1967; Schild et al., 1980; Lodding, 1983; and Wittenand Greenberg, 1983). However, there is little solid empirical evidence exists to supportthese claims. Results from previous studies were unable to support the theoreticaladvantages of using icons in interface design (see for example, Rohr and Keppler, 1984;Whiteside, Jones, Levy, and Wixon, 1985).A fundamental question in icon based interface research is whether the presence ofgraphics in human-computer interfaces has any effect on user performance. Muter andMayson (1986) studied the role of graphics in item selection from menus. The purpose oftheir study was to examine if the addition of graphics to the alternatives on computerchoice (menu) pages facilitates user performance. In their study, three forms of videotexchoice pages were created employing word items in the experimental Telidon data-base(Telidon is a form of videotex developed by the Department of Communications ofCanada). The first form was the Text-Only pages that provided a simple, double-spaced14linear arrangement of text items. The second form was the Graphics pages whichcontained the identical text alternatives. However, the alternatives were distributedaround the page in a non-linear fashion, and were supplemented by appropriate simplegraphics. The last form was the Control pages which were identical to the Graphics pagesexcept that the graphics were not included. The use of the Control pages was to ascertainthe effect of the non-linear arrangement, which might contribute to any differencesbetween the Graphics condition and the Text-Only condition. 
The authors found thatgraphics had no effect on response time, but improved accuracy: the error rate in thegraphics condition was only half that in text-only condition.Whiteside et al. (1985) suggest that icon based interfaces may not be suitable for alltypes of users. They looked at seven different types of interfaces representing command,menu, and icon based interfaces and made the following conclusions:1. There are large usability differences between interfaces. Performance figures showedno advantage to the two icon based interfaces. Novice users performed significantlyworse with the two icon based systems than either the command or menu basedsystems. For experienced users, there is no significant performance difference betweenthe two icon based systems and the command or menu based systems.2. There is no trade-off between ease of use and ease of learning. The authors claimedthat if there is a trade-off, then making things easier for the new users must necessarilymake things more difficult for the experts. Results of the study showed that thecommand system is the best interface for experienced users and it is also the bestinterface for new users. The menu system was found to be the worst interface for forboth groups. Thus the study provides no evidence for a trade-off between learning andease of use.153. Interface style is not related to performance or preference. In fact, the experimentfound that the style that is supposedly the easiest for new users (the icon basedinterface) is actually the most difficult to use.Another fundamental question in icon based interface research is the learning effect.Foss, Rosson, and Smith (1982) used a file folder metaphor to teach subjects how to learna text editor. The results show that subjects who were given the metaphor learned more inless time. Rohr and Keppler (1984) examined the relevant features of icons against verbal(text) commands in a real system and task environment. They found that iconsrepresenting single commands (i.e., PRINT) are as meaningful to the users as verbalcommands. Guastello, Traut, and Korienek (1989) also have similar findings. Theyexamined verbal versus pictorial representation of objects in a human-computer interface.In their experiment, a mixed types of representation were used. These included pictorialicons, short and long abbreviated verbal icons, and mixed modality icons which containboth pictures and names. The authors found that mixed modality icons were rateddistinctively more meaningful than icons that utilize verbal or pictorial elements only.The results of these empirical studies seem to support only the theoretical disadvantagesof using icons in human-computer interfaces. They also appear to contradict most of thepsychological, perceptual, memory, and learning advantages postulated in the literature.This leaves considerable room for further research into the effectiveness of icon basedinterfaces, especially in an office setting. What is required is a study which will investigatethe use of icons in the business or office computing environment. As mentioned in anearlier section, the theoretical advantages of icon based interfaces centered in two areas:learning and memory. While some of these studies had examined the icon effect onlearning, none had looked at the effect on knowledge retention. Recall that for most16people, visual information processing is superior to verbal information processing. 
Thisprovides a strong theoretical ground to suggest that users of icon based interfaces can recallmore basic system functions than can users of other interface types, even if the interface hasnot been used for a long period of time. The present study attempted to demonstrate thisempirically.17Chapter 3: The Concept of Direct ManipulationThe concept of direct manipulation in human-computer interaction refers to a broadrange of ideas and techniques (Te'eni, 1990). Some researchers talk of techniques that giveusers a sense of satisfaction while interacting with a system, others discuss the apparentconcreteness of the way objects are visibly represented and influenced (Shneiderman, 1983;Hutchins, Hollan, and Norman, 1986). Hartson and Hix (1989) call the directmanipulation interface a model world where the end user interacts with the computersystem by grabbing and manipulating (i.e., with a mouse) visual representations of objects.Their view, however, also includes icons in the system design.Hutchins et al. (1986) suggest that the best way to describe a direct manipulationinterface is by example. They present a simple statistical computation by directmanipulation. The goal of the computation is to analyze the numbers, to see whatrelations exist among the rows and columns of a matrix. The data are contained in thematrix and are represented on a computer display screen by an icon. At the bottom of thescreen are basic icons that represent possible statistical functions. To perform a function,the icon representing the desired function is moved to the screen and connected up witharrows. The result looks like a flow chart on the screen.The above example illustrates a powerful manipulation medium for computation. Thedesired operations are done by moving the appropriate icons onto the screen andconnecting them together. Although icons are used in this example, it is not a necessarycondition in the design of direct manipulation interfaces. On the contrary, many iconbased interfaces are associated with the ability to mimic the real world by having the userdirectly manipulate the objects of interest displayed on the screen.18The use of icons in direct manipulation interfaces results in a shift from the actionorientation of a typical command-driven interface to an object oriented interface based onmore concrete representations of objects and concepts (Witten and Greenberg, 1983).Such a change appears to imply that from the outset the use of icon based interfaces willrequire a different understanding of, and approach to, computer interaction. The objectitself becomes the primary focus and the manipulation occurs by making contact with theicon directly rather than through the intermediary.Several authors have attempted to describe the principles of direct manipulation. Thegeneral approach of direct manipulation can be described as "What you see is what youget" -- the image of current status is displayed on the screen. This approach has beenapplied to advanced office automation systems such as word processors. Thimbleby (1983)expands in this direction by proposing "What you see is what you got." He suggests thatthe display should indicate a more complete image ofwhat the current status is, what errorshave occurred, and what actions are appropriate.3.1 Two Aspects of DirectnessHutchins et al. (1986) argue that the notion of direct manipulation is not a unitaryconcept nor even something that can be quantified in itself. It is an orienting notion. 
They describe direct manipulation as the feeling of involvement directly with a world of objects rather than of communicating through some intermediary (i.e., a command language). In their paper, they propose two separate and distinct aspects of the feeling of directness: distance and engagement.

Figure 1. The Gulfs of Execution and Evaluation. [Diagram: the user's goals and the physical system, separated by the Gulf of Execution in one direction and the Gulf of Evaluation in the other.]

3.1.1 Distance

Distance refers to the space between one's thoughts and the physical requirements of the system under use. A short distance means that the translation is simple and straightforward, that thoughts are readily translated into the physical actions required by the system, and that the system output is in a form readily interpreted in terms of goals of interest to the user. In this aspect, directness involves a relationship between the task the user has in mind and the way that task can be accomplished via the interface. The critical issue here is minimizing the cognitive effort required to bridge the gap between the user's goals and the way they must be specified to the system.

An interface introduces distance between a person's goals and knowledge and the level of description provided by the system with which the person must deal. The gap between the user and the system can be represented by two unidirectional gulfs: the Gulf of Execution and the Gulf of Evaluation (Figure 1). The two gulfs can be bridged in two directions. The designer can bridge the gulfs by starting at the system side, constructing the input and output characteristics of the interface so that they better match the psychological needs of the user. The user can bridge the gulfs by creating plans, action sequences, and interpretations that move the normal description of the goals and intentions closer to the description required by the physical system. In other words, the Gulf of Execution is bridged by making the commands and mechanisms of the system match the thoughts and goals of the user as much as possible. The Gulf of Evaluation is bridged by making the output displays present a good conceptual model of the system that is readily perceived, interpreted, and evaluated. To decrease the distance that must be bridged by the effort of the user, much more of the gulfs should be spanned by the system interface. The use of direct manipulation in the system interface may be a way to decrease the distance.

Figure 2. The Relationships between Distances and Gulfs. [Diagram: each gulf between the user's goals and the physical system is divided into a semantic distance and an articulatory distance.]

Whether an interface is constructed on the conversation metaphor (command based interfaces) or the model world metaphor (direct manipulation interfaces), the user still needs an interface language to interact with the system. The interface language is symbolic in the sense that there is an arbitrary relationship between the form of a vocabulary item and its meaning. The reference relationship is established by convention and must be learned. Because of the relative independence of meaning and form, there are two properties of the interface language: semantic directness and articulatory directness. This in turn divides the gulfs into two segments: semantic distance and articulatory distance (Figure 2).

Semantic directness concerns the relation of the meaning of an expression in the interface language to what the user wants to say.
On the execution side, semanticdirectness requires matching the level of description required by the interface language tothe level at which the user thinks of the task. The user must generate some informationprocessing structure to span the gulf. Semantic distance in the gulf of execution reflectshow much of the required structure is provided by the system and how much by the user.The more that the user must provide, the greater the distance to be bridged. On theevaluation side, semantic directness requires making the system output into terms that arecompatible with the user's intention so that it is easier for the user to determine whether thegoal has been achieved. The designer can reduce the semantic distance by constructinghigher-order and specialized languages that move towards the user, making the semanticsof the input and output languages match that of the user.Articulatory directness refers to the relationship between the meanings of expressionsand their physical forms. On the input side, the form can be a sequence of key strokes fora command based interface or the movement of a mouse and the associated "mouse clicks"in a pointing device interface. On the output side, the form may be a string of characters,a change in an icon shape, or a graph. One way to achieve articulatory directness in theinput side is to provide an interface that permits specification of an action by mimicking it,22thus supporting an articulatory similarity between the vocabulary item and its meaning.Articulatory directness at the output side is similar. If the user is following the changes insome variables, a moving graphical display can provide articulatory directness. In general,articulatory directness is highly dependent upon input and output technology. However,the user can manipulate the articulatory distance by reconceptualizing the mental modeladopted.3.1.2 Direct EngagementThe second aspect of the feeling of directness, engagement, concerns the qualitativefeeling of directly manipulating the objects of interest. There are two major metaphors forthe nature of human-computer interfaces: conversation metaphor and a model worldmetaphor. In a system built on the conversation metaphor, the interface is a languagemedium in which the user and system have a conversation about an assumed, but notexplicitly represented world. In this situation, the interface is an implied intermediarybetween the user and the world about which things are said. The user is in direct contactwith linguistic structures, structures that can be interpreted as referring to the objects ofinterest, but that are not the objects themselves. In a system built on the model worldmetaphor, the interface is itself a world where the user can act, and that changes stateimmediately in response to user actions. In this case, the world of interest is explicitlyrepresented and there is no intermediary between the user and the world of interest. Directengagement occurs when a user experiences direct interaction with the objects in a domain.For example, if we are playing a game, we should be manipulating directly the game world,touching and controlling the objects in that world, with the output of the systemresponding directly to our actions, and in a form compatible with them.23Historically, most interfaces have been built on the conversational metaphor. There ispower in the abstractions that language provides, but the implicit role of interface as anintermediary to a hidden world denies the user direct engagement with the objects ofinterest. 
Instead, the user is in direct contact with linguistic structures, structures that canbe interpreted as referring to the objects of interest, but that are not those objectsthemselves.Making the central metaphor of the interface that of the model world supports thesensation of directness: instead of describing the actions of interest, the user performs thoseactions. In the conventional interface, the system describes the results of the actions; in themodel world the system would present directly the actions taken upon the objects.Building interfaces on the model world metaphor requires a special relationship bespecified between the input interface language and the output interface language. Theoutput language must represent its subject of discourse in a way that natural language doesnot normally do. The expressions of a direct manipulation output language must behavein such a way that the user can assume that they, in some sense, are the things they refer to.Furthermore, the nature of the relationship between input and output language must besuch that an output expression can serve as a component of an input expression. Forexample, if the user asks for a listing of files, the result would be a physical display of files(output expression). This display can, in turn, be used directly to specify further operationssuch as selecting a file by pointing at the screen representation (input expression). In thissituation, the output expression (the files) is served as a component of an input expression(the pointing). When these conditions are met, it is as if we are directly manipulating thethings that the system represents.24When an interface presents a world of action rather than a language of description,manipulating a representation can have the same effects and the same feeling asmanipulating the thing being represented. The user of a well designed model worldinterface can willfully suspend belief that the objects depicted are artifacts of some programand can thereby directly engage the world of the objects. This is the essence of the"first-personness" feeling of direct engagement.In order to have a feeling of direct engagement, the interface must provide the user witha world in which to interact. The objects of that world must feel like they are the objectsof interest, that one is doing things with them and watching how they react. Hence, theoutput language must present representations of objects in forms that behave in the waythat the user thinks of the objects behaving. Whatever changes are caused in the objectsby the set of operations must be depicted in the representation of the objects. This use ofthe same object as both an input and output entity is essential to providing objects thatbehave as if they are the real thing. It is because an input expression can contain a previousoutput expression that the user feels the output expression is the thing itself and that theoperation is applied directly to the thing itself3.2 Empirical Research on Direct Manipulation SystemsAlthough there is a proliferating use of direct manipulation in program design, there arefew empirical works to examine the effect of direct manipulation interfaces.Direct manipulation interfaces can have the ability to speed learning. In evaluatingcomputer text editors, Roberts (1980) found that, with line-oriented editors, overall25performance times were twice as long as with display editors. Training time with displayeditors was also reduced.Some contradictory results were found by Marchionini (1989). 
In an attempt todevelop views of users' mental models and explore how these mental models are used whenusers begin to work with new systems, he asked the subjects to perform some editing taskson encyclopedias first in print form and then in electronic form. He found that subjectswere able to make the transition from print to electronic encyclopedia at satisfactory ratherthan optimal performance level by adopting existing mental models and ignoring manyelectronic features. Some subjects appeared to develop distinct mental models for theelectronic encyclopedias by adapting their existing mental models and hence were able totake good advantage of the full text features. Most took almost twice as much time andposed more questions in the electronic search compared to the print search.Direct manipulation interfaces can also be used to aid problem-solving. Research hasshown that representations of problems are crucial to solution fording and to learning. Forexample, Montessori (1964) proposed use of physical objects such as wooden sticks toteach children to learn simple mathematical operations and size comparison; and Bruner(1966) extended the same idea to cover polynomial factoring and other mathematicalprinciples.Physical, spatial, or visual representations also appear to be easier to retain andmanipulate than to textual or numerics representation. Wertheimer (1959) found thatsubjects who memorized a formula for an area of a parallelogram rapidly succeeded indoing such calculation. On the other hand, subjects who were given the structuralexplanation of cutting off a triangle from one end and placing it on the other end could26more effectively retain the knowledge and generalize it to solve related problems. Carroll,Thomas, and Malhotra (1980) examined the spatial representation of a problem (designinga layout for a business office) versus the temporal representation of an isomorphic problem(scheduling the stages in a manufacturing process). They observed that subjects given theformer could solve problems more rapidly and successfully than subjects given the latter.The developers of a prototype of a spatial data management system see users of spatialdata management systems benefit from the ability to access computer-residentinformation, while retaining a familiar, visual orientation (Herot, 1984). In a previous casestudy, Carlson, Grace, and Sutton (1977) developed an interactive Geo-data Analysis andDisplay System (GADS) to study interactive problem-solving. In particular, they wantedto use GADS to examine unskilled computer users to solve unstructured problems in theirprofessions. The GADS allows users to store and display data in terms of maps. Theauthors found that the interactive system with problem-related functions and graphicstechniques for procedure specification enables users to begin solving the problems withinfour to eight hours.Te'eni (1990) sees the spatial and movement elements of direct manipulation interfacesare important in promoting spatial information processing. He demonstrated that directmanipulation interfaces enhance cognitive control in a judgement task.3.3 Problems with Direct ManipulationDirect manipulation systems have both advantages and disadvantages. At the firstglance, these systems provide immediate feedback and translate intentions to actions whichmake some tasks easy to perform by novice users. However, not all tasks are best be done27directly. 
A task which requires repetitive operations, for instance, is probably best done viaan intermediary such as a command based interface. The reason is that directmanipulation interfaces have difficulty handling variables (Hutchins et al., 1986).Another problem concerns the trade-offs in both designing and using a directmanipulation interface. "Directness" is not absolute but relative to a user's previousknowledge on a similar system. With sufficient practice many interfaces can come to feeldirect. The goal of a designer is to minimize the amount of learning required and provide anatural mapping to the task domain. For this reason, the interface designer may have totrade off directness for generality. Consider a text editing task that requires the use of apointing device. Whenever the user wants to initiate an activity, he/she has to take a handoff the keyboard, locate the pointing device, initiate the action by manipulating thepointing device, and then resume typing -- quite a disruption for a skilled typist. A moregeneral design which is command based or utilizes labeled buttons would eliminate thisproblem.A more fundamental problem arises from the fact that direct manipulation interfacesallow the users to think in the familiar terms of the application domain rather than thoseof the medium of computation. Hutchins et al. (1986) argue that if designers are restrictedto build interfaces that allows people to do things they can already do and to think in waysthey already think, no one will be able to provide new ways to think of and to interact witha domain.Finally, a direct manipulation system does not automatically imply the system is easy touse. If the interface is really invisible, then the difficulties within the task domain gettransferred directly into the difficulties for the user (Hutchins et al., 1986). For this reason,28if a user has a poor understanding of the task domain, he may complain that the system isdifficult to use. In fact, the difficulty may lie in the task domain, not in the interfacelanguage. Direct manipulation interfaces are not used to assist in overcoming problemsthat result from poor understanding of the task domain.29Chapter 4: Research Methodology and HypothesesMany studies in MIS considered laboratory experiments as the ideal research design.However, this ideal research design is not without its problem. One problem associatedwith experiments is the issue of validity of the test. Very few of these MIS studies,especially studies in icons or direct manipulation, had discussed the issue of validity.Similar to many previous MIS research, this study utilized a laboratory experiment as theresearch design. However, it is necessary at this point to discuss the issue of validity of thetest.4.1 MethodologyCampbell and Stanley (1963) discussed two forms of validity: internal and externalvalidity. Each of these two forms of validity is discussed below.4.1.1 Internal ValidityInternal validity is the basic minimum without which any experiment is notinterpretable (Emory, 1976). It ensures that the conclusions drawn from experimentalresults are accurately reflecting what have gone on in the experiment itself -- it is in fact theexperiment treatments which make a difference. Campbell and Stanley (1963) identifiedseven threats to internal validity of every experiment:1. History -- Historical events may occur during the course of the experiment. Theseevents could affect subjects' performance, hence confound the experimental results.302. 
Maturation -- Changes may take place in the dependent variable which are a functionof the passage of time. These changes are not specific to any particular events orcondition.3. Testing -- Often the process of testing and retesting will influence people's behavior,thereby confounding the experimental results.4. Instrumentation -- This threat to internal validity results from changes, betweenobservations, in the measuring instrument or observers. For example, a mechanicaldevice may malfunction or become unreliable. However, the greatest threat is thehuman factor such as observer boredom, fatigue, experience, or anticipation of results.5. Selection Biases -- Another important threat to internal validity is the differentialselection of persons to be included in the experimental and control groups.Comparison do not have any meaning unless the groups are comparable.6. Statistical Regression -- This factor operates especially when study groups have beenselected on the basis of their extreme scores. Usually there will be a shift of the meanof the extreme scores in the pre-test toward the direction of the overall mean in thepost-test.7. Experimental Mortality -- Often experimental subjects drop out of the experimentbefore it is completed. The statistical comparison and conclusions drawn can beaffected by these changes.Every effort was made to ensure the internal validity was achieved to the degree that anydifferences in the experimental results were solely the effects of the various interfaces. The312 x 2 x 3 repeated measures design with random assignment of subjects could guard againstthe problem of history. If any events that might happen outside the experiment, it shouldaffect all subjects in all four groups, and there should still be a difference in theexperimental results. The random assignment of subjects also eliminated the problems ofmaturation, selection bias, and statistical regression. Instrumentation would not be aproblem since all four groups were subject to the same test and testing equipment. Inaddition, the observer fatigue and boredom would not present as the experimenter was onlyinvolved in administering the experiment, not measuring the results nor observing subjectperformance. To guard against the problem of experimental mortality, monetary prizeswere provided as incentives. Monetary incentives have proved valuable in the past in orderto attract subjects to studies (see for example, Benbasat and Dexter, 1985; Benbasat,Dexter, and Todd, 1986). In addition, since some prizes were performance based, theywere useful for encouraging subjects to maximize their commitment to the study. Thismight contribute greatly to the validity of the research. Finally, this study attempted tomeasure the learning effect. The problem of testing actually provided useful informationfor such a measurement. To ensure subject performance and learning process were notinfluenced, participants were reminded not to discuss the exercise with other participantsafter they had completed the required tasks.4.1.2 External ValidityA second problem associated with any experiment is the issue of external validity. Itrefers to the notion of generalizability: to what populations, settings, treatment variables,and measurement variables can this effect be generalized. In a logical sense, it is aninductive process of extrapolating beyond the data collected (Emory, 1976). 
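As a concrete illustration of the design described above (a 2 x 2 x 3 repeated measures design in which the 48 recruits are randomly assigned, 12 per group, to the four interface types and each subject completes three sessions), the following Python sketch shows one way such an assignment could be generated. The group labels, seed value, and helper function are illustrative assumptions introduced here for exposition; they are not the procedure or software actually used in the study.

```python
import random
from itertools import product

# The four between-subjects conditions: icon (yes/no) x direct manipulation (yes/no).
CONDITIONS = list(product(["icon", "non-icon"], ["direct", "non-direct"]))

def assign_subjects(subject_ids, per_group=12, seed=0):
    """Randomly assign subjects to the four interface groups, 12 per group (hypothetical sketch)."""
    assert len(subject_ids) == per_group * len(CONDITIONS)
    pool = list(subject_ids)
    random.Random(seed).shuffle(pool)          # random order removes systematic grouping
    assignment = {}
    for i, condition in enumerate(CONDITIONS):
        for sid in pool[i * per_group:(i + 1) * per_group]:
            assignment[sid] = condition
    return assignment

# 48 recruits assigned to the 2 x 2 between-subjects cells;
# each subject then completes three sessions, the within-subjects (time) factor.
groups = assign_subjects(range(1, 49))
sessions = [1, 2, 3]
```

Random assignment of this kind is what guards against selection bias, maturation, and statistical regression in the design, since any pre-existing differences among subjects are spread across the four groups by chance.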
This study attempted to secure as much external validity as was compatible with the internal validity requirements by making the experimental conditions as similar as possible to an office environment. These conditions included the choice of the task domain, the equipment used in the test, and the interface designs. The task domain used in this experiment was document based. It consisted of a range of generic tasks that would occur in a typical office environment. The computer used in the test was commercially available and could be found in many offices. Finally, the interface designs used in the experiments were icon based and menu based. These types of designs can be found in many contemporary software applications.

4.2 Hypotheses Development

Preceding chapters suggest that there is a need to design an effective interface and to measure its potential effectiveness. Although many empirical studies have been done, there are very few that specifically address the use of icons and direct manipulation in the human-computer interface. In the business or office computing environment, icons and direct manipulation interfaces are becoming increasingly important means through which user and computer interact. It is therefore necessary to investigate the effectiveness of icons and direct manipulation interfaces in this computing environment.

From the theoretical standpoint, an effective system should be easy to learn and use, as well as easy to remember after a reasonable period of non-use. Interfaces that use pictorial icons or direct manipulation seem to provide all of these advantages.

The purpose of this study was to examine the performance impact of using icons and direct manipulation in human-computer interface design. There were three main objectives. The first was to examine the relationship between the use of the various interface types and individual performance. The second was to investigate the effect of the various interface types on the learning process. The third was to examine the issue of recall and retention of knowledge pertaining to the operation of a specific interface.

Four interface mechanisms, based on combinations of the two design principles -- icons and direct manipulation -- were used in the experiment. These interfaces were icon based direct manipulation, icon based non-direct manipulation, non-icon based direct manipulation, and non-icon based non-direct manipulation interfaces.

The dependent variables were performance related. The three key measurements were 1) time taken to complete the task; 2) number of actions required to complete the task; and 3) number of errors made in execution of the task. Two additional measurements were also included: speed and error rate. Speed is expressed as the number of actions per second; it measures the relationship between time and the number of actions, since a fast performer will take less time to do more. Error rate is the percentage of actions that are errors; it measures accuracy in performance. The specific research questions of the study are outlined below. All hypotheses are stated in null form.
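The last two measures are simple ratios of the first three. The following sketch illustrates how the five dependent measures relate to one another. It is illustrative only -- the actual data were recorded by the experimental software and analyzed with BMDP -- and the function and field names are invented for the example.

```python
# Illustrative only: deriving the five dependent measures for one experimental
# session from hypothetical session totals (not the original analysis code).
def session_measures(total_time_s, total_actions, total_errors):
    speed = total_actions / total_time_s                # actions per second
    error_rate = 100.0 * total_errors / total_actions   # % of actions that are errors
    return {
        "time": total_time_s,        # 1) time taken to complete the task (seconds)
        "actions": total_actions,    # 2) number of actions required to complete the task
        "speed": speed,              # 3) performance speed (actions per second)
        "errors": total_errors,      # 4) number of errors made in the task
        "error_rate": error_rate,    # 5) percentage of incorrect actions
    }

# Example: a session lasting 358 seconds with 123 actions, 2 of which were errors.
print(session_measures(358, 123, 2))
```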
4.2.1 Icon Effect

The first question for this study was whether the use of graphics or images in designing an interface would improve user performance. The previous review of the psychological literature indicated that visual information processing is faster and less prone to error than verbal based information processing (see Chapter Two). This gives a good indication that users of icon based interfaces will have better performance than users of non-icon based interfaces. Therefore, it was expected that the hypotheses:

Ho(1): There will be no difference in:
Ho(1)a: time taken to complete the task
Ho(1)b: number of actions required to complete the task
Ho(1)c: performance speed
Ho(1)d: number of errors made in execution of the task
Ho(1)e: percentage of errors made
between icon based and non-icon based interfaces

would be rejected. Users of icon based interfaces will outperform those using non-icon based interfaces by taking less time and making fewer errors.

4.2.2 Direct Manipulation Effect

Research has shown that direct manipulation interfaces can have the ability to speed learning (see, for example, Roberts, 1980). This is because users of direct manipulation interfaces are in direct contact with a world of interest with which they are familiar. Furthermore, there is no command syntax to remember when using a direct manipulation interface. This suggests that direct manipulation interfaces are less prone to error than are non-direct manipulation interfaces. Therefore, it was expected that the hypotheses:

Ho(2): There will be no difference in:
Ho(2)a: time taken to complete the task
Ho(2)b: number of actions required to complete the task
Ho(2)c: performance speed
Ho(2)d: number of errors made in execution of the task
Ho(2)e: percentage of errors made
between direct manipulation and non-direct manipulation interfaces

would be rejected. Users of direct manipulation interfaces will outperform those using non-direct manipulation interfaces by taking less time and making fewer errors.

4.2.3 Time Effect

An effective system should be easy to learn. The learning effect can be studied from the learning curve that the user has to go through when using a new computer system. The learning curve can be established by measuring user performance during the learning process. This can be operationalized by measuring how much the user can recall and retain of some basic set of system functions over a period of time. Measurements made at different points in time after the user has begun using the new system will provide a good indication of the learning effect. Comparison of the learning curves will provide an accurate assessment of the theoretical superiority of icon over non-icon based interfaces, and of direct manipulation over non-direct manipulation interfaces. Over time, it is expected that users of icon based interfaces will have superior recall of the system knowledge. This is due to the fact that visual memory is superior to verbal memory and that icons allow for a "fuzzy" understanding, requiring that the user only "recognize" and "point" (Lodding, 1983). Users of direct manipulation interfaces will also have good recall of system knowledge, because the user is in direct contact with the model world rather than working through some intermediary which requires rote memory. Therefore, it was expected that the hypotheses:

Ho(3): There will be no difference in:
Ho(3)a: time taken to complete the task
Ho(3)b: number of actions required to complete the task
Ho(3)c: performance speed
Ho(3)d: number of errors made in execution of the task
Ho(3)e: percentage of errors made
between each successive session

would be rejected. User performance will improve from the first session to the third session.
Over time, users should be able to complete the task in less time, with fewer errors, and more efficiently.

4.2.4 Interaction Effects

In terms of interaction effects among icon, direct manipulation, and trials, it was expected that the general hypotheses:

Ho(4): There will be no interaction effect i) between the icon and direct manipulation factors, ii) between the icon factor and trials, iii) between the direct manipulation factor and trials, and iv) among the icon factor, the direct manipulation factor, and trials in terms of
Ho(4)a: time taken to complete the task
Ho(4)b: number of actions required to complete the task
Ho(4)c: performance speed
Ho(4)d: number of errors made in execution of the task
Ho(4)e: percentage of errors made

would be rejected. The general hypotheses represent a total of 20 hypotheses (five alternatives for each of the four interaction effects). Users of icon based interfaces will outperform users of non-icon based interfaces during the first session. This is due to the fact that icon systems require a low level of cognitive effort for initial use. Finally, overall subject performance will improve over time for the icon groups. This is because icons possess considerable carry-over value in the learning process, as discussed in Chapter Two.

Chapter 5: The Experiment

A preliminary study was conducted prior to the main study to measure individual perceptions of the underlying meaning of various icon representations. The purpose of this preliminary study was to determine whether the icons chosen to represent system functions were clear and unambiguous. This ensured an unconfounded experimental design for the main study. In the preliminary study, subjects were asked to identify the different versions of icons on the screen and describe their functions. The result showed that subjects had no difficulty identifying all icons and describing their functions.

5.1 The Interface Design

The interface used in the experiment was an electronic message system. This message system was hierarchical. At the top level, users could open or create a memo, make copies of it, send it, file it, or delete it. The second level carried further operations from level one activities. For example, to open the file folder is a level one action; to select a letter from the file folder is a level two action. Not all level one actions have a second level of actions.

The four interfaces were all written in the HyperCard programming language on an Apple Macintosh computer. The two direct manipulation interfaces simulate a desk top which could be found in a typical office. This desk top simulation was based on the analogical model described in Chapter Two. In this study, the message system was the target system and the office desk top was the source. The objects in the base domain, such as the pen, memo pad, and trash can, were concrete everyday objects. The encompassing attributes such as size and color were not relevant for the analogy.

The desk top, which is the top level of the electronic message system, consisted of icon representations of a pen, a memo pad, a copier, an in-tray, a mail box, a file folder, a trash can, help, and a key. The difference between the icon based and non-icon based interfaces was the icon set that was used. For the former, the pictorial version was used and for the latter, the text (non-icon) version. The screen displays of the four interfaces are shown in Appendix A.

If the subject chose to open a memo, create a memo, open the file folder, or open the in-tray, a second level of display would be shown on the screen.
The second level consisted of the contents of a memo, memos in the file folder, and memos in the in-tray. The last two allow further activities -- opening a memo in the file folder or in the in-tray.

To invoke an action, the subject had to use the mouse to drag one icon onto another. For example, to write a memo, the subject had to move the mouse so that the pointer on the screen was touching the icon representing the Pen, and then drag the Pen to the icon representing the Memo Pad.

For the two non-direct manipulation interfaces, the screen was arranged in a menu format, with the left hand side consisting of actions and the right hand side consisting of objects. For example, the left hand side would have pictorial icons such as the Pen and the Key for the icon based interface, or text icons such as WRITE and OPEN for the non-icon based interface. Similarly, the right hand side would have pictorial icons such as the Memo Pad and the File Folder, or text icons such as MEMO PAD and FILE.

To invoke an action, the subject had to first select an action-object pair from the two menus by clicking the circle next to the command that was to be selected, and then execute the action-object pair by clicking the icon represented by the Happy Face at the bottom of the menus. Clicking another choice on the same menu before clicking the Happy Face would cancel the previous choice on that menu.
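The contrast between the two dialogue styles can be summarized in pseudo-code. The sketch below is an illustrative Python model, not the HyperCard implementation used in the experiment; the function names are invented, and only a subset of the action-object pairs listed in Appendix A is shown.

```python
# Illustrative model of the two dialogue styles (not the original HyperCard code).
# A few valid action-object pairs, taken from the movement tables in Appendix A.
VALID_PAIRS = {("Pen", "Memo Pad"): "create a memo",
               ("Key", "File Folder"): "open the file",
               ("Memo", "Trash Can"): "delete a memo"}

def direct_manipulation(dragged_icon, target_icon):
    """One continuous action: drag an icon onto another icon and release."""
    return VALID_PAIRS.get((dragged_icon, target_icon), "error: invalid combination")

def menu_based(action_choice, object_choice, execute_clicked):
    """Three separate selections: an action, an object, then the Happy Face / EXECUTE."""
    if not execute_clicked:
        return "nothing happens until EXECUTE is clicked"
    return VALID_PAIRS.get((action_choice, object_choice), "error: invalid combination")

print(direct_manipulation("Pen", "Memo Pad"))                    # create a memo
print(menu_based("Pen", "Memo Pad", execute_clicked=True))       # create a memo
```

The extra click on the Happy Face (EXECUTE) icon in the menu-based version is the additional step that becomes relevant in the discussion of the results in Chapters 6 and 7.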
5.2 The Experimental Design

This study had three independent variables: icon, manipulation, and time. The experiment is a 2 x 2 x 3 factorial with repeated measures on time (Figure 3). This design permits the examination of differences in the main effects while at the same time controlling for the potential interaction among the three independent variables. Furthermore, since the subject factor is nested under both the icon and manipulation factors, there can be no interaction between these factors and the subject factor.

The dependent variables were performance related, such as the total time taken to complete the task, the total number of actions required to complete the task, and the number of errors made in execution of the task.

5.3 Subject Selection

Participation in the experiment was entirely voluntary. Forty-eight students from the Faculty of Commerce and Business Administration at the University of British Columbia were recruited to participate in the experiment. Subjects were provided with monetary prizes as an incentive. In addition, extra cash prizes were awarded to the subjects based on their performance in accomplishing the tasks. These subjects were randomly assigned to one of the four treatment combinations, with a total of 12 subjects in each combination.

                             Icon Based          Non-Icon Based
  Direct Manipulation        Sessions 1, 2, 3    Sessions 1, 2, 3
  Non-Direct Manipulation    Sessions 1, 2, 3    Sessions 1, 2, 3

Figure 3. The 2 x 2 x 3 Factors Repeated Measures Design

The demographic data of the subjects are presented in Table 1. With a sample size of 12 per cell, a power of approximately 0.80 is associated with φ = 1.7, degrees of freedom for the treatment effect v1 = 3 and for the error effect v2 = 44 (see Kirk, 1968, pp. 107-109 for discussion). This means the probability of rejecting a false null hypothesis is approximately 0.8.
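This figure can be cross-checked with a short calculation. The sketch below assumes that the chart parameter φ relates to the noncentrality parameter of the F distribution as λ = kφ², with k = 4 treatment combinations (the usual convention for the power charts reproduced in Kirk, 1968). It uses SciPy and is a rough check, not part of the original analysis.

```python
# Rough power cross-check for the between-subject comparison of the four
# treatment combinations, assuming lambda = k * phi**2 for Kirk's chart parameter phi.
from scipy.stats import f, ncf

k, n = 4, 12                       # treatment combinations, subjects per cell
df1, df2 = k - 1, k * (n - 1)      # 3 and 44, as stated in the text
phi = 1.7
lam = k * phi ** 2                 # noncentrality parameter (about 11.6)

f_crit = f.ppf(0.95, df1, df2)     # critical F at alpha = 0.05
power = ncf.sf(f_crit, df1, df2, lam)
print(f"power = {power:.2f}")      # roughly 0.8, consistent with the value reported above
```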
5.4 The Task Environment

This study employed a range of basic generic tasks which would occur in a typical office environment. Some of these were the creation of a new memo, filing an existing memo, sending a completed memo, retrieving a new memo, and deleting an existing memo. The specific task in this study was to find a date to set up a meeting with three other co-workers in the same project team using the experimental interface (Appendix B). The use of a document based task ties in well with the notion of an icon based system as a desk top. It is also an area where previous research has been conducted and where standardized tasks can be utilized to enable comparison of results.

                                        Icon Based    Non-Icon Based
  Direct Manipulation
    a)                                      8               7
    b)                                     20.50           21.75
    c)                                      3.17            3.09
    d)                                      7               1
  Non-Direct Manipulation
    a)                                      6               6
    b)                                     20.75           20.80
    c)                                      3.58            3.25
    d)                                      1               5

a) Number of males.
b) Average age.
c) Average number of years of university studies.
d) Number of subjects who have previous computer experience.

Table 1. Demographic Data of the Subjects

5.5 The Experimental Procedures

The experimental sessions were held in a quiet room equipped with an Apple Macintosh computer. During each session, the Research Assistant was to remain in the room to assist the subject. Each subject was asked to come to three sessions. The first two sessions were held consecutively, while the third session was held a week later. In each session, the subject was asked to complete the same task using the same interface type. However, the subject would get a different data set in each session. The data set was used to determine whether the recipient of a memo was busy on the date that was chosen by the subject. There were a total of eight data sets. However, the minimum number of steps required to accomplish the task was the same for each data set.

Each session consisted of the following steps:

1. The subject was asked to identify the icons and describe their functions with respect to the task performed. Subjects assigned to the icon based interface were presented with the pictorial icons, while subjects assigned to the non-icon based interface were given the text icons. When the subject completed this task, the Research Assistant would show the correct answers. At this time, the subject could ask any questions regarding the icon set.

2. During the first session only, the subject was given a tutorial that he or she could work through with the experimental interface to which he or she was assigned. The purpose of the tutorial was solely to acquaint the subject with the operation of the mouse, the message system, and the task domain. This helped to remove some learning from the experimental session which might otherwise affect the overall performance. The tutorial gave the subject only a general guideline; the subject had to master the details himself or herself. This process was to encourage the subject to think about the system being used. The Research Assistant was available for help when the subject was working on the tutorial.

3. When the subject was ready, he or she would be given the task to be performed. Recall that the task was to find a date to set up a meeting with three other co-workers in the same project team, using the experimental interface.

4. After the subject was clear about the task domain, the Research Assistant would start up the experiment. During the experiment, the subject was not allowed to use any writing aids such as pencil and paper. Any errors made during the experiment were immediately fed back to the subject by error messages displayed on the screen. A congratulatory message would be displayed if the subject had found the earliest common date available. Otherwise, a message would be displayed indicating that the subject had not found the earliest common date and had to keep trying.

5. Upon completion of the task in the second and third sessions, the subject was required to complete a post-experiment questionnaire to solicit comments and suggestions on the experiment. This concluded the experiment, and the subject was asked not to discuss the experiment with other subjects.

6. At the end of the second session, the subject was scheduled to come back a week later for the third session.

Chapter 6: Experimental Results

Separate multivariate analyses of variance (MANOVA) were run on each dependent variable using the BMDP statistical package. The dependent variables in this study are as follows:

1. Total time taken to complete the task.
2. Total number of actions required to complete the task.
3. Speed (number of actions per second).
4. Total number of errors made in the task.
5. Error rate (percentage of incorrect actions).

A p-value of 0.05 was used as the significance level in the statistical tests. Printouts of the computer statistical tables are shown in Appendix C. These results are highlighted in Table 2 to Table 6.

6.1 Icon Effect

There are no differences between the icon based interfaces and the non-icon based interfaces with respect to any of the dependent variables at the 0.05 level. Therefore, hypotheses Ho(1)a, Ho(1)b, Ho(1)c, Ho(1)d, and Ho(1)e are not rejected. This suggests that subjects using the two icon based interfaces did not perform any better than those using the non-icon based interfaces. The results are summarized in the "Overall by Icon" row in Table 2 to Table 6.

A possible reason why such results were obtained is the difficulty of designing a set of pictorial representations that convey the appropriate meaning to the user. In this study, the text icon set displays the commands in plain English, whereas the pictorial icon set contains only graphical representations of the objects. Some of these objects must be interpreted as actions. For example, the Pen should be interpreted as the action WRITE. On the other hand, subjects who used the non-icon based interface could see the text icon WRITE on the screen. This result is consistent with the findings of Rohr and Keppel (1984), where pictorial icons were not rated as more meaningful than verbal (text) icons, and with the findings of Landauer, Galotti, and Hartwell (1983), where beginner level system users preferred names that have meaning in their own reality.

6.2 Direct Manipulation Effect

There is strong evidence showing that subjects using the two direct manipulation interfaces completed the task in a much shorter time than subjects using the two non-direct manipulation interfaces. Ho(2)a is therefore rejected. The average (see the "Overall by Manipulation" column in Table 2) is 358 seconds for the direct group and 513.7 seconds for the non-direct group (F(1,44) = 13.39, p < 0.001).

This result may be inconclusive due to the design of the interfaces. There were more physical hand movements required to complete an action in the non-direct manipulation interfaces than in the direct manipulation interfaces. This occurred only at the top level. For subjects in the direct manipulation group, an action was a continuous one, from choosing the first icon to dragging it to the second. The total physical movements included moving the hand to the first icon, clicking the mouse button, holding the button while moving the mouse to the second icon, and releasing the mouse button.
For subjects in the non-direct46manipulation group, the physical movements included hand movement to the action menuon the left, one click to make the first choice, hand movement to the object menu on theright, one click to make the second choice, hand movement to the bottom of the menus,and one click at the Happy Face icon. The click at the Happy Face icon was to tell thecomputer to perform the requested action. In the non-icon non-direct manipulationinterface, the Happy Face icon was replaced by the text icon EXECUTE. The abovecomparison shows that the non-direct manipulation interfaces required one extra step,namely the hand movement to the bottom of the menus and the click at the Happy Faceicon (EXECUTE). The data showed that an average of 52 actions were recorded at the topmenu level. This means 52 extra steps were performed in the non-direct manipulationinterfaces. The difference in total time required to complete a task between the directgroup and the non-direct group is 155.7 seconds. If this difference is due to the extra stepneeded to complete an action in the non-direct manipulation interfaces, then the extra steptook almost three seconds to complete. The average speed of the subjects in the directgroup is 0.3747 action per second, or 2.67 seconds per action. This average time is for onecomplete action which includes all hand movements. Therefore, it would be unreasonableto say that one required three seconds to complete an extra hand movement in thenon-direct manipulation interfaces when a full action required less than three seconds tocomplete. This implies that the time used to perform the extra step required on thenon-direct manipulation interfaces may be insignificant to confound the result.Unfortunately, the test data showed only how long it took a subject to complete one action.They did not capture the duration of individual hand movements. Although the time effectmay seem to be inconclusive due to the interface design, it provides a direction for futureresearch.47Ho(2)b is not rejected as there is no difference in the total number of actions (all actionsat all three levels) between the direct and the non-direct groups. The means are 122.7actions for the direct group and 123 actions for the non-direct group (see the "Overall byManipulation" column in Table 3).Ho(2)c is rejected as there is a significant effect on speed (number of actions completedper second). The direct group could invoke 0.3747 action per second compared to the0.2553 action per second for the non-direct group (F(1,44) = 75.2, p < 0.0001). This resultis shown in the "Overall by Manipulation" column in Table 4. The difference is 0.1194action per second or 7.164 actions per minute. Since speed is expressed as the ratio ofactions and time, the significant effect on speed is due to the significant effect on the totaltime taken to complete a task (i.e. Ho(2)a is rejected). This result, again, may beinconclusive due to the extra hand movement required in the non-direct manipulationinterfaces.Finally, there is no statistical significance on errors, either the total number or as apercentage of total actions. Ho(2)d and Ho(2)e are therefore not rejected. The results areshown in the "Overall by Manipulation" column in Table 5 and Table 6, respectively.6.3 Time EffectThe time effect over task 1, 2 and 3 is significant on every dependent variable.Therefore, Ho(3)a, Ho(3)b, Ho(3)c, Ho(3)d, and Ho(3)e are all rejected. 
For all subjects,the total time taken to complete a task dropped from an average of 628.2 seconds in thefirst session down to an average of 359.7 seconds in the second session and dropped againto an average of 319.5 seconds in the third session (F(2,88)= 52.79, p < 0.0001). Although48the improvement from session one to session two is statistically significant(F(1,44) = 48.94, p < 0.0001), the small improvement from session two to session three isfound to be not significant (see Table 2).In terms of total number of actions required to complete a task, the results show asignificant drop from the first time to the third time (F(2,88) = 19.27, p < 0.0001). Themeans are 147.1, 118.8, and 102.6 for the first, second, and third session, respectively (seeTable 3). In both situations, the improvement from the first session to the second and theimprovement from the second session to the third, are found to be statistically significant(F(1,44) = 11.11, p <0.001 and F(1,44) = 6.00, p <0.05, respectively).For the speed, the results (see Table 4) show first an increase from 0.2541 action persecond in the first session to 0.3518 action per second in the second session, and then aslight drop to 0.3393 action per second in the third session. The overall changes are foundto be statistically significant (F(2,88)= 116.24, p < 0.0001). Comparing each sessionseparately, the increase in speed from the first session to the second session is significant(F(1,44)= 207.46, p < 0.0001) while the change in speed from the second to the third sessionis not significant. This suggests that subjects are more efficient in completing the tasksduring subsequent sessions than the first one. It also suggests that after not using thesystem for one week, the subjects' performance remained the same as the last session.The results on the total number of errors made in completing a task show a downwardpattern (see Table 5). The average number of errors per session is 3.667 for the first session,then dropped to 1.771 for the second session, but went up to 2.063 for the third session.The overall result is significant (F(2,88) = 13.94, p < 0.0001). For individual sessions, thechange from the first session to the second is statistically significant (F(1,44) = 19.08,49p < 0.001) but the small change between session two and three is statistically notsignificant. This suggests that subjects made fewer errors when they were asked to use theinterface again, whether in a very short period of time or after a week.Finally, there is a difference in the percentage of incorrect actions made (error rate) incompleting the task (F(2,88)= 5.29, p < 0.0001). In the first session, an average of 2.45%of the actions were mistakes. Subjects improved substantially in the second session withan average of 1.451% of errors (F(1,44) = 10.08, p < 0.01). After a week when subjectsreturned for the third session, the errors went up to 2.023%, but the change is consideredstatistically not significant (see Table 6).These findings imply that there is a learning effect. Subject performance improvedsignificantly in the second session implies that the interfaces were easy to learn and use.Such a performance did not degrade after a week.6.4 Interaction EffectsOnly one interaction effect out of the 20 potential interaction effects was found to besignificant in this study. It is between the direct manipulation and trials on the dependentvariable "speed" (F(2,88) = 9.32, p < 0.001). 
This indicates that the number of actions completed per second in each session depends upon which interface type (direct manipulation versus non-direct manipulation) is used and in which experimental session. In other words, the shapes of the performance curves (number of actions invoked per second) for the direct manipulation group and the non-direct manipulation group are not identical in this experiment. For the direct manipulation group, the means are 0.30, 0.42, and 0.41 actions per second in sessions one, two, and three, respectively. For the non-direct manipulation group, the means are 0.21, 0.28, and 0.27. The effect is shown in Figure 4. The difference between the two groups from the first session to the second session is found to be statistically significant (F(1,44) = 16.59, p < 0.001), but the difference between the two groups from the second session to the third session is found to be not significant. Although the result shows significance, such differences are minimal and are considered immaterial in practice.

[Figure 4. The Interaction Effect between Direct Manipulation and Time -- number of actions per second in Sessions 1 to 3 for the direct manipulation and non-direct manipulation groups]

6.5 General Observation

One interesting observation was that subjects rarely used the online help facility. In most cases, the help facility was never used during the experiment. In a few cases, the facility was used only once or twice. This could mean that the subjects had mastered the basics during the tutorials and therefore no further help was needed during the experimental sessions.

                                Icon Based   Non-Icon Based   Overall by Manipulation
  Direct Manipulation
    a)                             495.1          564.9               358.0
    b)                             281.0          304.2
    c)                             246.1          256.5
  Non-Direct Manipulation
    a)                             791.1          661.9               513.7
    b)                             410.6          443.1
    c)                             428.8          346.5
  Overall by Icon                  442.1          429.5

  Overall by Trials:   Session 1 = 628.2   Session 2 = 359.7   Session 3 = 319.5

a) Overall time in seconds for Session 1.
b) Overall time in seconds for Session 2.
c) Overall time in seconds for Session 3.

Table 2. Time Taken to Complete the Task by Icon, Manipulation, and Time Effects

                                Icon Based   Non-Icon Based   Overall by Manipulation
  Direct Manipulation
    a)                             148.1          146.8               122.7
    b)                             116.3          123.6
    c)                             104.2           96.9
  Non-Direct Manipulation
    a)                             162.8          130.8               123.0
    b)                             113.5          121.7
    c)                             118.0           91.3
  Overall by Icon                  127.2          118.5

  Overall by Trials:   Session 1 = 147.1   Session 2 = 118.8   Session 3 = 102.6

a) Overall number of actions taken in Session 1.
b) Overall number of actions taken in Session 2.
c) Overall number of actions taken in Session 3.

Table 3. Number of Actions Required to Complete the Task by Icon, Manipulation, and Time Effects

                                Icon Based   Non-Icon Based   Overall by Manipulation
  Direct Manipulation
    a)                             0.3171         0.2760              0.3747
    b)                             0.4297         0.4139
    c)                             0.4289         0.3829
  Non-Direct Manipulation
    a)                             0.2121         0.2113              0.2553
    b)                             0.2873         0.2762
    c)                             0.2795         0.2658
  Overall by Icon                  0.3257         0.3043

  Overall by Trials:   Session 1 = 0.2541   Session 2 = 0.3518   Session 3 = 0.3393

a) Overall number of actions per second in Session 1.
b) Overall number of actions per second in Session 2.
c) Overall number of actions per second in Session 3.

Table 4. Number of Actions Per Second by Icon, Manipulation, and Time Effects

                                Icon Based   Non-Icon Based   Overall by Manipulation
  Direct Manipulation
    a)                             3.333          3.500               2.292
    b)                             1.417          2.000
    c)                             1.750          1.750
  Non-Direct Manipulation
    a)                             4.167          3.667               2.708
    b)                             2.083          1.583
    c)                             2.250          2.500
  Overall by Icon                  2.500          2.500

  Overall by Trials:   Session 1 = 3.667   Session 2 = 1.771   Session 3 = 2.063

a) Overall number of errors made in Session 1.
b) Overall number of errors made in Session 2.
c) Overall number of errors made in Session 3.

Table 5. Number of Errors Made in the Task by Icon, Manipulation, and Time Effects
                                Icon Based   Non-Icon Based   Overall by Manipulation
  Direct Manipulation
    a)                             2.286          2.134               1.727
    b)                             1.228          1.396
    c)                             1.641          1.679
  Non-Direct Manipulation
    a)                             2.604          2.775               2.222
    b)                             1.866          1.316
    c)                             1.882          2.892
  Overall by Icon                  1.918          2.032

  Overall by Trials:   Session 1 = 2.450   Session 2 = 1.451   Session 3 = 2.023

a) Overall percentage of errors made in Session 1.
b) Overall percentage of errors made in Session 2.
c) Overall percentage of errors made in Session 3.

Table 6. Percentage of Errors by Icon, Manipulation, and Time Effects

Chapter 7: Conclusions

In this study, four types of human-computer interfaces -- icon based direct manipulation, non-icon based direct manipulation, icon based non-direct manipulation, and non-icon based non-direct manipulation -- were tested using a simple office task in a laboratory environment. Twelve subjects were randomly assigned to each of the four interface types from the 48 recruits. Each subject was required to complete three experimental sessions in two separate time periods.

7.1 Discussion of the Study

The results of this study highlighted three major points. First, the current study showed that there are no significant differences in performance between icon based interfaces and non-icon based interfaces. This is consistent with the findings of previous studies, in which researchers have not been able to show the superiority of icon based interfaces over non-icon based interfaces (see Chapter Two). Further research is therefore necessary to find out in what ways, if any, icons can be used to improve user performance, as well as how icons can contribute to the development of a successful information system. Perhaps there are other factors, such as individual differences or past computer experience, that must be considered in order to determine whether icon based interfaces are suitable for selected groups of users. These factors can be considered in future studies.

Second, this study found evidence showing that direct manipulation interfaces are better than non-direct manipulation interfaces in terms of performance. This suggests that a revisit of traditional command or menu based interfaces is necessary. In designing new computer system interfaces, designers should utilize the user's direct involvement in the model world rather than working through some intermediary. This can be achieved by 1) making the use of direct manipulation devices such as the mouse and light pen part of the system requirement, and 2) redesigning output languages so that the output expression will be shown immediately on the screen and can be used as a component of an input expression.

Finally, this study demonstrated that subject performance improved significantly over a short period of time, and that such performance did not degrade after a week of not using the system. This effect can be explained by Borgman's (1982) mental model development process. She suggests that the user starts with a simple mental model which is a representation of the relationships within the system. The person goes through the model to produce a mental simulation of the system's actions. From the simulation, ways of dealing with the system are developed. The strong significance of the time effect suggests that people learn to work more efficiently over time.

7.2 Limitation of the Study

One key limitation was identified in the study. It was related to the interface design. The current design requires one extra step to complete an action in the non-direct manipulation interfaces.
Recall that an action in the non-direct manipulation interfaces consisted of selecting an item from the action menu on the left and an item from the object menu on the right, and then moving the pointing device to the bottom and clicking on the Happy Face icon (EXECUTE). The last action is an extra step which is not found in the direct manipulation interfaces. This extra step could confound the results. However, the present study did not capture the effect, if any.

7.3 Directions for Future Research

Directions for future research emerge from the findings of this study as discussed above. Three key suggestions are described here.

The first suggestion is related to the interface design. The findings of this study show that the direct manipulation effect on the time variable may be inconclusive due to the extra step required in the non-direct manipulation interfaces. For future research, it is necessary to redesign the two non-direct manipulation interfaces and eliminate the extra step required to complete an action. An alternative is to capture each individual hand movement on the non-direct manipulation interfaces. This would allow the experimenter to subtract the time for the extra step from the total time required to complete the action.

The second suggestion is the development of a training method which will help the user to develop or discover the mental model in his or her mind. In conjunction with this is the measurement of subjects' prior computer experience, so that the instruction set can be designed to match each individual's past experience. Sein and Bostrom (1987) suggest that a training program can influence the level of a user's knowledge or understanding of the system and his or her motivation to use it. No training program, however comprehensive, can give a novice user a complete understanding of a system. Therefore, the objectives of a novice training program are only to 1) help the novice user to develop an initial understanding of the new system and 2) leave the user with the motivation to use it. The authors concluded that there is a need to tailor instructional aids to suit individual differences in a computer learning environment. In a related article, Bostrom, Olfman, and Sein (1987) suggest that in any experiment that involves learning, prior experience could determine how well subjects perform in the experimental task. This implies that carefully designed training programs based on the user's past computer experience may have a strong influence on their level of understanding of the new system and, subsequently, affect their performance.

The final suggestion for future research is the issue of recall and retention of knowledge pertaining to the operation of a specific system. As computing power becomes more widely distributed and is more likely to be utilized on an irregular basis by large groups of casual users, it is necessary to investigate an individual's ability to recall a basic set of system functions after not having utilized the system for an extended period of time. The current study's one week elapsed time may be too short to provide an accurate assessment of long term recall ability. A longer period of non-use of the system, for example four to six months, would be more appropriate to measure the degradation in performance.

In summary, the present study provides empirical evidence to show that direct manipulation interfaces are better than non-direct manipulation interfaces in terms of user performance.
Although this study does not show that icon based interfaces are superior tonon-icon based interfaces, improvement in future research such as those mentioned abovemay provide new insights into the effect of using icons in human-computer interfaces.60ReferencesArnheim, R., 1969. Visual Thinking, California: University of California Press.Benbasat, I. and A.S. Dexter, 1985. An experimental evaluation of graphical andcolor-enhanced information presentation. Management Science, Vol. 31, No. 11, pp.1348-1364.Benbasat, I. and Y. Wand, 1984. A structured approach to designing human-computerdialogues. International Journal of Man Machine Studies, Vol. 21, August, pp.105-126.Benbasat, I., A.S. Dexter, and P. Todd, 1986. The influence of color and graphicalinformation presentation in a managerial decision simulation. Human-ComputerInteraction, Vol. 2, pp. 65-92.Borgman, C.L., 1982. Mental models: ways of looking at a system. ASIS Bulletin, Dec.,pp. 38-39.Bostrom, R.P., L. Olfman, and M.K. Sein, 1987. End user computing: a framework toinvestigate the training/learning process. In Human Factors in MIS, New Jersey:Ablex Publishing Corporation.Bruner, J., 1966. Toward a Theory of Instruction, Massachusetts: Harvard UniversityPress.Campbell, D.T., and J.C. Stanley, 1963. Experimental and Quasi-Experimental Designs forResearch, Chicago: Rand McNally & Co.Carlson, E.D., B.F. Grace, and J.A. Sutton, 1977. Case studies of end user requirementsfor interactive problem solving systems. MIS Quarterly, Vol. 1, No. 1, pp. 51-63.Carroll, J.M. and J.C. Thomas, 1982. Metaphor and the cognitive representation ofcomputing systems. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 12,No. 12, pp. 143-153.Carroll, J.M., J.C. Thomas, and A. Malhotra, 1980. Presentation and representation indesign problem-solving. British Journal of Psychology, 71, pp. 143-153.Emory, C.W., 1976. Business Research Methods, Illinois: Richard D. Irwin Inc.Foss, D.J., M.B. Rosson, and P. L. Smith, 1982. Reducing manual labor: an experimentalanalysis of learning aids for a text editor. Human Factors in Computer SystemsProceedings, Washington, D.C.: National Bureau of Standards.Gaines, B.R., 1981. The technology of INTERACTION - dialogue programming rules.International Journal of Man-Machine Studies, 14, pp. 20-23.61Gittins, D.T., 1986. Icon-based human-computer interaction. International Journal ofMan-Machine Studies, 24, pp. 519-543.Gittins, D.T., R.L.Winder, and H.E. Bez, 1984. An icon-driven, end-user interface toUNIX. International Journal of Man-machine Studies, 21, pp. 451-461.Guastello, S.J., M. Traut, and G. Korienek, 1989. Verbal versus pictorial representationsof objects in a human-computer interface. International Journal of Man-MachineStudies, 31, pp. 99-120.Halasz, F. and T.P. Moran, 1981. Analogy considered harmful. Computer HumanInteraction '81 Proceedings, pp. 383-386.Hartson, H.R. and D. Hix, 1989. Human-computer interface development: concepts andsystems for its management. ACM Computing Survey, Vol. 21, No. 1, pp. 5-92.Hemenway, K., 1981. Psychological issues in the use of icons in command menus.Computer Human Interaction '81 Proceedings, pp. 20-23.Herot, C.F., 1984. Graphical user interfaces. In Vassiliou, Y., Ed., Human Factors andInteractive Computer Systems, New Jersey: Ablex Publishing, pp. 83-104.Houghton Jr., R., 1986. Designing user interfaces: a key to system success. Journal ofInformation Systems Management, Vol. 3, No. 3, pp. 56-62.Hutchins, E.L., J.D. Hollan, and D.A. Norman, 1986. 
Direct manipulation interfaces. InNorman, D.A. and S.W. Draper, Eds., User Centered System Design: NewPerspectives on Human-Computer Interaction, New Jersey: Lawrence ErlbaumAssociates.Ives, B., 1982. Graphical user interfaces for business information systems. MISQuarterly, Special Issue, Dec., pp. 15-42.Keen, P., 1979. DSS and the marginal economics of effort. CISR - Sloan Working Paper,No. 1089-79, October.Kent, E.W., 1981. The Brains of Men and Machines, New York: McGraw-Hill.Kirk, R.E., 1968. Experimental Design: Procedures for the Behavioral Sciences, California:Brooks Cole.Kuo, F., 1988. An object-oriented approach to the design of a mail system for aheterogeneous environment. Information and Management, Vol. 15, No. 3, pp.173-182.Landauer, T.K., K.M. Galotti, and S. Hartwell, 1983. Natural command names and initiallearning: a study of text-editing terms. Communications of the ACM, 26, pp. 495-503.Lodding, K.N., 1983. Iconic interfacing. IEEE Computer Graphics and Applications,Mar/Apr, pp. 11-20.62Marchionini, G., 1989. Making the transition from print to electronic encyclopedias:adaptation of mental models. International Journal of Man-Machine Studies, 30, pp.591-618.Martin, J., 1973. Design of Man-Computer Dialogues, New York: Prentice Hall.Miller, J., 1956. The magical number seven, plus or minus two: some limits on ourcapacity for processing information. The Psychological Review, Vol. 63, No. 2, Mar.,pp. 81-97.Montessori, M., 1964. The Montessori Method, New York: Schocken.Muter, P. and C. Mayson, 1986. The role of graphics in item selection from menus.Behaviour and Information Technology, Vol. 5, No. 1, pp. 89-95.Myers, B.A., 1984. The user interface for SAPPHIRE. IEEE Computer Graphics andApplications, 4, pp. 13-23.Paivio, A., 1971. Imagery and Verbal Processes, New York: Holt, Rinehart, and Winston.Roberts. T.L., 1980. Evaluation of computer text editors. Ph.D. Thesis, StanfordUniversity, University Microfilm #8011699.Robey, D. and W. Taggart, 1982. Human information processing in information anddecision support systems. MIS Quarterly , Vol. 6, No. 2, pp. 61-73.Rohr, G. and E. Keppel, 1984. Iconic interfaces: where to use and how to construct. InHendrick, H.W. and 0. Brown Jr., Eds., Human Factors in Organizational Design andManagement, North-Holland: Elsevier Science Publishers.Schild, W., L.R. Power, and M. Karnaugh, 1980. Pictureworlds: A Concept for FutureOffice Systems, New York: IBM Thomas J. Watson Research Centre, RC 8384(#36518).Sears, J.A., 1982. Business modelling systems: comparing performance and identifyinglearning complexities. Ph.D. Thesis, University of Arizona, University Microfilm#8227369.Sein, M.K. and R.P. Bostrom, 1987. The Influence of Individual Differences in Determiningthe Effectiveness of Conceptual Models in Training Novice Users, Indiana: IndianaUniversity School of Business, and Institute for Study of Developmental Disabilities.Shepard, R.N., 1967. Recognition memory for words, sentences and pictures. Journal ofVerbal Learning and Verbal Behavior, Vol. 6, No. 2, pp. 156-163.Shneiderman, B., 1982. The future of interactive systems and the emergence of directmanipulation. Behaviour and Information Technology, Vol. 1, No. 3, pp. 237-256.Shneiderman, B., 1983. Direct manipulation: a step beyond programming languages.IEEE Computer, August, pp. 57-69.63Shneiderman, B., 1987.^Designing the user Interface: Strategies for EffectiveHuman-Computer Interactions, Massachusetts: Addison-Wesley.Smith, D.C., C. Irby, R. Kimball, B. Verplank, and E. Harslem, 1982. 
Designing the Staruser interface. Byte, Vol. 4, No. 4, pp. 289-308.Sutherland, I.B., 1963. Sketchpad, a man-machine graphical communication system.Proceedings of AFIPS Conference, Vol. 23, pp. 329-346.Te'eni, D., 1990.^Direct manipulation as a source of cognitive feedback: ahuman-computer experiment with a judgement task.^Internal Journal ofMan-Machine Studies, 33, pp. 453-466.Thimbleby, H., 1983.^Guidelines for 'manipulative' text editing. Behaviour andInformation Technology, Vol. 2, No. 2, pp. 127-161.Tietalman, W.E., 1985. A tour through CEDAR. IEEE Transactions on SoftwareEngineering, V ol. 11, No. 3, pp. 285-382.Veer, G.C. Van der, 1989. Individual differences and the user interfaces. Ergonomics, Vol.32, No. II, pp. 1431-1449.Wertheimer, M., 1959. Productive Thinking, New York: Harper and Row.Whiteside, J., S. Jones, P. Levy, and D. Wixon, 1985. User performance with command,menu, and iconic interfaces. Computer-Human Interaction '85 Proceedings, April, pp.185-191.Witten, I.H. and S. Greenberg, 1983. User Interfaces for Office Systems, Man-MachineSystems Laboratory, Department of Computer Science, University of Calgary.64Appendix A: Screen Displays for the Four Interfaces65Figures Al and A2 are both direct manipulation interfaces. The difference is theuse of icon set: Figure Al uses the icon version and Figure A2 uses the non-icon(text) version. The design simulates a desk top which could be found in a typicaloffice. The desk top allows the following movements (selections in brackets are fornon-icon based interface):Task^MovementCreate a memo^Move Pen (PEN) to Memo (MEMO PAD)Copy a memo^Move Memo (MEMO)* to Copier (COPIER)File a memo^Move Memo (MEMO)* to File Folder (FILE FOLDER)Send a memo^Move Memo (MEMO)* to Mailbox (OUT TRAY)Receive a memo^Move Key (KEY) to In Tray (IN TRAY)Open the file^Move Key (KEY) to File Folder (FILE FOLDER)Delete a memo^Move Memo (MEMO)* to Trash Can (TRASH CAN)Get help^Move Key (KEY) to Question Mark (HELP)* For the icon based interfaces the memo pad is represented by an icon with a whitebackground A memo with contents, either it was created by the user or sent bysomeone, is represented by an icon with a dark background. For the non-icon baseinterfaces, the two were represented by the text icons MEMO PAD and MEMO,respectively.66Figures A3 and A4 are both non-direct manipulation interfaces. The difference,again, is the use of icon set: Figure A3 uses the icon version and Figure A4 uses thenon-icon (text) version. These two interfaces are designed as menu based. 
Theyrequired one extra step to activate any action: click at the Happy Face or Execute.The allowable movements are (selections in brackets are for the non-icon basedinterface):Task^Selection from Menu 1^Selection from Menu 2Create a memoCopy a memoFile a memoSend a memoReceive a memoOpen the fileDelete a memoGet helpPen (WRITE)Copier (COPY)Folder with Memo (STORE)Mail Box (SEND)Key (OPEN)Key (OPEN)Trash Can (DESTROY)Key (OPEN)Memo (MEMO PAD)Memo (MEMO)*Memo (MEMO)*Memo (MEMO)*In Tray (IN TRAY)File Folder (FILE FOLDER)Memo (MEMO)*Question Mark (HELP)67FROM:TO: 9Figure Al: Icon Based Direct Manipulation68 FILEFIBERI IN!TRAYTRI1SHCANIMMOPRO COPIERGUTTROY KEYHELPFigure A2: Non-Icon Based Direct Manipulation69• •Figure A3: Icon Based Non-Direct Manipulation70Menu 10 Write^0 Open0 Send^°Destroy°Copy^0 StoreMenu 20 In Tray^°Memo Pad0 File Folder°HelpEn ecuteFigure A4: Non-Icon Based Non-Direct Manipulation71Appendix B: Task Descriptions72Task #1 DescriptionYour manager has just asked you to be the leader of a new project. He alsoassigned three other team members, MARY, PHIL, and HELEN, to work on this projectwith you.Your task is to set up a first meeting date with your team members. Since theproject is very important, you want to meet with them AS SOON AS POSSIBLE.You will be using a computerized electronic mail system to communicate with yourteam members. You will not be given any other aids such as pencil and paper. Theelectronic mail system will allow you to send and receive messages, store them, changethem, etc.By the way, be aware that your MANAGER may send you messages from time totime. Of course, you should pay careful attention to these messages.When you have found a common meeting date:1) You will get a congratulatory message if the date you found is the EARLIESTCOMMON DATE AVAILABLE.2) Otherwise, you will get a message indicating that you have not found the earliestcommon date and to keep trying.In performing this task your objectives are On order of importance):1) to find the earliest meeting date,2) to accomplish this task in the shortest time possible.Remember that there will be additional cash prizes awarded (up to $25) based onyour performance (that is, those who have found the earliest meeting date will beranked according to how quickly they have accomplished the task).*** Please remember that you can only have one MEMO (symbol in black) on thescreen at one time. Similarly, you can only have one COPY of a MEMO on thescreen at one time.73WIWI ISeveral weeks ago, your manager asked you to be the leader of a new project. Healso assigned three other employees, MARY, PHIL AND HELEN, to work on thisproject with you.At your previous meeting, you assigned each of your team members a specific taskto perform. Now you want to set up another meeting with your team members anddiscuss the progress. Since the project is very important, you want to meet with themAS SOON AS POSSIBLE.You will be using a computerized electronic mail system to communicate with yourteam members. You will not be given any other aids such as pencil and paper. Theelectronic mail system will allow you to send and receive messages, store them, changethem, etc.By the way, be aware that your MANAGER may send you messages from time totime. 
Of course, you should pay careful attention to these messages.When you have found a common meeting date:1) You will get a congratulatory message if the date you found is the EARLIESTCOMMON DATE AVAILABLE.2) Otherwise, you will get a message indicating that you have not found the earliestcommon date and to keep trying.In performing this task your objectives are (in order of importance):1) to find the earliest meeting date,2) to accomplish this task in the shortest time possible.Remember that there will be additional cash prizes awarded (up to $25) based onyour performance (that is, those who have found the earliest meeting date will beranked according to how quickly they have accomplished the task).*** Please remember that you can only have one MEMO (symbol in black) on thescreen at one time. Similarly, you can only have one COPY of a MEMO on thescreen at one time.74Task #3 DescriptionTwo months ago, your manager asked you to be the leader of a new project. Healso assigned three other employees, Mary, Phil, and Helen, to work on this projectwith you. However, Phil has been assigned to another project and his replacement isJohn. YOUR TEAM NOW CONSISTS OF MARY, JOHN, AND HELEN.You have met with your team members several times before to discuss the project.Everything seems to be going smoothly and now is the time to conclude the project.You are now required to set up one last meeting with your team members to discussthe final report. Since the report must be submitted to your manager before he goeson holiday, you want to meet with your team members AS SOON AS POSSIBLE.You will be using a computerized electronic mail system to communicate with yourteam members. You will not be given any other aids such as pencil and paper. Theelectronic mail system will allow you to send and receive messages, store them, changethem, etc.By the way, be aware that your MANAGER may send you messages from time totime. Of course, you should pay careful attention to these messages.When you have found a common meeting date:1) You will get a congratulatory message if the date you found is the EARLIESTCOMMON DATE AVAILABLE.2) Otherwise, you will get a message indicating that you have not found the earliestcommon date and to keep trying.In performing this task your objectives are (in order of importance):1) to find the earliest meeting date,2) to accomplish this task in the shortest time possible.Remember that there will be additional cash prizes awarded (up to $25) based onyour performance (that is, those who have found the earliest meeting date will beranked according to how quickly they have accomplished the task).*** Please remember that you can only have one MEMO (symbol in black) on thescreen at one time. Similarly, you can only have one COPY of a MEMO on thescreen at one time.75Appendix C: Computer Printouts of MANOVA Tables76All statistical analyses were performed using the BMDP statistical package. Thefive computer printouts of the Multivariate Analysis of Variance Tables are includedin the following pages. Each table shows the results of the three main effects, as wellas interaction effects. Within cell subject is n = 12 with a grand total of N= 48. Ineach table, main effects are represented by the following symbols:I^Icon EffectM^Manipulation EffectT^Trial (Time) Effect77WITHIN EFFECT: 08$: WITHIN CASE MEANEFFECT^VARIATE^STATISTICOVALL: GRAND MEANDEP_VARF OFSS. 0.273496E+08MS* 0.273496E+08 419.72 1, 44 0.0000I:^ICONDEP_VARSS* 5724.81MS= 5724.81 0.09 1, 44 0.7683M: MANIPDEPVARSS* 872688.MS= 872688. 
13.39 1, 44 0.0007IMDEP_VARSS* 79752.1MS* 79752.1 1.22 1, 44 0.2746ERRORPEP VARSS* 2867078.5MS* 65160.875WITHIN EFFECT: T: TRIALEFFECTTVARIATE^STATISTICDEP_VARF DF^TSQ*^74.9834^WCP 55* 0.270495E+0736.64 2, 43 0.0000WCP MS*^0.135247E+07 52.79 2, 88 0.0000GREENHOUSE-GEISSER ADJ.^OF 52.79 1.49, 65.60 0.0000(T) XHUYNH-FELDT ADJUSTED OF(I:^ICON)52.79 1.64, 72.01 0.0000DEP_VAR^TS0=^2.26367MCP SS= 29644.61.11 2, 43 0.3401WCP M5*^14822.3 0.58 2, 88 0.5628GREENHOUSE-GEISSER ADJ.^DF 0.58 1.49, 65.60 0.5149(T) XHUYNH-FELDT ADJUSTED DF(M:^MANIP)0.58 1.64, 72.01 0.5300PEP VAR^TSQ*^.747507WCP SS. 30002.90.37 2, 43 0.6961MCP MS=^15001.5 0.59 2, 88 0.5590GREENHOUSE-GEISSER ADJ.^DF 0.59 1.49, 65.60 0.5116(T) XHUYNH-FELDT ADJUSTED DF(IM)0.59 1.64, 72.01 0.5265DEP VAR^.^TSQ*^2.35205WCP 55* 65144.41.15 2, 45 0.3264MCP MS*^32572.2 1.27 2, 88 0.2856GREENHOUSE-GEISSER ADJ.^DF 1.27 1.49, 65.60 0.2797HUYNH-FELOT ADJUSTED Of 1.27 1.64. 72.01 0.2820ERRORDEP VAR- MCP 55*^2254562.0WCP MS= 25620.023661 EPSILON-^0.74548H-F EPSILON=^0.81825Table DI: MANOVA - Time Taken to Complete the Task78WITHIN EFFECT: OBS: WITHIN CASE MEANEFFECT^VARIATE^STATISTICOVALL: GRAND MEANDEP_VARF OFSS* 0.217292E+07MS= 0.217292E+07 697.38 1, 44 0.0000I:^ICONDEP_VAR SS= 2678.06MS= 2678.06 0.86 1 44 0.3589M: MANIPDEP_VAR SS= 5.06250MS= 5.06250 0.00 1 44 0.9680IMDEP_VAR SS= 2425.56MS= 2425.56 0.78 1 44 0.3824ERROR DEP_VAR SS= 137096.64MS= 3115.6327WITHIN EFFECT:^T: TRIALEFFECT^VARIATE STATISTIC F OFTDEP_VAR^TSG=^48.9149WCP SS= 48807.4WCP MS=^24403.7GREENHOUSE-GEISSER ADJ.HUYNH-FELDT ADJUSTED DF OF23.9019,2719.2719.272,2,1.75,1.95,438877.2285.660.00000.00000.00000.0000(T)^X (I:^ICON)DEP_VAR^TSG=^3.56125WCP SS■ 4802.67 1.74 2, 43 0.1876WCP MS=^2401.33 1.90 2, 88 0.1563GREENHOUSE-GEISSER ADJ. OF 1.90 1.75, 77.22 0.1619(T)^XHUYNH-FELDT ADJUSTED OF(M: MANIP)1.90 1.95, 85.66 0.1575DEP VAR^TIM=^.327724WCP SS. 271.500 0.16 2, 43 0.8525WCP MS=^135.750 0.11 2, 88 0.8985GREENHOUSE-GEISSER ADJ. OF 0.11 1.75, 77.22 0.8742(T)^XHUYNH-FELDT ADJUSTED OF(IM)0.11 1.95, 85.66 0.8937DEP VAR^ISO=^.909323WCP SS= 1544.67 0.44 2, 43 0.6442WCP MS=^772.333 0.61 2, 88 0.5457GREENHOUSE-GEISSER ADJ. DF 0.61 1.75, 77.22 0.5254HUYNH-FELDT ADJUSTED OF 0.61 1.95, 85.66 0.5415ERROR DEP VARWCP SS=^111457.78WCP MS= 1266.5657GGI EPSILON.^0.87749H-F EPSILON=^0.97336Table D2: MANOVA - Number of Actions Required to Complete the Task79WITHIN EFFECT: 08S: WITHIN CASE MEANEFFECT^VARIATE^STATISTICOVALL: GRAND MEANDEP_VARF OFSS= 14.2921MS= 14.2921 2094.35 1, 44 0.0000I:^ICONDEP_VAR SS= 0.165047E-01MS= 0.165047E-01 2.42 1 44 0.1271.M: MANIPDEP_VAR SS= .513172MS= .513172 75.20 1, 44 0.0000IMDEP_VAR SS. 0.598176E-02MS= 0.598176E-02 0.88 1. 44 0.3543ERROR DEP_VAR SS= .30026227MS= 0.68241425E-02WITHIN EFFECT:^T: TRIALEFFECT^VARIATE STATISTIC F OFTDEP_VAR^TSQ=^253.853^WCP SS■ .271111 124.04 2, 43 0.0000WCP MS.^.135555 116.24 2, 88 0.0000GREENHOUSE-GEISSER ADJ.^OF 116.24 1.99, 87.38 0.0000(T)^XHUYNH-FELOT ADJUSTED DF(I:^ICON)116.24 2.00, 88.00 0.0000DEP_VAR^ISO=^1.27044WCP SS= 0.160622E-02 0.62 2, 43 0.5423WCP MS=^0.803111E-03 0.69 2, 88 0.5049GREENHOUSE-GEISSER ADJ. OF 0.69 1.99, 87.38 0.5040(T)^XHUYNH-FELOT ADJUSTED Of(M:^MANIP)0.69 2.00, 88.00 0.5049DEP_VAR^TSQ=^20.3521WCP SS= 0.217350E-01 9.94 2, 43 0.0003WCP MS.^0.108675E-01 9.32 2, 88 0.0002GREENHOUSE-GEISSER ADJ. 
Between-subjects effects              F        df       p
  Grand mean                      2094.35    1, 44   0.0000
  I (Icon)                           2.42    1, 44   0.1271
  M (Manipulation)                  75.20    1, 44   0.0000
  I x M                              0.88    1, 44   0.3543
Within-subjects effects
  T (Trial)                        116.24    2, 88   0.0000
  T x I                              0.69    2, 88   0.5049
  T x M                              9.32    2, 88   0.0002
  T x I x M                          0.90    2, 88   0.4122
  Greenhouse-Geisser epsilon = 0.99293; Huynh-Feldt epsilon = 1.00000

Table D3: MANOVA - Number of Actions Per Second

Between-subjects effects              F        df       p
  Grand mean                        90.65    1, 44   0.0000
  I (Icon)                           0.00    1, 44   1.0000
  M (Manipulation)                   0.63    1, 44   0.4318
  I x M                              0.23    1, 44   0.6364
Within-subjects effects
  T (Trial)                         13.94    2, 88   0.0000
  T x I                              0.08    2, 88   0.9273
  T x M                              0.23    2, 88   0.7978
  T x I x M                          0.39    2, 88   0.6788
  Greenhouse-Geisser epsilon = 0.86852; Huynh-Feldt epsilon = 0.96275

Table D4: MANOVA - Number of Errors Made in the Task

Between-subjects effects              F        df       p
  Grand mean                       101.96    1, 44   0.0000
  I (Icon)                           0.09    1, 44   0.7718
  M (Manipulation)                   1.60    1, 44   0.2123
  I x M                              0.06    1, 44   0.8064
Within-subjects effects
  T (Trial)                          5.29    2, 88   0.0067
  T x I                              0.72    2, 88   0.4908
  T x M                              0.27    2, 88   0.7672
  T x I x M                          0.96    2, 88   0.3873
  Greenhouse-Geisser epsilon = 0.99629; Huynh-Feldt epsilon = 1.00000

Table D5: MANOVA - Percentage of Errors
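For readers without access to BMDP, a roughly comparable analysis of the same 2 x 2 (between) x 3 (within) layout can be set up with a linear mixed model. The sketch below uses synthetic placeholder data and generic column names (subject, icon, manip, trial, time); it illustrates only the structure of the design, not the original data, BMDP procedure, or results.

# Minimal sketch of a mixed-model analysis of the 2 x 2 (between) x 3 (within)
# design: 12 subjects per cell, each measured on three trials. The data here
# are synthetic placeholders; only the layout mirrors the experiment.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows, subject = [], 0
for icon in ("icon", "non-icon"):
    for manip in ("direct", "non-direct"):
        for _ in range(12):                      # 12 subjects per cell
            subject += 1
            base = rng.normal(0.0, 50.0)         # subject-level variability
            for trial in (1, 2, 3):              # three repeated trials
                rows.append({"subject": subject, "icon": icon, "manip": manip,
                             "trial": trial,
                             "time": 600.0 + base + rng.normal(0.0, 100.0)})
data = pd.DataFrame(rows)                        # 48 subjects x 3 trials = 144 rows

# Random intercept per subject captures the repeated-measures grouping; the
# fixed effects give the Icon, Manipulation, and Trial terms and their
# interactions (an approximation, not BMDP's MANOVA with epsilon corrections).
model = smf.mixedlm("time ~ C(icon) * C(manip) * C(trial)",
                    data=data, groups=data["subject"])
print(model.fit().summary())

A dedicated repeated-measures ANOVA routine with Greenhouse-Geisser and Huynh-Feldt corrections would stay closer to the printouts summarized above; the mixed model is simply a compact modern stand-in for checking the design.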
