Visualization of Multivariate Data Using Preattentive Processing

by

CHRISTOPHER G. HEALEY
B.Math, University of Waterloo, 1990

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF COMPUTER SCIENCE

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
October, 1992
© Christopher G. Healey, 1992

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver, Canada

Abstract

A new method for designing multivariate data visualization tools is presented. Multivariate data visualization involves representation of data elements with multiple dimensions in a low-dimensional environment, such as a computer screen or printed media. Our tools are designed to allow users to perform simple tasks like estimation, target detection, and detection of data boundaries rapidly and accurately. These techniques could be used for large datasets where more traditional techniques do not work well, or for time-sensitive applications that require rapid understanding and informative data displays.

Our design technique is based on principles arising in an area of cognitive psychology called preattentive processing. Preattentive processing studies visual features that are “preattentively” detected by the human visual system. Viewers do not have to focus their attention on particular regions of an image to determine whether elements with certain features are present or absent. Examples of preattentive features include colour, orientation, intensity, size, shape, curvature, and line length. Because this ability is part of the low-level human visual system, detection is performed very rapidly, almost certainly using a large degree of parallelism. In this thesis we investigate the hypothesis that these features can be used to effectively represent multivariate data elements. Visualization tools that use this technique will allow users to perform rapid and accurate visual processing of their data displays.

We chose to investigate two known preattentive features, colour and orientation. The particular question investigated is whether rapid and accurate estimation is possible using these preattentive features. Experiments that simulated displays using our preattentive visualization tool were run. The experiments used data similar to that which occurred in a set of salmon migration studies. This choice was made to investigate the likelihood of our techniques being relevant to real-world problems. Analysis of the results of the experiments showed that rapid and accurate estimation is possible with both colour and orientation. A second question, whether interaction occurs between the two features, was answered negatively.
Additional information about exposure durations and feature and data type interactions was also discovered.

Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
2 Related Work
    Standard Visualization Tools
    Volume Visualization
    Flow Visualization
    Multivariate Data Visualization
    Visual Interactive Simulation
    Future Directions
3 Preattentive Processing
    Feature Integration Theory
    Texton Theory
    Similarity Theory
    Three-Dimensional Icons
    Interference Experiments
    Iconographic Displays
4 Salmon Migration Simulations
    Standard Visualization Tools
5 Psychological Experiments
    Task Selection
    Experiment Design
    Colour Selection
    Experiment Procedure
6 Analysis of Results
    Estimation Ability
    Feature Preference
    Data Type Preference
    Feature Interference
    Feature and Data Type Interaction
    Exposure Duration Experiments
7 Conclusions
8 Future Work
9 Bibliography
10 Appendix A
    Discrimination Experiment Results
11 Appendix B
    Exposure Duration Experiment Results

List of Tables

Table 5.1: Experiment Overview
Table 5.2: Monitor-Displayable Colours
Table 6.1: Combined Landfall/Colour Results
Table 6.2: Combined Landfall/Orientation Results
Table 6.3: Combined Stream Function/Colour Results
Table 6.4: Combined Stream Function/Orientation Results
Table 6.5: t-test of Colour and Orientation Trial Error Rates
Table 6.6: t-test of Landfall and Stream Function Trial Error Rates
Table 6.7: t-test of Control and Experiment Colour Trial Error Rates
Table 6.8: t-test of Control and Experiment Orientation Trial Error Rates
Table 6.9: F-test of Feature and Data Type Influence on Error Rates
Table 6.10: Exposure Duration Estimation Error Rates
Table 6.11: t-test of Control and Experiment Trial Exposure Error Rates
Table 10.1: Subject 1 Discrimination Results
Table 10.2: Subject 2 Discrimination Results
Table 10.3: Subject 3 Discrimination Results
Table 10.4: Subject 4 Discrimination Results
Table 10.5: Combined Subject Discrimination Results
Table 11.1: Subject 1 Exposure Duration Results
Table 11.2: Subject 2 Exposure Duration Results
Table 11.3: Subject 3 Exposure Duration Results
Table 11.4: Subject 4 Exposure Duration Results
Table 11.5: Subject 5 Exposure Duration Results
Table 11.6: Combined Subject Exposure Duration Results

List of Figures

Figure 2.1: Coherency Visualization
Figure 3.1: Target Detection
Figure 3.2: Boundary Detection
Figure 3.3: Feature Map From Early Vision
Figure 3.4: Textons
Figure 3.5: N-N Similarity
Figure 3.6: Three-Dimensional Icons
Figure 3.7: Hue and Brightness Segregation
Figure 3.8: Form and Hue Segregation
Figure 3.9: “Stick-Men” Icons
Figure 3.10: Chernoff Faces
Figure 4.1: British Columbia Coast
Figure 4.2: OSCURS Output
Figure 4.3: Latitude of Landfall Visualization Tool
Figure 4.4: Swim Speed Visualization Tool
Figure 4.5: Date of Landfall Visualization Tool
Figure 5.1: Rectangle Orientations
Figure 5.2: Example Experiment Display
Figure 5.3: Experiment Design Overview
Figure 5.4: Munsell Colour Space
Figure 5.5: Discrimination Graph
Figure 6.1: Landfall/Colour Control Block 1 Graph
Figure 6.2: Landfall/Colour Control Block 2 Graph
Figure 6.3: Landfall/Colour Experiment Block Graph
Figure 6.4: Landfall/Orientation Control Block 1 Graph
Figure 6.5: Landfall/Orientation Control Block 2 Graph
Figure 6.6: Landfall/Orientation Experiment Block Graph
Figure 6.7: Stream Function/Colour Control Block 1 Graph
Figure 6.8: Stream Function/Colour Control Block 2 Graph
Figure 6.9: Stream Function/Colour Experiment Block Graph
Figure 6.10: Stream Function/Orientation Control Block 1 Graph
Figure 6.11: Stream Function/Orientation Control Block 2 Graph
Figure 6.12: Stream Function/Orientation Experiment Block Graph
Figure 6.13: Experiment Subsection Average Error Graph
Figure 6.14: Exposure Duration Graph

Acknowledgements

I would like to thank both my supervisors, Dr. Kellogg Booth (Computer Science) and Dr. James Enns (Psychology), for the effort, support, and guidance they provided me throughout my thesis. Dr. Paul LeBlond (Oceanography) also offered valuable suggestions and access to research being conducted in his department.

A number of people in the Department of Oceanography provided me with ideas, direction, and research results which I used during my experiments. Dr. Keith Thomson designed and ran the original salmon migration simulations, and spent many hours deriving data I needed for my experiments. He also helped me choose an experiment task which addressed a question of interest to Oceanography. Dr. James Ingraham wrote the OSCURS simulation software which was used to run the salmon migration simulations. He spent time modifying his software to provide me with data I needed for my experiments.

Researchers in the Department of Psychology offered a number of insights and suggestions. Dr. Lana Trick helped design the original framework for the psychological experiments. Dr. Ronald Rensink wrote the software which was used to run the experiments. He also provided many helpful suggestions and references in the area of preattentive processing.

Many other people took time and effort to improve my work. Thanks to Mr. Chris Romanzin for reading the final draft of my thesis and proposing a number of important changes, and to Ms. Tomoko Maruyama for looking at my original visualization designs, and for continued support and encouragement which helped me complete my work.

Perhaps the most important acknowledgement goes to my parents. Without their support and guidance, most of the opportunities I’ve enjoyed would not have been possible, including a chance to complete the work contained here.

Financial support for this research was provided in part by a graduate scholarship from the Natural Sciences and Engineering Research Council of Canada and by the University of British Columbia.

Chapter 1

Introduction

“A picture shows me at a glance what it takes dozens of pages of a book to expound”
— Fathers and Sons, Turgenev

Many different disciplines use computers to store and analyse data. The data are generated in various ways. Computer simulations are being used to model real-world systems. These programs produce large datasets, reflecting the state of the system at a given point in time. Experiments in areas like physics, chemistry, astronomy, and management science often produce a variety of data. These are usually post-processed using a computer. Real-time systems such as air traffic control and medical imaging often require a rapid and intuitive method for displaying information as it is generated.

Different methods have been used to convert raw data into a more usable visual format. The best known example is the conversion of numeric data into different types of graphs. Many commercial statistical languages supply primitives for various types of visualization that go beyond simple bar graphs, pie charts, and scatter plots.
Recently, specialized visualization software tools such as the Application Visualization System (AVS), apE, VIS-5D, and the Wavefront Data Visualizer have been developed for computer graphics workstations. The field of visual interactive simulation has emerged to research ways of adding useful visualization and user interaction to simulation programs.

Empirical methods and guidelines are now needed to build more complex visualization systems. Work in vision and psychology has shown how various object properties affect the human visual system. Certain features such as colour, size, shape, orientation and texture seem to “pop out” of a scene immediately. Careful use of these visual properties may be important when trying to design informative visualization tools. Many of these properties are studied in preattentive processing, a phenomenon investigated in cognitive psychology. Suggestions from this area on how to apply these features to visualization have arisen. The work reported in this thesis represents the first two steps in a three-step process to develop effective visualization tools for a particular class of multivariate data. The first step was examining the literature on preattentive processing to find new techniques for visualization. The second step was designing and experimentally testing prototype visualization tools based on these preattentive features. The final step will involve building robust visualization tools for actual applications by incorporating the experience gained from the prototypes.

The Department of Oceanography at UBC is currently using simulation programs to determine causal effects of ocean currents on sockeye salmon migration patterns. This investigation could benefit from a number of different types of visualization. One common problem is the need to plot multidimensional data in some intuitive form in a two-dimensional environment, such as a computer screen or printed report. Plots may need to be animated in time, to study temporal trends and relationships.

In order to help researchers in Oceanography analyse their simulation data, we designed a set of “standard” visualization tools. These tools were written in an ad-hoc manner, without explicitly using results from research in preattentive processing. Three tools were designed: one to compare latitude of landfall for two different years, one to compare salmon swim speeds for two different years, and one to compare landfall dates for two different years. Strong evidence was found to support Oceanography’s hypothesis that ocean currents affect salmon migration patterns. Results from this work are included in two papers to be published in the literature [Tho92a][Tho92b]. The researchers in Oceanography are currently conducting the second and third phases of their experiment. Data from these simulations will be analysed using the same three visualization tools.

The researchers made a number of suggestions after using our visualization tools. In particular, they wanted to combine multiple displays into a single presentation format. To address this request, we designed a multivariate data visualization tool that explicitly uses results and visual features from preattentive processing. We wanted a tool that worked well with a real-world task and real-world data. Researchers in Oceanography helped us choose a task that was of interest to them. We used data from the salmon migration simulations when we tested the visualization design.

We decided to study the use of colour and orientation to represent multiple data values in a single display. Psychological experiments were designed to test these features. A display that contained 174 rectangles was shown to a subject for 450 milliseconds. Rectangles were coloured either blue or red. Rectangles were oriented either at 0° or at 60° (Figure 5.2). Half of the subjects were asked to estimate the percentage of blue rectangles in the display. The other half were asked to estimate the percentage of rectangles oriented at 60°. Each display simulated a visualization tool presenting salmon migration data. Subjects were performing the task proposed by Oceanography when they estimated the number of rectangles with a given preattentive feature.
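To make the trial structure concrete, the sketch below generates a display of this kind and computes the quantity a subject is asked to estimate. It is a hypothetical reconstruction in Python, not the software actually used in the experiments (which was written by Dr. Ronald Rensink); the function names and target percentages are invented.

```python
# Hypothetical reconstruction of one estimation trial; not the original
# experiment software. Percentages below are invented for illustration.
import random

def make_trial(n_rects=174, pct_blue=40.0, pct_rotated=30.0):
    """Build one display: each rectangle gets a colour and an orientation."""
    return [{
        "colour": "blue" if random.uniform(0.0, 100.0) < pct_blue else "red",
        "orientation": 60 if random.uniform(0.0, 100.0) < pct_rotated else 0,
    } for _ in range(n_rects)]

def true_percentage(rects, feature, value):
    """The correct answer a subject's estimate is scored against."""
    return 100.0 * sum(r[feature] == value for r in rects) / len(rects)

trial = make_trial()
# The display is shown for 450 ms; the subject then estimates this value.
print(true_percentage(trial, "colour", "blue"))
```

In the actual experiments each rectangle also had a fixed screen position (Figure 5.2 shows an example display); the sketch omits layout, since only the feature proportions matter for the estimation task itself.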
Results from these experiments were analysed to answer the following questions:

• Can estimation be performed reliably within the exposure duration we chose?

• Do subjects perform better with a particular feature (i.e., is one feature better for the estimation task)?

• Do features interfere with one another? Does encoding a data value irrelevant to the estimation task with a secondary preattentive feature affect the subject’s estimation ability?

We computed a number of statistics, including t-test and analysis of variance F-test values, to analyse the data generated during the experiments. Results from these tests indicated the following conclusions:

• rapid and accurate estimation can be performed using both colour and orientation within a 450 millisecond exposure duration

• there is no evidence of a subject preference for either colour or orientation during the estimation task

• there is evidence of a subject preference for the underlying data being displayed during the estimation task

• there is no evidence that orientation interferes with a subject’s ability to perform colour estimation

• there is no evidence that colour interferes with a subject’s ability to perform orientation estimation

• there is no evidence of interaction between the primary preattentive feature and the underlying data being displayed
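As an illustration of the style of analysis behind these conclusions, the fragment below runs a two-sample t-test on two groups of per-trial estimation errors. The error values are fabricated placeholders for illustration; the real results appear in Chapter 6.

```python
# Illustrative t-test comparing estimation error for colour trials versus
# orientation trials; the error values are fabricated placeholders.
from scipy import stats

colour_errors = [8.1, 6.5, 9.0, 7.2, 5.8, 8.4]       # hypothetical (%)
orientation_errors = [7.9, 6.8, 9.5, 6.9, 6.2, 8.8]  # hypothetical (%)

t, p = stats.ttest_ind(colour_errors, orientation_errors)
# A large p-value gives no evidence that one feature supports more
# accurate estimation than the other.
print("t = %.3f, p = %.3f" % (t, p))
```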
Chapter 2 of this thesis describes research related to scientific visualization. The four visualization methods, volume visualization, flow visualization, multivariate data visualization, and visual interactive simulation, are examined. Chapter 3 discusses material related to preattentive processing. Three opposing theories that hypothesize how the human visual system performs preattentive processing are explained. Chapter 4 describes Oceanography’s salmon migration simulations and the visualization techniques used to examine their data. The task selection and design process for our psychological experiments are presented in Chapter 5. Chapter 6 analyses data from the experiments, statistically testing for a number of hypotheses. Conclusions from the analysis are presented in Chapter 7, and directions for future work are discussed in Chapter 8.

Chapter 2

Related Work

“Felix qui potuit rerum cognoscere causas [Lucky is he who could understand the causes of things]”
— Virgil (70-19 B.C.)

Scientific visualization as a discipline within computer graphics is a relatively recent development. The first reference to “scientific visualization” per se occurred sometime in the late 1980s. Panels and workshops in a variety of different disciplines are now addressing scientific visualization and its relationship to their work [Wol88][Tre89][Bec91b]. The area is expanding into a number of subfields that use computer graphics to solve various types of problems. Examples of these techniques include volume visualization, medical imaging, flow visualization, and multivariate data visualization.

The term “visualization” is often used to refer to the presentation of information to a viewer. Common presentation methods include tables, graphs, pictures, and sound. However, some may question where the actual visualization occurs. Is the presentation of data “visualization”, or does “visualization” occur in the visual and auditory processing of the presentation done by the user? Vande Wettering notes visualization is “... an intelligent, cognitive process performed by human beings” and that visualization tools “... merely facilitate mental insight by the researchers who use them” [1]. Much of the published work to date seems to blur or ignore this distinction. Our notion of “visualization” is the presentation of data in a format specifically designed to make use of the human visual system, especially low-level human vision. In this way we take into account both the representation of data and the cognitive processing performed by the viewer.

[1] Visualization Tools: An Introduction. Pixel, 1(2), 1990, pg. 32.

Visualization can be defined in a mathematical context to more formally illustrate the path from data to information presentation. The most general setting looks at visualization of arbitrary relations. We will be content to concentrate on functional relationships. We represent these symbolically as

    f : D → R    (2.1)

where (informally) the domain D is the set of “independent” values, and the range R is the set of “dependent” results. Assuming D is a subset of ℝ^m and R is a subset of ℝ^n, then f : D → R can be considered an (m + n)-dimensional dataset. Visualization is a similar mapping of data to some presentation format:

    g : data → presentation    (2.2)

Researchers usually want to determine or verify the mapping f. The visualization mapping g is designed to display data in a way that makes it easy to understand or validate f. This can be done in a variety of ways. g can present data elements in a more usable or intuitive manner. It can be used as a filter to extract and juxtapose some subset of the data. It can highlight specific data elements or relationships between different elements. An important aspect of g is the choice of features used to represent data elements. Common features include spatial location within a display, colour, size, shape, orientation, and sound. The set data often includes values not only from D and R, but from other sources as well. Known results can be juxtaposed with experimental results to ensure f simulates some real-world system in a logical or expected way. A simulated time axis can be added to a dataset to allow frame-by-frame animation.
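As a toy instance of the mapping g, the sketch below maps a single data element to a set of visual features. The field names, scalings, and feature assignments are assumptions chosen to echo the salmon example used later in this thesis; they are not part of any actual tool.

```python
# Toy instance of the mapping g: one data element -> visual features.
# Field names and scalings are assumptions for illustration only.

def g(element):
    """Map a data element to screen position, hue, and icon orientation."""
    return {
        "x": 10.0 * element["longitude"],        # spatial location
        "y": 10.0 * element["latitude"],
        "hue": "blue" if element["type"] == "sockeye" else "red",
        "angle": element["swim_orientation"],    # degrees of rotation
    }

salmon = {"longitude": -128.5, "latitude": 50.2,
          "type": "sockeye", "swim_orientation": 60}
print(g(salmon))
```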
Visualization tools are utilities that perform the mapping g. They take as input the set data, and produce as output a presentation of the data elements in some specified format. Most tools require the user to define g explicitly. Others automate this process by attempting to choose an appropriate mapping based on the information the user wants to extract from the data. This requires knowledge about the type of presentation that is most appropriate for the given task. A number of papers have been written that discuss visualization for specific types of problems and environments. Relatively few papers discuss rules or guidelines for the design and use of general visualization tools. Researchers are now searching for visualization techniques that can be applied to a variety of visualization problems.

Visualization of data and real-world phenomena is an area of study with a long history. Tufte speaks of some 500 years of information representation, starting with the perfection of perspective drawings during the Italian Renaissance [Tuf90]. Books on graphic design show charts and graphs dating back some 150 years to the mid 1800s [Her74][Tuf83]. More recently, computer simulations have prompted the design of a variety of computer-based visualization techniques. Researchers have found that appropriate visual displays can significantly increase the usefulness of a simulation program. Tools have been written that allow simulation languages to create graphical displays. Programs that post-process data from computer and mathematical models also create graphical displays. These programs can visualize data in a variety of complex and useful ways.

The current stage of research is concerned with the intelligent design of visualization tools. Scientists are now turning to computer graphics, psychology, and visual arts to understand how the human visual system analyses images. This has led to the use of visual properties to make displays more intuitive. These visual properties take advantage of the fundamental workings of the human visual system itself. Some of the most interesting work has used the psychological phenomenon known as preattentive processing. This allows users to quickly detect interesting trends or properties in data through the use of abstract features such as colour, texture, shape, and orientation.

Not everyone believes that graphical displays are better than text descriptions. DeSanctis reviewed research that compared graphical and textual displays [DeS84]. He found that most experiments tested one of the following areas:

• interpretation accuracy, to ensure data is correctly interpreted

• problem comprehension, to see how quickly data is understood

• task performance, to see how presentation of data affects solving the task at hand

• decision quality, to see how presentation of data affects the quality of decisions made

• speed of comprehension, to see how quickly data can be summarized

• decision speed, to see how quickly a decision can be reached

• memory for information, to see how presentation of data affects ability to recall specific facts

• viewer preference, to see which type of data presentation users prefer

DeSanctis notes some obvious results, for example that tables are usually better than graphs for obtaining specific data values, but graphs are usually better than tables for identifying trends. He also states some controversial proposals, for example that colour does not enhance comprehension or task performance. In fact, DeSanctis claims that contradictory results exist for all the experiment criteria. In some cases, researchers found graphics were better than text. In other cases, text was favoured over graphics. DeSanctis feels these contradictions imply graphics do not necessarily offer advantages for any of the experimental criteria.

One should remember that DeSanctis focused on the comparison of graphs versus textual tables only. He did not research possible advantages of more complex visual displays. The contradictions he found suggest that graphics should be applied only in certain situations. If used correctly, they can offer an improvement over textual displays. It certainly seems unlikely that graphical displays never offer an improvement over textual displays in any situation. In fact, DeSanctis suggests that guidelines be developed that tell designers when graphics may be useful.
Standard Visualization Tools

Visualization in scientific computing is attempting to address a number of related problems. The National Science Foundation panel on scientific visualization calls these the “domain of visualization” [McC87], by which they mean the application areas to which visualization is being applied, the tools being brought to bear, and the research problems arising as a result. Three key areas being researched are meaningful representation of data, immediate visual feedback and interaction, and management and analysis of large datasets.

One obvious use of visualization is the presentation of data in a useful, intuitive, or meaningful manner. This is usually done by attaching “features” such as colour, spatial location, and sound to each data element. Features are chosen to show properties within and relationships among data elements. An example of this technique is the presentation of data from computer and mathematical simulations. Originally, visualization of this data consisted of so-called “business graphics”. This included printed tables of formulas or numbers and graphs produced using groups of characters to represent bars or lines. Simulation programs that generated the data provided no commands to display results in a concise or coherent manner [Gor69][Fra77][Bra83]. A set of final results was printed when the simulation ended. Printing intermediate values to ensure the program was working correctly often produced a seemingly endless stream of data. Commercial software manufacturers have attempted to solve this problem by providing separate programs or libraries of routines that perform different types of visualization. Some packages take data produced by a simulation program and display it in a meaningful way. Others add commands to the language, allowing visualization to be included as part of the simulation program.

A problem with traditional simulation systems is the lack of immediate visual feedback and interaction. Users want to be able to view their results in real time, as they are being produced. This allows experimentation when interesting phenomena are discovered during simulation execution. Users want to be able to “steer” the simulation, to direct its path to follow interesting trends as the data is generated. This class of software tool is called visual interactive simulation. Interaction will increase productivity if the visualization leads to a good choice. A number of researchers have created visual interactive simulation tools that allow interactive analysis [Mel85][OKe86][Set88]. They provide various empirical and anecdotal results that show visual interactive simulation as an improvement over existing simulation models [Kau81][Bel87].

The requirements for visual interactive simulation are similar to another important class of problem, the visualization of output from real-time applications. Systems like air traffic control require rapid and informative visualization of multivariate data. These displays are often shared by different operators, who visually acquire different data from different parts of the display at the same time.
The visualization technique must allow a variety of tasks to be performed rapidly and accurately on dynamically changing subsets of the overall display. Medical imaging systems such as CT, MRI, and ultrasound are another type of application that could benefit from real-time visualization techniques. This method of exploratory analysis presents data to the user in real time. The user analyses the data and decides how to proceed. An informative visualization technique that allows rapid and accurate visual analysis would decrease the amount of time needed to complete the analysis task. This is important, because these types of systems often cannot be time-shared.

Recent developments in computing have created a number of high-volume data sources. Examples include supercomputers, satellites, geophysical monitoring stations, and medical scanners. Much of this data is stored without ever being analysed, due in part to the amount of time required to apply traditional analysis techniques. Research in visualization tries to address both the management and presentation of this type of data. The underlying database must allow efficient query and retrieval, filtering, and transformation of data elements. A high-speed communication system may be needed to move data from a repository to a workstation where visualization is performed. Different visualization tools can use a single database and common access methods to obtain the data they require. This data is then presented to the user in some predefined format, depending on the tool being used.

A number of dedicated visualization tools have been written to solve some of the problems mentioned above. They allow post-processing of experiment and simulation data, and are usually run on graphics workstations. The programs offer a number of very powerful visualization techniques such as two- and three-dimensional modelling and rendering, volume rendering, temporal and spatial animation, and even stereo displays. Most of the tools also allow user interaction both before and during the visualization process.

The Application Visualization System (AVS) is a visualization tool built from several data-manipulation modules [Van90a]. It runs as a set of individual UNIX processes combined and controlled by a network flow manager. The AVS modules are grouped into four basic types:

• data sources, that convert data from a given input format into an AVS-specific data type (bytes, integers, reals, strings, fields, colour maps, geometric objects, volumes, pixel maps)

• data filters, that convert an input data type to a new output data type. Filters can be used for tasks like cropping and transposing

• data mappers, that convert an input field into a geometric output. Standard mappers include alpha-blending, isosurface rendering, and field flow visualization

• renderers, that allow manipulation and rendering of AVS objects. Standard renderers include a geometry editor, an image editor, and a volume visualization module

Users can place modules in a configuration of their choice to convert, filter, map, and render data in different ways. A network manager binds and controls communication between each module. Users can also write their own modules, to extend the standard AVS package.
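For illustration, the source/filter/mapper/renderer structure can be reduced to plain function composition. The sketch below mimics the flowchart style of such systems only; the module names are invented, and it does not use the real AVS interfaces.

```python
# Flowchart-style pipeline reduced to function composition. Module names
# are invented; this is an analogy, not the AVS API.

def source():                       # data source: produce raw values
    return [3.0, 1.0, 4.0, 1.0, 5.0]

def crop(values, lo, hi):           # data filter: keep values in [lo, hi]
    return [v for v in values if lo <= v <= hi]

def to_grey(values):                # data mapper: value -> grey level 0..255
    top = max(values)
    return [int(255 * v / top) for v in values]

def render(pixels):                 # renderer: stand-in for actual drawing
    print("drawing pixels:", pixels)

# The "network manager" role here is played by simple nested calls.
render(to_grey(crop(source(), 1.0, 4.0)))
```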
apE is a set of visualization tools written at Ohio State University [Van90b]. Like AVS, apE allows users to place individual modules in a flowchart-like configuration. Input data is converted to a proprietary data format called flux. A network flow manager directs data from a set of input files through each module. Rendering modules at the end of the flow display results in different ways. apE includes its own set of modules to perform data conversion, filtering, transformation, editing, and rendering. Standard display modules include contour and colour maps, isosurfaces, volume visualization, and rendering of geometric objects.

The VIS-5D visualization system was written by the University of Wisconsin Space Science and Engineering Center. It was originally designed to help earth scientists interactively analyse and animate large datasets [Hib90]. VIS-5D expects data to be formatted as a five-dimensional grid. Three dimensions are used to represent spatial location, one to represent time, and one to represent multiple physical variables. Users can interactively change the presentation of the data by controlling the following factors:

• the viewpoint in three dimensions

• the combination of variables being simultaneously displayed

• choice of visual feature for any variable being displayed

• time dynamics

• the spatial extents of the region being shown

• calculation of new variables from existing ones

• an object’s transparency
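The five-dimensional layout can be pictured as an array indexed by space, time, and variable. The sketch below assumes a NumPy array ordered (time, variable, z, y, x); the grid sizes and variable names are invented, and this is not VIS-5D's actual data format.

```python
# Assumed (time, variable, z, y, x) ordering; sizes and variables invented.
import numpy as np

nt, nvar, nz, ny, nx = 24, 3, 10, 64, 64
grid = np.zeros((nt, nvar, nz, ny, nx))
variables = ["temperature", "pressure", "humidity"]

# One physical variable at one time step is a 3-D spatial field that a
# tool of this kind could render, e.g., as an isosurface.
field3d = grid[6, variables.index("pressure")]
print(field3d.shape)    # (10, 64, 64)
```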
Although AVS, apE, and VIS-5D give users access to previously unavailable visualization techniques, they do have drawbacks. Most excel at a specific subset of tasks required for general visualization. There is no simple way to integrate the packages together to form a single powerful unit. The tools are usually used to post-process already existing data. Their ability to produce data in a simulation-like manner seems to be limited. Finally, many people do not have access to the equipment or expertise required to use the software. These people need a set of tools that allow intuitive visualization using simple graphic primitives.

Volume Visualization

Volume visualization is a rendering technique that attempts to display data representing volumetric objects or models. Volume visualization allows users to address two related problems:

• a need to visualize three-dimensional objects in order to obtain a better understanding about some aspect of the object

• a need to visualize not only the surface but also the interior of the object

A number of traditional three-dimensional rendering techniques such as scan-line rendering or ray tracing can solve the first requirement. The second requirement is more difficult to satisfy. Early graphics systems employed “cutting planes” to solve the problem. More recently, various methods have been proposed to allow a user to see “inside” the objects they are visualizing.

Yagel, Kaufman, and Zhang have developed a technique that allows a user to render and manipulate objects in a three-dimensional environment [Yag91]. Their system presents a set of tools that are both familiar and intuitive to the user. Volumetric objects are discretized into volume elements (voxels) using a scan-conversion algorithm. The voxel-based objects are then rendered using a modified ray-tracing algorithm called 3D raster ray tracing. Users can manipulate their objects using the following tools:

• a user can place mirrors in the scene to obtain multiple views of an object or objects

• a user can tell the system to cast shadows to help visualize the spatial location and orientation of objects

• a user can change the specularity of an object to help visualize the object’s surface features

• a user can perform a variety of constructive solid geometry operations on an object (e.g., dissection, clipping, filtering, thresholding). One obvious use of these operations would be to clip away part of the surface, to see what was inside the object

Another way of looking inside an object is to make the outer layers of the object semi-transparent. This allows us to “see” through these layers to whatever lies inside the model. Drebin used this technique to visualize medical images [Dre88]. The body’s outer surfaces (tissue and fat) are rendered as though they are transparent. The inner surfaces (bone) are solid. The resulting image is a skeleton surrounded by a semi-transparent “skin”. Users can thus obtain visual information about both the surface and the interior of the body being displayed.

Flow Visualization

Flow visualization is concerned with the simulation, visualization, and analysis of atmospheric and fluid flows. Wind tunnel simulations study how air flows around and through various objects. Fluid mechanics uses visualization tools to analyse phenomena such as vortices, turbulence, and eddies in a fluid flow field.

Sethian has designed a flow visualization tool that runs interactively in real time on a parallel processor computer, the Connection Machine CM-2 [Set88]. Perhaps the most important feature of Sethian’s visualization tool is its real-time interactive ability. Users can watch the progress of a simulation and immediately respond to interesting events. This often involves changing system parameters to explore unexpected results as they evolve. A more complete description of such “visual interactive simulation” techniques is given later in this chapter.

Sethian’s visualization tool allows a user to design the shape of a flow tunnel and place various geometric objects inside it. Coloured particles (smoke and dye) can be interactively injected to follow the fluid flow. Objects that provide a constant stream of particles from a fixed spatial location are also available. These are used to analyse the direction of the flow over time. Sethian provides a number of examples where unexpected flow fields are easily observed using his visualization software.

A group at the NASA Ames Research Center has been using virtual reality as part of the design of their wind tunnel simulation software [Bry91]. Users interact with their “virtual wind tunnel” using a data glove and a head-mounted display. They can place tufts and coloured particles into their world to help analyse the flow field. Tufts are small vanes that are anchored at a fixed position in the flow. They show changes in the flow field at that spatial location. Coloured particles move through the flow field over time. They form particle paths and streamlines, which help users understand and analyse the shape of the flow field.
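The particle paths described above can be sketched with repeated Euler steps through a velocity field. The rotational field and step sizes here are invented for illustration; real systems integrate measured or simulated flow data.

```python
# Toy particle advection: trace a particle's path by taking Euler steps
# through a fixed 2-D velocity field (the field is invented for illustration).

def velocity(x, y):
    return -y, x                    # a simple rotational flow

def particle_path(x, y, dt=0.05, steps=100):
    path = [(x, y)]
    for _ in range(steps):
        vx, vy = velocity(x, y)
        x, y = x + vx * dt, y + vy * dt   # one Euler step along the flow
        path.append((x, y))
    return path

# The resulting list of points approximates one streamline of the flow.
print(particle_path(1.0, 0.0)[:3])
```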
Multivariate Data Visualization

Multivariate data visualization deals with the problem of representing high-dimensional data in a low-dimensional environment. High-dimensional data is a set of data elements, each of which encapsulates a relatively large number of data values. An example we will see later in this thesis is a set of data elements that represent migrating salmon. Each data element, or salmon, has a number of data values associated with it, like current geographical location, future location of landfall, swim speed, swim orientation, and type of salmon. Researchers need a way to visually represent this information in a two- or three-dimensional environment. Multivariate data visualization attempts to solve this problem by addressing two related questions. First, is it useful or even reasonable to visually display the data elements with all their associated data values? There may be a limit to the amount of information we can process at one time. Second, if we do want to “see” all the data values, how can we display the information in a low-dimensional environment, such as a computer screen or printed media? One possibility is to display the data so the user can distinguish between different data values. Another possibility is a display that shows trends in the data, rather than individual values.

Treinish has been developing a flexible multidimensional visualization system [Tre91]. He is designing an environment that helps scientists analyse the data they produce. This goal has led to a number of system requirements. Treinish feels visualization techniques should take advantage of the power of the human visual system. The system should be usable by the scientist directly and should not require a visualization specialist. A discipline-independent system should handle arbitrary types of data. In order to meet these requirements, Treinish proposes a construct called the visualization pipeline as follows:

• the raw data is stored in a data repository in some abstract, data-independent format (Treinish suggests using the Common Data Format). Users can select the data they want to visualize from the repository

• the selected data can be passed through a number of filters, in order to convert it into the desired format. These may include scaling and projecting the data, as well as conversion to a specific format or coordinate system

• if the data is continuous, it may need to be sampled or gridded to a user-selected resolution

• finally, the data is visualized or rendered using one of a number of visualization tools provided with the system (e.g., two- and three-dimensional plots, contour plots, surfaces, or field flows)

Treinish stresses the importance of the data repository and the supporting data management facilities. He feels this is key to allowing discipline-independent visualization. Data selection and manipulation is separated from data visualization. This gives the user enough flexibility to work with arbitrary data sets and a variety of visualization techniques.

Ware and Beatty have designed a method that uses colour to represent multidimensional data elements [War85][War88]. Data with up to five dimensions can be processed by their system. Elements are represented as spatially located coloured squares. Individual data values are assigned to the following properties of the square representing the data element: x-coordinate, y-coordinate, red intensity, green intensity, and blue intensity. An obvious extension to six dimensions would be to plot elements in a simulated three-dimensional environment.
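A minimal sketch of this mapping, assuming each element is a 5-tuple of values already normalised to [0, 1]: the first two values position a square and the last three set its RGB colour. The normalisation and sample data are invented.

```python
# Ware and Beatty-style mapping: five data values -> position plus colour.
# Assumes values already normalised to [0, 1]; the sample data is invented.

def to_square(element):
    d1, d2, d3, d4, d5 = element
    return {
        "x": d1, "y": d2,                                      # position
        "rgb": (int(255 * d3), int(255 * d4), int(255 * d5)),  # colour
    }

# Two elements of one coherent group: nearby positions, similar colours.
data = [(0.20, 0.30, 1.00, 0.90, 0.10),
        (0.25, 0.28, 0.95, 0.85, 0.20)]
print([to_square(e) for e in data])
```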
Ware and Beatty’s visualization tool is used for coherency testing (cluster analysis). Coherency occurs when data elements can be separated into groups of elements with common properties. The data values of the data elements in a given group are similar. Coherent data is represented visually as spatial groups of similarly coloured squares. The data in Figure 2.1a contains either three or four coherent groups. It is difficult to determine whether squares in the center of the figure belong to one or two groups. The use of colour in Figure 2.1b shows very clearly that the data contains four coherent groups. These are represented visually by the four coloured “clouds” of squares. The importance of colour is illustrated by the four overlapping data elements in the center of the figure, two of which are yellow and two of which are purple. Colour allows us to classify these elements into their respective groups. Without colour, we would probably guess all four elements belonged to the purple group. This method allows users to quickly determine whether or not a dataset is coherent. One possible drawback to this technique is the fact that it cannot support data elements with more than five or six dimensions.

Figure 2.1: Examples of Ware and Beatty’s coherency visualization technique. This 5-dimensional dataset is made up of four coherent groups of elements: (a) data represented by four “clouds” of grey squares; (b) data represented by four “clouds” of coloured squares

Pickett and Grinstein have been using results from cognitive psychology as part of the design of their visualization tools [Pic88][Gri89][Lev90]. Pickett and Grinstein are also developing tools for coherency testing. Their tools display structure in the data as a set of textures and boundaries. Groups of data elements with similar values appear as a spatial group in the display with a unique texture. Different groups are identified by their differing textures.

Pickett and Grinstein’s method has a number of important features. The use of texture is based on an area of research in cognitive psychology called preattentive processing. The display mechanism is specifically designed to take advantage of the human low-level vision system. This method is also able to display data with more than five dimensions. This is an improvement over colour contour plots and Ware and Beatty’s colour square method. Recently, Pickett and Grinstein have extended their method to use other visual primitives, such as colour [Lev91] and sound. A more detailed explanation of their visualization tools is presented in the “Iconographic Displays” section in the next chapter of this thesis.

Visual Interactive Simulation

Visual interactive simulation (VIS) was a term first conceived by Hurrion in 1976. VIS is concerned with two important problems: visualization of simulation data and user interaction with a running simulation [Bel87]. After building a number of models for various companies in the United Kingdom, Hurrion offered a set of anecdotal observations supporting the use of VIS [Hur80]. He suggested VIS made users more attentive, partly because pictures were more interesting than text and partly because users felt they had more control over the system. Visualization of intermediate results sometimes gave rise to interesting situations the user had never envisioned. This often led to a new line of research and experimentation.

A number of papers have been written showing practical examples of VIS applications. Kaufman showed an example of converting a traditional simulation program into an interactive program that used graphics [Kau81]. Researchers in the United Kingdom have developed a set of library routines that allow VIS on an Apple microcomputer [OKe86].
After applying their software to various practical simulation problems, they concluded that a machine with relatively simple graphic primitives and computational power offered an acceptable platform for VIS software.

Scientists at AT&T Bell Labs have developed a general VIS package, the Performance Analysis Workstation [Mel85]. The workstation allows design and testing of queueing networks. All the tasks on the workstation are graphical. Users design the network topology by positioning resources on the screen and placing links between them. They can allocate screen space for various graphs or informational messages. The simulation can be started, interrupted, modified, and monitored using commands available through pop-up menus. The workstation visually displays activity in the network, including movement and queueing of requests as the simulation executes.

Workgroups in Bell Labs have been using the Performance Analysis Workstation for a variety of applications. Results from the workstation mirror the observations of Hurrion, specifically that users enjoy using the workstation, and that interesting phenomena often arise while the simulation is running.

Future Directions

Various panels and workgroups that consulted scientists studying visualization seem to confirm opinions suggested in previous work. These people provided some interesting insight on the current state of scientific visualization [McC87][Bro88][Tre89]. They feel visualization is made up of a number of different disciplines, including physical science, computer graphics and computer science, psychology, and visual arts. The recent need for some form of intelligent data representation has stimulated the formation of interdisciplinary groups. These groups are pooling their expertise in an attempt to come up with guidelines and solutions to the visualization problem.

Another interesting point was the strong support shown for visual interactive simulation. The panel members agreed that up to this point visualization has been a post-production task. Scientists visualize data that was obtained some time in the past. People now want to produce visual displays while the data is being generated. Through user interaction, they want to be able to “steer” the path of computation in response to the visualization they see while the simulation runs. This is exactly the concept Bell and O’Keefe suggested in their research paper [Bel87]. Visual interactive simulation methods can also be used to visualize data from real-time systems such as air traffic control and medical imaging systems.

Finally, researchers recognize the fact that not everyone has the sophisticated computer equipment needed to generate complex visual images. This creates two additional visualization requirements. First, there should be some simple way to transport data from where it is generated to where it will be visualized. Various groups are working to produce advanced networking tools that will allow supercomputers and workstations to communicate efficiently. Second, there is an understanding that visualization does not need to be complicated in order to be useful; some data can benefit from advanced techniques such as stereo display or volume rendering, but most data can be visualized using simple wireframe objects or colour-coded plots.
This type of simple visualization can be done on personal workstations or even personal computers.

In order to address these requirements, we decided to use the built-in processing of the human visual system to assist with visualization. In particular, we studied results from an area of cognitive psychology called preattentive processing. Preattentive processing describes a set of simple visual features that are detected in parallel by the low-level human visual system. Background information on preattentive processing is presented in the next chapter. We hypothesize that the use of preattentive features in a visualization tool will allow users to perform rapid and accurate visual tasks such as grouping of similar data elements, detection of elements with a unique characteristic, and estimation of the number of elements with a given value or values. We tested this hypothesis using psychological experiments that simulated a preattentive visualization tool. This is described in two chapters later in this thesis.

Chapter 3

Preattentive Processing

“One man does not see everything in a glance”
— Phoenissae, Euripides

Researchers in psychology and vision have been working to explain how the human visual system analyses images. One interesting result has been the discovery of visual properties that are “preattentively” processed. These properties are detected immediately by the visual system. Viewers do not have to focus their attention on an image to determine whether elements with the given property are present or absent.

An example of preattentive processing is detecting a filled circle in a group of empty circles (Figure 3.1). The target object contains a preattentive property, “filled”, that the distractor objects do not. A viewer can quickly glance at the image to determine whether the target is present or absent.

Properties that are preattentively processed can be used to highlight important image characteristics. Experiments in psychology have used preattentive properties to assist in performing the following visual tasks:

• target detection, where users attempt to rapidly and accurately detect the presence or absence of a “target” element that uses a unique preattentive property within a field of distractor elements (Figure 3.1)

• boundary detection, where users attempt to rapidly and accurately detect a texture boundary between two groups of elements, where all of the elements in each group have a common preattentive property (Figure 3.2)

• counting, where users attempt to count or estimate the number of elements in a display that use a unique preattentive property

Figure 3.1: Examples of target detection: (a) target can be preattentively detected because it contains the unique feature “filled”; (b) filled circle target cannot be preattentively detected because it contains no preattentive feature unique from its distractors

Figure 3.2: Examples of boundary detection: (a) the horizontal boundary between two groups is preattentively processed because each group contains a unique feature; (b) the vertical boundary is not apparent, because both groups use the same features (filled versus empty and square versus circle)

Feature Integration Theory

Treisman has provided some exciting insight into preattentive processing by researching two important problems [Tri85]. First, she has tried to determine which properties are detected preattentively. She calls these properties “preattentive features”. Second, she has formulated a hypothesis about how the human visual system performs preattentive processing.

Treisman ran experiments using target and boundary detection to classify preattentive features. For target detection, subjects had to determine whether the target was present or absent in a field of distractors. Boundary detection involved placing a group of objects that used a unique feature within a set of distractors to see if the boundary could be preattentively detected.

Researchers test for preattentive target detection by varying the number of distractors in a scene. If search time is relatively constant and below some chosen threshold, independent of the number of distractors, the search is said to be preattentive. Similarly, for boundary detection, if users can classify the boundary within some fixed time, the feature used to define the boundary is said to be preattentive. A common threshold time is 250 milliseconds, because this allows subjects “one look” at the scene. The human visual system cannot decide to change where the eye is looking within this time frame.
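This operational test can be written down directly: regress response time on distractor count, and call a search preattentive when the slope is near zero and all times fall under the threshold. In the sketch below, the slope cutoff is an assumption chosen for illustration; only the 250 millisecond threshold comes from the text, and the timing data is hypothetical.

```python
# Classify a search task from (set size, response time) data. The 2 ms/item
# slope cutoff is an assumed value; the 250 ms threshold is from the text.
import numpy as np

def is_preattentive(set_sizes, response_ms, max_slope=2.0, threshold_ms=250.0):
    slope, _ = np.polyfit(set_sizes, response_ms, 1)   # ms per distractor
    return slope < max_slope and max(response_ms) < threshold_ms

# Hypothetical data: flat times suggest parallel (preattentive) search,
# rising times suggest a serial, attentive search.
print(is_preattentive([4, 8, 16, 32], [210, 214, 211, 216]))   # True
print(is_preattentive([4, 8, 16, 32], [220, 380, 700, 1340]))  # False
```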
Objects that are made up of a conjunction of unique features cannot be detected preattentively. A conjunction occurs when the target object is made up of two or more features, each of which is contained in the distractor objects. Figure 3.1b shows an example of a conjunction target. The target is made up of two features, filled and circle. Both of these features occur in the distractor objects (filled squares and empty circles). Thus, the target cannot be preattentively detected.

Treisman has compiled a list of features that can be preattentively detected [Tri85][Tri88]. These features include line length, orientation, contrast, hue, curvature, and closure. It is important to note that some of these features are asymmetric. For example, a sloped line in a sea of vertical lines can be preattentively detected. However, a vertical line in a sea of sloped lines cannot be preattentively detected. Another important consideration is the effect of different types of background distractors on the feature target. These kinds of factors must be taken into consideration when trying to design systems that rely on preattentive processing.

Treisman breaks human early vision into a set of feature maps and a master map of locations in an effort to explain preattentive processing. Each feature map registers activity in response to a given feature. Treisman proposes a manageable number of feature maps, including one for each of the human vision colour primaries red, yellow, and blue, as well as separate maps for orientation, shape, texture, and other preattentive features.

When the human visual system first sees an image, all the features are encoded in parallel into their respective maps. One can check to see if there is activity in a given map, and perhaps get some indication of the amount of activity. The individual feature maps give no information about location, spatial arrangement, or relationships to activity in other maps.

The master map of locations holds information about intensity or hue discontinuities at given spatial locations. Focused attention acts through the master map. By examining a given location, one automatically gets information about all the features present at that location. This is provided through a set of links to individual feature maps (Figure 3.3).

Figure 3.3: Framework for early vision that explains preattentive processing. Individual maps can be accessed to detect feature activity. Focused attention acts through a serial scan of the master map of locations.
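The division of labour between feature maps and the master map can be mimicked in a few lines. In this toy version a unique-feature target is found with one parallel map lookup, while a conjunction target forces a serial scan of locations; it is an illustration of the theory's structure, not a model of vision, and the scene data is invented.

```python
# Toy feature-map account: one map per (feature, value) pair records where
# that value is active; conjunctions require a serial scan. Illustration only.

scene = [{"pos": 0, "colour": "red",  "shape": "circle"},
         {"pos": 1, "colour": "blue", "shape": "square"},
         {"pos": 2, "colour": "blue", "shape": "circle"}]

feature_maps = {}
for item in scene:                      # all features encoded "in parallel"
    for feature in ("colour", "shape"):
        feature_maps.setdefault((feature, item[feature]), set()).add(item["pos"])

# Unique feature: a single map lookup answers present/absent immediately.
print(bool(feature_maps.get(("colour", "red"))))      # True

# Conjunction (blue AND circle): serially scan the master map of locations.
for item in scene:
    if item["colour"] == "blue" and item["shape"] == "circle":
        print("conjunction target found at location", item["pos"])
```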
By examining a givenlocation, one automatically gets information about all the features present at that location.This is provided through a set of links to individual feature maps (Figure 3.3).This framework provides a general hypothesis that explains how preattentive processingoccurs. If the target has a unique feature, one can simply access the given feature map to seeif any activity is occurring there. Feature maps are encoded in parallel, so feature detectionis almost instantaneous. A conjunction target cannot be detected by accessing an individualChapter 3. Preattentive Processing 26feature map. Activity there may be caused by the target, or by distractors that share thegiven preattentive feature. In order to locate the target, one must search serially through themaster map of locations, looking for an object with the correct features. This requires focusedattention and a relatively large amount of time.In later work, Triesman has expanded her strict dichotomy of features being detected eitherin parallel or in serial [Tri88][Tri9lj. She now believes that parallel and serial represent twoends of a spectrum. “More” and “less” are also encoded on this spectrum, not just “present”ColourMapsOrientationMapsV VblueyellowredFigure 3.3: Framework for early vision that explains preattentive processing. Individuaimaps can be accessed to detect feature activity. Focused attention acts through a serialscan of the master map of locations.Map ofLocationsAttentionChapter 3. Preattentive Processing 27and “absent”. The amount of differentiation between the target and the distractors for a givenfeature will affect search time. For example, a long vertical line can be detected immediatelyamong a group of short vertical lines. As the length of the target shrinks, the search timeincreases, because the target is harder to distinguish from the distractors. At some point,the target line becomes shorter than the distractors. If the length of the target continues todecrease, search time decreases, because the degree of similarity between the target and thedistractors is decreasing.Texton TheoryTexture segregation involves preattentively locating groups of similar objects and the boundaries that separate them. Triesman used texture segregation during her experiments withboundary detection. Figure 3.2a is an example of a horizontal texture boundary with emptyshapes on the top and filled shapes on the bottom. Figure 3.2b is an example of a verticaltexture boundary with filled circles and empty squares on the left, and empty circles and filledsquares on the right.Julèsz has also investigated texture perception and its relationship to preattentive processing [Ju181][Ju183][Ju184]. He has proposed his own hypothesis on how preattentive processingoccurs. Julèsz believes that the early vision system detects a group of features called textons.Textons can be classified into three general categories:1. Elongated blobs (e.g., line segments, rectangles, effipses) with specific properties such ashue, orientation, and width2. Terminators (ends of line segments)3. Crossings of line segmentsJulèsz believes that only differences in textons or their density can be detected preatChapter 3. Preattentive Processing 28eIDuI\S\O/\\S\Figure 3.4: Example of similar textons: (a) two textons that appear different ut isolation; (b) the same two textons cannot be distinguished in a randomly oriented textureenviromenttentively. No positional information about neighboring textons is available without focusedattention. 
Like Treisman, Julesz believes preattentive processing occurs in parallel and focused attention occurs in serial.

Figure 3.4: Example of similar textons: (a) two textons that appear different in isolation; (b) the same two textons cannot be distinguished in a randomly oriented texture environment.

Figure 3.4 shows an example of an image that supports the texton hypothesis. Although the two objects look very different in isolation, they are actually the same texton. Both are blobs with the same height and width. Both are made up of the same set of line segments and each has two terminators. When oriented randomly in an image, one cannot preattentively detect the texture boundary between the two groups of objects.

Similarity Theory

Some researchers do not support the dichotomy of serial and parallel search modes. Initial work in this area was done by Quinlan and Humphreys [Qui87]. They investigated conjunction searches by focusing on two factors. First, search time may depend on the number of items of information required to identify the target. Second, search time may depend on how easily a target can be distinguished from its distractors, regardless of the presence of unique preattentive features. Note that in later work Treisman addressed this second factor [Tri88]. Quinlan and Humphreys found that Treisman's feature integration theory was unable to explain all the results obtained from the above experiments.

Duncan and Humphreys have now proposed their own explanation of preattentive processing, which assumes search ability varies continuously depending on the type of task and display conditions [Dun89][Mül90]. According to their theory, search time depends on two important criteria, T-N similarity and N-N similarity. T-N similarity is the amount of similarity between the targets and nontargets (distractors). N-N similarity is the amount of similarity within the nontargets themselves. These two factors affect search time as follows:

• as T-N similarity increases, search efficiency decreases and search time increases

• as N-N similarity decreases, search efficiency decreases and search time increases

• T-N similarity and N-N similarity are related (Figure 3.5). Decreasing N-N similarity has little effect if T-N similarity is low. Increasing T-N similarity has little effect if N-N similarity is high

Figure 3.5: Example of N-N similarity affecting search efficiency: (a) high N-N similarity allows easy detection of target shaped like the letter L; (b) low N-N similarity increases difficulty of detecting target shaped like the letter L.

Treisman's feature integration theory has difficulty explaining the results of Figure 3.5. In both cases, the distractors seem to use exactly the same features as the target, namely oriented, connected lines of a fixed length. Yet experimental results show displays similar to Figure 3.5a produce an average search time increase of 4.5 milliseconds per additional distractor, while displays similar to Figure 3.5b produce an average search time increase of 54.5 milliseconds per additional distractor.

In order to explain the above and other search phenomena, Duncan and Humphreys propose a three-step theory of visual selection.

1. The visual field is segmented into structural units. Individual structural units share some common property (e.g., spatial proximity, hue, shape, motion). Each structural unit may again be segmented into smaller units. This produces a hierarchical representation of the visual field. Within the hierarchy, each structural unit is described by a set of properties (e.g., spatial location, hue, texture, size). This segmentation process occurs in parallel.
2. Because access to visual short-term memory is limited, Duncan and Humphreys assume that there exists a limited resource that is allocated among structural units. Because vision is being directed to search for particular information, a template of the information being sought is available. Each structural unit is compared to this template. The better the match, the more resources allocated to the given structural unit relative to other units with a poorer match. Because units are grouped in a hierarchy, a poor match between the template and a structural unit allows efficient rejection of other units that are strongly grouped to the rejected unit.

3. Structural units with a relatively large number of resources have the highest probability of access to the visual short-term memory. Thus, structural units that most closely match the template of information being sought are presented to the visual short-term memory first. Search speed is a function of the speed of resource allocation and the amount of competition for access to the visual short-term memory.

Given these three steps, we can see how T-N and N-N similarity affect search efficiency. Increased T-N similarity means more structural units match the template, so competition for visual short-term memory access increases. Decreased N-N similarity means we cannot efficiently reject large numbers of strongly grouped structural units, so resource allocation time and search time increase.

Three-Dimensional Icons

To date, most of the features identified as preattentive have been relatively simple properties. Examples include hue, orientation, line length, and size. Enns and Rensink have identified a class of three-dimensional elements that can be detected preattentively [Enn90a][Enn90b]. They have shown that the three-dimensionality is what makes the elements "pop out" of the visual scene. This is important, because it suggests that more complex high-level concepts may be processed preattentively by the low-level vision system.

Figure 3.6 shows an example of these three-dimensional icons. The elements in Figure 3.6a are made up of three planes. The planes are arranged to form an element that looks like a three-dimensional cube. Subjects can preattentively detect the group of cubes with a three-dimensional orientation that differs from the distractors. The elements in Figure 3.6b are made up of the same three planes. However, the planes are arranged to produce an element with no apparent three-dimensionality. Subjects cannot preattentively detect the group of elements that have been rotated 180 degrees. Apparently, information about three-dimensional orientation is preattentively processed.

Figure 3.6: Examples of three-dimensional icons: (a) three planes arranged to form apparent three-dimensional cubes; (b) the same three planes arranged to give no apparent three-dimensionality.

Enns and Rensink have also shown how lighting and shadows provide three-dimensional information that is preattentively processed [Enn90c][Enn90d]. Spheres are drawn with shadows so they appear to be lit either from above or from below. Subjects can preattentively detect the group of spheres that appear to be lit differently than the distractors.

Interference Experiments

Callaghan has done research to see how similarity within feature groups affects texture segregation [Cal90]. She found that varying certain irrelevant features within a group can interfere with boundary detection.
Her initial experiments dealt with location of a horizontal or vertical texture boundary [Cal84]. Subjects were presented with a six by six array of elements. The texture boundary was formed by either differences in hue or differences in brightness. For hue segregation, the brightness in both groups varied randomly between two values. For brightness segregation, hue varied randomly between two values (Figure 3.7). Subjects had to determine whether the texture boundary was vertical or horizontal. Control experiments were run to see how quickly subjects could detect simple hue and brightness boundaries. The control arrays had a uniform brightness during hue segregation, and a uniform hue during brightness segregation.

Figure 3.7: Hue and brightness segregation: (a) hue texture boundary with varying brightness across the array; (b) brightness texture boundary with varying hue across the array.

Callaghan found that non-uniform brightness will interfere with hue segregation. It took subjects longer to determine where the texture boundary occurred, relative to the control array. However, a non-uniform hue did not interfere with brightness segregation. A brightness texture boundary can be detected in a fixed amount of time, regardless of whether the hue fluctuates or not. This asymmetry was verified through further experimentation [Cal90].

Callaghan's more recent work has shown a similar asymmetry between form and hue [Cal89]. As before, subjects were asked to locate a horizontal or vertical texture boundary in a six by six array. During the experiment, the arrays were segregated by either hue or form. For hue segregation, form varied randomly within the array (circle or square). For form segregation, hue varied randomly. Results showed that variation of hue interfered with form segregation. However, variation of form did not interfere with hue segregation (Figure 3.8).

Figure 3.8: Form and hue segregation: (a) hue boundary is preattentively detected, even though form varies in both groups; (b) hue interferes with detection of form boundary.

These interference asymmetries suggest some preattentive features may be "more important" than others. The visual system reports information on one type of feature over and above others that may also be present. Callaghan's experiments suggest that brightness overrides hue information and that hue overrides shape information.

Iconographic Displays

Pickett and Grinstein have been working to develop a method of displaying multidimensional data in a two-dimensional environment [Pic88]. The two most common display media, computer screens and printed documents, are both two-dimensional.

Initially, work has focused on spatially or temporally coherent data sets. This type of data generally contains clusters of data elements with similar values. Previously, data with up to three dimensions was plotted in colour. Each data dimension controlled the intensity of one of the three primary colours red, green, and blue. Coherent areas within the data set occurred where colour values were similar. Relationships between data elements were shown as spatial changes in colour. Pickett decided to use texture as a medium that could show relationships among higher dimensional data. The texture segregations would preattentively display areas with similar data elements.
Researchers could then quickly decide whether further analysis was required.

Pickett developed icons that could be used to represent each data element. An icon consists of a body segment plus a number of limbs (Figure 3.9). Each value in the data element controls the angle of one of the limbs (a small sketch of this kind of mapping appears at the end of this chapter). The icons in Figure 3.9 can support five-dimensional data elements. The four limbs support the first four dimensions. The final dimension controls the orientation of the icon's body in the image.

Figure 3.9: Examples of "stick-men" icons: (a)-(c) all three icons have four limbs and one body segment (shown in bold), so they can support five-dimensional data elements; (d) an iconographic display of 5-D weather satellite data from the west end of Lake Ontario.

Once the data-icon mapping is defined, an icon can be produced for each data element. These icons are then displayed on a two-dimensional grid in some logical fashion. The result is an image that contains various textures that can be preattentively detected. Groups of data elements with similar values produce similar icons. These icons, when displayed as a group, form a texture pattern in the image. The boundary of this pattern can be seen, because icons outside the given group have a different form and produce a different texture pattern.

The key to this technique is designing icons that, when displayed, produce a good texture boundary between groups of similar data elements. To this end, Pickett and Grinstein have been working on an icon toolkit, to allow users to produce and test a variety of icons with their data sets [Gri89]. They have also added an audio component to the icons. Running the mouse across the image will produce a set of tones. Like the icons, the tones are mapped to the values in each data element. It is believed these tones will allow researchers to detect interesting relationships in their data.

Other researchers have suggested using various types of "icons" to plot individual data elements. One of the more unique suggestions has been to use faces (Figure 3.10) with different expressions to represent multidimensional data [Che73][Bru78]. Each data value in a multidimensional data element controls an individual facial characteristic. Examples of these characteristics include the nose, eyes, eyebrows, mouth, and jowls. Chernoff claims he can support data with up to eighteen dimensions. He also claims groupings in coherent data will be drawn as groups of icons with similar facial expressions. This technique seems to be more suited to summarizing multidimensional data elements, rather than segmenting them. Still, it shows that researchers are exploring a wide variety of ideas.

Figure 3.10: Examples of Chernoff faces: notice that the various facial characteristics, such as nose length, eyes, mouth, jowls, etc., are controlled by various data values in each face.

We will look at two preattentive features, hue and orientation, and will investigate their use for a common task, estimation. Psychological experiments that simulate a visualization tool for estimation using these features will be described in Chapter 5.
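As a rough illustration of the data-to-icon mapping described above, the sketch below converts a five-dimensional data element into line segments for a stick-figure icon. The limb geometry, the angle ranges, and the name stick_icon are illustrative assumptions, not details taken from Pickett and Grinstein's toolkit.

    import math

    def stick_icon(element, length=1.0):
        # Map a 5-D data element (values normalized to [0, 1]) to a list of
        # line segments. Values 0-3 control the angles of four limbs; value 4
        # controls the orientation of the body segment.
        body_angle = element[4] * math.pi
        bx = length * math.cos(body_angle)
        by = length * math.sin(body_angle)
        segments = [((0.0, 0.0), (bx, by))]                  # the body segment
        joints = [(0.0, 0.0), (0.0, 0.0), (bx, by), (bx, by)]
        for value, (jx, jy) in zip(element[:4], joints):
            a = body_angle + (value - 0.5) * math.pi         # limb angle from data
            segments.append(((jx, jy),
                             (jx + 0.5 * length * math.cos(a),
                              jy + 0.5 * length * math.sin(a))))
        return segments

    # Similar data elements yield similar shapes, so a grid of such icons
    # forms texture regions wherever neighbouring elements are alike:
    print(stick_icon((0.2, 0.8, 0.5, 0.1, 0.3)))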
Chapter 4

Salmon Migration Simulations

"Di qua, di là, di giù, di sù li mena
[Hither, thither, upward, downward they are driven]"
— Inferno, Dante

A salmon is a well-known type of fish that lives, among other areas, on the western Canadian coast. Salmon are split into six species: Atlantic, Sockeye, Coho, Chinook, Chum, and Pink. The life history of a salmon consists of four stages: birth, freshwater growth stage, ocean growth stage, and migration and spawning [Pea92]. Salmon are born as fry in freshwater rivers and streams. After birth, the fry spend time feeding and maturing before swimming downstream to the open ocean. The amount of time spent can vary widely from a few months to several years, depending on the type of salmon. Upon reaching the ocean, the salmon moves to its "open ocean habitat", where it spends most of its ocean life feeding and growing. Ocean habitats for the various species of salmon are not well defined. Sockeye salmon are thought to feed in the Subarctic Domain, an area of the Pacific Ocean north of 40° latitude stretching from the coast of Alaska to the Bering Sea. After a period of one to six years, salmon begin their migration run. This consists of an open ocean stage back to the British Columbia coast and a coastal stage back to a freshwater stream to spawn. Salmon almost always spawn in the stream where they were born. Scientists now know salmon find their stream of birth using smell when they reach the coast. The direction finding methods used to navigate from the open ocean habitat to the coast are still being researched. Once the salmon reaches its stream of birth, it spawns and then dies. Typical spawning produces between 2,000 and 5,000 eggs.

The ocean phase of sockeye salmon migration is not well understood [LeB90]. It is recognized that it is rapid, well directed, and well timed. Previous work has examined the effect of climate and ocean conditions during migration to see how they affect the point of landfall for Fraser River sockeye. The entrance to the Fraser River is located on the southwest coast of British Columbia, near Vancouver. When the Gulf of Alaska is warm, sockeye make landfall at the north end of Vancouver Island and approach the Fraser River primarily via a northern route through the Johnstone Strait. When the Gulf of Alaska is cold, sockeye are distributed further south, make landfall on the west coast of Vancouver Island, and approach the Fraser River primarily via a southern route through the Juan de Fuca Strait (Figure 4.1). The percentage of sockeye returning around the northern end is called the Johnstone Strait Diversion (hereafter JSD). The JSD has a distinct interannual variability and is measured by records of salmon caught by fishermen each year. Research is being conducted to determine the factors that drive this variability.

Figure 4.1: The British Columbia coast, showing Vancouver Island, the Juan de Fuca Strait, the Johnstone Strait, and the Fraser River. Arrows represent the migration patterns of returning salmon.

Recent work in plotting ocean currents has provided scientists with a possible explanation for sockeye migration patterns. It has been speculated that the interannual variability of ocean surface circulation has impact on where the sockeye make landfall. A multi-institutional investigation has been initiated to examine the influences of currents, temperature and salinity on open ocean return migrations of sockeye salmon. Researchers are using the OSCURS (Ocean Surface Circulation Simulation) model [Ing88][Ing89]. Possible return migration paths of the sockeye are plotted by placing 174 simulated salmon at fixed locations in the open ocean (Figure 4.2).

Figure 4.2: Examples of output from the OSCURS program: (a) dots represent the starting positions of 174 simulated salmon; (b) a trail beginning at each salmon's starting position tracks its path to the British Columbia coast.
The simulated salmon use a common set of "rules" to find their way back to the British Columbia coast. Three separate experiments have been proposed. Salmon from each experiment react to different sets of environmental stimuli. These experiments are designed to test the following (a code sketch of the two orientation rules appears at the end of this section):

1. The influence of currents on compass-oriented migrations. Compass-oriented salmon take a single "look" before their migration run to determine the direction to the coast. They use a biological "compass" to swim in this fixed direction during their migration run, regardless of external forces (e.g., currents) that may shift their migration path

2. The influence of currents on bicoordinate-oriented migrations. Bicoordinate-oriented salmon make multiple "looks" during their migration run. Each "look" allows them to adjust for external forces by updating the direction they need to swim to reach the coast

3. The influence of currents, temperature and salinity on compass- and bicoordinate-oriented migrations

The simulations address the following questions regarding the ocean phase of sockeye return migrations:

• What is the range of sockeye swimming speeds (as opposed to migration rates)?

• What direction-finding mechanisms are used by sockeye (i.e., compass-orientation, plotting and/or bicoordinate-orientation)?

• How does the interannual variability of open ocean current, temperature and salinity fields impact the observed interannual variability of the point of landfall and timing of sockeye returns?

The model simulations of the first phase are complete. They focused on two years with significantly different JSDs, 1982 (22% JSD) and 1983 (80% JSD). Simulations of passive salmon for each of the two years confirmed that the magnitudes of the surface currents are sufficient to influence the migrating sockeye. The average net north-south current drifts were 2.4 and 5.1 kilometres per day, respectively. Over a migration period of sixty days, the difference in these drifts could account for sockeye landfall in 1983 of up to 150 kilometres further north than in 1982. A shift in landfall of this distance might explain the large difference in JSDs.

Fifty-four simulations of compass-oriented sockeye were run. Each simulation "released" 174 sockeye, distributed over the Northeast Pacific sub-arctic habitat (Figure 4.2a). The following model parameters were used:

• 1982 and 1983 surface currents

• sockeye swim speeds of 20.8 cm/s (18 km/d), 34.7 cm/s (30 km/d) and 55.6 cm/s (48 km/d)

• sockeye swim directions of 90°, 112.5°, and 135°

• "release" times (simulated start dates) of May 1, June 1, and July 1

These studies illustrate the problems discussed in Chapter 2, namely display of multidimensional data in a two-dimensional environment, time animated displays, and displays that allow comparison of plots to quickly detect differences.
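The sketch below illustrates the two orientation rules from the list above under simplifying assumptions: the ocean is treated as a flat x-y plane measured in kilometres, the coast is reduced to a single goal point, and current(p) stands in for the OSCURS surface-current field. All names here are hypothetical; OSCURS itself is far more detailed.

    import math

    def unit_toward(p, goal):
        dx, dy = goal[0] - p[0], goal[1] - p[1]
        d = math.hypot(dx, dy) or 1.0
        return (dx / d, dy / d)

    def migrate(start, goal, current, speed_km_day, days, looks):
        # looks=1 models compass orientation: the heading is fixed on day 0.
        # looks=days models bicoordinate orientation: the heading is
        # recomputed every day, compensating for drift.
        p = start
        heading = unit_toward(p, goal)
        for day in range(days):
            if day % max(1, days // looks) == 0:
                heading = unit_toward(p, goal)      # take another "look"
            cx, cy = current(p)                      # drift from the current
            p = (p[0] + speed_km_day * heading[0] + cx,
                 p[1] + speed_km_day * heading[1] + cy)
        return p

    # A steady 5 km/day northward drift shifts a compass-oriented salmon's
    # landfall north; a bicoordinate-oriented salmon corrects for it:
    drift = lambda p: (0.0, 5.0)
    print(migrate((-2000.0, 0.0), (0.0, 0.0), drift, 30.0, 60, looks=1))
    print(migrate((-2000.0, 0.0), (0.0, 0.0), drift, 30.0, 60, looks=60))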
Standard Visualization Tools

We started our analysis of Oceanography's simulation data by building a set of "standard" visualization tools. These tools were written in an ad hoc manner, without using results from research in preattentive processing. This helped us understand the Oceanography simulations and the type of data analysis researchers required. We also wanted to see what kind of visualization tools we could build using standard design techniques. The design process involved consulting with the end users and writing tools to analyse a single specific problem. Three different programs were written. The first compared latitude of landfall for two different years. The second compared salmon swim speeds for two different years. The third compared date of landfall for two different years.

An example of output from the latitude of landfall visualization tool is shown in Figure 4.3. The top and center images show latitude of landfall values for the two years under study, 1982 and 1983, respectively. The researchers in oceanography subdivided the range of possible landfall values into three regions: from 45° to 51° latitude, from 51° to 54° latitude, and from 54° to 60° latitude. Salmon making landfall below 45° or above 60° are not displayed. Each region is represented by a unique grey scale range. A salmon's latitude of landfall is shown by drawing a grey square at the salmon's starting position. The intensity of the square identifies both the region and the exact latitude where the salmon made landfall.

Figure 4.3: Example output from the landfall visualization tool. Top image shows latitude of landfall for 1982. Center image shows latitude of landfall for 1983. Bottom image shows difference in latitude of landfall between 1983 and 1982.

The bottom image shows the difference in latitude of landfall between 1982 and 1983. The range of possible values is subdivided into two regions.
A salmon that landed farther north in 1983 versus 1982 is shown by drawing a light grey square at its starting position. A salmon that landed farther south in 1983 versus 1982 is shown by drawing a dark grey square at its starting position. Salmon with the same latitude of landfall in both 1982 and 1983 are represented by an empty square. Again, salmon that made landfall outside the range 45° to 60° in either of the two years are not displayed. The data in Figure 4.3 support the hypothesis that ocean currents affect salmon migration patterns. The majority of the squares in the bottom image are light grey, indicating the majority of salmon in 1983 landed farther north than they did in 1982. These results are similar to the real data, which show a 1982 JSD of 22% and a 1983 JSD of 80%.

The other two programs visualize data in a similar way. The swim speed visualization tool compares a salmon's swim speed and its actual migration speed (Figure 4.4). Swim speed is a constant speed used by the salmon to swim towards the British Columbia coast. Migration speed is the speed the salmon makes after taking into account the effect of the current. The current may assist or resist the salmon's forward progress. The top and center images compare swim speed and migration speed for 1982 and 1983 respectively. A salmon with a migration speed faster than its swim speed is coloured light grey. A salmon with a migration speed slower than its swim speed is coloured dark grey. The bottom image compares the difference in migration speed between 1982 and 1983. Salmon that are faster are coloured light grey, while salmon that are slower are coloured dark grey.

Figure 4.4: Example output from the swim speed visualization tool. Top image shows difference in swim speed and migration speed for 1982. Center image shows difference in swim speed and migration speed for 1983. Bottom image shows difference in migration speed between 1983 and 1982.

The date of landfall visualization tool classifies salmon as belonging to one of three different stocks, Stikine, Chilko, or Adams (Figure 4.5). "Stock" refers to a specific area of the Fraser River where the salmon are born and where they migrate to spawn. Researchers know the time periods during which salmon from a given stock arrive at the mouth of the Fraser River. The visualization tools can assign a salmon to a given stock using its date of landfall. Salmon that arrived during the Stikine migration period are coloured light grey. Salmon that arrived during the Chilko migration period are coloured dark grey. Salmon that arrived during the Adams migration period are coloured black. Salmon that arrived during none of the above migration periods are represented by an empty square. The bottom image shows the difference in date of landfall between 1983 and 1982. Salmon that landed earlier in 1983 versus 1982 are coloured light grey. Salmon that landed later are coloured dark grey.

Figure 4.5: Example output from the date of landfall visualization tool. Top image shows stock classification for salmon in 1982, based on date of landfall. Center image shows stock classification for salmon in 1983. Bottom image shows difference in date of landfall between 1983 and 1982.

The visualization tools could be more useful if they had some way to "superimpose" all three frames of data into a single display. This would allow the researchers to search more easily for dependencies among the different variables. To design this type of tool, one must answer the question "How should the variables be combined and displayed simultaneously?" An ad hoc assignment of features such as colour, shape, orientation, and intensity to individual data values may not result in a useful visualization tool. In fact, as discussed in the next chapter, it may result in a tool that inhibits the user's ability to extract the desired information. Results from research in preattentive processing can be used to design a "preattentive" visualization tool. Such tools intelligently represent multivariate data using multiple preattentive features.
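For concreteness, the grey-square encodings used by these standard tools can be sketched as follows. Only the region boundaries and the light/dark conventions come from the descriptions above; the exact grey levels and function names are assumptions.

    REGIONS = [(45.0, 51.0), (51.0, 54.0), (54.0, 60.0)]   # degrees latitude
    GREY_RANGES = [(40, 100), (110, 170), (180, 240)]       # one range per region

    def landfall_grey(latitude):
        # Return a grey level encoding both the region and the exact latitude,
        # or None for salmon outside 45-60 degrees (not displayed).
        for (lo, hi), (g_lo, g_hi) in zip(REGIONS, GREY_RANGES):
            if lo <= latitude < hi or (hi == 60.0 and latitude == 60.0):
                t = (latitude - lo) / (hi - lo)             # position in region
                return round(g_lo + t * (g_hi - g_lo))
        return None

    def difference_grey(lat_1983, lat_1982):
        # Light grey = farther north in 1983, dark grey = farther south,
        # None = no change (drawn as an empty square).
        if lat_1983 > lat_1982:
            return 220                                       # light grey
        if lat_1983 < lat_1982:
            return 70                                        # dark grey
        return None

    print(landfall_grey(52.3), difference_grey(53.0, 51.5))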
Chapter 5

Psychological Experiments

"Errors, like straws, upon the surface flow;
He who would search for pearls must dive below"
— John Dryden (1631-1700)

Through experimentation, we hope to determine whether or not research in preattentive processing can help design more useful and intuitive scientific visualization tools, as described in Chapter 3. DeSanctis gave a number of criteria that measure the usefulness of a graphical display: interpretation accuracy, problem comprehension, task performance, decision quality, speed of comprehension, decision speed, memory for information, and viewer preference. We use estimation error and reaction time to measure accuracy for three of these criteria. In our experiments, a "more useful" visualization tool will be one with an improved interpretation accuracy, task performance, or decision speed.

The experiments used data from the salmon migration simulations. They were limited to a specific set of preattentive features and a specific type of task. Due to the general nature of our data and task, we believe these experiments can be extended to a broad range of visualization problems. In general, we expected to address the following issues:

• are preattentive features "better" than standard visualization methods?

• in what situations are preattentive features useful or not useful?

• are preattentive visualization tools "intuitive" to the user, or do they require a large amount of initial explanation?

• does encoding separate data values with different preattentive features cause an interference effect for the given task due to feature preference?

• do our observations match what we would expect, given results from previous experiments in preattentive processing?

Task Selection

We consulted with researchers in Oceanography to identify a suitable task to use during the psychological experiments. We wanted to choose a task that was simple, but that still posed a question of interest to Oceanography. Part of this involved using results from the salmon migration simulations during the experiments. This allowed us to see how our visualization techniques performed with "real-world" data. We decided to ask subjects to estimate the number of simulated salmon that made landfall north of some fixed latitude. Relative landfall was encoded on a two-dimensional map of the open ocean at the spatial position the salmon started its migration run. A preattentive feature was used to represent position of landfall. For example, during one experiment salmon that landed north of the given latitude were coloured blue, while salmon that landed south were coloured red. Subjects were asked to estimate the percentage of blue elements in each display.

We wanted to see how preattentive features interfere with one another. Callaghan's experiments showed a "feature preference" hierarchy for her texture boundary detection task [Cal84][Cal89]. We wanted to see if irrelevant preattentive features interfere with a subject's estimation ability, similar to the way irrelevant preattentive features interfere with boundary detection ability. We decided to use the stream function for our "interference" variable. The stream function is a scalar value that represents current direction and speed at a given position in the ocean, for a given instant in time. It is the potential function for the velocity field. Given the stream function ψ(x, y), the x and y components of the current vector can be calculated as follows:

    v_x = ∂ψ/∂y,    v_y = −∂ψ/∂x                                    (5.1)

A stream function value was encoded at each spatial position where a salmon started a migration run. A preattentive feature was used to represent the stream function.
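Equation 5.1 can be checked numerically with central differences, assuming the stream function is available as a callable. The example field below is made up for illustration; it is not OSCURS data.

    def current(psi, x, y, h=1e-4):
        # Velocity components from the stream function:
        # v_x = d(psi)/dy, v_y = -d(psi)/dx (Equation 5.1).
        v_x = (psi(x, y + h) - psi(x, y - h)) / (2.0 * h)
        v_y = -(psi(x + h, y) - psi(x - h, y)) / (2.0 * h)
        return v_x, v_y

    # Example: psi = x * y gives v_x = x and v_y = -y, a simple stagnation
    # flow whose streamlines are the contours of psi.
    psi = lambda x, y: x * y
    print(current(psi, 2.0, 3.0))   # approximately (2.0, -3.0)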
Stream function is an unnecessary piece of information, because our estimation task asks only about latitude of landfall. We needed to ensure that stream function acted only as an interference value, independent of latitude of landfall. In fact, we suspect that stream function and landfall are correlated. Subjects might "learn" about this dependence during the experiment (which is exactly what researchers in Oceanography had to do). It would then be possible to use either landfall or stream function information to complete the estimation task. We would be unable to determine how different choices for our primary preattentive feature affect estimation ability. Subjects would simply gather information about the data value encoded with the "preferred" feature. If stream function and latitude of landfall are dependent, information about either one might be used to give a landfall estimation.

The latitude of landfall values from the migration simulations were modified to give us an even distribution of "northern salmon" (this is explained further below). These changes also provided the desired independence. Any dependence that might have existed was lost when we modified the landfall values. Note that when Oceanography studies the actual simulation data, they will be looking for a hypothesized correlation between latitude of landfall and the stream function field.

Open ocean current data was available for the years 1946 to 1990. The salmon migration simulation was run for each of these years. The latitude position where each salmon made landfall was recorded. This allowed us to calculate an average latitude of landfall for a salmon, based on its starting position. We compared these average values to individual simulation results. For a given year, this allowed us to classify a salmon as landing farther north or south versus the average landfall data.

Each simulation ran for a predetermined number of simulated days. Stream function information for each day was recorded. This allowed us to calculate a set of average stream function values for each year from 1946 to 1990. As mentioned above, these values were encoded at each spatial position where a salmon started a migration run.

In summary, our proposed task was as follows:

• our primary data value was whether a salmon lands north or south of the average latitude of landfall for the salmon's starting position

• the primary data value was encoded at the spatial position the salmon started its migration run, using preattentive feature A

• our secondary data value was the average stream function value at every spatial location where a salmon started a migration run

• the secondary data value was encoded using preattentive feature B

• subjects were asked to estimate the percentage of elements in the display with a given preattentive feature (i.e., the number of salmon landing north of the average latitude of landfall) to the nearest 10%

• estimation error (percentage of over- or under-estimation) and reaction time were used as a measure of subject performance

Figure 5.1: Examples of the rectangle orientations used during the experiment: (a) 0° rotation; (b) 60° rotation.

Experiment Design

Through experimentation, we hoped to answer two specific sets of questions about preattentive features and their use in visualization tools:

• is it possible for subjects to provide a reasonable estimation of the relative number of elements in a display with a given preattentive feature? What features under what conditions allow this?
• how does encoding an "irrelevant" data dimension with a secondary preattentive feature interfere with a subject's estimation ability? Which features interfere with one another and which do not?

We decided to examine two features, hue and orientation. These have been shown to be preattentive in various experiments by Julesz [Jul83] and Treisman [Tri88]. Two unique line orientations were used: 0° rotation and 60° rotation (Figure 5.1). Two different hues were used, H1 and H2. These hues were chosen from the Munsell colour space, and they satisfied the following properties:

1. The hues were isoluminent, that is, the perceived brightness of both hues was equal

2. The perceived difference between hues H1 and H2 was equal to the perceived difference between a rectangle rotated 0° and one rotated 60°

An explanation of how we determined these properties will be given later in this chapter. Our design allowed us to use oriented, coloured rectangles to represent latitude of landfall and stream function values at the starting position of each salmon.

Latitude of landfall and stream function values for each year were split into two subgroups before the experiment began. Latitude of landfall was divided into a subgroup of values south of the average latitude of landfall position, and a subgroup of values equal to or north of the average latitude of landfall position. The stream function was divided into two equally sized subgroups. All the values in the first subgroup were less than or equal to all the values in the second subgroup. We could then use either of our two preattentive features to encode landfall and stream function values.

The experiment was divided into four subsections or "blocks", B1, B2, B3, and B4. The primary and secondary data value varied within each block, as did the primary and secondary preattentive feature. This gave us the following:

1. Primary data value was latitude of landfall, represented by hue. Secondary data value was stream function, represented by orientation (Figure 5.2)

2. Primary data value was latitude of landfall, represented by orientation. Secondary data value was stream function, represented by hue

3. Primary data value was stream function, represented by hue. Secondary data value was latitude of landfall, represented by orientation

4. Primary data value was stream function, represented by orientation. Secondary data value was latitude of landfall, represented by hue

Figure 5.2: Example of a display from block B1, latitude of landfall represented by hue, stream function represented by orientation.

Each block was further divided into two control subsections and one experiment subsection (Figure 5.3). The control subsections used a fixed value for the secondary feature. This allowed us to see how a real interference feature affected estimation error and reaction time. As an example, block B1 was subdivided as follows:

• Control subsection 1 consisted of 36 trials. Landfall was encoded using hue. Stream function was ignored, and every rectangle was oriented 0°

• Control subsection 2 consisted of 36 trials. Landfall was encoded using hue. Stream function was ignored, and every rectangle was oriented 60°

• The experiment subsection consisted of 72 trials. Landfall was encoded using hue. Stream function was encoded using orientation
The 72 trials used in the control subsections were almost the same as the 72 trials used in the experiment subsection. The only difference was whether stream function was ignored or used to determine orientation. In fact, there were only 36 unique trials. Each trial was shown four times during the experiment to obtain the desired number of trials. Each of the 36 unique trials contained a certain percentage of target elements. We wanted an equal number of trials for each percentage value. For example, in block B1, the target element was a salmon that lands north of the average landfall position. We wanted 4 trials in which 5-15% of the salmon were "northern", 4 trials in which 15-25% of the salmon were "northern", and so on up to 85-95%. These percentage values are called intervals, and are numbered from 1 (5-15% northern) to 9 (85-95% northern). The original simulation data was modified to give us the desired distribution.

Blocks B2, B3, and B4 were subdivided in a similar manner (Table 5.1). Each subject completed either blocks B1 and B3 (testing hue) or blocks B2 and B4 (testing orientation), for a total of 288 trials. The 144 trials within each block were presented in a random order to the subject.

    Block     Block        Primary    Primary       Secondary   Secondary         Total
    Name      Type         Value      Feature       Value       Feature           Trials
    Sub 1-1   control      landfall   hue           —           0° orientation    36
    Sub 1-2   control      landfall   hue           —           60° orientation   36
    Sub 1-3   experiment   landfall   hue           stream      orientation       72
    Sub 2-1   control      landfall   orientation   —           red hue           36
    Sub 2-2   control      landfall   orientation   —           blue hue          36
    Sub 2-3   experiment   landfall   orientation   stream      hue               72
    Sub 3-1   control      stream     hue           —           0° orientation    36
    Sub 3-2   control      stream     hue           —           60° orientation   36
    Sub 3-3   experiment   stream     hue           landfall    orientation       72
    Sub 4-1   control      stream     orientation   —           red hue           36
    Sub 4-2   control      stream     orientation   —           blue hue          36
    Sub 4-3   experiment   stream     orientation   landfall    hue               72

Table 5.1: Information for each subsection in the experiment, including name, type, primary data value, primary preattentive feature, secondary data value, secondary preattentive feature, and total number of trials.

Figure 5.3: Overview of the experiment design.

The only difference between blocks B1 and B3 and blocks B2 and B4 was the primary data value. Blocks B1 and B2 used landfall as the primary data value. Blocks B3 and B4 used stream function as the primary data value. Stream function and latitude of landfall were subdivided for blocks B3 and B4 as follows. An average stream function value was calculated using data for the years 1946 to 1990. A particular stream function value could then be classified as lower than, equal to, or greater than the average value. Latitude of landfall was split into two equal groups for each year 1946 to 1990. The primary and secondary data values were subdivided in a way identical to blocks B1 and B2.

The above variation was designed to give us two different types of spatial patterns. The "question" asked in blocks B3 and B4 is: what percentage of stream function values are equal to or greater than the average value? Although this is not a question of interest for Oceanography, it was asked to help validate results from the psychological experiments. Landfall values tend to subdivide into two separate "groups" of elements. Stream function values tend to subdivide into a concentric ring-like pattern. We wanted to compare estimation error and reaction time between blocks B1 and B3 and blocks B2 and B4. A difference would suggest estimation ability depends, at least in part, on the type of spatial pattern presented to the subject.
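The interval bookkeeping used throughout these blocks amounts to a simple mapping from target percentage to keypress. The helper below is a hypothetical sketch, not part of the experiment software.

    from collections import Counter

    def interval_of(percent_targets):
        # Map a target percentage (5-95) to its interval number 1-9,
        # where interval i covers (10*i - 5)% to (10*i + 5)% targets.
        if not 5 <= percent_targets <= 95:
            raise ValueError("trials always contain 5-95% targets")
        return min(9, max(1, round(percent_targets / 10.0)))

    # Each display's target percentage maps to one of the nine intervals;
    # the 36 unique trials give 4 trials per interval:
    trial_percentages = [12, 18, 33, 41, 58, 67, 74, 88, 91]   # made-up examples
    print([interval_of(p) for p in trial_percentages])
    print(Counter(interval_of(p) for p in trial_percentages))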
Colour Selection

The hues used during the psychological experiments were chosen from the Munsell colour space. This colour space was originally proposed by Albert H. Munsell in 1898. It was later revised by the Optical Society of America in 1943 to more closely approximate Munsell's desire for a functional and perceptually balanced colour system. A colour from the Munsell colour space is specified using the three "dimensions", hue, chroma, and value.

In Munsell space, hue refers to some uniquely identifiable colour, or as Munsell suggested, "the quality by which we distinguish one colour from another, as a red from a yellow, a green, a blue, or a purple"¹. Hue is represented by a circular band divided into ten sections. Munsell named these sections red, yellow-red, yellow, green-yellow, green, blue-green, blue, purple-blue, purple, and red-purple (or R, YR, Y, GY, G, BG, B, PB, P, and RP for short). Each section can be further divided into ten subsections if finer divisions of hue are needed. A number preceding the hue name is used to define the subsection (for example, 5R or 7BG).

Value refers to a colour's lightness or darkness. Munsell defined value as "the quality by which we distinguish a light colour from a dark one"². Value is divided into nine sections, numbered 1 through 9. Dark colours have a low value, while lighter colours have a higher value.

Chroma defines a colour's strength or weakness. Chroma is measured in numbered steps starting at 1. Weak colours have low chroma values. Strong colours have high chroma values. Greys are colours with a chroma value of zero. The maximum possible chroma value depends on the hue and value being used.

A visual representation of the Munsell colour space is shown in Figure 5.4³. The circular band represents hue. The pole running through the center of the colour space represents value. Colours with increasing chroma radiate outwards from the value pole. A Munsell colour is specified by writing "hue value/chroma". For example, R 6/6 would be a relatively strong red. BG 9/2 would be a weak cyan.

Figure 5.4: Munsell colour space, showing its three dimensions hue, value, and chroma.

We chose the Munsell space because it provides a number of desirable properties. Munsell originally designed his colour space to be used by artists. One feature he tried to incorporate into his system was perceptual "balance". Hues directly opposite one another will be balanced, provided their value and chroma are equal. Thus, BG 5/5 is perceptually balanced with R 5/5, and Y 2/3 is balanced with PB 2/3. Opposite hues with different values and chromas can also be balanced by varying the amount of each colour used within a given area. Given two Munsell colours H1 V1/C1 and H2 V2/C2, we need V2C2 parts of hue H1 and V1C1 parts of hue H2. For example, colours R 5/10 and BG 5/5 can be balanced by using BG 5/5 in two-thirds of the area, and R 5/10 in one-third of the area. As we would expect, the stronger chroma and higher value take up less of the total area than the weaker chroma and lower value.

A second and perhaps more important property is that Munsell colours with the same value are isoluminent. Thus, colours R 5/5, G 5/6, B 5/3, and any other colours with value 5 are all perceived as having equal luminance. This property was provided when the Munsell colour table was revised in 1943.

¹ Munsell: A Grammar of Colour. New York, New York: Van Nostrand Reinhold Company, 1969, pg. 18
² Ibid., pg. 20
³ Ibid., pg. 23
Munsell colours must be converted into RGB triples to be displayed on a computer monitor. The first step in this process is calculating a CIE to RGB conversion matrix for the monitor being used. Given the CIE chromaticity values (x_r, y_r), (x_g, y_g), (x_b, y_b) for the monitor phosphors, and the luminance of the monitor's maximum-brightness red, green, and blue (Y_r, Y_g, Y_b), we can compute the following:

    z_r = 1 − x_r − y_r
    z_g = 1 − x_g − y_g                                             (5.2)
    z_b = 1 − x_b − y_b

    C_r = Y_r / y_r,    C_g = Y_g / y_g,    C_b = Y_b / y_b         (5.3)

A colour is usually specified in CIE colour space with a triple (x, y, Y). These values are used to obtain X and Z as shown in Equation 5.4 below. The values X, Y and Z are then inserted into Equation 5.5 to obtain the monitor R, G, B values for the given colour [Fol90]. This process assumes the intensity steps produced by the video hardware are linear.

    X = (x / y) Y,    Z = ((1 − x − y) / y) Y                       (5.4)

    [R]   [x_r C_r   x_g C_g   x_b C_b]⁻¹ [X]
    [G] = [y_r C_r   y_g C_g   y_b C_b]   [Y]                       (5.5)
    [B]   [z_r C_r   z_g C_g   z_b C_b]   [Z]

Our experiments were run on an Apple Macintosh II microcomputer using a software package written by Rensink and Enns [Enn91]. This software was specifically designed to run preattentive psychology experiments. The microcomputer was equipped with an Apple RGB 13-inch colour display and a Mac II High-Resolution Video Card. It was capable of displaying 256 colours simultaneously. The CIE chromaticities for the monitor phosphors were supplied by the manufacturer. A spot photometer was used to measure the luminance values for the monitor's maximum-intensity red, green, and blue. This gave us the following values, which were used to obtain the conversion matrix shown in Equation 5.8:

    (x_r, y_r, z_r) = (0.625, 0.340, 0.035)
    (x_g, y_g, z_g) = (0.280, 0.595, 0.125)                         (5.6)
    (x_b, y_b, z_b) = (0.155, 0.070, 0.775)

    Y_r = 5.5
    Y_g = 16.6                                                      (5.7)
    Y_b = 2.8

    [R]   [ 0.1313   −0.0574   −0.0211] [X]
    [G] = [−0.0439    0.0806    0.0015] [Y]                         (5.8)
    [B]   [ 0.0025   −0.0080    0.0325] [Z]

Tables available in Wyszecki and Stiles give CIE chromaticity values for a large number of Munsell colours [Wys82]. Once we have the CIE values for a Munsell colour, we can use Equation 5.5 to convert them into RGB triples. Not all the Munsell colours can be displayed by the monitor. Many of them fall outside the monitor's colour gamut. In particular, the number of displayable green, blue-green, and blue hues is quite limited. Table 5.2 shows the number of different chroma values available for each of the primary Munsell hue and value combinations.

    Hue     Value
            1      2      3      4      5      6      7      8      9
    5R      2/5    4/7    5/8    6/9    8/10   9/9    7/7    5/5    3/3
    5YR     1/2    2/3    3/4    4/6    5/7    5/9    6/10   6/7    3/3
    5Y      1/1    1/2    2/3    2/4    3/6    3/7    4/8    4/9    5/10
    5GY     1/2    2/3    2/4    3/5    3/6    4/7    4/8    5/10   5/9
    5G      1/4    2/8    3/11   4/13   4/14   5/14   6/13   8/11   6/6
    5BG     1/3    2/6    2/9    3/10   3/11   4/10   5/10   5/8    5/5
    5B      1/3    2/5    2/6    3/7    4/8    4/8    5/7    4/4    2/2
    5PB     2/5    3/8    4/9    6/10   7/9    7/7    5/5    3/3    1/1
    5P      3/11   5/14   7/16   9/16   11/14  10/10  7/7    5/5    2/2
    5RP     3/7    4/9    5/10   7/11   8/12   9/11   9/9    6/6    3/3

Table 5.2: Number of displayable colours for all primary Munsell hue and value combinations (Apple RGB 13-inch colour display), in the format "displayable chroma/total chroma".
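A small sketch of this conversion, assuming linear video hardware as stated above. The phosphor data come from Equations 5.6 and 5.7, and computing the inverse of the matrix in Equation 5.5 should reproduce, up to rounding, the matrix of Equation 5.8. The 3x3 inversion is written out by hand so the sketch needs no external libraries.

    PHOSPHORS = {"r": (0.625, 0.340, 5.5),    # (x, y, Y) per Eq. 5.6-5.7
                 "g": (0.280, 0.595, 16.6),
                 "b": (0.155, 0.070, 2.8)}

    def forward_matrix():
        # Columns (x_j*C_j, y_j*C_j, z_j*C_j) with C_j = Y_j / y_j
        # (Equations 5.3 and 5.5).
        cols = []
        for x, y, Y in PHOSPHORS.values():
            c = Y / y
            cols.append((x * c, y * c, (1.0 - x - y) * c))
        return [[cols[j][i] for j in range(3)] for i in range(3)]

    def invert3(m):
        # Inverse of a 3x3 matrix via cofactors.
        a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
        det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
        return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
                [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
                [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

    def xyY_to_rgb(x, y, Y):
        # CIE (x, y, Y) -> monitor RGB: Equation 5.4, then Equation 5.5.
        X, Z = (x / y) * Y, ((1.0 - x - y) / y) * Y
        inv = invert3(forward_matrix())
        return tuple(row[0]*X + row[1]*Y + row[2]*Z for row in inv)

    # First row of the inverse is approximately (0.1313, -0.0574, -0.0211),
    # matching Equation 5.8:
    print(invert3(forward_matrix())[0])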
We know how to convert Munsell colours to monitor RGB colour space. We must still choose two Munsell colours for our experiment. Recall that we required the following properties of our two hues:

1. The hues will be isoluminent, that is, the perceived brightness of both hues will be equal

2. The perceived difference between hues H1 and H2 will be equal to the perceived difference between a rectangle rotated 0° and one rotated 60°

Requirement 1 was satisfied by ensuring both hues had the same value in Munsell space. We chose Munsell value 7, because that slice through Munsell space provided the largest range of displayable colours for a variety of different hues.

Requirement 2 was satisfied by running a set of preliminary experiments. We started with a simple target detection task. Users were asked to detect the presence or absence of a rectangle rotated 60° in a field of distractor rectangles rotated 0°. Both the target and distractor rectangles were coloured 5R 7/8. The experiment consisted of 30 trials. 15 of the 30 trials were randomly chosen to contain the target. Each trial consisted of 36 rectangles (including the target, if present), drawn in random positions on the screen. The average reaction time for detection was computed from the trials in which the user responded correctly.

After the first experiment, the target and distractors were changed. The target was a rectangle coloured 10RP 7/8. The distractors were rectangles coloured 5R 7/8. Note that the target is a single counter-clockwise "hue step" from the distractors in Munsell space. Both the target and distractor rectangles were rotated 0°. The experiment consisted of 30 trials. 15 of the 30 trials were randomly chosen to contain the target. Each trial consisted of 36 rectangles (including the target, if present), drawn in random positions on the screen. The average reaction time for detection was computed from the trials in which the user responded correctly.

The hues used for the target and distractors during the second experiment were very similar. Because of this, the average reaction time for the second experiment was higher than the average reaction time for the first experiment. Additional experiments were run as follows:

• the target was moved another counter-clockwise "hue step" away from the distractors (i.e., 5RP 7/8, 10P 7/8, and so on)

• the second experiment was re-run, and average reaction time was computed

• this process continued until an average reaction time equal to or below the average reaction time of the first experiment was obtained

This process provided two isoluminent hues H1 and H2 with a perceived difference equal to that of a 60° rotation, where perceived difference was measured by reaction time in the target detection experiment.

Figure 5.5 shows a graph of the combined results of all subjects. The solid horizontal line shows the average reaction time for detecting the rotated rectangle, 569 milliseconds. The dashed line shows the average reaction times for detecting each coloured rectangle, starting with 5P 7/8, which is three hue steps from the distractors, and ending with 5BG 7/8, which is ten hue steps from the distractors. The exact reaction times are provided in Table 8.5. At five hue steps, or 10PB 7/8, the average reaction time for hue detection was less than the average reaction time for target detection.

Figure 5.5: Results from hue and rotation difference experiments. The solid line shows reaction time for rotation discrimination. Points along the dashed line show reaction time for discriminating the given Munsell hue.

We chose to use the hues 5R 7/8 and 5PB 7/8 during the experiment. These hues are 6 hue steps from one another. We assumed the equality in perceived difference would transfer from the target detection task to the estimation task.
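The hue-step search just described can be summarized as a simple loop. In the sketch below, run_detection_block stands in for a human subject completing a 30-trial detection block and returning the mean correct reaction time in milliseconds; it and the other names are hypothetical, not thesis code.

    HUE_STEPS = ["10RP 7/8", "5RP 7/8", "10P 7/8", "5P 7/8", "10PB 7/8",
                 "5PB 7/8", "10B 7/8", "5B 7/8", "10BG 7/8", "5BG 7/8"]

    def equivalent_hue(run_detection_block, rotation_rt_ms):
        # Walk counter-clockwise from the distractor hue 5R 7/8, one Munsell
        # hue step at a time, until hue detection is at least as fast as
        # detecting a 60-degree rotation (rotation_rt_ms).
        for hue in HUE_STEPS:
            rt = run_detection_block(target=hue, distractor="5R 7/8")
            if rt <= rotation_rt_ms:
                return hue
        return None   # no tested hue matched the rotation's reaction time

    # Stand-in subject: reaction time shrinks as the hue difference grows.
    fake_subject = lambda target, distractor: 950 - 100 * HUE_STEPS.index(target)
    print(equivalent_hue(fake_subject, rotation_rt_ms=569))   # "10PB 7/8"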
Experiment Procedure

Twelve subjects were used during the experiment, nine males and three females, all students or staff at the University of British Columbia. All subjects had normal or corrected vision and none were known to be colour blind. Experiments were run in the Department of Psychology's vision laboratory, using a Macintosh II microcomputer equipped with a 13-inch RGB monitor and video hardware capable of displaying 256 colours simultaneously. In general, it took subjects between 50 and 80 minutes to finish the entire experiment. Subjects were paid an honorarium of $10 for their participation in the experiment.

Each subject completed either blocks B1 and B3 (blocks using hue as the primary feature) or blocks B2 and B4 (blocks using orientation as the primary feature) for a total of 288 trials. At the beginning of the experiment, subjects were shown a sample display frame. The experiment procedure and task were explained to the subject. For example, subjects completing the hue blocks were told:

    During the experiment, you will be shown a number of displays or "trials" similar to this one. Notice the trial is made up of rectangles, some of which are coloured blue, some of which are coloured red. Your task will be to estimate, to the nearest 10%, the number of rectangles that are coloured blue. These trials will be shown in the following way. First, the screen will blank for a short period of time. Just before the trial is shown, a focus circle will be displayed, so you can prepare yourself. The trial will be shown for a fixed period of time. The screen will blank, and the computer will wait for you to type in your estimation. If your response is correct, a plus sign will be shown. If your response is incorrect, a minus sign will be shown. The computer will then proceed to the next trial. You can take as much time as you like to make your estimation. However, in general, the longer you wait before making your estimation, the less accurate your answer will be.

Subjects were then shown how to enter their estimation. This was done by typing a digit on the keyboard between 1 and 9, which corresponded to the percentage of rectangles they
estimated contained the target feature: 5-15%, 15-25%, and so on up to 85-95%. Subjects were told no trial would contain less than 5% or more than 95% of the target rectangles.

Subjects started with a set of practice trials called "Fixed Practice". This consisted of nine trials, one for each of the nine possible intervals. In one trial 10% of the rectangles were targets, in another 20% were targets, and so on up to 90%. The practice trials were designed to calibrate the subjects and to give them an idea of the speed of the trials and the experiment. Trials were displayed one after another to the subject in the manner described above. If subjects estimated correctly, they moved immediately to the next trial. If they estimated incorrectly, the trial was redisplayed, and they were told the correct answer.

Next, subjects completed a second set of practice trials called "Random Practice". This set of trials consisted of 18 trials, two for each of the nine possible intervals. Trials were displayed in a random order to the subject. This experiment was designed to run exactly like a real experiment block. Trials in which the subject estimated incorrectly were not redisplayed and subjects were not told the correct answer, although they were told whether their estimation was right or wrong.

Finally, subjects completed the two experiment blocks B1 and B3 or B2 and B4. Each block consisted of 72 control trials and 72 experiment trials. The 144 trials from each block were presented to the subject in a random order. Subjects were provided with an opportunity to rest after every 48 trials. Data from all four phases was saved for later analysis.

Chapter 6

Analysis of Results

"King, father, royal Dane, O! answer me;
Let me not burst in ignorance; but tell"
— Hamlet, Shakespeare

A variety of information was obtained during the psychological experiments. Four data files were produced for each subject: a fixed practice file, a random practice file, a file for the first experiment block, and a file for the second experiment block. Data for each trial included a record of the subject's estimation, the correct estimation, response time, and trial type (control or experiment) for the experiment blocks. Estimation error is the absolute value of the difference between the subject's estimation and the correct estimation for a given trial.

Some trials showed large errors, usually when the correct estimation was in the lower 3 or upper 3 intervals. We hypothesize this was caused by subjects accidentally estimating the wrong feature, either the number of red rectangles instead of blue rectangles or the number of rectangles rotated 0° instead of 60°. This was supported to some extent by subjects who mentioned they had made this type of mistake. We decided to modify the data by complementing subject estimations that displayed an unusually large error. Estimations were corrected as follows:

• for intervals 1 (5-15% targets), 2 (15-25% targets), and 3 (25-35% targets), estimations greater than 5 were "flipped" about 5, that is, 6 became 4, 7 became 3, 8 became 2, and 9 became 1

• for intervals 7 (65-75% targets), 8 (75-85% targets), and 9 (85-95% targets), estimations less than 5 were "flipped" about 5, that is, 4 became 6, 3 became 7, 2 became 8, and 1 became 9

In total, less than 1% (21 of 2,304) of the trials were modified. This data formed the basis for the analysis described in this chapter.
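The correction above is a reflection about the middle keypress. A minimal sketch, assuming intervals and keypresses both run from 1 to 9 (flip_correct is a hypothetical name):

    def flip_correct(interval, response):
        # Complement responses that suggest the subject estimated the wrong
        # feature: flip about 5 in the three lowest and three highest intervals.
        if interval <= 3 and response > 5:
            return 10 - response        # 6->4, 7->3, 8->2, 9->1
        if interval >= 7 and response < 5:
            return 10 - response        # 4->6, 3->7, 2->8, 1->9
        return response

    # A keypress of 9 on a 10%-target trial becomes 1, while responses in
    # the middle intervals are left untouched:
    print(flip_correct(1, 9), flip_correct(5, 9))   # 1 9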
The analysis that follows addresses the following questions:

• can rapid and accurate estimation be performed using each of hue and orientation?

• is there evidence of a subject preference for either hue or orientation for the estimation task?

• is there evidence of a subject preference for the type of data being displayed during the estimation task?

• is there evidence that orientation interferes with a subject's ability to perform hue estimation?

• is there evidence that hue interferes with the subject's ability to perform orientation estimation?

A t-test assumes X1 and X2 are independent and normally distributed with means μ1 and μ2 and the same variance σ². Similarly, an F-test assumes all Xi are independent and normally distributed with means μi and the same variance σ². However, research by Glass and others [Gla84] has shown that the violation of normality has little effect on t-test robustness. Also, if the size of the samples is equal, that is, if n1 = n2, violation of homogeneity-of-variance has little effect on t-test robustness. These assumptions extend to ANOVA F-test results. Because our data meets these criteria, we did not test for normality or equal variance during our analysis.

Estimation Ability

The first question we addressed was whether subjects were able to perform accurate estimation in a 450 millisecond exposure duration. Figures 6.1-6.12 show graphs of combined subject data for the control and experiment subsections of blocks B1, B2, B3, and B4. Each graph plots average subject response V̄ for each interval, standard deviation of subject response σ(V) for each interval, and standard deviation of subject estimation error σ(e) for each interval. Tables 6.1-6.4 give exact values for these measurements, as well as average subject estimation error ē for each interval.

The results show that accurate estimation was possible during the experiment for all four blocks. In the experiment subsections the total ē ranged from a low of 0.54 in block B3 to a high of 0.65 in block B1. σ(e) was below 1.0 in all four blocks. This indicates that subject responses were clustered close to the correct estimate. Results from the two control subsections show similar trends.

Figure 6.1: Graph of combined results for control 1 subsection of block B1, primary data value landfall, primary feature hue. (Keypress, 1-9, is plotted against interval; curves show user value average, user value SD, error SD, and expected user value.)

Figure 6.2: Graph of combined results for control 2 subsection of block B1, primary data value landfall, primary feature hue.

Figure 6.3: Graph of combined results for experiment subsection of block B1, primary data value landfall, primary feature hue.

Figure 6.4: Graph of combined results for control 1 subsection of block B2, primary data value landfall, primary feature orientation.
Figure 6.5: Graph of combined results for control 2 subsection of block B2, primary data value landfall, primary feature orientation.

Figure 6.6: Graph of combined results for experiment subsection of block B2, primary data value landfall, primary feature orientation.

Class   Control 1                  Control 2                  Experiment
        V̄     σ(V)  ē     σ(e)     V̄     σ(V)  ē     σ(e)     V̄     σ(V)  ē     σ(e)
1       1.25  0.53  0.25  0.53     1.33  0.70  0.33  0.70     1.29  0.68  0.29  0.68
2       1.83  0.82  0.58  0.58     2.04  0.86  0.62  0.58     2.17  0.83  0.46  0.71
3       2.71  0.75  0.46  0.66     2.75  0.85  0.67  0.56     2.79  0.71  0.54  0.50
4       4.17  1.13  0.75  0.85     3.75  1.11  0.83  0.76     3.83  1.49  1.08  1.03
5       5.50  1.32  1.00  0.98     5.08  1.67  1.42  0.83     5.54  1.41  1.25  0.84
6       5.96  1.27  0.96  0.81     6.71  1.23  1.21  0.72     6.31  1.17  0.94  0.76
7       6.83  1.01  0.75  0.68     7.42  0.78  0.67  0.56     7.19  0.73  0.52  0.55
8       8.13  0.80  0.46  0.66     8.33  0.56  0.42  0.50     8.15  0.62  0.40  0.49
9       8.71  0.55  0.29  0.55     8.96  0.20  0.04  0.20     8.65  0.53  0.35  0.53
Total   5.01  2.71  0.61  0.75     5.15  2.84  0.69  0.74     5.10  2.72  0.65  0.77

Table 6.1: Summary of combined landfall/hue experiment results (V̄ = average response, σ(V) = standard deviation of response, ē = average estimation error, σ(e) = standard deviation of estimation error)

Class   Control 1                  Control 2                  Experiment
        V̄     σ(V)  ē     σ(e)     V̄     σ(V)  ē     σ(e)     V̄     σ(V)  ē     σ(e)
1       1.25  0.44  0.25  0.44     1.21  0.51  0.21  0.51     1.13  0.39  0.13  0.39
2       2.13  0.45  0.21  0.41     2.25  0.68  0.42  0.58     2.15  0.74  0.50  0.58
3       2.83  0.70  0.50  0.51     2.92  0.65  0.42  0.50     3.08  0.77  0.54  0.54
4       4.13  1.08  0.71  0.86     3.17  0.76  0.83  0.76     3.71  0.77  0.58  0.58
5       5.50  1.29  1.00  0.93     5.13  1.15  0.96  0.62     5.50  1.25  1.00  0.90
6       6.29  1.20  0.96  0.75     6.67  0.96  0.92  0.72     6.40  1.35  1.19  0.73
7       7.08  0.97  0.75  0.61     7.38  1.01  0.88  0.61     7.25  1.00  0.83  0.60
8       8.17  0.70  0.50  0.51     7.96  0.86  0.63  0.58     8.08  0.71  0.46  0.54
9       8.83  0.38  0.17  0.38     8.79  0.51  0.21  0.51     8.83  0.43  0.17  0.43
Total   5.13  2.69  0.56  0.68     5.05  2.73  0.61  0.66     5.13  2.72  0.60  0.69

Table 6.2: Summary of combined landfall/orientation experiment results

Figure 6.7: Graph of combined results for control 1 subsection of block B3, primary data value stream function, primary feature hue.

Figure 6.8: Graph of combined results for control 2 subsection of block B3, primary data value stream function, primary feature hue.

Figure 6.9: Graph of combined results for experiment subsection of block B3, primary data value stream function, primary feature hue.
Figure 6.10: Graph of combined results for control 1 subsection of block B4, primary data value stream function, primary feature orientation.

Figure 6.11: Graph of combined results for control 2 subsection of block B4, primary data value stream function, primary feature orientation.

Figure 6.12: Graph of combined results for experiment subsection of block B4, primary data value stream function, primary feature orientation.

Class   Control 1                  Control 2                  Experiment
        V̄     σ(V)  ē     σ(e)     V̄     σ(V)  ē     σ(e)     V̄     σ(V)  ē     σ(e)
1       1.13  0.34  0.13  0.34     1.21  0.41  0.21  0.41     1.23  0.47  0.23  0.47
2       1.83  0.70  0.50  0.51     2.00  0.59  0.33  0.48     2.17  0.52  0.29  0.46
3       3.17  0.82  0.50  0.66     3.21  0.78  0.63  0.49     3.27  0.87  0.65  0.64
4       3.71  0.86  0.71  0.55     4.54  1.14  0.96  0.81     4.42  1.09  0.92  0.71
5       5.08  0.78  0.58  0.50     5.63  0.97  0.88  0.74     5.02  0.91  0.69  0.59
6       6.04  0.95  0.71  0.62     6.21  0.88  0.83  0.65     6.27  1.16  0.90  0.78
7       6.71  0.55  0.38  0.49     7.00  1.10  0.83  0.70     7.33  0.66  0.50  0.55
8       7.83  0.70  0.42  0.58     7.67  0.82  0.58  0.65     7.69  0.59  0.35  0.56
9       8.75  0.44  0.25  0.44     8.75  0.44  0.25  0.44     8.69  0.47  0.31  0.47
Total   4.92  2.60  0.46  0.55     5.13  2.58  0.59  0.66     5.12  2.56  0.54  0.63

Table 6.3: Summary of combined stream function/hue experiment results

Class   Control 1                  Control 2                  Experiment
        V̄     σ(V)  ē     σ(e)     V̄     σ(V)  ē     σ(e)     V̄     σ(V)  ē     σ(e)
1       1.08  0.28  0.08  0.28     1.08  0.28  0.08  0.28     1.08  0.28  0.08  0.28
2       2.17  0.56  0.33  0.48     2.29  0.81  0.46  0.72     2.13  0.61  0.33  0.52
3       3.25  0.90  0.67  0.64     3.13  0.61  0.38  0.49     3.15  0.68  0.40  0.57
4       4.08  0.65  0.42  0.50     4.29  0.95  0.71  0.69     4.21  0.77  0.54  0.58
5       4.50  0.93  0.83  0.64     5.46  0.93  0.71  0.75     4.79  1.20  1.00  0.68
6       5.88  1.15  0.79  0.83     6.13  1.08  0.71  0.81     5.83  1.23  0.92  0.82
7       6.71  0.91  0.63  0.71     7.04  1.04  0.79  0.66     6.79  0.99  0.79  0.62
8       8.04  0.69  0.38  0.58     7.96  0.75  0.46  0.59     7.92  0.87  0.63  0.60
9       8.88  0.34  0.13  0.34     8.83  0.38  0.17  0.38     8.75  0.48  0.25  0.48
Total   4.95  2.60  0.47  0.62     5.13  2.61  0.50  0.65     4.96  2.59  0.55  0.66

Table 6.4: Summary of combined stream function/orientation experiment results

Figure 6.13 graphs ē across all nine intervals for the experiment subsections of blocks B1, B2, B3, and B4. ē for each block appears symmetric, with a maximum around interval 5 or 6, and a minimum at interval 1 or 9. It appears subjects had more trouble estimating trials from intervals 4, 5, and 6, and less trouble estimating trials from intervals 1, 2, 8, and 9. However, this is a well-known phenomenon called the "end effect". Because subjects have more freedom of choice at intervals 4, 5, and 6, their observed estimation error is higher than at intervals 1 and 9, where there is only one direction of freedom available. This trend is also due in part to the fact that we "flipped" subject estimations from intervals 1, 2, 3, 7, 8, and 9 that displayed an unusually large error.

Figure 6.13: Graph of average error values ē for the experiment subsection of all four blocks B1, B2, B3, and B4. (Average error is plotted against interval, with one curve per feature and data type combination.)
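The per-interval measures reported in Tables 6.1-6.4 are straightforward to derive; the sketch below shows one way to compute them from a list of (correct interval, response) records for a block subsection. It is illustrative only, and uses the n − 1 sample standard deviation; the text does not state which normalization was used originally.

```python
# A sketch of the per-interval summaries behind Tables 6.1-6.4, assuming
# a list of (correct interval, subject response) pairs for one subsection.
import math

def summarize(trials):
    """Return {interval: (mean V, sd V, mean e, sd e)}, where e is the
    absolute estimation error. Uses the n-1 sample standard deviation;
    the original normalization is not stated in the text."""
    by_interval = {}
    for interval, response in trials:
        by_interval.setdefault(interval, []).append(response)

    def mean_sd(values):
        n = len(values)
        m = sum(values) / n
        sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
        return m, sd

    summary = {}
    for interval, responses in sorted(by_interval.items()):
        errors = [abs(r - interval) for r in responses]
        summary[interval] = mean_sd(responses) + mean_sd(errors)
    return summary
```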
Feature Preference

A point of interest was whether a subject's estimation ability differed depending on the feature being estimated. A t-test was computed to see if estimation error mean was equal across primary features for both the control and experiment subsections. Blocks B1 and B3 were combined to form Bh. This super-block contained all trials where hue was the primary preattentive feature. Blocks B2 and B4 were combined to form Bo. This super-block contained all trials where orientation was the primary preattentive feature. The data displayed during Bh's trials is the same data displayed during Bo's trials. The only differences are the features used to encode the primary and secondary data values. The t-test compared means of the control and experiment subsection's estimation error.

Subsection    n1    n2    ν     t
Control 1     432   432   862   0.36
Control 2     432   432   862   1.43
Experiment    864   864   1726  0.45

Table 6.5: t-test results for estimation error rates from hue and orientation trials

Table 6.5 shows the subsection, the number of hue trials n1, the number of orientation trials n2, the degrees of freedom ν, and the t-value t. The control t-values are less than 0.975t862 = 1.962 and the experiment t-value is less than 0.975t1726 = 1.960. Therefore, there appears to be no feature preference for the estimation task. Any difference in means is probably due to sampling error, and not the choice of hue or orientation as a primary preattentive feature. We did not expect to observe a feature preference, because we calibrated the perceived difference between our two hues and our two orientations to be equal before the experiment.

Data Type Preference

It is possible that the spatial distribution of the data affects a subject's estimation ability. We used two different data sources during the experiment, latitude of landfall and stream function values. A difference in estimation error mean across data types would indicate estimation ability depends, at least in part, on the spatial distribution of the data being displayed. Blocks B1 and B2 were combined to form Bl. This super-block contained all trials where latitude of landfall was the primary data value. Blocks B3 and B4 were combined to form Bs. This super-block contained all trials where stream function was the primary data value. Both Bl and Bs have 864 trials where hue was the primary preattentive feature, and 864 trials where orientation was the primary preattentive feature. The main difference between the two super-blocks is the underlying data being displayed. The t-test compared means of the control and experiment subsection's estimation error.

Subsection    n1    n2    ν     t
Control 1     432   432   862   2.06
Control 2     432   432   862   1.73
Experiment    864   864   1726  1.84

Table 6.6: t-test results for estimation error rates from landfall and stream function trials

Table 6.6 shows the subsection, the number of landfall trials n1, the number of stream function trials n2, the degrees of freedom ν, and the t-values t. Control subsection 1's t-value is greater than 0.975t862 = 1.962. This suggests that data type did have an effect on estimation error in control subsection 1. Control subsection 2's t-value is less than 1.962, but it does fall between 0.90t862 = 1.283 and 0.975t862 = 1.962. Similarly, the experiment subsection's t-value falls between 0.90t1726 = 1.282 and 0.975t1726 = 1.960.
The t-test results indicate the possibility of a data type influence on estimation error. With an α value of 0.10, we would be able to reject the null hypothesis, that data type does not affect estimation error, in all of the subsections.

Feature Interference

One question of interest was whether encoding an irrelevant data value with a secondary preattentive feature affected a subject's estimation ability. We began by checking to see if orientation interfered with a subject's ability to estimate using hue. t-tests were computed to compare estimation error mean across control and experiment subsections for blocks B1 and B3, the blocks that used hue as their primary preattentive feature.

Block    n1    n2    ν     t
B1       432   432   862   0.03
B3       432   432   862   0.21

Table 6.7: t-test results for estimation error rates from control and experiment hue trials

Table 6.7 shows the block, the number of control trials n1, the number of experiment trials n2, the degrees of freedom ν, and the t-value t. The t-values for both blocks are less than 0.975t862 = 1.962. Therefore, there appears to be no interference due to encoding of an irrelevant data value using orientation. Any difference in means is probably due to sampling error.

We continued to investigate interference by checking to see if hue interfered with a subject's ability to estimate using orientation. t-tests were computed to compare estimation error mean across control and experiment subsections for blocks B2 and B4, the blocks that used orientation as their primary preattentive feature.

Block    n1    n2    ν     t
B2       432   432   862   0.23
B4       432   432   862   1.15

Table 6.8: t-test results for estimation error rates from control and experiment orientation trials

Table 6.8 shows the block, the number of control trials n1, the number of experiment trials n2, the degrees of freedom ν, and the t-value t. The t-values for both blocks are less than 0.975t862 = 1.962. Therefore, there appears to be no interference due to encoding of an irrelevant data value using hue. Any difference in means is probably due to sampling error.

Feature and Data Type Interaction

Previous analysis has shown no subject preference for primary preattentive feature, but a possible subject preference for the spatial distribution of primary data values. We wanted to see if feature and data type interacted with one another. Suppose subjects prefer one type of spatial distribution over another. Is the improvement in estimation ability different depending on our primary preattentive feature? That is, does estimation using hue receive a larger improvement from the preferred spatial distribution, compared to estimation using orientation? Perhaps orientation receives a larger improvement versus hue. A 2-factor ANOVA F-test was computed to check for feature and data type interaction. A 2-factor ANOVA allows comparison of means and interaction of two independent data values. We compared the two independent variables feature and data type, each of which has two levels, hue and orientation and landfall and stream function, respectively. These four groups of data correspond to blocks B1, B2, B3, and B4.
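A sketch of this 2-factor analysis is given below, run on hypothetical per-trial errors; pandas and statsmodels are assumed, and the group sizes and column names are illustrative rather than taken from the original analysis.

```python
# A sketch of the 2-factor ANOVA described above, on hypothetical errors;
# pandas and statsmodels are assumed to be available.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = [(feature, datatype, float(rng.integers(0, 3)))
        for feature in ("hue", "orientation")
        for datatype in ("landfall", "stream function")
        for _ in range(432)]            # 432 trials per block subsection
df = pd.DataFrame(rows, columns=["feature", "datatype", "error"])

# F-tests for the two main effects (Ff, Fd) and the interaction (Fi).
model = smf.ols("error ~ C(feature) * C(datatype)", data=df).fit()
print(anova_lm(model, typ=2))
```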
Subsection    J    νb   νw     Ff     Fd     Fi
Control 1     2    8    860    0.22   7.03   0.46
Control 2     2    8    860    3.65   5.34   0.01
Experiment    2    8    1724   0.35   5.83   0.89

Table 6.9: F-test results of estimation error rates across feature and data type

Table 6.9 shows the subsection, number of levels J, between level degrees of freedom νb, within level degrees of freedom νw, and F-values Ff, Fd, and Fi. Ff is the F-value for estimation error mean across feature, Fd is the F-value for estimation error mean across data type, and Fi is the F-value for feature and data type interaction. The Ff values for all subsections are less than 0.95F860 = 3.85, and the Fd values for all subsections are greater than 0.95F860. This confirms our previous analysis, namely feature type does not affect estimation error, but data type may affect estimation error.

The interaction F-values Fi are all less than 0.95F1724 = 3.84. Therefore, no interaction seems to be present. Neither feature gains a greater improvement in estimation error when using a more preferred data type.

To conclude, we have addressed and statistically answered all the questions posed at the beginning of the chapter. It has been demonstrated that the data are consistent with the following conclusions:

• rapid and accurate estimation can be performed using either hue or orientation

• there is no evidence of a subject preference for either hue or orientation during the estimation task for the particular hue and orientation values used

• there is evidence of a subject preference for the underlying data being displayed during the estimation task

• there is no evidence that orientation interferes with a subject's ability to perform hue estimation

• there is no evidence that hue interferes with a subject's ability to perform orientation estimation

• there is no evidence of interaction between primary preattentive feature and the underlying data being displayed

These conclusions apply to data displayed for an exposure duration of 450 milliseconds. Because we already have robust estimation ability, large increases in exposure duration probably would not provide significant improvement in estimation ability. A more interesting direction of investigation would be to decrease the exposure duration. This would allow us to examine two important questions. First, at what exposure duration are subjects no longer able to perform robust estimation? At some exposure duration below 450 milliseconds we expect subjects to be unable to give accurate estimations. Second, do any interference effects begin to appear at lower exposure durations? For example, we found that hue did not interfere with estimation of orientation at a 450 millisecond exposure duration. It may be that an interference effect does exist, but 450 milliseconds gives subjects enough time to overcome this effect. If this is true, the interference should appear at lower exposure durations. Feature preference may also be dependent on exposure duration. We began investigating these possibilities by running a set of informal experiments described below.

Exposure Duration Experiments

We conducted a set of post-experiments to obtain information on how exposure duration affects the estimation task. 90 control and 90 experiment trials from block B1 were used during the experiment. Exposure duration for each trial varied among five possible values: 15 milliseconds, 45 milliseconds, 105 milliseconds, 195 milliseconds, and 450 milliseconds. Presentation of trials differed somewhat from the previous experiment. A "mask" of randomly oriented grey rectangles was displayed for 105 milliseconds immediately following each trial. This was designed to remove any "after-image" of the trial that may have been present in the subject's visual short term memory. In summary, trials were presented in the following way (a sketch of this sequence follows the list):

• a blank screen was displayed for 195 milliseconds

• a focus circle was displayed for 105 milliseconds

• the trial was displayed for its exposure duration (one of 15, 45, 105, 195, or 450 milliseconds)

• a mask of randomly oriented grey rectangles was displayed for 105 milliseconds

• the screen blanked, and subjects were allowed to enter their estimation
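The per-trial sequence can be written out as a short loop. In the sketch below the draw_* callbacks are hypothetical stand-ins for the original Macintosh display code, and time.sleep is only illustrative; real stimulus timing would be locked to the display refresh rather than approximated with delays.

```python
# A sketch of the trial presentation sequence above. The draw_* callbacks
# are hypothetical stand-ins for the original display code; time.sleep is
# illustrative only, since real stimulus timing must be frame-locked.
import time

def run_trial(trial, exposure_ms, draw_blank, draw_focus,
              draw_trial, draw_mask, get_estimate):
    draw_blank();      time.sleep(0.195)               # blank screen, 195 ms
    draw_focus();      time.sleep(0.105)               # focus circle, 105 ms
    draw_trial(trial); time.sleep(exposure_ms / 1000)  # 15-450 ms exposure
    draw_mask();       time.sleep(0.105)               # grey-rectangle mask
    draw_blank()
    return get_estimate()                              # untimed response
```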
Because trials came from block B1, our primary data value was latitude of landfall, represented by hue, and our secondary value was stream function, represented by orientation. Subjects estimated the percentage of blue rectangles in each trial. As before, an equal number of trials for each interval was used. There were 10 control and 10 experiment trials where 5-15% of the rectangles were blue, 10 control and 10 experiment trials where 15-25% of the rectangles were blue, and so on up to 85-95%, for a total of 180 trials. Trials at each interval were split evenly among the five exposure durations. For example, there were 2 control and 2 experiment trials with an exposure duration of 15 milliseconds at each interval (5-15%, 15-25%, and so on up to 85-95%, for a total of 36 trials). Trials were presented to the subject in a random order so the various exposure durations were intermixed.

Analysis of data from the previous experiment showed estimation was accurate at every interval. Because of this, we combined trials with a given exposure duration into a single block of data. For example, trials that were displayed for 105 milliseconds formed a single group of 2 control and 2 experiment trials from each interval, for a total of 18 control and 18 experiment trials. We plotted average estimation error versus exposure duration to see if estimation ability was affected by display time. Figure 6.14 shows the graph of average estimation error versus exposure duration for experiment trials. Table 6.10 shows the number of samples n, average estimation error ē, and standard deviation of estimation error σ(e) for the control and experiment trials.

Figure 6.14: Graph of average error across exposure duration for combined results from the exposure duration experiment. (Average error is plotted against exposure duration in milliseconds.)

Exposure   Control 1          Control 2          Experiment
           n    ē     σ(e)    n    ē     σ(e)    n    ē     σ(e)
15 ms      45   1.93  2.74    45   2.15  2.73    90   1.52  2.07
45 ms      45   1.06  1.46    45   1.44  1.94    90   1.13  1.51
105 ms     45   0.91  1.51    45   0.80  1.16    90   0.75  1.17
195 ms     45   0.71  1.44    45   0.71  1.06    90   0.76  1.10
450 ms     45   0.60  1.01    45   0.80  1.12    90   0.57  0.91

Table 6.10: Average estimation error and standard deviation of estimation error for control and experiment trials at each exposure duration

Average estimation error and standard deviation of error seem to be reasonably stable, even down to 105 milliseconds. Below that duration both values increased rapidly to a maximum of 2.14 and 2.87 respectively. This indicates the minimum exposure duration for robust hue estimation lies somewhere between 45 and 105 milliseconds.
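The entries of Table 6.10 and Figure 6.14 come from grouping trials by exposure duration. A minimal sketch of that grouping follows, assuming hypothetical per-trial (duration, error) records.

```python
# A sketch of the grouping behind Table 6.10 and Figure 6.14: number of
# trials, mean estimation error, and its standard deviation per exposure
# duration, from hypothetical (duration in ms, estimation error) records.
from collections import defaultdict
from statistics import mean, stdev

def error_by_duration(trials):
    groups = defaultdict(list)
    for duration_ms, error in trials:
        groups[duration_ms].append(error)
    return {d: (len(errs), mean(errs), stdev(errs))
            for d, errs in sorted(groups.items())}

# e.g. error_by_duration([(15, 2.0), (15, 1.0), (105, 0.0), (105, 1.0)])
```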
We concluded our analysis by checking to see if orientation interfered with hue estimation at any of the exposure durations. t-tests were computed to compare estimation error means across control and experiment subsections for all five exposure durations. Table 6.11 shows the exposure duration, the number of control trials n1, the number of experiment trials n2, the degrees of freedom ν, and the t-value t.

Exposure   n1   n2   ν     t
15 ms      90   90   178   1.45
45 ms      90   90   178   0.51
105 ms     90   90   178   0.53
195 ms     90   90   178   0.31
450 ms     90   90   178   0.83

Table 6.11: t-test results of estimation error rates from control and experiment trials for all five exposure durations

The t-values for all durations are less than 0.975t178 = 1.972; the 15 millisecond exposure duration is the only one whose t-value falls between 0.90t178 = 1.286 and 0.95t178 = 1.653. This suggests that orientation is not interfering with hue estimation at any of the exposure durations tested. Additional experiments should test for hue interference, because Callaghan reported asymmetric interference patterns during her research.

Chapter 7

Conclusions

"O most lame and impotent conclusion!"
— Othello, Shakespeare

Preattentive features can be used to design simple and efficient visualization tools. In our studies, we simulated a visualization tool based on preattentive features that allowed rapid and accurate estimation to be performed within 450 milliseconds. "Real world" data from Oceanography's salmon migration simulations was used during the experiments. Half the subjects were asked to estimate the percentage of rectangles in each display coloured blue, to the nearest 10%. The other half were asked to estimate the percentage of rectangles rotated 60°, to the nearest 10%. Average subject estimation error across all intervals was never more than 1.25. Standard deviation of error across all intervals was never more than 1.03. This shows subject estimations clustered around the correct answer for all intervals.

There was an "end effect" present during the estimation task. Subjects made the largest estimation errors when 35-65% of the rectangles were targets (intervals 4, 5, and 6). Subjects made the smallest errors when 5-15% or 85-95% of the rectangles were targets (intervals 1 and 9).

Subjects showed no feature preference. They estimated targets that used hue and targets that used orientation equally well. This is no doubt due in part to the fact that we calibrated the perceived difference between our two hues and our two orientations to be equal for a related target detection task. There was evidence of a data type preference. Subjects performed better estimation when the primary data value was stream function. This indicates a subject's estimation ability depends, at least in part, on the spatial distribution of the data values being estimated. There was no evidence of interaction between feature and data type. Neither feature gained a greater improvement in estimation error when the preferred data type was used.

Probably the most important new result is that no feature interference was observed. Encoding an irrelevant data value with orientation did not affect hue estimation ability. Encoding an irrelevant data value with hue did not affect orientation estimation ability. This suggests that the two preattentive features can be used together in a single display. Users might use hue and orientation to encode data elements in two different ways. They would then be able to perform different types of preattentive data analysis on a single display.

Further experiments that varied exposure duration showed robust estimation is possible below 450 milliseconds. Estimation seemed reasonably accurate down to an exposure duration of 105 milliseconds. Below that, estimation ability deteriorated rapidly.
There was no evidence that orientation interfered with estimation using hue at any of the five exposure durations tested.

Some of our results could be predicted from research in preattentive processing. Others were more surprising. We expected estimation using hue and orientation to be an extension of the results of Julesz [Jul83] and Treisman [Tri88]. They used hue and orientation in their work to perform rapid and accurate target and boundary detection. Our data analysis shows this turned out to be the case. On the other hand, we also expected to observe an interference effect during our experiment. This would have been an extension of Callaghan's work, where she showed a "feature hierarchy" of brightness-hue-shape [Cal84][Cal89].

As our experiments suggest, results from preattentive processing can form a basis for the design of visualization tools, but the results cannot be applied without some additional investigation to ensure the desired effects hold in the particular visualization environment to which they are applied.

Chapter 8

Future Work

"Issun saki wa yami [No man knows his future]"
— Japanese Proverb

Our experiments and related analysis leave a number of interesting avenues for future work. One obvious extension is to test for relationships among additional features. Intensity, size, and shape are three features often used for data visualization. Information on feature preference and interference would provide a more general and more useful set of guidelines on the use of these features in the design of visualization software.

Many visualization tasks require more than two data values to be encoded at each spatial location. Additional work could examine how to encode higher-dimensional elements in a low-dimensional environment. One obvious possibility is using three or more features in a single display. This type of visualization tool could exhibit new and unexpected types of interference. There may also be a limit to the amount of information a subject can extract and process at one time.

Our analysis showed no evidence of feature interference. Colour did not interfere with orientation estimation, and orientation did not interfere with hue estimation. It is possible that interference is sensitive to exposure duration. Subjects may have been able to overcome any interference that did exist within our 450 millisecond exposure duration. With experiments designed to test for interference at different exposure durations, we could determine whether interference exists and the exposure durations at which it occurs. This type of experiment could also be used to search for possible feature preferences that may occur during shorter exposure durations.

Some subjects noted after the experiment they felt it would have been easier to estimate the number of red rectangles, or the number of rectangles rotated 0°. There may be a subject preference within feature for estimating. That is, red may be easier to estimate than blue, 0° rotation may be easier to estimate than 60° rotation, or vice versa. It would be simple to set up a set of experiments similar to the ones we ran to test for these phenomena.

We explicitly chose two hues whose perceived difference from one another was equal to the perceived difference between two rectangles oriented 0° and 60°. A choice of features perceptually different from one another might cause a subject feature preference during the estimation task. For example, we could choose two isoluminant hues perceptually as far apart from one another as possible.
A set of experiments could be run to see if estimation using hue was more rapid or accurate than estimation using orientation.

Estimation was shown to depend on exposure duration. Estimation error remained stable down to 105 milliseconds, then increased rapidly as the exposure duration fell. It seems unlikely that increases in exposure duration from 450 milliseconds will produce noticeably better estimation. Searching for feature interference and preference at exposure durations between 45 and 105 milliseconds would allow us to determine whether these phenomena occur at the boundary of estimation ability.

The data values used in our experiment were derived from salmon migration studies in Oceanography. More comprehensive studies based on actual tasks performed by researchers are needed before conclusive evidence will exist for using preattentive features. Other types of data should be investigated as well if general visualization tools are to be based on preattentive processing.

Bibliography

[Asi85] Asimov, D. (1985). The Grand Tour: A Tool for Viewing Multidimensional Data. SIAM Journal on Scientific and Statistical Computing, 6(1), 128-143.

[Bec91a] Becker, R.A. and W.S. Cleveland (1991). Viewing Multivariate Scattered Data. Pixel, 2(2), 36-41.

[Bec91b] Becker, R.A. and W.S. Cleveland (1991). Take a Broader View of Scientific Visualization. Pixel, 2(2), 42-44.

[Bel87] Bell, P.C. and R.M. O'Keefe (1987). Visual Interactive Simulation—History, Recent Developments, and Major Issues. Simulation, 49(3), 109-116.

[Bir69] Birren, F. (ed.) (1969). Munsell: A Grammar of Color. New York, New York: Van Nostrand Reinhold Company.

[Bra83] Bratley, P., B.L. Fox, and L.E. Schrage (1983). A Guide to Simulation. New York, New York: Springer-Verlag.

[Bro88] Brown, M., D. Greenberg, M. Keeler, A.R. Smith, and L. Yaeger (1988). The Visualization Roundtable. Computers in Physics, 2(3), 16-26.

[Bru78] Bruckner, L.A. (1978). On Chernoff Faces. P.C.C. Wang (ed.), Graphical Representation of Multivariate Data, 93-121. New York, New York: Academic Press.

[Bry91] Bryson, S. and C. Levit (1991). The Virtual Windtunnel: An Environment for the Exploration of Three-Dimensional Unsteady Flows. Proceedings Visualization '91, 17-24. San Diego, United States.

[Cal84] Callaghan, T.C. (1984). Dimensional Interaction of Hue and Brightness in Preattentive Field Segregation. Perception & Psychophysics, 36(1), 25-34.

[Cal89] Callaghan, T.C. (1989). Interference and Domination in Texture Segregation: Hue, Geometric Form, and Line Orientation. Perception & Psychophysics, 46(4), 299-311.

[Cal90] Callaghan, T.C. (1990). Interference and Dominance in Texture Segregation. D. Brogan (ed.), Visual Search, 81-87. New York, New York: Taylor & Francis.

[Che73] Chernoff, H. (1973). The Use of Faces to Represent Points in k-Dimensional Space Graphically. Journal of the American Statistical Association, 68(342), 361-367.

[DeF91] DeFanti, T.A. and M.D. Brown (1991). Visualization in Scientific Computing. M.C. Yovits (ed.), Advances in Computers, 33, 247-305. New York, New York: Academic Press.

[DeS84] DeSanctis, G. (1984). Computer Graphics As Decision Aids: Directions for Research. Decision Sciences, 15, 463-487.

[Dre88] Drebin, R.A., L. Carpenter, and P. Hanrahan (1988). Volume Rendering. Computer Graphics, 22(4), 65-74.

[Dun89] Duncan, J. and G.W. Humphreys (1989). Visual Search and Stimulus Similarity. Psychological Review, 96(3), 433-458.

[Enn90a] Enns, J.T. and R.A. Rensink (1990). Sensitivity to Three-Dimensional Orientation in Visual Search.
Psychological Science, 1(5), 323-326.

[Enn90b] Enns, J.T. and R.A. Rensink (1990). Influence of Scene-Based Properties on Visual Search. Science, 247, 721-723.

[Enn90c] Enns, J.T. (1990). The Promise of Finding Effective Geometric Codes. Proceedings Visualization '90, 389-390. San Francisco, United States.

[Enn90d] Enns, J.T. (1990). Three-Dimensional Features that Pop Out in Visual Search. D. Brogan (ed.), Visual Search, 37-45. New York, New York: Taylor & Francis.

[Enn91] Enns, J.T. and R.A. Rensink (1991). VSearch Colour: Full-Colour Visual Search Experiments on the Macintosh II. Behaviour Research Methods, Instruments, & Computers, 23(2), 265-272.

[Fol90] Foley, J.D., A. Van Dam, S.K. Feiner, and J.F. Hughes (1990). Computer Graphics, Principles and Practice. Reading, Massachusetts: Addison-Wesley.

[Fra77] Franta, W.R. (1977). The Process View of Simulation. New York, New York: North-Holland.

[Gla84] Glass, G.V. and K.D. Hopkins (1984). Statistical Methods in Education and Psychology. Englewood Cliffs, New Jersey: Prentice-Hall.

[Gor69] Gordon, G. (1969). System Simulation. Englewood Cliffs, New Jersey: Prentice-Hall.

[Gri89] Grinstein, G., R. Pickett, and M. Williams (1989). EXVIS: An Exploratory Visualization Environment. Proceedings Graphics Interface '89, 254-261. London, Canada.

[Her74] Herdeg, W. (ed.) (1974). Graphis/Diagrams: The Graphical Visualization of Abstract Data. New York, New York: Hastings House Publishers.

[Hib90] Hibbard, B. and D. Santek (1990). The VIS-5D System for Easy Interactive Visualization. Proceedings Visualization '90, 28-35. San Francisco, United States.

[Hur80] Hurrion, R.D. (1980). An Interactive Visual Simulation System for Industrial Management. European Journal of Operational Research, 5, 86-93.

[Ing88] Ingraham, W.J., Jr. and R.K. Miyahara (1988). Ocean Surface Current Simulations in the North Pacific Ocean and Bering Sea (OSCURS Numerical Model). NMFS F/NWC-130. National Oceanic and Atmospheric Association Technical Memo, 155 pp. Seattle, United States.

[Ing89] Ingraham, W.J., Jr. and R.K. Miyahara (1989). OSCURS Numerical Model to Ocean Surface Current Measurements in the Gulf of Alaska. NMFS F/NWC-168. National Oceanic and Atmospheric Association Technical Memo, 67 pp. Seattle, United States.

[Jul81] Julesz, B. (1981). Textons, the Elements of Texture Perception, and their Interactions. Nature, 290, 91-97.

[Jul83] Julesz, B. and J.R. Bergen (1983). Textons, the Fundamental Elements in Preattentive Vision and Perception of Textures. The Bell System Technical Journal, 62(6), 1619-1645.

[Jul84] Julesz, B. (1984). A Brief Outline of the Texton Theory of Human Vision. Trends in Neuroscience, 7(2), 41-45.

[Kau81] Kaufman, A. and M.Z. Hanani (1981). Converting a Batch Simulation Program to an Interactive Program with Graphics. Simulation, 36(4), 125-131.

[LeB90] LeBlond, P. (1990). Influences of Currents, Temperature and Salinity on Open Ocean Migrations of Sockeye Salmon in the Northeast Pacific Ocean. DFO/NSERC Science Subvention Program Submission. Department of Oceanography, University of British Columbia.

[Lev90] Levkowitz, H. and R.M. Pickett (1990). Iconographic Integrated Displays of Multiparameter Spatial Distributions. Rogowitz, B.E. and J.P. Allebach (eds.), Human Vision and Electronic Imaging: Models, Methods, and Applications, 345-355. Bellingham, Washington: SPIE.

[Lev91] Levkowitz, H. (1991). Color Icons: Merging Color and Texture Perception for Integrated Visualization of Multiple Parameters. Proceedings Visualization '91, 164-170. San Diego, United States.

[McC87] McCormick, B.H., T.A.
DeFanti, and M.D. Brown (1987). Visualization in Scientific Computing—A Synopsis. IEEE Computer Graphics & Applications, 7(7), 61-70.

[Mel85] Melamed, B. and R.J.T. Morris (1985). Visual Simulation: The Performance Analysis Workstation. IEEE Computer, 18, 87-94.

[Mül90] Müller, H.J., G.W. Humphreys, P.T. Quinlan, and M.J. Riddoch (1990). Combined Feature Coding in the Form Domain. D. Brogan (ed.), Visual Search, 47-55. New York, New York: Taylor & Francis.

[OKe86] O'Keefe, R. and R. Davies (1986). A Microcomputer System for Simulation Modelling. European Journal of Operational Research, 24, 23-29.

[Pea92] Pearcy, W.G. (1992). Ocean Ecology of North Pacific Salmonids. Seattle, Washington: University of Washington Press.

[Pic88] Pickett, R. and G. Grinstein (1988). Iconographic Displays for Visualizing Multidimensional Data. Proceedings of the 1988 IEEE Conference on Systems, Man, and Cybernetics, 514-519. Beijing and Shenyang, China.

[Qui87] Quinlan, P.T. and G.W. Humphreys (1987). Visual Search for Targets Defined by Combinations of Color, Shape, and Size: An Examination of the Task Constraints on Feature and Conjunction Searches. Perception & Psychophysics, 41(5), 455-472.

[Reu90] Reuter, L.H. (1990). Human Perception and Visualization. Proceedings Visualization '90, 401-406. San Francisco, United States.

[Set88] Sethian, J.A., J.B. Salem, and A.F. Ghoniem (1988). Interactive Scientific Visualization and Parallel Display Techniques. Proceedings Supercomputing '88, 132-139. Orlando, United States.

[Tho92a] Thomson, K.A., W.J. Ingraham, M.C. Healey, P.R. LeBlond, C. Groot, and C.G. Healey (1992). The Influence of Ocean Currents on Latitude of Landfall and Migration Speed of Sockeye Salmon Returning to the Fraser River. Fisheries Oceanography Journal, 1(2), 163-179.

[Tho92b] Thomson, K.A., W.J. Ingraham, M.C. Healey, P.R. LeBlond, C. Groot, and C.G. Healey (1992). The Influence of Ocean Currents on Return Timing of Fraser River Sockeye Salmon. Fisheries Oceanography Journal, in press.

[Tre89] Treinish, L.A., J.D. Foley, W.J. Campbell, R.B. Haber, and R.F. Gurwitz (1989). Effective Software Systems for Scientific Data Visualization. Computer Graphics, 23(5), 111-136.

[Tre91] Treinish, L.A. and T. Goettsche (1991). Correlative Visualization Techniques for Multidimensional Data. IBM Journal of Research and Development, 35(1/2), 184-204.

[Tri85] Treisman, A. (1985). Preattentive Processing in Vision. Computer Vision, Graphics, and Image Processing, 31, 156-177.

[Tri88] Treisman, A. and S. Gormican (1988). Feature Analysis in Early Vision: Evidence from Search Asymmetries. Psychological Review, 95(1), 15-48.

[Tri91] Treisman, A. (1991). Search, Similarity, and Integration of Features Between and Within Dimensions. Journal of Experimental Psychology: Human Perception and Performance, 17(3), 652-676.

[Tuf83] Tufte, E.R. (1983). The Visual Display of Quantitative Information. Cheshire, Connecticut: Graphics Press.

[Tuf90] Tufte, E.R. (1990). Envisioning Information. Cheshire, Connecticut: Graphics Press.

[Van90a] Vande Wettering, M. (1990). The Application Visualization System—AVS 2.0. Pixel, 1(3), 30-33.

[Van90b] Vande Wettering, M. (1990). apE 2.0. Pixel, 1(4), 30-35.

[War85] Ware, C. and J.C. Beatty (1985). Using Colour As A Tool in Discrete Data Analysis. Technical Report CS-85-21. Computer Science Department, University of Waterloo.

[War88] Ware, C. and J.C. Beatty (1988). Using Colour Dimensions to Display Data Dimensions. Human Factors, 30(2), 127-142.

[Wol88] Wolff, R.S. (1988). Visualization in the Eye of the Scientist.
Computers in Physics, 2(3), 29-35.

[Wys82] Wyszecki, G. and W.S. Stiles (1982). Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition. New York, New York: John Wiley & Sons, Inc.

[Yag91] Yagel, R., A. Kaufman, and Q. Zhang (1991). Realistic Volume Imaging. Proceedings Visualization '91, 226-231. San Diego, United States.

Chapter 10

Appendix A

Discrimination Experiment Results

The following tables list the discrimination experiment results. Subjects were asked to detect the presence or absence of a target element in a field of distractor elements. Each subject completed 11 "blocks", where a block corresponded to a unique target. For each block, the distractors were rectangles rotated 0° and coloured 5R 7/8. In one block the target was a rectangle rotated 60° and coloured 5R 7/8. The targets for the remaining blocks were rectangles rotated 0° and coloured to be anywhere from 1 to 10 Munsell "hue steps" away from the distractors. The hue steps 1 through 10 correspond to the Munsell hues 10RP 7/8, 5RP 7/8, 10P 7/8, 5P 7/8, 10PB 7/8, 5PB 7/8, 10B 7/8, 5B 7/8, 10BG 7/8 and 5BG 7/8.

Within each block subjects completed 30 "trials". 15 of the 30 trials were randomly selected to contain the target element. A trial consisted of 36 rectangles (including the target, if present) presented in a random pattern on the screen.

Tables 10.1-10.4 list summary information for all 11 experiment blocks. Each block is subdivided into target present trials, target absent trials, and the combination of both present and absent trials. The tables show number of correct responses n, average response time t̄, response time variance σ², and response time standard deviation σ. Subjects were unable to discriminate between the target and distractors for the first two "hue-step" blocks (Munsell hues 10RP 7/8 and 5RP 7/8). Table 10.5 shows the combined results for all subjects.

Target   Present                 Absent                  Combined
         n   t̄    σ²      σ      n   t̄    σ²      σ      n   t̄    σ²      σ
60°      13  403  2762    53     15  491  14367   120    28  450  10660   103
1 HS     —                       —                       —
2 HS     —                       —                       —
3 HS     8   772  40858   202    12  836  41049   203    20  810  39856   200
4 HS     14  501  8105    90     13  529  5507    74     27  515  6798    82
5 HS     14  469  5810    76     15  566  5114    72     29  520  7697    88
6 HS     15  434  2571    51     15  506  8115    90     30  470  6519    81
7 HS     14  401  2490    50     15  495  7219    85     29  449  7065    84
8 HS     15  394  1150    34     15  447  5177    72     30  420  3775    61
9 HS     15  421  8552    92     15  426  3278    57     30  423  5716    76
10 HS    14  394  2426    49     15  548  24496   157    29  474  19516   140

Table 10.1: Summary of discrimination experiment results for subject 1

Target   Present                 Absent                  Combined
         n   t̄     σ²      σ     n   t̄     σ²      σ     n   t̄    σ²      σ
60°      15  618   16628   129   15  1191  44119   210   30  905  114061  338
1 HS     —                       —                       —
2 HS     —                       —                       —
3 HS     11  1193  193292  440   10  1285  217785  467   21  1237 196903  444
4 HS     14  822   120381  347   15  841   16605   129   29  832  64282   254
5 HS     14  735   44779   212   14  834   77901   279   28  784  61605   248
6 HS     15  577   71369   267   15  805   26238   162   30  691  60581   246
7 HS     15  481   2738    52    14  857   162389  403   29  663  113263  337
8 HS     15  477   16007   127   15  620   32697   181   30  549  28801   170
9 HS     15  386   3098    56    14  504   4212    65    29  443  7100    84
10 HS    15  436   10274   101   15  491   5677    75    30  463  8469    92

Table 10.2: Summary of discrimination experiment results for subject 2
Target   Present                 Absent                  Combined
         n   t̄    σ²      σ      n   t̄    σ²      σ      n   t̄    σ²      σ
60°      15  555  29160   171    15  557  9987    99     30  556  18851   137
1 HS     —                       —                       —
2 HS     —                       —                       —
3 HS     10  966  89299   299    15  984  17150   131    25  977  43570   209
4 HS     12  715  38513   196    14  931  35342   188    26  831  47387   218
5 HS     15  496  17275   131    13  584  11293   106    28  537  15999   126
6 HS     15  527  4809    69     15  768  7273    85     30  648  20771   144
7 HS     15  453  15091   123    15  492  2462    50     30  473  8869    94
8 HS     15  413  3246    57     14  518  11575   108    29  464  9826    99
9 HS     15  393  2746    52     15  425  2326    48     30  409  2711    52
10 HS    15  361  1119    33     15  412  2481    50     30  386  2409    49

Table 10.3: Summary of discrimination experiment results for subject 3

Target   Present                 Absent                  Combined
         n   t̄    σ²      σ      n   t̄    σ²      σ      n   t̄    σ²      σ
60°      15  368  406     20     15  346  468     22     30  357  553     24
1 HS     —                       —                       —
2 HS     —                       —                       —
3 HS     11  791  77716   279    12  720  102299  320    23  754  87777   296
4 HS     14  431  8765    94     13  451  4107    64     27  441  6379    80
5 HS     13  369  2039    45     15  343  1765    42     28  355  1987    45
6 HS     15  405  3733    61     15  459  7191    85     30  432  6052    78
7 HS     15  341  420     20     15  368  1941    44     30  354  1328    36
8 HS     15  337  573     24     15  334  2184    47     30  336  1335    37
9 HS     15  337  2244    47     15  335  1521    39     30  336  1819    43
10 HS    14  330  968     31     15  343  1644    41     29  336  1317    36

Table 10.4: Summary of discrimination experiment results for subject 4

Target   Present                 Absent                  Combined
         n   t̄    σ²      σ      n   t̄    σ²      σ      n    t̄    σ²      σ
60°      58  489  23003   152    60  646  122813  350    118  569  79353   282
1 HS     —                       —                       —
2 HS     —                       —                       —
3 HS     40  941  127659  357    49  945  118882  345    89   943  121423  384
4 HS     54  614  67568   260    55  698  56186   237    109  656  63039   251
5 HS     56  519  34780   186    57  577  53383   231    113  549  44621   211
6 HS     60  486  24486   156    60  635  35457   188    120  560  35314   188
7 HS     59  419  7931    89     59  548  72188   269    118  484  43882   209
8 HS     60  405  7507    87     59  479  23500   153    119  442  16667   129
9 HS     60  384  4879    70     59  421  6224    79     119  403  5837    76
10 HS    58  381  5177    72     60  448  14318   120    118  415  10894   104

Table 10.5: Summary of discrimination experiment results for all subjects

Chapter 11

Appendix B

Exposure Duration Experiment Results

The following tables list the exposure duration experiment results. Subjects were shown trials that contained 174 rectangles coloured red and blue. Subjects were asked to estimate the percentage of rectangles coloured blue, to the nearest multiple of 10%. Each subject completed 180 trials. In 90 trials, the rectangles were randomly chosen to be oriented either 0° or 60°. In 45 trials, the rectangles were all oriented 0°. In the remaining 45 trials, the rectangles were all oriented 60°. Each trial was shown for a fixed exposure duration. There were an equal number of trials from five possible exposure durations: 15 milliseconds, 45 milliseconds, 105 milliseconds, 195 milliseconds, and 450 milliseconds.

Tables 11.1-11.5 list summary information for each of the five subjects, grouped by exposure duration. Each duration is divided into control 1 trials (0° orientation), control 2 trials (60° orientation), and experiment trials (random orientation). The tables show the number of trials n, the average estimation error ē, and the standard deviation of estimation error σ(e). Estimation error is the absolute value of the correct response minus the subject's response.
Table 11.6 shows the combined results for all subjects.

Exposure   Control 1         Control 2         Experiment
           n   ē     σ(e)    n   ē     σ(e)    n    ē     σ(e)
15 ms      9   2.88  3.74    9   1.88  2.57    18   1.05  1.66
45 ms      9   0.77  1.27    9   1.00  1.36    18   0.88  1.13
105 ms     9   0.88  1.11    9   0.77  1.06    18   0.55  0.84
195 ms     9   0.44  0.86    9   0.66  0.86    18   0.44  0.76
450 ms     9   0.55  0.93    9   0.88  1.22    18   0.44  0.76

Table 11.1: Summary of exposure duration experiment results for subject 1

Exposure   Control 1         Control 2         Experiment
           n   ē     σ(e)    n   ē     σ(e)    n    ē     σ(e)
15 ms      9   2.33  3.69    9   1.77  2.50    18   1.77  2.11
45 ms      9   1.44  1.90    9   2.44  3.00    18   1.72  2.07
105 ms     9   1.66  2.80    9   1.00  1.45    18   0.72  0.93
195 ms     9   1.66  3.02    9   0.66  1.11    18   1.16  1.51
450 ms     9   1.00  1.54    9   0.88  1.32    18   0.72  1.05

Table 11.2: Summary of exposure duration experiment results for subject 2

Exposure   Control 1         Control 2         Experiment
           n   ē     σ(e)    n   ē     σ(e)    n    ē     σ(e)
15 ms      9   1.22  1.96    9   2.33  2.80    18   1.50  2.23
45 ms      9   1.33  1.93    9   0.77  1.27    18   0.94  1.47
105 ms     9   0.77  1.27    9   0.77  1.45    18   0.94  1.30
195 ms     9   0.44  0.70    9   0.55  0.93    18   0.77  1.13
450 ms     9   0.44  0.70    9   0.77  1.06    18   0.88  1.23

Table 11.3: Summary of exposure duration experiment results for subject 3

Exposure   Control 1         Control 2         Experiment
           n   ē     σ(e)    n   ē     σ(e)    n    ē     σ(e)
15 ms      9   1.33  2.12    9   2.55  3.37    18   1.22  1.71
45 ms      9   0.88  1.22    9   2.00  2.59    18   1.16  1.55
105 ms     9   0.55  0.79    9   0.55  0.93    18   0.77  1.60
195 ms     9   0.66  0.86    9   1.11  1.50    18   0.77  1.18
450 ms     9   0.44  0.86    9   0.77  1.27    18   0.44  0.84

Table 11.4: Summary of exposure duration experiment results for subject 4

Exposure   Control 1         Control 2         Experiment
           n   ē     σ(e)    n   ē     σ(e)    n    ē     σ(e)
15 ms      9   1.88  2.31    9   2.22  3.00    18   2.05  2.73
45 ms      9   0.88  1.11    9   1.00  1.27    18   0.94  1.35
105 ms     9   0.66  1.11    9   0.88  1.11    18   0.77  1.13
195 ms     9   0.33  0.61    9   0.55  1.06    18   0.66  0.90
450 ms     9   0.55  1.06    9   0.66  1.00    18   0.38  0.64

Table 11.5: Summary of exposure duration experiment results for subject 5

Exposure   Control 1         Control 2         Experiment
           n    ē     σ(e)   n    ē     σ(e)   n    ē     σ(e)
15 ms      45   1.93  2.74   45   2.15  2.73   90   1.52  2.07
45 ms      45   1.06  1.46   45   1.44  1.94   90   1.13  1.51
105 ms     45   0.91  1.51   45   0.80  1.16   90   0.75  1.17
195 ms     45   0.71  1.44   45   0.71  1.06   90   0.76  1.10
450 ms     45   0.60  1.01   45   0.80  1.12   90   0.57  0.91

Table 11.6: Summary of exposure duration experiment results for all subjects
