An Evaluation of Overviews for Large Tree Navigation

by

Adam Michael Bodnar

B.Sc. Hon., Queen's University, 2003

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Master of Science

The Faculty of Graduate Studies (Computer Science)

The University of British Columbia

March 2006

© Adam Michael Bodnar 2006

Abstract

As the amount of information that must be understood by people continues to grow, techniques for efficiently exploring large datasets become increasingly important. Pan and zoom interaction has been shown to be effective for exploring small datasets. However, pan and zoom on its own provides no visual cues about regions of the dataset outside the current field of view, which can result in loss of orientation, leading to inefficient patterns of navigation. Overviews offer one possible solution to this problem by providing the user with contextual information regarding regions outside the current field of view, at the cost of reducing the screen real estate available for the primary detail view and imposing the need to switch attention between multiple views. Focus+Context techniques offer another solution to this problem by integrating focus and context regions into a single view, often using distortion-based methods, which themselves impose a cost of tracking objects undergoing nonlinear transformations. While overviews have been shown to be beneficial for pan and zoom interfaces, no study to date has explored the potential benefits of adding an overview to Focus+Context interfaces.

This thesis presents two studies that evaluate overviews for large tree navigation. Interfaces implementing these techniques were used by 80 subjects, over two studies, to perform a task exploring a large hierarchical tree dataset, which was motivated by the needs of evolutionary biologists. Our first study was designed to investigate the optimal size for an overview for both pan and zoom and Focus+Context interfaces. Our results show that the size of the overview did not affect performance, but the presence of an overview did impact the strategy users adopted. Our second study was designed to compare the performance of pan and zoom and rubber sheet navigation techniques with and without an overview.

This thesis also presents the first step towards a taxonomy of tasks for large tree navigation. Our taxonomy is informed by interviews with evolutionary biologists who use large trees to investigate the evolutionary relationships between species.

All interfaces implemented guaranteed visibility, a recent innovation in the field of information visualization, which ensures that regions of interest remain visible to the user at all times, independent of navigation actions. We discuss the implications of this research, including the relationship between overviews and guaranteed visibility, and propose directions for future work.
Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Motivation
  1.2 Contributions
  1.3 Organization
2 Related Work
  2.1 Visualization and Interaction
    2.1.1 Overview+Detail Techniques
    2.1.2 Focus+Context Techniques
  2.2 Guaranteed Visibility
  2.3 Tree Visualization and Evaluation
  2.4 Summary
3 Task and Dataset
  3.1 Task
    3.1.1 Task Development Process
    3.1.2 Task Categories
    3.1.3 Four General Tasks
    3.1.4 Experimental Task
  3.2 Dataset
4 Study 1
  4.1 Pilot
  4.2 Hypotheses
  4.3 Task and Dataset
  4.4 Interfaces
    4.4.1 Navigation
    4.4.2 Multiple Foci and Overviews
    4.4.3 Guaranteed Visibility
    4.4.4 Context Levels
    4.4.5 Interface 1: Rubber sheet navigation
    4.4.6 Interface 2: Pan and zoom navigation
    4.4.7 Interface 3: Rubber sheet navigation with overview
    4.4.8 Interface 4: Pan and zoom navigation with overview
  4.5 Apparatus
  4.6 Participants
  4.7 Experimental Design
  4.8 Procedure
  4.9 Measures
  4.10 Results
    4.10.1 Learning Effects
    4.10.2 Level of Context
    4.10.3 Error Rate
    4.10.4 Summary of Results
  4.11 Discussion
    4.11.1 Size of Overview
    4.11.2 Strategy
5 Study 2
  5.1 Hypotheses
  5.2 Task and Dataset
  5.3 Interfaces
    5.3.1 Interface 1: Rubber sheet navigation
    5.3.2 Interface 2: Pan and zoom navigation
    5.3.3 Interface 3: Rubber sheet navigation with overview
    5.3.4 Interface 4: Pan and zoom navigation with overview
  5.4 Apparatus
  5.5 Participants
  5.6 Experimental Design
  5.7 Procedure
  5.8 Measures
  5.9 Results
    5.9.1 Learning Effects
    5.9.2 Navigation
    5.9.3 Presence of Overview
    5.9.4 Error Rate
    5.9.5 Summary of Results
  5.10 Follow-up Investigation
  5.11 Discussion
    5.11.1 Presence of Overview
    5.11.2 Guaranteed Visibility
6 Conclusions and Future Work
  6.1 Limitations
  6.2 Conclusion
  6.3 Future Work
    6.3.1 Exploring Patterns of Navigation in Overviews
    6.3.2 Exploring the Relationship Between Overviews and Guaranteed Visibility
Bibliography
A Study 1 Training Protocol
B Study 2 Training Protocol
C Study 1 Questionnaires
D Study 2 Questionnaires

List of Tables

3.1 Table of tasks related to phylogenetic analysis
4.1 Context levels examined in Study 1

List of Figures

1.1 Zooming in with pan and zoom navigation (PZN)
1.2 A separate and simultaneous overview for pan and zoom navigation
1.3 Zooming in with rubber sheet navigation (RSN)
2.1 Interactive route planning application with overview
2.2 Overview map visualization application
2.3 The Document Lens application
2.4 H3Viewer application using hyperbolic geometry
2.5 EdgeLens application
2.6 City Lights application
2.7 Fishnet application
2.8 TreeWiz application
2.9 TreeJuxtaposer application
2.10 SpaceTree application
3.1 A phylogenetic tree
3.2 Task 1: Determining the lowest common ancestor
3.3 Task 2: Comparing the topological distances between nodes
3.4 Task 3: Determining whether two subtrees are adjacent
3.5 Task 4: Determining whether a subtree contains unmarked nodes
4.1 Halo guaranteed visibility technique
4.2 Calculation of levels of context in Study 1 RSN interfaces
4.3 Calculation of levels of context in Study 1 PZN interfaces
4.4 Study 1 Interface 1 (rubber sheet navigation without overview)
4.5 Study 1 Interface 2 (pan and zoom navigation without overview)
4.6 Study 1 Interface 3 (rubber sheet navigation with overview)
4.7 Study 1 Interface 4 (pan and zoom navigation with overview)
4.8 Mean completion times per trial for each interface by block in seconds
4.9 Mean completion trend line by level of context in seconds for rubber sheet navigation with an overview interface
4.10 Mean completion trend line by level of context in seconds for pan and zoom with an overview interface
5.1 Calculation of levels of context in Study 2 PZN interfaces
5.2 Calculation of levels of context in Study 2 RSN interfaces
5.3 Study 2 Interface 1 (rubber sheet navigation without overview)
5.4 Study 2 Interface 2 (pan and zoom navigation without overview)
5.5 Study 2 Interface 3 (rubber sheet navigation with overview)
5.6 Study 2 Interface 4 (pan and zoom navigation with overview)
5.7 Mean completion times per trial for each interface by block in seconds
5.8 Boxplot of overview presence vs. completion time
5.9 Interaction of overview presence dependent variable

Acknowledgements

Throughout the course of my academic career I have been fortunate to work with a number of talented people. To everyone who made this research possible, I thank you.

My supervisors, Joanna McGrenere and Tamara Munzner, provided me with a great deal of support, guidance, and encouragement. Working with them has been an honour and a pleasure, and above all an amazing learning experience.

My research partner, Dmitry Nekrasovski, brought a great deal of enthusiasm and creativity to this project. His insightful ideas and dedicated spirit have been essential to the success of this work.

François Guimbretière contributed his time and expertise to countless discussions. His contributions have been integral to directing the course of this project.

Ronald Rensink provided insightful comments and ideas in his capacity as second reader for this thesis. I am grateful for his time and suggestions for improving the quality of this thesis.

A number of other people contributed to this work. I would like to acknowledge the support of my lab mates, including Richard Corbett, Joe Luk, Barry Po, Colin Swindells, Melanie Tory, Jocelyn Smith, Steve Yohanan, and Qixing Zheng. I would also like to express my gratitude to the Interaction Design Reading Group, including Meghan Allen, Lior Berry, Andrea Bunt, Jennifer Gluck, David Sprague, and Tony Tang, and to the Information Visualization Reading Group, including Dan Archambault, Heidi Lam, and Peter McLachlan. I would also like to acknowledge James Slack for his assistance with technical implementation.

I would also like to acknowledge the support of my friends from St. John's College, including Carolyn Beeson, Tyson Brust, Bryan Coad, Phil Gass, Mitchell Gray, Sarah Kidd, Patrick Kyba, Joel MacMull, and Erwin Tang. Your friendship helped make these past two years a truly enjoyable experience. I am especially grateful to Kristina Deczky for her love, encouragement, and culinary contributions, which have been central in helping me achieve my goals.

Finally, I would like to thank my parents Mike and Mary Bodnar, and my sister Andrea, who have provided me with much love and support throughout my academic career. This thesis is dedicated to you.
Chapter 1

Introduction

As the amount of information that people must understand continues to grow, techniques that facilitate efficient exploration of large datasets become increasingly important. Two interaction methods commonly used in combination are panning, which allows users to change the visible region of the dataset through horizontal and vertical translations, and zooming, which modifies the scale at which the dataset is viewed. Pan and zoom interaction, illustrated in Figure 1.1, is easy to understand as it mimics the real-world semantics of moving a viewpoint with respect to a piece of paper. However, pan and zoom interaction on its own provides no explicit visual cues about regions of interest outside the current field of view, and so it is easy to lose orientation and become lost during a series of navigation actions. For this reason, pan and zoom interfaces are often augmented with an always available and visible global representation of the dataset, called an overview.

Figure 1.1: Selecting (left) and result of zooming into (right) a rectilinear region with pan and zoom navigation. Areas outside the zoomed region are pushed off-screen.

An overview, illustrated in Figure 1.2, provides the user with contextual information about regions outside the current field of view, but at reduced size and resolution. The primary field of view, or detail view, is typically represented in the overview as a moveable field of view box [20]. This class of techniques that combine an overview with a detail view is commonly referred to as Overview+Detail interfaces. As an alternative to Overview+Detail interfaces, the information visualization community has proposed a class of techniques known as Focus+Context. These approaches integrate information within the user's current region of interest, known as the focus, with information outside the user's current region of interest, known as context, into a single view, often using distortion-based methods and nonlinear magnification [23, 25]. While both Overview+Detail and Focus+Context techniques aim to provide the user with context to aid navigation, the question of which approach is better remains controversial.

Figure 1.2: A separate overview (top left) provides contextual information about regions outside the current field of view, as displayed in the larger detail view. The field of view box in the overview represents the extent of the detail view.

1.1 Motivation

As users explore a dataset they construct a mental model, which, in conjunction with the currently visible information, allows them to understand their location within the dataset and make decisions regarding future navigation actions [22]. The accuracy of a future navigation action depends on the correctness of the person's mental model, the amount of relevant visible information currently available, and the extent to which they are able to combine the two. The assumption underlying the use of Overview+Detail and Focus+Context techniques is that an explicit visual representation of contextual areas outside the primary focus region helps users maintain a mental model of the dataset and navigate more efficiently. While the often-stated intent of Focus+Context approaches is to eliminate the need for an overview, it is possible that users may benefit from a separate non-distorted global overview of the dataset, in addition to the distorted context provided within the Focus+Context approach.
This thesis presents two studies that evaluate the effect of overviews for large tree navigation in both pan and zoom and Focus+Context interfaces. While unconstrained pan and zoom interaction may suffice for small datasets, it suffers from drawbacks that become apparent with increases in dataset size, including inefficient patterns of navigation [20] and loss of orientation in sparse or empty regions of the dataset, known as desert fog [22]. Overviews aim to overcome these drawbacks, but have been shown to impose at least two costs: reducing the screen real estate available for the detail view and forcing the need to switch attention and coordinate navigation between multiple views.

The specific Focus+Context technique that we chose to augment with an overview is rubber sheet navigation [48, 50], illustrated in Figure 1.3. Rubber sheet navigation allows users to stretch and squish rectilinear focus regions as though the dataset were laid out on a rubber sheet with its borders tacked down, and is an attempt to create a constrained navigation metaphor that avoids the drawbacks of conventional pan and zoom interfaces.

Figure 1.3: Selecting (left) and result of zooming into (right) a rectilinear region with rubber sheet navigation. Areas outside the zoomed region are compressed around the edges of the view.

We chose rubber sheet navigation as the most appropriate representative Focus+Context technique for several reasons. First, like many Focus+Context techniques, rubber sheet navigation supports different levels of magnification, which enable users to explore areas of the dataset at multiple levels of detail. Second, rubber sheet navigation supports guaranteed visibility, described below, which has been shown to have benefits for navigating large datasets. Third, we had available to us an infrastructure which already supported rubber sheet navigation, which significantly decreased implementation time. Last, rubber sheet navigation is inherently similar to conventional pan and zoom interaction, as the expansion factor, similar to the zoom factor in pan and zoom interaction, is constant. With both techniques, the user manipulates a rectangular region of the dataset. While a common assertion made within the information visualization community is that the contextual information provided by Focus+Context techniques eliminates the need for a separate overview of the dataset, no previous evaluation has tested this assertion or quantified the differences in how users use context for the purposes of navigation.
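To make the contrast concrete, the following minimal sketch (ours, not from the thesis; the function names and the normalized [0, 1] data coordinates are assumptions) shows one-dimensional versions of the two mappings: zooming with pan and zoom rescales the selected interval to fill the view and pushes everything else off-screen, while a rubber-sheet-style mapping stretches the selection across a central focus region and compresses the remainder into fixed margins so it never leaves the screen.

    # Illustrative 1D model: data coordinates normalized to [0, 1],
    # with 0 < sel_lo < sel_hi < 1 as the selected region.

    def pan_zoom_map(x, sel_lo, sel_hi, screen_w):
        """Zoom into [sel_lo, sel_hi]: the selection is rescaled to fill the
        screen, and everything else lands outside [0, screen_w], i.e. off-screen."""
        return (x - sel_lo) / (sel_hi - sel_lo) * screen_w

    def rubber_sheet_map(x, sel_lo, sel_hi, screen_w, focus_frac=0.8):
        """Stretch the selection across a central focus region covering
        focus_frac of the screen, and compress the rest into the two margins,
        so no part of the dataset ever leaves the screen."""
        margin = screen_w * (1.0 - focus_frac) / 2.0
        if x < sel_lo:                       # left context, compressed
            return (x / sel_lo) * margin
        if x > sel_hi:                       # right context, compressed
            return (screen_w - margin) + (x - sel_hi) / (1.0 - sel_hi) * margin
        # focus region, stretched
        return margin + (x - sel_lo) / (sel_hi - sel_lo) * (screen_w - 2.0 * margin)

    # Zooming into [0.4, 0.5] on an 800-pixel-wide view:
    print(pan_zoom_map(0.9, 0.4, 0.5, 800))      # 4000.0 -- pushed far off-screen
    print(rubber_sheet_map(0.9, 0.4, 0.5, 800))  # 784.0  -- compressed into the right margin

The interfaces described in Chapter 4 apply this kind of remapping in two dimensions to rectilinear regions, and pair it with animated transitions.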
A common drawback of Overview+Detail and Focus+Context interfaces is that areas of interest to the user, such as search results or landmarks, can disappear from the primary field of view as a result of being too small to draw, being occluded by other elements in the dataset, or as a result of users' navigation actions. Ensuring that regions of interest remain visible independent of navigation actions is termed guaranteed visibility [36]. Given that the evaluative literature on guaranteed visibility, as discussed in Chapter 2, has shown it to aid navigation, guaranteed visibility is provided in all interfaces in our studies.

The task used in these studies is a generic version of a topological navigation and comparison task, motivated by the challenges of phylogenetic analysis. Phylogenetic biologists require increasingly sophisticated tools to support their work [10], as the process of phylogenetic analysis relies heavily on visual inspection and topological comparison of large trees. Both the dataset and the task used in these studies, as described in Chapter 3, are derived from this domain.

1.2 Contributions

We performed an evaluation of the effect of overview size on performance for different navigation interfaces. The results indicate that overview size did not affect performance, but the presence of an overview did impact the strategies users adopted for completing tasks. We also performed an evaluation of the effect of adding an overview to different navigation interfaces, including the first evaluation of using an overview in conjunction with a Focus+Context interface. The results indicate that overviews did not improve performance, but were reported to reduce physical demand and were perceived as beneficial for the purposes of navigation. We also present the first step towards a taxonomy of tasks for large tree navigation. Our taxonomy is based on interviews with evolutionary biologists who use large trees to investigate the evolutionary relationships between species.

1.3 Organization

This thesis will focus on how overviews affect performance and user satisfaction in both pan and zoom and rubber sheet navigation interfaces. Related work relevant to this research is presented in Chapter 2. Several methodological decisions were made in the design of the studies documented in this thesis. Chapter 3 discusses the development of our task and choice of dataset, which are based on discussions with evolutionary biologists from multiple institutions. Chapter 4 discusses Study 1, which was designed to investigate the optimal size of an overview in both pan and zoom and rubber sheet navigation interfaces. Chapter 5 discusses Study 2, which was designed to compare the effect of adding an overview to both pan and zoom and rubber sheet navigation interfaces. Both studies measured performance by logging task completion times, navigation actions (including pan, zoom in, and zoom out actions), reset actions, and errors. Qualitative feedback from questionnaires and follow-up interviews was also collected to gain insight into subjective components such as preference, perceived ease of use, and workload. Chapter 6 discusses the implications of this research, along with recommendations for future research.

This thesis is part of a larger research project designed to evaluate pan and zoom and Focus+Context navigation techniques, with and without an overview, for large tree visualization, which was joint work with Dmitry Nekrasovski. Within this project, Adam Bodnar took the lead on developing hypotheses and analyzing results related to the effects of overviews, while Nekrasovski was the lead on the hypotheses and results related to the effects of navigation. As a result of this collaborative work, Sections 4.3, 4.5, 4.6, and 4.9 of Chapter 4 and Sections 5.2, 5.4, 5.5, and 5.8 of Chapter 5 are jointly authored with Nekrasovski, while Chapter 3 is based on a version jointly authored with him. Additionally, a joint paper co-authored by Dmitry Nekrasovski, Joanna McGrenere, François Guimbretière, and Tamara Munzner has been accepted to the 2006 conference on human factors in computing systems (CHI 2006), covering much of the work described in this thesis [38].
Chapter 2

Related Work

In this chapter we review the literature on Overview+Detail and Focus+Context techniques, with an emphasis on the relevant evaluations comparing these two approaches. We also discuss guaranteed visibility and the results of a recent study exploring its benefits. This is followed by a discussion of recent work exploring different techniques for visualizing large tree structures, including phylogenetic trees.

2.1 Visualization and Interaction

Overview+Detail and Focus+Context interfaces aim to help users preserve their mental model of the dataset by providing them with an explicit visual representation of areas outside the primary focus region. While several Overview+Detail and Focus+Context techniques have been developed, the domain suffers from a lack of evaluative literature, specifically for large, non-artificial datasets.

Recent research into Overview+Detail and Focus+Context techniques has been motivated by several factors, including scalability [43], universal usability [44], and evaluation [24]. The visualization of evolutionary trees is also a motivating factor, and has resulted in the development of several techniques [28, 36]. However, as these techniques also lack evaluation, our ability to understand and characterize them is limited.

2.1.1 Overview+Detail Techniques

Many everyday applications, from interactive route planning tools, illustrated in Figure 2.1, to multiplayer video games, benefit from the addition of an overview. Research into Overview+Detail techniques has resulted in the development of several design guidelines. According to Ahlberg and Shneiderman [1], navigation between the overview and detail views should be tightly coupled, so that any navigation action in either view is immediately reflected in the other. Unifying navigation commands between the overview and detail view has also been proposed [20, 47], so that all navigation commands available in one view are also available in the other. Additionally, Plaisant et al. [45] argued that the most usable overview sizes are task dependent, as the size of an overview affects how much information can be displayed in it as well as how easy it is to navigate.

Figure 2.1: An interactive route planning application [33] provides users with an overview (top left) for orientation.

Figure 2.2: The overview interface used by Hornbaek et al. [20]. In the top right corner of the interface is the overview. The shaded area in the overview is the field of view box, which indicates which part of the map is currently shown in the detail view.
The majority of the literature comparing interfaces with and without overviews has reported overviews to be beneficial. Studies have shown that navigation is faster with overviews since users are able to navigate in both the overview and the detail view [5, 39]. The contextual information provided by the overview also helps users maintain orientation [45] and make decisions about future navigation actions [21]. The overview has also been found to provide users with a feeling of control [53]. However, the addition of an overview means that users must either divide or switch their attention between two separate views to accomplish their task. This extra cognitive load has been shown to strain memory and increase the time required for visual search [8]. A recent study performed by Hornbaek et al. [20] evaluated zoomable interfaces with and without an overview, illustrated in Figure 2.2. Most users preferred the interface with an overview, but the no-overview interface was as fast or faster. The navigation technique used was semantic zooming [41], where the visual representation of an item adapts to the amount of screen real estate available, rather than the more conventional pan and zoom examined in our studies.

2.1.2 Focus+Context Techniques

Focus+Context techniques aim to overcome the drawbacks of conventional pan and zoom interfaces by integrating contextual information with the user's region of focus. The first interactive Focus+Context technique, the Generalized Fisheye View [14], aimed at dynamically filtering information according to the user's current point of interest in the data space. The Bifocal Display [57] introduced the concept of horizontal distortion and applied the technique to a calendar display. The Perspective Wall [30] built on the Bifocal Display technique by adding a 3D perspective. The Document Lens [48], illustrated in Figure 2.3, extended this technique by unifying the Perspective Wall technique with a magnification glass interaction effect to provide both detailed information and context for document presentation. Rubber sheet navigation [50] is representative of the subset of Focus+Context techniques that integrate low and high resolution regions using dynamic distortions [29]. Several other Focus+Context techniques, such as fisheye [15] or hyperbolic [25, 35] approaches, illustrated in Figure 2.4, also rely on distortion, but differ from the rubber sheet approach in that they use radial focus regions, affecting circular or spherical regions of the dataset. Other Focus+Context approaches that do not rely on distortion include aggregating context regions into glyphs [9, 46] and showing contextual information through layers of lenses [7].

Figure 2.3: The Document Lens [48] uses a magnification glass interaction effect to show the undistorted text of a document while maintaining context of the rest of the document.

The literature on the performance of Focus+Context techniques reveals mixed results. Distortion-based Focus+Context approaches have been found beneficial for tasks such as steering navigation [17], hierarchical network navigation [52], web browsing [3], spatial collaboration [51], and calendar use [6]. However, other studies have found that distortion can negatively impact performance for tasks such as interactive layout [16], location recall [54], and visual scanning [24].
Recent studies have shown that the performance of Focus+Context techniques depends on the parameters of the distortion mechanism, including the extent of the distortion, the magnification level of the focus, and the shape of the distortion area [16]. Other factors that may influence performance include non-uniform scaling around the focus area, as in the case of a fisheye lens, and the ability of users to precisely specify the focus area [50]. While the introduction of distortion has been found to impair performance for some tasks, a recent study has shown that there exists a no-cost zone where performance is unaffected by abrupt non-linear distortion transformations [27].

Figure 2.4: H3Viewer [35] uses hyperbolic geometry to create a Focus+Context presentation of the data where a large neighborhood around the focus region is visible, while the remainder of the data is aggregated.

Another feature of Focus+Context interfaces is the ability to use multiple foci to simultaneously view and interact with two or more distant regions of the dataset, while preserving contextual information between them [50]. EdgeLens [60], illustrated in Figure 2.5, enables users to specify regions for dynamically curving graph edges in information-dense graphs to reveal structure and node relationships while preserving node positions. This technique enables users to control multiple EdgeLenses to reveal detail in multiple different areas. Both TreeJuxtaposer [36] and SequenceJuxtaposer [56] also support multiple foci by enabling users to select multiple focus regions to be dynamically expanded, while contracting surrounding context regions. Techniques that employ multiple foci may be better suited for complex tasks that require comparison of multiple areas within the information space [50]; however, no evaluative research currently addresses this hypothesis.

Figure 2.5: EdgeLens [60] enables users to use multiple foci to dynamically curve graph edges in information-dense graphs to reveal structure and node relationships while preserving node layout.

2.2 Guaranteed Visibility

Guaranteed visibility is a relatively new idea in the information visualization literature and as such has not been evaluated extensively, though it has been implemented in conjunction with both pan and zoom [4, 61], illustrated in Figure 2.6, and rubber sheet navigation interfaces [36].

Figure 2.6: City Lights [61] uses lines on the borders of windows to indicate the direction of off-screen objects. These visual cues show the height and width and orthographic direction of off-screen objects.
A recent study [3] compared a Focus+Context web browser with guaranteed visibility, illustrated in Figure 2.7, to a standard panning interface in the context of reading electronic documents. Interfaces with guaranteed visibility were faster than the comparison interface for most tasks and were preferred by all subjects, a finding that motivated us to include this property in all our experimental interfaces.

Figure 2.7: Fishnet [3] uses vertical distortion to compress peripheral content. Color-coded popouts are added to guarantee that search terms, such as Seattle and update, are visible, even when they are in compressed context regions.

2.3 Tree Visualization and Evaluation

Tree visualization, as discussed in a recent survey [19], is a highly active area of research, motivated by problems such as layout [46], scalability [36, 49], and navigation [24].

One domain where tree visualization is very important is phylogenetics, which studies the evolutionary relationships between and among species. Effective phylogenetic analysis relies heavily on visual inspection, structural comparison, and exploration of large trees [10], yet the domain is characterized by a lack of effective visualization techniques for exploring and navigating these trees [36]. Current tools for phylogenetic tree visualization only handle small trees, with the exception of TreeWiz [49], illustrated in Figure 2.8, which scales to trees of 75,000 nodes. However, TreeWiz does not provide any features to support the structural comparison of phylogenetic trees, and also features an awkward interaction model where each navigation action spawns a new window. MacClade [31] is the current standard for phylogenetic tree visualization, but it is not designed for scalability [36]. TreeJuxtaposer [36], illustrated in Figure 2.9, was developed to facilitate the navigation and exploration of large phylogenetic trees. Similar to the Focus+Context interfaces evaluated in this thesis, TreeJuxtaposer combines rubber sheet navigation with guaranteed visibility to support structural comparison of trees consisting of hundreds of thousands of nodes.

Figure 2.8: TreeWiz [49] is a phylogenetic tree visualization tool capable of displaying trees of up to 75,000 nodes. Each navigation action spawns a new window.

Figure 2.9: TreeJuxtaposer [36] is a highly scalable visualization tool designed to support the exploration and comparison of very large phylogenetic trees.

Recent work has explored the benefits and drawbacks of different techniques for visualizing large tree datasets. Kobsa [24] performed a comparative experiment with five well known tree visualization interfaces, including Windows Explorer as a benchmark. Kobsa's evaluation used a large hierarchical dataset based on a subset of a taxonomy of items on eBay, which consisted of 5 levels and a total of 5,799 nodes. Tasks were generated by the experimenters and informed by an early version of the tasks detailed in the InfoVis 2003 contest [13]. The results of this study revealed significant differences between the interfaces with respect to performance and user satisfaction. These results were attributed to inherent differences in data presentation and interaction afforded by each interface. Additionally, some interfaces were missing functionality required to complete the tasks. SpaceTree [46] was evaluated in a controlled experiment against a hyperbolic tree browser and Windows Explorer. SpaceTree, illustrated in Figure 2.10, attempts to optimize tree layout given the currently available screen space, while aggregating contextual topological information using preview glyphs.
The SpaceTree evaluation also used a large tree dataset of more than 7,000 nodes from the CHI '97 BrowseOff [34]. Tasks were generated by the experimenters and included questions concerning tree topology. Rather than pitting the interfaces against each other, as in the Kobsa evaluation, the experimenters' stated goal was to understand what features appeared to help users perform certain tasks. The results of the study were mixed, revealing that SpaceTree performed significantly faster for some classes of topological tasks, but not for others.

Figure 2.10: SpaceTree [46] uses triangular preview glyphs to represent tree branches that cannot be displayed due to space limitations. When space is available, multiple levels can be opened at once. Darker icons correspond to branches with more nodes. Taller icons correspond to deeper branches, and wider icons correspond to a higher average branching factor.

A common limitation of both of these studies is that the interfaces examined in them used widely different methods of data presentation and interaction, making their quantitative results difficult to interpret. Our evaluation aims to overcome this issue by focusing on interfaces that share visual presentation and interaction methods and differ only in terms of navigation techniques.

2.4 Summary

Overview+Detail and Focus+Context techniques have been developed to overcome the navigational drawbacks of conventional pan and zoom interfaces and facilitate navigation of large datasets. Evaluations of Overview+Detail and Focus+Context techniques reveal mixed results, often illustrating that the performance of a given technique is task dependent. While guaranteed visibility may improve performance across these techniques, further evaluation is needed to explore its potential benefits. Our studies aim to fill a hole in the evaluative literature by formally evaluating how overviews affect performance in both pan and zoom and rubber sheet navigation interfaces, both of which include guaranteed visibility. By providing evidence to show that overviews are effective for facilitating navigation in both pan and zoom and rubber sheet navigation interfaces, we hope to motivate further development of visualization applications so that users can efficiently explore large datasets and complete complex tasks.

Chapter 3

Task and Dataset

In order to lend ecological validity to our experiments, we derived our experimental task and dataset from the domain of evolutionary biology. Evolutionary biology is concerned with investigating the history of species by modeling evolutionary relationships as phylogenetic trees. A phylogenetic tree, illustrated in Figure 3.1, represents a hypothesis regarding the evolutionary relationships between species. Subtrees within larger phylogenetic trees are known as clades, and represent a biological group of species that share an evolutionary ancestor. Tree nodes, which represent groupings of organisms such as species or classes, are known as taxa.
An evolutionary tree is a hypothesis, reconstructing conjectured evolutionary relationships. The leaves of the tree are known species, while the interior nodes, which represent common ancestors, must be inferred. Traditionally, biologists gathered data in the field about a limited number of species, and then proposed potential reconstructions of trees with dozens of leaves. With the advent of using DNA for phylogenetic reconstruction, biologists are now dealing with hundreds or even thousands of species. A goal of many biologists for the next decade is to reconstruct a complete Tree of Life for all species on Earth, estimated to contain over ten million species [36]. However, progress has been hampered by a lack of tools supporting exploration, visual inspection, and structural comparison in such large datasets.

Figure 3.1: A phylogenetic tree represents a hypothesis about the evolutionary relationships between species. Tree nodes, which represent groupings of organisms such as species or classes, are known as taxa. A clade represents a group of species that share a common evolutionary ancestor.

A recent survey of phylogenetic visualization techniques by Carrizo [10] pointed to a need for a better understanding of the challenges in this domain. Carrizo identified several problems in visualizing phylogenetic trees, including difficulties in layout, labeling and annotation problems, lack of support for tree comparison, and a lack of tools for editing and modifying existing phylogenetic trees. Carrizo also stressed the importance of preserving users' mental models as they navigate through a large tree, and suggested that displaying the entire tree, as in the case of a separate overview, may provide the user with an indication of the overall structure within it. These challenges have motivated us to choose tasks informed by discussions with phylogenetic biologists and a phylogenetic tree dataset for our studies. This chapter documents the development of our design for both the task and dataset used in our studies.

3.1 Task

We conducted informal interviews with a group of seven phylogenetic biologists, led by Dr. Wayne Maddison, from the University of British Columbia. These discussions enabled us to gain an understanding of the challenges involved in phylogenetic analysis, and to examine the current state of visualization tools that aim to support these challenges. Substantial effort was taken to understand and characterize the tasks carried out by phylogenetic biologists. This chapter documents the iterative process we used to develop our ecologically valid tasks, and the criteria we used to select a single task for the experiments described in Chapters 4 and 5.

3.1.1 Task Development Process

From our discussions, we learned that phylogenetic biologists use visualizations of large evolutionary trees to gain a deeper understanding of the relationships between and within groups of species, but that techniques for visualizing large trees are lacking.
We also learned that through the process of topological analysis and comparison, these researchers aim to determine how species have evolved and co-evolved, and how characteristics are passed from one species to the next in an evolutionary lineage. Our discussions also revealed phylogenetic tree visualization problems similar to those identified by Carrizo, including layout and difficulties with navigation.

Given these discussions, we conducted semi-structured interviews with biologists at the 2004 Evolution Conference, which brought together researchers from the Society for the Study of Evolution and the Society of Systematic Biologists. Our goal was to increase our understanding of how biologists use phylogenetic trees, and specifically to identify what types of tasks are common in the process of phylogenetic analysis. During the course of the conference, we met with several researchers, including Dr. Samuel Donovan from the School of Education at the University of Pittsburgh. One of Donovan's research goals is to improve the methods by which evolution is taught, specifically with regard to phylogenetic tree analysis. Donovan identified four high level goals of phylogenetic tree analysis [12]:

1. Understanding how to interpret the topological structure of a tree
2. Understanding the evolutionary relationships among species in a tree
3. Understanding how to trace character changes within a tree
4. Understanding features of clades and their uses in trees

Given these high level goals, we focused our attention on the specific research questions that phylogenetic researchers were attempting to solve. Examples of some of the specific questions we observed include:

• Which clades contain the species Porphyra and Bangia?
• Do the species Porphyra and Bangia belong to distinct clades?
• Where are the specimens with Porphyra and Bangia topologically placed?

3.1.2 Task Categories

Following our observations of the types of research questions that evolutionary biologists were attempting to solve using phylogenetic trees, we classified their research questions as specific instances of general visual tasks, as described in the visual task taxonomy of Wehrend and Lewis [59]. Using their formulation, we identified seven different types of tasks that are common in the process of analyzing phylogenetic trees, described in Table 3.1.

Task Type | Specific Task Statement
Locate (object previously known) | Locate the species whale in this tree
Identify (object not previously known) | Identify a clade that has three taxa
Distinguish | Which clade is different from all other clades?
Compare/Contrast | Describe three ways in which these two trees are similar/different
Classify/Categorize | What is the phylogenetic relationship between camels and whales?
Calculate | What is the distance between the two marked clades?
Correlate | Which phylogenetic tree supports a given hypothesis?

Table 3.1: Task categories and specific task instances related to phylogenetic analysis, informed by discussions with phylogenetic researchers.

3.1.3 Four General Tasks

Having classified the types of tasks that relate to the analysis of phylogenetic trees, we developed a set of four general tasks, described and illustrated below. These general tasks are representative of the types of tasks carried out by phylogenetic researchers, but require no knowledge of phylogenetic biology.
These four tasks are each composed of several of the task types we identified through our discussions, which are summarized in Table 3.1. The complexity of our tasks is especially important for our evaluation, as Plaisant [43] noted that many studies of information visualization tools suffer from evaluations which use simple tasks, such as find and identify, and do not reflect real-world tasks, which are often more complex and composed of several simple tasks. Thus, by using complex tasks in our evaluation, we aim to add ecological validity to our experiment. While these tasks appear simple in the small tree figures shown below, they are considerably more complex in large trees, as several navigation actions are required to complete the task. For each task, a marked node is a node which has been colored to indicate that it is a target node for the task.

Task 1: Determining the Least Common Ancestor

In a phylogenetic tree, the least common ancestor (LCA) of two nodes is a species that is an ancestor of both the species in question and that has the greatest depth in the tree, as illustrated in Figure 3.2. This task requires the user to locate two nodes in a phylogenetic tree, and identify a third which represents the closest ancestor to the two nodes in question.

Figure 3.2: Task 1: Determining the lowest common ancestor. In this case, node A is the lowest common ancestor of nodes B and C.

Task 2: Determining the Topological Distance Between Nodes

Topological distance in a tree is the number of hops between two nodes, and is not the same as geometric distance, which may change with navigation, as illustrated in Figure 3.3. In a phylogenetic tree, the topological distance between two nodes is indicative of the number of evolutionary steps between the species they represent. Measuring topological distance is the primary function of phylogenetic trees. This task requires the user to locate three nodes in a phylogenetic tree, and calculate the topological distance between two sets of nodes to determine which distance is smaller.

Figure 3.3: Task 2: Comparing the topological distances between nodes. In this case, node A is 2 topological hops from node B and 3 topological hops from node C, making node B topologically closer.

Task 3: Determining Whether Two Subtrees are Adjacent

In a phylogenetic tree, two subtrees are adjacent if no other node is between them, as illustrated in Figure 3.4. In phylogenetic biology, this task represents determining whether the groups of species represented by the subtrees are sister groups. As sister groups share a common ancestor, they are each other's closest relative. This task requires the user to locate two subtrees within a phylogenetic tree, and identify whether there exists a third subtree between the two subtrees in question.

Figure 3.4: Task 3: Determining whether two subtrees are adjacent. In this case, the subtrees labeled A and B are not adjacent.

Task 4: Determining Whether a Subtree Contains an Unmarked Node

In a phylogenetic tree, the presence of an unmarked node or subtree in a mostly colored subtree may indicate a character reversal, as illustrated in Figure 3.5. This reversal represents the loss of a character formerly present in an evolutionary line. An example of a character reversal would be a dodo bird, which like all other birds has wings, but unlike most other birds cannot fly. This task requires the user to first locate a subtree within a phylogenetic tree, and then identify whether there exists an unmarked node within the subtree.

Figure 3.5: Task 4: Determining whether a subtree contains unmarked nodes. In this case, the subtree labeled A contains an unmarked node, B, which could indicate the presence of a unique trait.
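Tasks 1 and 2 also have a direct algorithmic reading, which may help clarify what subjects were asked to judge visually. The sketch below is illustrative only (the toy tree and function names are ours, not part of the study materials): it finds the least common ancestor by intersecting ancestor paths, and measures topological distance as the number of hops through that ancestor.

    # Illustrative sketch: a tree as a child -> parent mapping (toy data, not
    # the experimental dataset). Task 1 is the least common ancestor; Task 2
    # compares hop counts (topological distances) between pairs of nodes.

    parent = {"B": "A", "C": "A", "D": "B", "E": "B", "F": "C", "G": "C"}

    def ancestors(node):
        """Return the path from node up to the root, inclusive."""
        path = [node]
        while path[-1] in parent:
            path.append(parent[path[-1]])
        return path

    def least_common_ancestor(u, v):
        """Deepest node that is an ancestor of both u and v (Task 1)."""
        seen = set(ancestors(u))
        for node in ancestors(v):
            if node in seen:
                return node

    def topological_distance(u, v):
        """Number of hops between u and v through their LCA (Task 2)."""
        lca = least_common_ancestor(u, v)
        return ancestors(u).index(lca) + ancestors(v).index(lca)

    print(least_common_ancestor("D", "G"))   # 'A'
    print(topological_distance("D", "E"))    # 2
    print(topological_distance("D", "G"))    # 4

Tasks 3 and 4 reduce to similarly simple traversals (checking whether two subtrees share a parent, or scanning a subtree for unmarked nodes); the experimental difficulty comes from making these judgments visually in a tree of thousands of nodes, not from the underlying computation.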
3.1.4 Experimental Task

Following pilot experiments with the four tasks described above, we decided to focus our studies on a single task in order to limit the effect of task as a factor. Task 2 was selected for further investigation due to its relative complexity, its high importance to phylogenetic analysis, and the fact that it was shown, relative to the other tasks involved in our pilot, to require subjects to perform multiple navigation actions along well-defined paths, thus reducing performance variability. Pilot experiments were also used to ensure that each instance of Task 2 was isomorphic in difficulty. In particular, topological distances between nodes always conformed to a range of 7 to 10, and could not be determined without interacting with the interface for any of the task instances. Also, colored nodes were not located in close proximity to each other, in order to ensure that at least one interaction had to be performed to determine each topological distance. Given the scope of this work, we were only able to evaluate a single complex task; however, we intend to leverage the task development effort by investigating all four of the general tasks in future studies.

3.2 Dataset

The controlled experiments, described in Chapters 4 and 5, used the phylogenyMatchesTaxonomy dataset, courtesy of David Hillis' lab at the University of Texas at Austin. This dataset is a binary tree consisting of 5,918 nodes, which represents evolutionary relationships between species in the kingdom Animalia, and is available from the Olduvai project website [40]. This dataset was chosen based on the results of initial pilot experiments, which used the animaliaA dataset from the 2003 InfoVis Contest [13], a phylogenetic tree of approximately 190,000 nodes representing the evolutionary relationships between species in the kingdom Animalia. Pilot results indicated that this dataset was not an optimal choice for our experiment, as its topology was not sufficiently deep to require subjects to perform a large amount of navigation, while its size necessitated start times of up to 45 seconds for our experimental interfaces. In comparison, the phylogenyMatchesTaxonomy dataset allowed for complex topological comparisons, requiring a significant amount of navigation while reducing the start times for our tools to under 5 seconds. Node labels were removed from the dataset in order to enable the task to be performed by subjects without prior knowledge of evolutionary biology, as well as to avoid unnecessary node occlusion. Moreover, our discussions with biologists confirmed that their typical use of evolutionary trees involved very little reading of node labels.
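For illustration, the instance constraints described in Section 3.1.4 can be expressed as a simple filter over candidate node triples. The sketch below is a hypothetical reconstruction, not the authors' generation procedure: the 7 to 10 range comes from the text, while the minimum-separation value and the rejection-sampling loop are our assumptions.

    import random

    def valid_instance(dist, a, b, c, lo=7, hi=10, min_separation=3):
        """Accept a trial with reference node a and candidates b, c only if both
        marked distances fall in [lo, hi] (range taken from the text), the two
        distances differ so that one is strictly smaller, and the candidate
        nodes do not sit close together (min_separation is our assumption)."""
        d_ab, d_ac = dist(a, b), dist(a, c)
        return (lo <= d_ab <= hi and lo <= d_ac <= hi
                and d_ab != d_ac                      # one distance must be smaller
                and dist(b, c) >= min_separation)

    def sample_instance(nodes, dist, seed=0):
        """Rejection-sample a task instance; dist is any pairwise hop-count
        function over the experimental tree (for example, the topological
        distance sketch given earlier applied to the 5,918-node dataset)."""
        rng = random.Random(seed)
        while True:
            a, b, c = rng.sample(list(nodes), 3)
            if valid_instance(dist, a, b, c):
                return a, b, c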
Chapter 4

Study 1

Study 1 consisted of a controlled experiment designed to investigate the effect of context level on the performance of pan and zoom and rubber sheet navigation techniques with and without an overview. The study involved four different interfaces, representing all combinations of the two navigation techniques with and without an overview. Our goal was to determine the optimal level of context for each interface. Subjects used these interfaces with varying levels of context, illustrated in Table 4.1, to solve a topological task in a large tree dataset. This chapter describes the experiment and presents its results, with an emphasis on how the level of context in overviews affected performance and user satisfaction. The results of how level of context affected performance for navigation are reported in detail in Nekrasovski's thesis [37].

4.1 Pilot

Prior to Study 1, we conducted a pilot with 12 subjects to examine experimental parameters including task difficulty, described in Chapter 3, and interface usability. Our pilot also examined context level ranges, which are defined as the relative ratios between focus and context for a given technique. While our initial belief was that a single level of context would result in optimal performance across pan and zoom and rubber sheet navigation interfaces with and without an overview, pilot results indicated that the optimal level of context was interface dependent. As a result of this finding, we decided to evaluate each interface with a unique range of context levels, as described in Table 4.1, derived from our pilot tests. Pilot results were also used to verify that the task sets used in the experiment were isomorphic in difficulty and to address usability issues with the interfaces.

4.2 Hypotheses

Our hypotheses were motivated by findings reported in the literature and the results of our pilot study. Overall, we expected that performance would follow a U-shaped trend, where increases in the level of context would result in performance benefits up to a point where performance would plateau. After this point, subsequent increases in context would actually have a negative impact on performance, as more screen space would be needlessly devoted to the overview at the cost of reducing the screen real estate available for the primary detail view. Our detailed hypotheses, given below, were based on the results of pilot experiments and the level of context which led to the optimal performance for pilot subjects.

H1: A context level of 40% results in optimal performance for pan and zoom interfaces.

H2: A context level of 60% results in optimal performance for rubber sheet navigation interfaces.

H3: For pan and zoom interfaces with an overview, an overview size of 10% results in optimal performance.

H4: For rubber sheet navigation interfaces with an overview, an overview size of 10% results in optimal performance.

4.3 Task and Dataset

The task used in Study 1 was a topological task that required subjects to compare the topological distances between colored nodes in a large tree dataset and determine which of the distances was smaller. Both the task and the dataset are described in detail in Chapter 3.

4.4 Interfaces

In order to provide a consistent visual representation, drawing performance, and interaction model, our interfaces were built on the same software infrastructure [55], based on the TreeJuxtaposer scalable tree visualization application [36]. While TreeJuxtaposer was initially developed as a Focus+Context visualization tool using rubber sheet navigation, the inherent similarities between rubber sheet and pan and zoom navigation allowed us to extend its behaviour to support conventional pan and zoom interaction, as well as the presence of an overview and multiple foci or detail views. This section discusses the implementation of each of these interface components.
4.4.1 Navigation

The original TreeJuxtaposer application used rubber sheet-style expansions and contractions of arbitrary rectilinear regions for navigation, and included advanced features such as linked navigation between multiple trees. Navigation in the original TreeJuxtaposer enabled users to select rectangular regions using mouse drags, and to resize their selection box to arbitrary size. We replaced this style of navigation with a unified set of navigation actions appropriate for each interface. All interaction occurs through mouse drags, and in our subsequent analysis, a discrete navigation action refers to a single mouse drag. All transitions are smoothly animated across 20 frames to ensure fluid interaction with the interfaces. In each interface, navigation was controlled using a two-button mouse with a scroll wheel, with zoom in mapped to the left mouse button, panning mapped to the right mouse button, and zoom out in the pan and zoom interfaces mapped to the scroll wheel. Each interface also supports a reset function, which was mapped to the V key on a standard keyboard. Similar to the original rubber sheet navigation style in TreeJuxtaposer, our implementation of rubber sheet navigation allows users to select a rectangular region using mouse drags; however, in our implementation, the user's selection box always ends up with a fixed area and aspect ratio.
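As a rough model of the behaviour just described (a sketch under our own assumptions: the Rect type, the centring rule, and the linear easing are not taken from the study software), one discrete navigation action can be treated as snapping the dragged selection to the fixed aspect ratio of the target region and then interpolating the viewport to it over 20 frames.

    from dataclasses import dataclass

    @dataclass
    class Rect:
        x: float
        y: float
        w: float
        h: float

    def snap_to_aspect(sel: Rect, target: Rect) -> Rect:
        """Grow the user's selection so it matches the fixed aspect ratio of the
        target region, keeping the selection centred (an assumption on our part)."""
        aspect = target.w / target.h
        if sel.w / sel.h < aspect:                 # selection too narrow: widen it
            new_w, new_h = sel.h * aspect, sel.h
        else:                                      # selection too wide: heighten it
            new_w, new_h = sel.w, sel.w / aspect
        return Rect(sel.x + (sel.w - new_w) / 2, sel.y + (sel.h - new_h) / 2, new_w, new_h)

    def animate(start: Rect, end: Rect, frames: int = 20):
        """Yield intermediate viewports, linearly interpolated over `frames`
        steps, so the zoom transition appears smooth."""
        for i in range(1, frames + 1):
            t = i / frames
            yield Rect(start.x + (end.x - start.x) * t,
                       start.y + (end.y - start.y) * t,
                       start.w + (end.w - start.w) * t,
                       start.h + (end.h - start.h) * t)

    # One discrete navigation action = one mouse drag: snap the dragged box,
    # then animate the viewport from its current extent to the snapped box.
    viewport = Rect(0, 0, 1000, 600)
    drag = Rect(200, 150, 120, 300)
    for frame in animate(viewport, snap_to_aspect(drag, viewport)):
        pass  # a redraw of `frame` would happen here on each of the 20 frames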
4.4.2 Multiple Foci and Overviews

For this study, multiple foci were implemented in both pan and zoom and rubber sheet navigation interfaces to allow users to simultaneously view and interact with two or more distant regions of the dataset. In the rubber sheet navigation interfaces, illustrated in Figures 4.4 and 4.6, users could select one of two focus regions as the target for rectilinear zooming actions, allowing them to explore two non-adjacent regions of the dataset at different levels of compression. In the pan and zoom interfaces, illustrated in Figures 4.5 and 4.7, users could navigate in two separate views, allowing them to explore two different regions of the dataset at different scales.

Overviews with movable field of view boxes were present in two of the interfaces. For consistency between interfaces, the view dimensions in each interface were chosen to equalize the total screen real estate across them, with each interface always providing a total of 600,000 pixels of information. Based on the guidelines developed by Ahlberg and Shneiderman [1], we ensured that all navigation actions were tightly coupled between the overview and detail view.

4.4.3 Guaranteed Visibility

As introduced in Section 2.2, guaranteed visibility is the property that regions of the dataset marked with color are always visible on screen, introduced by Munzner et al. [36]. Guaranteed visibility of marked areas is provided in both detail views and overviews for both pan and zoom and rubber sheet navigation interfaces. Munzner et al. discuss three types of guaranteed visibility, all of which are addressed in the interfaces used in our studies:

1. Off-Screen. Off-screen guaranteed visibility is needed when there is a possibility that marked areas may move off-screen due to navigation actions. In rubber sheet navigation interfaces, navigation is constrained so that items outside the focus areas are compressed along the periphery rather than moving off-screen. In pan and zoom interfaces, off-screen guaranteed visibility is provided by encoding the direction to and distance from off-screen marked areas using circular arcs around the periphery of a view. This technique is based on Baudisch and Rosenholtz's Halo [4], as illustrated in Figure 4.1, which used arcs as visual cues to off-screen items in the context of viewing maps on small screen devices. Our initial implementation of pan and zoom interfaces used opaque arcs as off-screen visual cues. However, the results of our first study revealed that this opacity resulted in arcs occluding regions of the dataset. Based on this observation, we decided to make the arcs translucent, similar to those found in Figure 4.1 (a), such that they were still visually salient but did not fully occlude areas of the dataset.

Figure 4.1: (a) Halo [4] is a technique for visualizing off-screen locations. As seen in (b), each off-screen location is at the center of a ring that reaches into the border of the view.

2. Sub-Pixel. Sub-pixel guaranteed visibility is needed when there is a possibility that the area of interest might be too small to be drawn. Using the TreeJuxtaposer application, which provides sub-pixel guaranteed visibility, as the basis for our interfaces ensures that items of interest in all views are visibly marked even when they are compressed to sub-pixel size.

3. Occlusion. Occlusion of marked areas by other parts of the dataset is avoided by using a 2D rather than a 3D spatial layout.

4.4.4 Context Levels

As the amount of contextual information provided by context areas in both pan and zoom and rubber sheet navigation interfaces could vary depending on user interaction, we used the total extent of the context areas as an approximation for the level of context within each interface. In addition to peripheral context areas, contextual information was also provided by overviews in those interfaces that used them. For the purpose of varying the level of context in this study, we therefore distinguished between two possible levels of context in each interface, illustrated in Figures 4.2 and 4.3:

1. Level of navigational context: the fraction of the size of the navigation-specific context areas C to the total size of the focus and context areas in the detail view, F+C.

2. Level of overview context: the fraction of the size of the overview O to the total size of all views, O+F+C (0 for interfaces without an overview).

Figure 4.2: Calculation of levels of context in Study 1 RSN interfaces. Level of navigational context is the fraction of the size of the peripheral context areas C to the total size of the detail view F+C. Level of overview context is the fraction of the size of the overview O to the total size of all views O+F+C.

Figure 4.3: Calculation of levels of context in Study 1 PZN interfaces. The dotted line indicates the boundary between focus and context regions, which is not visually demarcated in the interfaces. Levels of navigational and overview context are as in Figure 4.2.
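The two measures defined above reduce to simple area ratios; the sketch below states them directly (all areas in pixels). The class and method names are illustrative only and are not part of the experimental software.

// Context-level measures from Section 4.4.4 (areas in pixels).
class ContextLevels {

    // Level of navigational context: C / (F + C).
    static double navigationalContext(double focusArea, double contextArea) {
        return contextArea / (focusArea + contextArea);
    }

    // Level of overview context: O / (O + F + C); zero for interfaces without an overview.
    static double overviewContext(double overviewArea, double focusArea, double contextArea) {
        return overviewArea / (overviewArea + focusArea + contextArea);
    }
}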
4.4.5 Interface 1: Rubber sheet navigation

As illustrated in Figure 4.4, this interface had no overview and allowed users to navigate the dataset using the metaphor of expanding and compressing a rubber sheet with its borders nailed down.

Figure 4.4: Interface 1: Rubber sheet navigation without overview. A zoom action has stretched a region to fill the focus area. Nodes outside this region are compressed in the periphery, and marked nodes remain visually salient.

Unlike in conventional pan and zoom interfaces, navigation actions did not push context regions off-screen, but compressed them in the periphery of the view, where they remained visually salient. Focus regions were demarcated by colored boxes, which were always located in the center of the view. A user could select a rectangular area of interest for zooming in by dragging out a box with the left mouse button. The contents of the selected area then expanded to fill the focus region in a smooth transition. An action analogous to panning was accomplished via horizontal and vertical drag motions with the right mouse button, allowing users to fine-tune focus region selections. Users could zoom out by dragging out a rectilinear region larger than the focus region, the contents of which were then compressed to fill the focus region.

4.4.6 Interface 2: Pan and zoom navigation

As illustrated in Figure 4.5, this interface had no overview and allowed users to navigate using conventional pan and zoom interactions.

Figure 4.5: Interface 2: Pan and zoom navigation without overview. Zoom actions have filled the extent of the top and bottom focus areas. Arcs based on Halo [4] indicate direction and distance to off-screen marked nodes.

Just as with Interface 1, a user could select a rectangular area of interest for zooming in with a left mouse drag, resulting in an animated transition that completely filled the view with the selected area. The user could fine-tune the focus selection by panning with horizontal and vertical right-mouse drags. The user could also gradually zoom out with vertical middle-mouse drags. For any marked region that moved off-screen due to navigation actions, a colored Halo-like arc appeared at the border of the screen, indicating the direction and the distance to the marked region. The arc was part of an ellipsoidal ring centered on the off-screen marked region, and disappeared once the marked region was visible on-screen. Just like Interface 1, this interface had a peripheral context area in which these arcs could appear. However, whereas in Interface 1 the resolution of the visual representation was distorted within the context region, which was explicitly delimited using a border, no distortion or visual delimitation was used within the context region of Interface 2. Also, the shape of the context region was oval rather than rectangular.
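The geometry behind the Halo-like cues in Interface 2 can be sketched as follows, assuming the simpler circular rings of the original Halo technique [4] rather than the ellipsoidal rings used in our implementation. The ring is centred on the off-screen marked node and sized so that its arc just reaches a fixed distance into the border region of the view; the class name and the exact intrusion rule are illustrative only and do not describe the thesis implementation.

import java.awt.geom.Point2D;
import java.awt.geom.Rectangle2D;

// Sketch of a circular Halo-style cue for one off-screen marked location.
class HaloCue {

    // Radius of a ring centred on 'offScreen' that intrudes 'intrusion' pixels
    // past the nearest edge of the viewport; returns -1 if the point is visible.
    static double ringRadius(Rectangle2D viewport, Point2D offScreen, double intrusion) {
        // Distance from the point to the closest point of the viewport rectangle.
        double dx = Math.max(Math.max(viewport.getMinX() - offScreen.getX(),
                                      offScreen.getX() - viewport.getMaxX()), 0);
        double dy = Math.max(Math.max(viewport.getMinY() - offScreen.getY(),
                                      offScreen.getY() - viewport.getMaxY()), 0);
        double gap = Math.sqrt(dx * dx + dy * dy);
        if (gap == 0) {
            return -1;               // on-screen: no arc is drawn
        }
        return gap + intrusion;      // the arc crosses the border into the view
    }
}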
4.4.7 Interface 3: Rubber sheet navigation with overview

As illustrated in Figure 4.6, this interface used the same navigation controls as Interface 1. It also had an overview showing a field of view box corresponding to the extent of the detail view, which updated dynamically as navigation took place in the detail view. Users could perform the rubber sheet navigation equivalents of panning and zooming as implemented in Interface 1 directly in the overview by dragging the red box, which then updated the detail view.

Figure 4.6: Interface 3: Rubber sheet navigation with overview. A zoom action has stretched the region shown by the field of view box in the overview to fill the focus area of the detail view.

4.4.8 Interface 4: Pan and zoom navigation with overview

As illustrated in Figure 4.7, this interface had the same navigation controls as Interface 2, as well as an overview. Just as with Interface 3, the field of view box in the overview dynamically reflected navigation in the detail view and could be manipulated directly to control the detail view.

Figure 4.7: Interface 4: Pan and zoom navigation with overview. A zoom action has filled the extent of the detail view with the region shown by the field of view box in the overview.

4.5 Apparatus

We conducted the study on two systems running Windows XP with Pentium 4 processors, 2.0 GB RAM, Nvidia GeForce2 video cards, and 19 inch monitors configured at a resolution of 1280x1024 pixels. The experimental software, including the interfaces, was fully automated and was coded in Java 1.4.2 and OpenGL, using the GL4Java bindings.

4.6 Participants

Forty subjects, consisting of 25 males and 15 females, between 18 and 39 years of age successfully completed the study and were each compensated $10 for their participation. All subjects were right-handed, had normal or corrected to normal vision, and had previous experience using a computer. They were recruited through advertisements posted throughout the university campus and through an online participant scheduling system.

Originally, 45 subjects participated in the experiment. Two of the subjects were unable to follow the training instructions successfully, while three others followed the instructions but committed four or more errors (an error rate greater than 10%). These subjects were treated as outliers for the purpose of data analysis, leaving a total of 40 data points.

4.7 Experimental Design

The evaluation used a 2 (navigation, between subjects) by 2 (presence of overview, between subjects) by 5 (context level, within subjects) design, where each context level was presented in a block of 5 trials. Subjects were randomly assigned to each of the four interfaces. To minimize learning effects, we counterbalanced the order of presentation for context level using a Latin square. Table 4.1 summarizes the different context levels used for each interface. Context level ranges were chosen based on pilot experiments, which indicated that subjects reached optimal performance for levels within the specified ranges.

Interface        Level 1   Level 2   Level 3   Level 4   Level 5
RSN                 40        50        60        70        80
PZN                 30        40        50        60        70
RSN+Overview         5        10        15        20        25
PZN+Overview         5        10        15        20        25

Table 4.1: Context levels (%) for each interface explored in this study.
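For the counterbalancing described in this section, a cyclic construction is the simplest way to generate a Latin square over the five context levels; each row gives one presentation order, and each level appears exactly once per row and per column. The thesis does not specify which Latin square variant was actually used, so the sketch below is purely illustrative.

import java.util.Arrays;

// Cyclic Latin square: row s is the presentation order of conditions 0..n-1
// for subject (or subject group) s.
class LatinSquare {

    static int[][] cyclic(int n) {
        int[][] square = new int[n][n];
        for (int row = 0; row < n; row++) {
            for (int col = 0; col < n; col++) {
                square[row][col] = (row + col) % n;
            }
        }
        return square;
    }

    // Example: presentation orders for the five context levels of one interface.
    public static void main(String[] args) {
        for (int[] order : cyclic(5)) {
            System.out.println(Arrays.toString(order));
        }
    }
}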
4.8 Procedure

The experiment was designed to fit into a single 60 minute session. Subjects were first instructed on the use of the different navigation techniques afforded by the interface to which they had been randomly assigned. The experimenter then demonstrated a basic strategy for completing the task. This policy was motivated by the results of pilot experiments: initial piloting showed that dragging out long, thin horizontal selection areas improved task completion time in all interfaces. Since many of the paths between colored nodes were horizontal, this strategy enabled subjects to bring them rapidly into focus. In both the pan and zoom and rubber sheet navigation interfaces, subjects were instructed how to use both the overview and detail views for navigation and counting nodes, but were not explicitly told to navigate in either view. Following the discovery of one of the two topological distances, subjects were instructed to reset the interface and continue using the same strategy to determine the second distance. This was motivated by results of the pilot study, which revealed that subjects often spent more time navigating between the two halves of the task than completing the task itself.

After being shown the strategy, subjects were given a training block of 5 trials, which used the middle level of context for the range of the interface being tested. At the end of the training session, subjects were given a one minute break. Subjects were then presented with 5 blocks, one per context level, each containing 5 trials, for a total of 25 trials. All subjects were presented with an identical set of questions; the grouping of questions to block was predetermined, but the order of blocks was randomly generated for each subject. The blocks of questions were verified to be isomorphic in difficulty in the pilot study. The experimenter was not present to observe the subject as they completed tasks. Subjects were given a one minute break between each block of questions.

At the end of the experiment, subjects completed a questionnaire, reproduced in Appendix A, which was used to collect information about their demographic background and previous computer usage. Space was also provided for subjects to comment on their experiences with the interfaces and provide suggestions for improvement. Short informal interviews were conducted with some of the subjects based on their questionnaire responses.

4.9 Measures

Performance measures were based on logged data and included task completion times, the number of discrete navigation actions of pan, zoom in, and zoom out, including the amount of time spent doing each action, reset actions, and errors. Self-reported measures, collected through the post-experiment questionnaire reproduced in Appendix A, included ratings of how easy to use, easy to navigate, and enjoyable the interfaces were on a 5-point Likert scale.

4.10 Results

This section presents the results for both performance and self-reported measures of the experiment. Only the results for the level of overview context are reported in detail. The detailed results pertaining to the evaluation of context level by navigation technique are reported by Nekrasovski [37].

A series of ANOVAs was run to understand the effect of context level on the performance and self-reported measures. Prior to these analyses, outlier data lying more than 3 standard deviations from the means of each cell were removed from the analysis. The Greenhouse-Geisser adjustment was used for non-spherical data, and the Bonferroni adjustment for post-hoc comparisons. Along with statistical significance, we report partial eta-squared (η²), a measure of effect size, which is often more informative than statistical significance in applied human-computer interaction research [26]. To interpret this value, .01 is a small effect size, .06 is medium, and .14 is large [11].
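Because partial eta-squared is reported alongside every test in this chapter and the next, the following sketch shows how it is obtained from an ANOVA's sums of squares. The formula is the standard one; the class is illustrative only and does not correspond to the analysis software, which is not described in the thesis.

// Partial eta-squared: SS_effect / (SS_effect + SS_error).
class EffectSize {

    static double partialEtaSquared(double ssEffect, double ssError) {
        return ssEffect / (ssEffect + ssError);
    }

    // Conventional interpretation used in the text: .01 small, .06 medium, .14 large [11].
    static String interpret(double etaSquared) {
        if (etaSquared >= 0.14) return "large";
        if (etaSquared >= 0.06) return "medium";
        if (etaSquared >= 0.01) return "small";
        return "negligible";
    }
}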
4.10.1 Learning Effects

The overall results for mean completion times per trial are illustrated in Figure 4.8, which represents the order in which subjects saw each block, regardless of context level. As expected, performance improved as subjects progressed through the experiment, with a significant main effect of block (F(4,144) = 12.309, p < .001, η² = .255). There was also a significant main effect of interface on completion time (F(3,36) = 2.924, p < .05, η² = .196), but post-hoc comparisons revealed no significant pairwise differences between interfaces, and no significant interaction effect between block and interface was present. Post-hoc analysis also revealed that performance did not plateau for any of the interfaces.

Figure 4.8: Mean completion times per trial for each interface by block in seconds, where each block contained 5 trials (N=40).

4.10.2 Level of Context

Counter to our hypotheses, level of context had no significant effect on any of the performance measures for H1 (F(4,36) = 1.380, p = .260, η² = .133) or H2 (F(4,36) = 1.480, p = .229, η² = .141). Figure 4.9 illustrates the trend for context level for the rubber sheet navigation interface with an overview. We can see that performance did not follow a strong U-shape trend as we had hypothesized, but rather plateaued at low levels of context, and led to increased completion times for trials as context level increased.

Figure 4.9: Mean completion time trend line by level of context in seconds for the rubber sheet navigation with overview interface (N=10).

Figure 4.10 illustrates the trend for context level for the pan and zoom interface with an overview. We can see that performance did not follow a U-shape trend as we had hypothesized, but slightly increased as context level increased.

Figure 4.10: Mean completion time trend line by level of context in seconds for the pan and zoom with overview interface (N=10).

4.10.3 Error Rate

On average, subjects committed 0.75 errors over the course of the experiment, for a mean error rate of 3.0%. There were no significant main or interaction effects of navigation or presence of overview on error rate.

4.10.4 Summary of Results

We summarize the results according to the experimental hypotheses:

R1: Context level did not significantly affect performance in pan and zoom interfaces.

R2: Context level did not significantly affect performance in rubber sheet navigation interfaces.

R3: Context level did not significantly affect performance in overviews for pan and zoom navigation interfaces.

R4: Context level did not significantly affect performance in overviews for rubber sheet navigation interfaces.

4.11 Discussion

This study raises several interesting issues with respect to the size of the overview and user strategy.

4.11.1 Size of Overview

Contrary to our hypotheses, the size of the overview in pan and zoom and rubber sheet navigation interfaces did not significantly affect user performance.
While Plaisant et al. [45] argued that the most usable overview sizes are task dependent, as the size of an overview affects how much information can be displayed in it as well as how easy it is to navigate, we found that the size of the overview did not significantly affect any of our performance measures. One reason that we did not observe an effect of overview size is that the range of overview sizes we evaluated may have been too small: we deliberately chose a narrow range in order to increase precision in determining the optimal overview size for each technique. If we had evaluated a larger range of overview sizes, we speculate that we would have seen the stronger and significant U-shaped trends that we hypothesized. We also observed, and confirmed through log data, that the addition of a second focus confounded navigation in overviews, since users could use one of the two focus windows as an overview, thus increasing the difficulty of our attempt to measure the effect of overview size. We also acknowledge that a lack of power may possibly explain the lack of a significant effect of overview size, as our results do indicate a large effect size. Based on this observation, it is possible that had we run more subjects we might have observed a significant effect of size of overview.

4.11.2 Strategy

Another reason that we did not observe an effect of overview size may be the highly variable strategies that users developed over the course of the experiment. We note that, without sufficient training, subjects developed a variety of strategies, leading to highly variable performance results. Our results also indicate that learning continued for a significant period of time even after training, meaning that we cannot separate out for analysis the results where users had reached a performance plateau. To investigate the effect of strategy further, we analyzed subject log files and grouped subjects by strategy. For the rubber sheet navigation interface with an overview, two distinct strategies emerged - reset and no-reset. Subjects who chose to reset the interface between parts of a trial performed significantly better than those who did not reset (F(1,48) = 4.825, p < .05, η² = .091). For the pan and zoom interface with an overview, two distinct strategies also emerged - single-overview and double-overview. Subjects who chose to use one of the two focus windows as a second overview performed significantly better than those who simply used the single overview (F(1,43) = 22.876, p < .001, η² = .347). This result, which shows that a much larger overview led to performance benefits, corroborates our earlier observation that a larger range of overview sizes may have resulted in a U-shaped performance curve.

Given these results, we decided to perform a second experiment to compare the effect of the presence of an overview in pan and zoom and rubber sheet navigation interfaces. This experiment is described in the next chapter.

Chapter 5

Study 2

Study 2 consists of a controlled experiment designed to evaluate the performance of pan and zoom and rubber sheet navigation techniques with and without an overview. As in Study 1, subjects were asked to solve a topological task in a large tree dataset, described in Chapter 3. This chapter describes the experiment and presents its results, with an emphasis on how the presence of an overview affected performance and user satisfaction.
5.1 Hypotheses

Our hypotheses were motivated by findings reported in the literature and the results of our pilot study. First, we expected rubber sheet navigation to perform better than pan and zoom because, as discussed in Chapter 2, Focus+Context approaches have been shown to perform better than pan and zoom interfaces for a variety of navigation tasks.

H1: Rubber sheet navigation interfaces perform better than pan and zoom interfaces independently of the presence or absence of an overview.

Second, we did not expect an overview to significantly improve the performance of rubber sheet navigation interfaces, because Focus+Context approaches by design attempt to provide the same contextual information as an overview, but in an integrated way.

H2: For rubber sheet navigation interfaces, the presence of an overview does not result in better performance.

Finally, we expected an overview would significantly improve the performance of pan and zoom interfaces, because most previous studies have shown that overviews decrease navigation time and help the user maintain orientation within a dataset.

H3: For pan and zoom interfaces, the presence of an overview results in better performance.

5.2 Task and Dataset

As in Study 1, the task used in the experiment was a topological task that required subjects to compare the topological distances between colored nodes in a large tree dataset and determine which of the distances was smaller. Both the task and the dataset are described in detail in Chapter 3.

5.3 Interfaces

Study 2 used the same interfaces from Study 1, with identical navigation methods and guaranteed visibility techniques. Given the results of Study 1, we removed multiple foci to simplify the interfaces, to ensure that no overview use was available in the no overview condition, and to encourage subjects to stick to a single strategy to complete tasks. The calculation of levels of context for this study was similar to that in Study 1, but reflected the use of a single focus in each interface, as illustrated in Figures 5.2 and 5.1. As in Study 1, each interface always provided a total of 600,000 pixels of information in all views.

Figure 5.1: Calculation of levels of context in Study 2 PZN interfaces. The dotted line indicates the boundary between focus and context regions, which is not visually demarcated in the interfaces. Levels of navigational and overview context are as in Figure 5.2.

Figure 5.2: Calculation of levels of context in Study 2 RSN interfaces. Level of navigational context is the fraction of the size of the peripheral context areas, C, to the total size of the detail view, F+C. Level of overview context is the fraction of the size of the overview, O, to the total size of all views, O+F+C.

5.3.1 Interface 1: Rubber sheet navigation

As illustrated in Figure 5.3, this interface is the same as Interface 1 from Study 1, but has only one focus area.

Figure 5.3: Interface 1: Rubber sheet navigation without overview. A zoom action has stretched a region to fill the focus area. Nodes outside this region are compressed in the periphery, and marked nodes remain visually salient. As opposed to Figure 4.4, there is only one focus.

5.3.2 Interface 2: Pan and zoom navigation

As illustrated in Figure 5.4, this interface is the same as Interface 2 from Study 1, but has only one focus area.
Figure 5.4: Interface 2: Pan and zoom navigation without overview. A zoom action has filled the extent of the detail view. Arcs inspired by Halo [4] indicate direction and distance to off-screen colored nodes. As opposed to Figure 4.5, there is only one focus.

5.3.3 Interface 3: Rubber sheet navigation with overview

As illustrated in Figure 5.5, this interface is the same as Interface 3 from Study 1, but has only one focus area.

Figure 5.5: Interface 3: Rubber sheet navigation with overview. A zoom action has stretched the region shown by the field of view box in the overview to fill the focus area of the detail view. As opposed to Figure 4.6, there is only one focus.

5.3.4 Interface 4: Pan and zoom navigation with overview

As illustrated in Figure 5.6, this interface is the same as Interface 4 from Study 1, but has only one focus area.

Figure 5.6: Interface 4: Pan and zoom navigation with overview. A zoom action has filled the extent of the detail view with the region shown by the field of view box in the overview. As opposed to Figure 4.7, there is only one focus.

5.4 Apparatus

As in Study 1, we conducted the study on two systems running Windows XP with Pentium 4 processors, 2.0 GB RAM, Nvidia GeForce2 video cards, and 19 inch monitors configured at a resolution of 1280x1024 pixels. The experimental software, including the interfaces, was fully automated and was coded in Java 1.4.2 and OpenGL, using the GL4Java bindings.

5.5 Participants

Forty subjects, consisting of 16 males and 24 females, between 18 and 39 years of age successfully completed the study and were each compensated $15 for their participation. All subjects were right-handed, had normal or corrected to normal vision, and had previous experience using a computer and a three button mouse. They were recruited through advertisements posted throughout the university campus and through an online participant scheduling system.

Originally, 44 subjects participated in the experiment. Two of the subjects were unable to follow the training instructions successfully, while two others followed the instructions but committed four or more errors (an error rate greater than 10%). These subjects were treated as outliers for the purpose of data analysis, leaving a total of 40 data points.

5.6 Experimental Design

The evaluation used a 2 (navigation, between subjects) by 2 (presence of overview, between subjects) by 7 (blocks, within subjects) design, where each block contained 5 trials. As opposed to our Study 1 design, this design used 7 blocks rather than 5 to ensure that a performance plateau would be found, and the presentation order of the blocks was randomized. Subjects were randomly assigned to each of the four interfaces. A between-subjects design was chosen due to the need for extensive training in order for subjects to effectively use each visualization interface.

5.7 Procedure

The experiment was designed to fit into a single 90 minute session. Subjects were first instructed on the use of the different navigation techniques afforded by the interface to which they had been randomly assigned. The experimenter then demonstrated training strategies specific to each interface, and asked the subject to repeat them. The training strategies were derived from our initial pilot experiments and the results of Study 1.
Initial piloting showed that dragging out long, thin horizontal selection areas improved task completion time in all interfaces. Since many of the paths between colored nodes were horizontal, this strategy enabled subjects to bring them rapidly into focus. However, the results of Study 1 indicated that this strategy alone was not sufficient, as the cases where paths were not horizontal led subjects to develop a variety of different strategies for each interface, with mixed results. Based on observation and log data, we determined the best performing strategy used by Study 1 subjects for each interface, and provided it as the training strategy during the Study 2 experiment.

All training strategies started with dragging out a long thin selection area along the horizontal path between the nodes in question. For the rubber sheet navigation interfaces, selecting a long thin horizontal area had the effect of stretching the dataset along the vertical axis. Subjects were then instructed to count nodes that became visually salient. Following this step, long thin horizontal and vertical selection areas could be dragged out to expand other compressed regions along the path. For the pan and zoom interfaces, selecting a long thin horizontal area had the effect of zooming the contents of the focus box to fill the entire view. Subjects were then instructed to count nodes that became visually salient. For the pan and zoom with overview interface, subjects were then instructed to slowly zoom out and add nodes as they appeared along the path up the tree. In both the pan and zoom and rubber sheet navigation interfaces with an overview, subjects were instructed how to use both the overview and detail views for navigation and counting nodes, but were not explicitly told to navigate in either view. Following the discovery of one of the two topological distances, subjects were instructed to reset the interface and continue using the same strategy to determine the second distance. This method was motivated by results of the pilot study, which revealed that subjects often spent more time navigating between the two halves of the task than completing the task itself.

After being shown the strategies, subjects were given a training block of 5 trials. For each of the first 2 trials, the experimenter demonstrated solving the question using the strategies and then asked the subject to repeat this solution. For the last 3 trials of the session, the subject solved the questions on their own, and the experimenter reminded the subject of the trained strategy as needed. At the end of the training session, subjects were given a one minute break. Subjects were then presented with 7 blocks, each containing 5 trials, for a total of 35 trials. All subjects were presented with an identical set of questions; the grouping of questions to block was predetermined, but the order of blocks was randomly generated for each subject. The blocks of questions were verified to be isomorphic in difficulty in the pilot study, as described in Chapter 3. The experimenter continued to observe the subject throughout the study, but never intervened. Subjects were given a one minute break between each block of questions.

At the end of the experiment, subjects completed a questionnaire, reproduced in Appendix B, which was used to collect information about their demographic background and previous computer usage.
Based on the results of Study 1, the questionnaires also included the NASA-TLX scales [18], a standardized instrument for assessing various dimensions of workload. Space was also provided for subjects to comment on their experiences with the interfaces and provide suggestions for improvement. Short informal interviews were conducted with some of the subjects based on their questionnaire responses.

5.8 Measures

Similar to Study 1, our performance measures were based on logged data and included task completion times, the number of discrete navigation actions of pan, zoom in, and zoom out, including the amount of time spent doing each action, reset actions, and errors. Self-reported measures were collected through the post-experiment questionnaire, reproduced in Appendix B. These included the NASA-TLX ratings, as we wanted to gather further subjective information regarding task workload, as well as ratings of how easy to use, easy to navigate, and enjoyable the interfaces were on a 5-point Likert scale.

5.9 Results

This section presents the results for both performance and self-reported measures of the experiment. Only the results for presence of overview are reported in detail. The detailed results pertaining to the evaluation of navigation technique are reported by Nekrasovski [37].

A series of ANOVAs was run to understand the effect of overview on the performance and self-reported measures. Prior to these analyses, outlier data lying more than 3 standard deviations from the means of each cell were removed from the analysis. The Greenhouse-Geisser adjustment was used for non-spherical data, and the Bonferroni adjustment for post-hoc comparisons. Along with statistical significance, we report partial eta-squared (η²), a measure of effect size, which is often more informative than statistical significance in applied human-computer interaction research [26]. To interpret this value, .01 is a small effect size, .06 is medium, and .14 is large [11].

5.9.1 Learning Effects

The overall results for mean completion times per trial are illustrated in Figure 5.7. As expected, performance improved as subjects progressed through the experiment, although the rate of improvement did vary among the interfaces, with a significant main effect of block (F(3.174,114.26) = 44.568, p < .001, η² = .553) and a significant interaction between block and navigation (F(3.176,114.35) = 3.721, p < .02, η² = .094).

Figure 5.7: Mean completion times per trial for each interface by block in seconds (N=40).

Separate one-way repeated measures ANOVAs were run for each of the interfaces to determine performance plateaus. Post-hoc pairwise comparisons showed no differences between blocks 5, 6, and 7 for any of the interfaces, indicating that, as opposed to the results of Study 1, performance had reached a plateau by the end of the experiment in all interfaces. Thus, for the remaining performance analyses, we focus exclusively on blocks 1 and 7, which represent rookie and adept user performance. This analysis enables us to examine whether any of our results differ after learning. For these analyses, 2 (navigation) by 2 (presence of overview) by 2 (block) ANOVAs were performed.
5.9.2 Navigation

Counter to our hypothesis H1, both our logged and self-reported measures showed that pan and zoom outperformed rubber sheet navigation. The detailed results, containing a complete description of results and discussion, are reported by Nekrasovski [37].

5.9.3 Presence of Overview

Presence of overview had no significant effect on any of the performance measures. This finding supports our hypothesis H2, but is counter to our hypothesis H3. The self-reported measures did, however, favor an overview. The combined results for completion times for block 7 are illustrated in Figure 5.8, and broken down by navigation technique in Figure 5.9. There was no interaction effect of overview on completion time (F(1,36) = .724, p > .4, η² = .02) nor was there an overall effect (F(1,36) = .086, p > .7, η² = .002). The sizes of both these effects were extremely small. Similar results were obtained for navigation actions (F(1,36) = .665, p > .4, η² = .018) and resets (F(1,36) = .056, p > .8, η² = .002). There was, however, a significant main effect of overview on the TLX physical demand measure (F(1,36) = 6.215, p < .02, η² = .147), with subjects reporting a lower physical demand for interfaces with an overview. Interfaces with an overview were also rated as significantly more enjoyable than those without an overview (F(1,36) = 4.643, p < .05, η² = .114), a finding consistent with results previously reported in the literature.

Figure 5.8: Boxplot of presence of overview vs. completion time per trial for block 7 (N=40). The line through each box represents the median. The lower and upper bounds of each box represent the 25th and 75th percentiles, while the range between the whiskers represents 95 percent of the data points.

Figure 5.9: No interaction was found between presence and absence of overview for both pan and zoom and rubber sheet navigation vs. completion time per trial for block 7 (N=40).

5.9.4 Error Rate

On average, subjects committed 1.6 errors over the course of the experiment, for a mean error rate of 4.7%. There were no significant main or interaction effects of navigation or presence of overview on error rate.

5.9.5 Summary of Results

We summarize the results according to the experimental hypotheses:

R1: Pan and zoom interfaces performed better than rubber sheet navigation interfaces in terms of completion times, navigation actions, and resets. Mental demand was also reported as lower in pan and zoom (see [37] for complete details).

R2: For rubber sheet navigation, having an overview made no significant difference in terms of completion times, navigation actions, or resets. Having an overview was, however, reported to reduce physical demand.

R3: Similarly, for pan and zoom, having an overview made no significant difference in terms of completion times, navigation actions, or resets. Having an overview was, however, reported to reduce physical demand.

5.10 Follow-up Investigation

Based on the mixed findings for the overview factor, we decided to run a small follow-up investigation to explore overviews in more detail.
To summarize the mixed findings: quantitative results indicated that the presence of an overview did not have a significant effect on performance, while self-reported measures indicated that overviews reduced physical effort and made the interface more enjoyable to use. We speculated that off-screen guaranteed visibility, described in Section 4.4.3, may eliminate the need for an overview. The follow-up investigation therefore compared 10 additional subjects, who used a modified Interface 4 without off-screen guaranteed visibility, to the 10 subjects from the Interface 4 condition in Study 2, with each interface maintaining sub-pixel visibility. The results of the follow-up investigation revealed no significant differences, in terms of completion times, navigation actions, and resets, between the two interfaces. The implications of Study 2 and this follow-up investigation are presented next.

5.11 Discussion

Our work raises several interesting issues with respect to the presence of overviews and guaranteed visibility.

5.11.1 Presence of Overview

Contrary to most previous findings in the literature, the presence or absence of an overview in pan and zoom interfaces did not significantly affect user performance. Based on observational data, we distinguish two primary patterns of overview use in the pan and zoom with overview interfaces examined in the controlled study and the follow-up investigation - glancing and interacting. While both types of overview use have a performance cost associated with them, we postulated that simply glancing at the overview for orientation is less costly than interacting with it. Based on observational and log data, subjects tended to adopt one of these patterns for the duration of the experiment. To investigate overviews further, we grouped and analyzed subject data based on patterns of overview use, but found no effect of overview use pattern on performance.

The lack of performance differences between the glancing and interacting patterns of use may also be explained by our choice of task. While our task was successful in requiring subjects to exercise the different navigation actions afforded by each interface, it was not designed to force a specific pattern of use of the overview. Subjects were therefore able to efficiently complete the task in both the controlled study and the follow-up investigation without interacting with the overview. However, we note that subjects in our study reported interfaces with an overview to be more enjoyable to use and less physically demanding. We speculate that, similar to the results of [2], subjects attempted to minimize memory use during the task. Thus, in conditions where the overview was absent, subjects were forced to acquire information incrementally during the task, rather than rely on the overview for instant access to this information. We also speculate that while our task did not force subjects to interact with the overview, it was successful in encouraging subjects to glance at the overview, resulting in subjective benefits but not performance benefits. A different task which requires heavier interaction with the overview, and where glancing will not suffice, is likely to show a performance benefit with an overview.
5.11.2 Guaranteed Visibility

The presence of guaranteed visibility across all interfaces used in the study may also explain the lack of effect of overview on the performance data. We speculate that the guaranteed visibility of colored nodes in the detail view rendered the overview less necessary, as users were not required to rely on the overview for orientation. In fact, the results of the follow-up investigation indicate that sub-pixel guaranteed visibility, as described in Section 4.4.3, on its own may perform just as well as having an overview, though further study is required to confirm this hypothesis.

While our results are contrary to most of those previously reported, our findings are consistent with those of Hornbaek et al. [20], who found that an interface with an overview was not significantly faster than one without an overview. Hornbaek et al. attributed this difference to the use of semantic zooming, which provided users with navigation cues similar to those provided by an overview. In the case of our study, we believe that the presence of guaranteed visibility provided similar navigation cues that rendered the overview less useful. We speculate that guaranteed visibility may therefore provide the same performance benefits as overviews in terms of navigation cues. However, we note that subjects in our study reported interfaces with overviews to be less physically demanding and more enjoyable in both pan and zoom and rubber sheet navigation interfaces, which is also consistent with the results of Hornbaek et al. Thus, while overviews may not be differentiable from alternatives such as guaranteed visibility in terms of performance, we believe that they may act as a cognitive cushion, which provides the user with a greater feeling of satisfaction and enjoyment, but does not lead to performance benefits. We therefore recommend the use of overviews to provide contextual information if screen real estate is not an issue. However, in interfaces where contextual information cannot be provided with an overview, techniques such as guaranteed visibility may prove acceptable substitutes.

Chapter 6

Conclusions and Future Work

The primary goal of the work presented in this thesis was to evaluate and compare the effect of adding an overview to pan and zoom and Focus+Context interfaces. Our studies were run to measure several aspects of the efficiency of pan and zoom and Focus+Context interfaces with an overview in the context of exploring large, hierarchical trees. While previous work has explored the benefits of overviews for pan and zoom interfaces, there has never been a controlled experiment which explored the potential benefits of adding an overview to Focus+Context interfaces. This work addresses that gap and provides results to strengthen the existing body of research on the qualitative benefits of overviews. Additionally, this research strengthens previous work which brought into question the quantitative benefits of overviews, and recommends guaranteed visibility as a possible substitute for overviews. The implications for interface design, including the potential tradeoffs between overviews and guaranteed visibility, will help to guide future research in the area.

6.1 Limitations

Our two studies were conducted as controlled lab experiments. With any lab experiment, there is a trade-off between realism and generalizability for increased precision [32]. While our studies aimed for high ecological validity, specifically in our choice of task and dataset, there are still several limitations that should
While our studies aimed for high ecological validity, specifically in our choice of task and dataset, there are still several limitations that should Chapter 6. Conclusions and Future Work 69 be discussed. The issue of generalizability arises for several reasons. First, out of the four tasks that we developed, we were only able to evaluate one within the scope of this work. We chose this task due to its relative complexity to the other tasks, its importance to phylogenetic analysis, and our belief that it would require subjects to perform multiple navigation actions to complete. Further research is required to determine if our results generalize to a wider class of topological tasks, as described in Chapter 3. Our choice of dataset also limits generalizability. Given the factors that we wished to examine in our studies, it was not possible to vary the choice of dataset within the scope of two experiments. We chose this dataset due to its topological complexity, which we believed would require subjects to perform a significant amount of navigation to solve a topological task. Again, further research is required to determine if the results of our studies generalize to a wider class of datasets, including n-ary trees. Furthermore, these experiments evaluated very specific Overview+Detail and Focus+Context interfaces. While the design and development of our inter-faces were guided by previous research and rigorous piloting, questions remain as to how generalizable these interfaces are to existing tools for exploring large datasets. In particular, as these are the first studies to combine an overview with a Focus+Context interface, specific design decisions had to be made with-out the support of prior guidelines in the literature. For this reason, it is possible that our implementation of an overview for a Focus+Context interface was not optimal. However, given that we are the first to combine overviews with Fo-cus+Context interfaces, this work should be considered an initial step towards exploring the potential benefits of combining overviews with Focus+Context interfaces, and characterizing the limitations of Focus+Context techniques. Chapter 6. Conclusions and Future Work 70 6.2 Conclusion In this thesis we have presented an evaluation of pan and zoom and Focus+Context interfaces with an overview. Our results indicate that the size of the overview did not affect performance, but the presence of an overview did impact the strategy users adopted. Moreover, the presence or absence of an overview also did not affect performance. Nevertheless, interfaces with overviews were found to be less physically demanding and more enjoyable. These mixed results may be explained by exploring the interaction between overviews and guaranteed visibility. Both techniques aim to provide the user with explicit information concerning regions outside the current field of view. However, guaranteed visibility aims to achieve this goal by integrating visual cues within the primary detail view, while an overview is inherently a separate view designed to provide similar information. We speculate that the presence of guaranteed visibility may eliminate the need for an overview for navigation purposes. However, the overview may still provide additional benefits, such as leading to a greater feeling of satisfaction and enjoyment, while using the interface. 
6.3 Future Work

In addition to extending our evaluation to different tasks, both topological and non-topological, as well as to different datasets, several possibilities for future work arise from our results.

6.3.1 Exploring Patterns of Navigation in Overviews

As discussed in Section 5.11.1, we observed two distinct patterns of overview use - glancing and interacting. These patterns need to be investigated more precisely through the use of eye tracking technology. Eye tracking has already been used successfully to examine navigation patterns in 2D and 3D visualizations [58] and Focus+Context interfaces [42]. An important next step would be to determine the extent to which users glance at an overview, and thereby clarify the benefits of overviews of different sizes.

6.3.2 Exploring the Relationship Between Overviews and Guaranteed Visibility

The combined findings from our studies and our follow-up investigation suggest that guaranteed visibility on its own may provide performance benefits equivalent to overviews in terms of navigation. An obvious next step in our work is to conduct a formal experiment to explore these different methods of providing contextual information. Such an investigation could examine the trade-offs between providing different types of guaranteed visibility, as discussed in Section 4.4.3, in both the overview and detail views of pan and zoom and Focus+Context interfaces.

Bibliography

[1] Christopher Ahlberg and Ben Shneiderman. Visual information seeking: tight coupling of dynamic query filters with starfield displays. In Proc. ACM CHI 1994, pages 313-317, 1994.

[2] D.H. Ballard, M.M. Hayhoe, and J.B. Pelz. Memory representations in natural tasks. Cognitive Neuroscience, 7(1):68-82, June 1995.

[3] Patrick Baudisch, Bongshin Lee, and Libby Hanna. Fishnet, a fisheye web browser with search term popouts: a comparative evaluation with overview and linear view. In Proc. AVI 2004, pages 133-140, 2004.

[4] Patrick Baudisch and Ruth Rosenholtz. Halo: a technique for visualizing off-screen locations. In Proc. ACM CHI 2003, pages 481-488, 2003.

[5] David Beard and John Walker. Navigational techniques to improve the display of large two-dimensional spaces. Behav. Info. Techn., 9(6):451-466, 1990.

[6] Benjamin B. Bederson, Aaron Clamage, Mary P. Czerwinski, and George G. Roberson. DateLens: A fisheye calendar interface for PDAs. ACM ToCHI, 11(1):90-119, March 2004.

[7] Eric A. Bier, Maureen C. Stone, Ken Pier, William Buxton, and Tony D. DeRose. Toolglass and magic lenses: the see-through interface. In Proc. ACM SIGGRAPH 1993, pages 73-80, 1993.

[8] Stuart Card, Jock Mackinlay, and Ben Shneiderman. Readings in Information Visualization: Using Vision to Think. Morgan-Kaufman, San Francisco, CA, 1999.

[9] Stuart Card and David Nation. Degree-of-interest trees: a component of an attention-reactive user interface. In Proc. AVI 2002, pages 231-245, 2002.

[10] Savrina F. Carrizo. Phylogenetic trees: an information visualization perspective. In Proc. Conference on Asia-Pacific Bioinformatics, pages 315-320, 2004.

[11] Jacob Cohen. Eta-squared and partial eta-squared in communication science. Human Communication Research, 28:473-490, 1973.

[12] Sam Donovan. Assessing tree thinking and its role in understanding evolution (poster). In Four Year College Section of the National Association of Biology Teachers Meeting, 2004.
[13] Jean Daniel Fekete and Catherine Plaisant. InfoVis contest 2003. http://www.cs.umd.edu/hcil/iv03contest/datasets.html. Retrieved October 5, 2005.

[14] G. W. Furnas. Generalized fisheye views. In Proc. ACM CHI 1986, pages 16-23, 1986.

[15] Carl Gutwin. Improving focus targeting in interactive fisheye views. In Proc. ACM CHI 2002, pages 267-274, 2002.

[16] Carl Gutwin and Chris Fedak. A comparison of fisheye lenses for interactive layout tasks. In Proc. Graphics Interface 2004, pages 213-220, 2004.

[17] Carl Gutwin and Amy Skopik. Fisheyes are good for large steering tasks. In Proc. ACM CHI 2003, pages 201-208, 2003.

[18] S.G. Hart and L.E. Staveland. Development of NASA-TLX Task Load Index: results of empirical and theoretical research. In Advances in psychology: human mental workload, pages 139-183. Elsevier Science, Amsterdam, North-Holland, 2000.

[19] Ivan Herman, Guy Melancon, and M. Scott Marshall. Graph visualization and navigation in information visualization: a survey. IEEE TVCG, 6(1):24-43, January 2000.

[20] Kasper Hornbaek, Benjamin B. Bederson, and Catherine Plaisant. Navigation patterns and usability of zoomable user interfaces with and without an overview. ACM ToCHI, 9(4):362-389, December 2002.

[21] Kasper Hornbaek and Erik Frojaer. Reading of electronic documents: the usability of linear, fisheye, and overview+detail interfaces. In Proc. ACM CHI 2001, pages 293-300, 2001.

[22] Susanne Jul and George Furnas. Critical zones in desert fog: aids to multiscale navigation. In Proc. ACM UIST 1998, pages 97-106, 1998.

[23] T. Alan Keahey and Edward L. Robertson. Nonlinear magnification fields. In Proc. IEEE InfoVis 1997, pages 51-59, 1997.

[24] Alfred Kobsa. User experiments with tree visualization systems. In Proc. IEEE InfoVis 2004, pages 9-16, 2004.

[25] John Lamping, Ramana Rao, and Peter Pirolli. A focus+context technique based on hyperbolic geometry for visualizing large hierarchies. In Proc. ACM CHI 1995, pages 401-408, 1995.

[26] T. Landaurer. Behavioral research methods in human-computer interaction. In Handbook of human computer interaction, chapter 9, pages 203-227. Elsevier Science, 1997.

[27] Keith Lau, Ronald A. Rensink, and Tamara Munzner. Perceptual invariance of nonlinear focus+context transformations. In Proc. ACM APGV 2004, pages 65-72, 2004.

[28] Bongshin Lee, Cynthia Sims Parr, and Dana Campbell. How users interact with biodiversity information using TaxonTree. In Proc. AVI 2004, pages 320-327, 2004.

[29] Y.K. Leung and M.D. Apperley. A review and taxonomy of distortion-oriented presentation techniques. ACM ToCHI, 1(2):126-160, June 1994.

[30] Jock D. Mackinlay, George G. Roberson, and Stuart K. Card. The perspective wall: detail and context smoothly integrated. In Proc. ACM CHI 1991, pages 173-176, 1991.

[31] Wayne P. Maddison and David R. Maddison. MacClade: Analysis of phylogeny and character evolution (User's manual). Sinauer Associates, Sunderland, MA, 1992.

[32] Joseph E. McGrath. Methodology matters: doing research in the behavioral and social sciences. In Human-computer interaction: toward the year 2000, pages 152-169. Morgan Kaufmann, San Francisco, CA, 1995.

[33] Microsoft Streets and Trips. http://www.microsoft.com/streets. Retrieved October 21, 2005.

[34] Kevin Mullet, Christopher Fry, and Diane Sano. On your marks, get set, browse! (the great CHI'97 browse off). In Proc. ACM CHI Extended Abstracts 1997, 1997.

[35] Tamara Munzner. Drawing large graphs with H3Viewer and site manager. In Proc. Graph Drawing 1998, Lecture Notes in Comp. Sci. 1547, pages 384-393. Springer-Verlag, 1998.
[36] Tamara Munzner, Francois Guimbretiere, Serdar Tasiran, Li Zhang, and Yunhong Zhou. TreeJuxtaposer: scalable tree comparison using focus+context with guaranteed visibility. In Proc. ACM SIGGRAPH 2003, pages 453-462, 2003.

[37] Dmitry Nekrasovski. An evaluation of pan&zoom and rubber sheet navigation. Master's thesis, Department of Computer Science, University of British Columbia, 2005.

[38] Dmitry Nekrasovski, Adam Bodnar, Joanna McGrenere, Francois Guimbretiere, and Tamara Munzner. An evaluation of pan&zoom and rubber sheet navigation with and without an overview. Proc. ACM CHI 2006, to appear, 2006.

[39] Christopher North, Ben Shneiderman, and Catherine Plaisant. User controlled overviews of an image library: A case study of the visible human. In Proc. ACM Digital Libraries 1996, pages 74-82, 1996.

[40] Olduvai project website. http://www.olduvai.sourceforge.net. Retrieved September 23, 2005.

[41] Ken Perlin and David Fox. Pad: An alternative approach to the computer interface. In Proc. ACM SIGGRAPH 1993, pages 57-64, 1993.

[42] Peter Pirolli, Stuart K. Card, and Mija M. Van Der Wege. Visual information foraging in a focus+context visualization. In Proc. ACM CHI 2001, pages 506-513, 2001.

[43] Catherine Plaisant. The challenge of information visualization evaluation. In Proc. AVI 2004, 2004.

[44] Catherine Plaisant. Information visualization and the challenge of universal usability. In Exploring Geovisualization, chapter 3, pages 53-82. Elsevier, Oxford, 2005.

[45] Catherine Plaisant, David Carr, and Ben Shneiderman. Image browsers: taxonomy, guidelines, and informal specifications. IEEE Software, 28(2):21-32, 1995.

[46] Catherine Plaisant, Jesse Grosjean, and Benjamin B. Bederson. SpaceTree: supporting exploration in large node link tree, design evolution and empirical evaluation. In Proc. IEEE InfoVis 2002, pages 57-65, 2002.

[47] Jef Raskin. The humane interface: new directions for designing interactive systems. Addison-Wesley, Reading, MA, 2000.

[48] George G. Robertson and Jock D. Mackinlay. The document lens. In Proc. ACM UIST 1993, pages 101-108, 1993.

[49] Ursula Rost and Erich Bornberg-Bauer. TreeWiz: Interactive exploration of huge trees. Bioinformatics, 18(1):109-114, 2002.

[50] Manojit Sarkar, Scott S. Snibbe, Oren J. Tversky, and Steven P. Reiss. Stretching the rubber sheet: a metaphor for viewing large layouts on small screens. In Proc. ACM UIST 1993, pages 81-91, 1993.

[51] W. Schafer and D. Bowman. A comparison of traditional and fisheye radar view techniques for spatial collaboration. In Proc. Graphics Interface 2003, pages 23-46, 2003.

[52] Doug Schaffer, Zhengping Zuo, Saul Greenberg, Lyn Bartram, John Dill, Shelli Dubs, and Mark Roseman. Navigating hierarchically clustered networks through fisheye and full-zoom methods. ACM ToCHI, 3(2):162-188, June 1996.

[53] Ben Shneiderman. Designing the User Interface. Addison-Wesley, Reading, MA, 1998.

[54] Amy Skopik and Carl Gutwin. Finding things in fisheyes: memorability in distorted spaces. In Proc. Graphics Interface 2003, pages 67-75, 2003.

[55] James Slack, Kristian Hildebrand, and Tamara Munzner. PRISAD: Partitioned rendering infrastructure for stable accordion drawing. In Proc. IEEE InfoVis 2005, pages 41-48, 2005.

[56] James Slack, Kristian Hildebrand, Tamara Munzner, and Katherine St. John. SequenceJuxtaposer: Fluid navigation for large-scale sequence comparison in context. In Proc. German Conference on Bioinformatics 2004, pages 37-42, 2004.
Appendix A: Study 1 Training Protocol

All interfaces

Thank you for your willingness to participate in our experiment. You will be helping us evaluate different techniques for visualizing large datasets. You will be asked to complete a series of tasks that involve determining relative distances in large trees. First, let's review some concepts that will help you to complete the tasks.

Present subjects with paper tests.

The task you will perform in this experiment consists of determining the topological distance between a series of marked nodes in the displayed tree, where topological distances are measured by the number of black squares between marked nodes. Remember from the tests that you just completed that topological distance will not equal geometric distance. We will now explore the features of the interface you will use.

RSN-Overview

This interface enables you to explore the dataset using a series of zooming and panning actions that use the metaphor of stretching a rubber sheet with its borders tacked down. The left mouse button will allow you to drag out a box, the contents of which will fill one of the RED or ORANGE focus boxes. The rest of the tree will then be squished around the focus box but will remain visible at all times.

Ask participant to try dragging out a box.

As you are dragging out a box, you may hold down the SHIFT key to indicate that you would like this box to be the new RED focus box, or hold down the CTRL key to indicate that you would like this box to be the new ORANGE focus box. Note also that the SHIFT key is above the CTRL key, just like the RED focus box is above the ORANGE focus box.

Ask participant to try dragging out a box using SHIFT and CTRL.

If you do not select either the SHIFT or CTRL key, the tool will choose which focus box to place the contents of your new box in, based on the proximity of your newly dragged-out box to the existing RED and ORANGE focus boxes. You can zoom out by dragging out a box which is larger than either of the colored focus boxes.

Ask participant to try zooming out.

The right mouse button will allow you to pan horizontally and vertically within the dataset using either horizontal or vertical drag motions, which will let you fine-tune your selection.

Ask participant to try panning.

PZN-Overview

This interface enables you to explore the dataset using two views which you can navigate through a series of pan and zoom actions.
Show subject paper illustration of two views.

The two detail views are independent of one another, so you can navigate in one without affecting the other. It is also possible to overlap the two detail views, and even to have one inside the other. The left mouse button will allow you to drag out a box which will become the new extent of your detail view.

Ask participant to try zooming in.

Once you are zoomed in, you may hold down the right mouse button and pan in any direction. You cannot pan if you are zoomed out entirely.

Ask participant to try panning.

Holding down the middle mouse button and dragging the mouse toward you will allow you to zoom out. As you zoom out, you may also drag the mouse in the opposite direction to zoom back in, but only to the extent that you first began to zoom out.

Ask participant to try zooming out.

If a marked node is not currently in view, an arc will appear at the border of the detail view, indicating the direction and distance from your current focus box to the marked node. The arc is part of a circular ring that surrounds one of the nodes which is currently off-screen. This ring is just large enough to reach the border region of the display. The colour of the arc indicates the colour of the marked node it represents. Once a marked node is visible on screen, the arc will disappear. No marks will appear in the overview window since marked nodes are always visible. Arcs are view dependent.
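The geometry behind this cue is simple: a ring is centred on the off-screen marked node and sized so that it just intrudes into the visible border region, and the slice of that ring falling inside the view is drawn as the arc. The sketch below shows one way such a ring could be sized; the function name, the intrusion parameter, and the coordinate conventions are illustrative assumptions, not taken from the study software.

```python
import math

def offscreen_arc(node_xy, view_min, view_max, intrusion=20.0):
    """Ring for an off-screen marked node (a sketch, not the study code).

    The ring is centred on the node and sized so that it just pokes
    `intrusion` pixels past the nearest viewport edge; the slice of the
    ring that falls inside the view is what gets drawn as the arc.
    Returns ((x, y), radius), or None if the node is already visible.
    """
    x, y = node_xy
    xmin, ymin = view_min
    xmax, ymax = view_max
    if xmin <= x <= xmax and ymin <= y <= ymax:
        return None  # node is on screen, no arc needed

    # Distance from the node to the closest point of the viewport rectangle.
    dx = max(xmin - x, 0.0, x - xmax)
    dy = max(ymin - y, 0.0, y - ymax)
    distance_to_view = math.hypot(dx, dy)

    # Just large enough to reach the border region of the display.
    return (x, y), distance_to_view + intrusion
```

Because the ring is centred on the node itself, a shallow arc implies a distant node and a tightly curved arc implies a nearby one, which is how a single arc can convey both direction and distance.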
RSN+Overview

This interface enables you to explore the dataset using a series of zooming and panning actions that use the metaphor of stretching a rubber sheet with its borders tacked down. A separate window will provide you with an overview of the dataset, and will not be distorted. The left mouse button will allow you to drag out a box, the contents of which will fill one of the RED or ORANGE focus boxes. The rest of the tree will then be squished around the focus box but will remain visible at all times.

Ask participant to try dragging out a box.

As you are dragging out a box, you may hold down the SHIFT key to indicate that you would like this box to be the new RED focus box, or hold down the CTRL key to indicate that you would like this box to be the new ORANGE focus box. Note also that the SHIFT key is above the CTRL key, just like the RED focus box is above the ORANGE focus box.

Ask participant to try dragging out a box using SHIFT and CTRL.

If you do not select either the SHIFT or CTRL key, the tool will choose which focus box to place the contents of your new box in, based on the proximity of your newly dragged-out box to the existing RED and ORANGE focus boxes. You can zoom out by dragging out a box which is larger than either of the colored focus boxes.

Ask participant to try zooming out.

The right mouse button will allow you to pan horizontally and vertically within the dataset using either horizontal or vertical drag motions, which will let you fine-tune your selection.

Ask participant to try panning.

A separate smaller window will provide you with an overview of the dataset, and indicate where in the dataset your current focus boxes are. In the overview, the left mouse button will allow you to drag out a box, the contents of which will fill one of the RED or ORANGE focus boxes. As you are dragging out a box, you may hold down the SHIFT key to indicate that you would like this box to be the new RED focus box, or hold down the CTRL key to indicate that you would like this box to be the new ORANGE focus box.

Ask participant to try dragging out a box in the overview using SHIFT and CTRL.

You may also hold down the right mouse button while inside one of the boxes representing the location of your focus box and move it to wherever you like within the bounds of the overview using a series of drag actions.

Ask participant to try panning in the overview.

PZN+Overview

This interface enables you to explore the dataset using two detail views which you can navigate through a series of pan and zoom actions.

Show subject paper illustration of two views.

The two detail views are independent of one another, so you can navigate in one without affecting the other. It is also possible to overlap the two detail views, and even to have one inside the other. The left mouse button will allow you to drag out a box which will become the new extent of your detail view.

Ask participant to try zooming in a detail view.

Once you are zoomed in, you may hold down the right mouse button and pan in any direction. You cannot pan if you are zoomed out entirely.

Ask participant to try panning in a detail view.

Holding down the middle mouse button and dragging the mouse toward you will allow you to zoom out. As you zoom out, you may also drag the mouse in the opposite direction to zoom back in, but only to the extent that you first began to zoom out.

Ask participant to try zooming out in a detail view.

A separate smaller window will provide you with an overview of the dataset, and indicate where in the dataset your current detail views are. In the overview, the left mouse button will allow you to drag out a box which will become the new extent of your detail view. As you are dragging out a box, you may hold down the SHIFT key to indicate that you would like this box to be the new RED detail view, or hold down the CTRL key to indicate that you would like this box to be the new ORANGE detail view.

Ask participant to try zooming in the overview using SHIFT and CTRL.

Note also that the SHIFT key is above the CTRL key, just like the RED detail view is above the ORANGE detail view. You may also hold down the right mouse button while inside one of the boxes representing the location of your detail view and move it to wherever you like within the bounds of the overview using a series of drag actions. The modifier keys only work in the overview window.

Ask participant to try panning in the overview using SHIFT and CTRL.

If a marked node is not currently in view, an arc will appear at the border of the detail view, indicating the direction and distance from your current focus box to the marked node. The arc is part of a circular ring that surrounds one of the nodes which is currently off-screen. This ring is just large enough to reach the border region of the display. The colour of the arc indicates the colour of the marked node it represents. Once a marked node is visible on screen, the arc will disappear. No marks will appear in the overview window since marked nodes are always visible. Arcs are view dependent.

All interfaces

Do you have any questions about this interface?

The R key can be pressed to reset your current view to its initial startup state. The ESC key can be pressed during a box drag action to cancel your current drag.
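The "drag out a box to zoom in" interaction used by the pan and zoom views amounts to converting the dragged screen-space rectangle into world coordinates under the current viewport and adopting it as the new view extent. A minimal sketch of that conversion follows; the function and parameter names are hypothetical, and a real implementation would also need to handle aspect ratio and animated transitions, which this sketch omits.

```python
def screen_box_to_world(view, screen_box, screen_size):
    """Convert a dragged screen-space box into world coordinates.

    `view` is the current world-space viewport (x0, y0, x1, y1),
    `screen_box` is ((px0, py0), (px1, py1)) in pixels, and `screen_size`
    is (width, height). The returned rectangle can be adopted as the new
    viewport to realize "drag out a box to zoom in". Assumes y grows in
    the same direction in both spaces.
    """
    x0, y0, x1, y1 = view
    (px0, py0), (px1, py1) = screen_box
    width, height = screen_size

    def to_world_x(px):
        return x0 + (x1 - x0) * px / width

    def to_world_y(py):
        return y0 + (y1 - y0) * py / height

    return (to_world_x(min(px0, px1)), to_world_y(min(py0, py1)),
            to_world_x(max(px0, px1)), to_world_y(max(py0, py1)))
```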
A question panel at the top of the screen will display a question that will require you to use the interface to solve. The question will ask you to compare the topological distances between marked nodes in the tree. The topological distances between marked nodes will never be equal. The question will never change, but the location of the marked nodes will, so you will be required to navigate and explore different areas within the large tree to answer the question correctly. When you have discovered the answer, we ask that you select the appropriate check box and click on the submit button. This will allow you to move on to the next question. An instruction panel at the left of the interface will serve as a reminder of interface-specific controls.

We will now ask you to complete a series of training tasks using this interface. There is no time limit for completing these tasks; we want you to take as much time as you need to ensure that your answer is correct. We want to emphasize that we are evaluating the system and not your ability to use it. For this reason, you will receive no feedback as to whether your answers for the tasks were correct.

A good strategy for solving the tasks is to draw out long, thin horizontal boxes. This will help you to see the larger tree in more detail.

Appendix B: Study 2 Training Protocol

All interfaces

Thank you for your willingness to participate in our experiment. You will be helping us evaluate different techniques for visualizing large datasets. You will be asked to complete a series of tasks that involve determining relative distances in large trees. First, let's review some concepts that will help you to complete the tasks.

Present subjects with paper tests.

The task you will perform in this experiment consists of determining the topological distance between a series of marked nodes in the displayed tree, where topological distances are measured by the number of black squares between marked nodes. Remember from the tests that you just completed that topological distance will not equal geometric distance. We will now explore the features of the interface you will use.

RSN-Overview

This interface enables you to explore the dataset using a view which you can navigate using pan and zoom actions. The view uses the metaphor of stretching and squishing a rubber sheet with its borders tacked down. Note that the colored nodes are visible at all times, even if they are squished to the edges of the view. The left mouse button will allow you to drag out a box, the contents of which will fill the red box. The rest of the tree will then be squished around the red box but will remain visible at all times.

Ask participant to try dragging out a box.

You can zoom out by dragging out a box which is larger than the red box.

Ask participant to try zooming out.

The right mouse button will allow you to pan horizontally and vertically within the view using either horizontal or vertical drag motions, which will let you fine-tune your selection.

Ask participant to try panning.

You can use the colored nodes as visual anchors to help maintain orientation while performing navigation actions. As you zoom or pan, you can monitor the location and size of the colored nodes, which will give you an idea of what path to follow and how much farther you have to go.
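The rubber-sheet behaviour just described, in which the dragged-out region stretches to fill the focus box while everything else is squished toward the edges but never vanishes, can be illustrated with a one-dimensional piecewise-linear mapping. The sketch below is a toy version for intuition only, with assumed parameter names; the implementation actually used in the studies is considerably more sophisticated.

```python
def rubber_sheet_1d(t, focus=(0.4, 0.6), focus_share=0.7):
    """Toy 1D 'rubber sheet' mapping from world [0, 1] to screen [0, 1].

    The world interval `focus` is stretched to occupy `focus_share` of the
    screen; the two context regions are squished into the remaining space
    but never collapse to nothing, so everything stays visible.
    """
    f0, f1 = focus
    context = f0 + (1.0 - f1)
    if context == 0.0:          # focus already covers the whole world
        return t
    # Split the leftover screen space between the two context regions
    # in proportion to how much of the world each one covers.
    left_share = (1.0 - focus_share) * f0 / context
    right_share = (1.0 - focus_share) - left_share
    if t <= f0:
        return (left_share * t / f0) if f0 > 0.0 else 0.0
    if t >= f1:
        return (1.0 - right_share * (1.0 - t) / (1.0 - f1)) if f1 < 1.0 else 1.0
    return left_share + focus_share * (t - f0) / (f1 - f0)

# The focus region keeps 70% of the screen; context is squished but visible:
# rubber_sheet_1d(0.5) -> 0.5, rubber_sheet_1d(0.1) -> 0.0375
```

Applying such a mapping independently in x and y gives the stretch-and-squish effect while every node keeps a nonzero screen position, which is the essence of keeping the marked nodes visible at all times.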
PZN-Overview

This interface enables you to explore the dataset using a view which you can navigate using pan and zoom actions. The left mouse button will allow you to drag out a box, the contents of which will then zoom to fill the view completely.

Ask participant to try zooming in.

Once you are zoomed in, you may hold down the right mouse button and pan in any direction. You cannot pan if you are zoomed out entirely.

Ask participant to try panning.

Holding down the middle mouse button and dragging the mouse toward you will allow you to zoom out. As you zoom out, you may also drag the mouse in the opposite direction to zoom back in, but only to the extent that you first began to zoom out.

Ask participant to try zooming out.

If a marked node is not currently in view, a colored arc will appear at the border of the detail view, indicating the direction and distance from your current focus box to the marked node. The arc is part of a circular ring that surrounds any marked node which is currently off-screen. The color of the arc indicates the color of the marked node it represents. Once a marked node is visible on screen, the arc will disappear. You can use the arcs as visual anchors to help maintain orientation of marked nodes while performing navigation actions. As you zoom out or pan, you can monitor the shape and size of the arc, which will give you an idea of what path to follow and how much farther you have to go.

RSN+Overview

This interface enables you to explore the dataset using two views which you can navigate through using pan and zoom actions. The larger view will display detailed information about parts of the dataset. This view uses the metaphor of stretching and squishing a rubber sheet with its borders tacked down. Note that the colored nodes are visible at all times, even if they are squished to the edges of this view. The smaller view will provide you with an overview of the dataset, and indicate where in the dataset the detail view is at any given time. This view does not use the rubber sheet metaphor.

The left mouse button will allow you to drag out a box, the contents of which will fill the red box. The rest of the tree will then be squished around the red box but will remain visible at all times.

Ask participant to try zooming in within the detail view.

You can zoom out by dragging out a box which is larger than the red box.

Ask participant to try zooming out in the detail view.

The right mouse button will allow you to pan horizontally and vertically within the view using either horizontal or vertical drag motions, which will let you fine-tune your selection.

Ask participant to try panning in the detail view.

In the smaller view, the left mouse button will allow you to zoom into an area by dragging out a box, which will become the new contents of the red box in the larger view.

Ask participant to try zooming in the overview.

You can also hold down the right mouse button while inside the red box in the smaller view, and move it within the view using a series of drag actions.

Ask participant to try panning in the overview.

You can use the colored nodes as visual anchors to help maintain orientation while performing navigation actions. As you zoom or pan, you can monitor the location and size of the colored nodes, which will give you an idea of what path to follow and how much farther you have to go.
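The coordination between the smaller overview and the larger detail view follows a simple pattern: the overview always shows the entire dataset, a box dragged in it becomes the detail view's new extent, and dragging the field-of-view box translates that extent. The sketch below outlines this linkage with assumed names and data structures; it is not the study code, and it leaves out rendering and the rubber-sheet distortion of the detail view.

```python
class LinkedViews:
    """Toy overview + detail coordination (illustrative names only).

    The overview always shows the whole dataset; a box dragged in it
    becomes the detail view's world-space extent, and dragging the
    field-of-view box translates that extent, clamped to the data.
    """

    def __init__(self, world_bounds):
        self.world = world_bounds      # (x0, y0, x1, y1), fixed
        self.detail = world_bounds     # current extent of the detail view

    def zoom_from_overview(self, overview_box):
        """A box dragged out in the overview becomes the detail extent."""
        self.detail = overview_box

    def pan_from_overview(self, dx, dy):
        """Dragging the field-of-view box moves the detail view."""
        x0, y0, x1, y1 = self.detail
        wx0, wy0, wx1, wy1 = self.world
        dx = max(wx0 - x0, min(dx, wx1 - x1))   # keep the box inside
        dy = max(wy0 - y0, min(dy, wy1 - y1))   # the dataset bounds
        self.detail = (x0 + dx, y0 + dy, x1 + dx, y1 + dy)


# Example: zoom to a region via the overview, then nudge it to the right.
views = LinkedViews(world_bounds=(0, 0, 1000, 1000))
views.zoom_from_overview((200, 300, 400, 450))
views.pan_from_overview(dx=50, dy=0)
```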
PZN+Overview

This interface enables you to explore the dataset using two views which you can navigate through using pan and zoom actions. The larger view will display detailed information about parts of the dataset. The smaller view will provide you with an overview of the dataset, and indicate where in the dataset the detail view is at any given time.

The left mouse button will allow you to drag out a box, the contents of which will then zoom to fill the larger view completely.

Ask participant to try zooming in within the detail view.

Once you are zoomed in, you may hold down the right mouse button and pan in any direction. You cannot pan if you are zoomed out entirely.

Ask participant to try panning in the detail view.

Holding down the middle mouse button and dragging the mouse toward you will allow you to zoom out. As you zoom out, you may also drag the mouse in the opposite direction to zoom back in, but only to the extent that you first began to zoom out.

Ask participant to try zooming out in the detail view.

In the smaller view, the left mouse button will allow you to zoom into an area by dragging out a box, which will become the new extent of your detail view.

Ask participant to try zooming in the overview.

You can also hold down the right mouse button while inside the red box in the smaller view, and move it within the view using a series of drag actions.

Ask participant to try panning in the overview.

If a marked node is not currently in view, a colored arc will appear at the border of the detail view, indicating the direction and distance from your current focus box to the marked node. The arc is part of a circular ring that surrounds any marked node which is currently off-screen. The color of the arc indicates the color of the marked node it represents. Once a marked node is visible on screen, the arc will disappear. You can use the arcs as visual anchors to help maintain orientation of marked nodes while performing navigation actions. As you zoom out or pan, you can monitor the shape and size of the arc, which will give you an idea of what path to follow and how much farther you have to go.

All interfaces

Do you have any questions about this interface?

The R key can be pressed to reset your current view to its initial startup state. The ESC key can be pressed during a box drag action to cancel your current drag. All the controls I just showed you are also listed at the left of the window in case you need a reminder.

At the top of the window is the task you will perform in this experiment. You will need to determine whether the purple node is topologically closer to the blue node or the green node in the tree. The task will never change, but the location of the marked nodes will change with each task. You cannot skip or go back to previously answered questions.

Note that the topological distances to the blue node and the green node will never be equal, but they may be close. If it seems as though they are equal, perform more navigation, and you will discover that they are different from each other. Note that there is only one path between any two nodes in the tree.

You can use this pen/pencil and sheet of paper to write down topological distances between nodes so that you don't have to remember them as you are performing the task. When you are ready, select the appropriate answer and click on the submit button. This will allow you to move on to the next question.

We want to emphasize that we are evaluating the system and not your ability to use it. For this reason, you will receive no indication of whether your answer is correct.
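The task itself reduces to comparing the lengths of two paths in a tree: since there is exactly one path between any two nodes, the distance can be found by walking parent pointers up to the lowest common ancestor. The sketch below shows one straightforward way to do this; the parent-map representation and the edge-counting convention are assumptions for illustration and may not match the exact "count of black squares" definition used to score the study.

```python
def path_between(a, b, parent):
    """Nodes on the unique tree path from a to b (inclusive).

    `parent` maps each node to its parent, with parent[root] = None.
    """
    ancestors = []
    node = a
    while node is not None:
        ancestors.append(node)
        node = parent[node]
    seen = set(ancestors)

    tail = []
    node = b
    while node not in seen:          # climb until we hit a's ancestor chain
        tail.append(node)
        node = parent[node]
    lca = node                       # lowest common ancestor of a and b

    return ancestors[:ancestors.index(lca) + 1] + list(reversed(tail))


def topological_distance(a, b, parent):
    """Number of edges on the unique path between a and b."""
    return len(path_between(a, b, parent)) - 1


# Tiny example: r is the root, with children a and b; c is a child of a.
parent = {"r": None, "a": "r", "b": "r", "c": "a"}
assert topological_distance("c", "b", parent) == 3   # path c-a-r-b
```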
There is no time limit for completing these tasks. Take as much time as you need to ensure that your answer is correct, but do work as efficiently as you can.

RSN-Overview

A good strategy for using this interface is to draw out long thin boxes. This will help you to see the larger tree in more detail. It's often helpful to draw long horizontal boxes to zoom into the details of the dataset, and to draw long vertical boxes to expand areas that are squished vertically.

Demonstrate this, then ask participant to do it.

Another useful strategy is to reset the interface when you have found one of the topological distances before you move on to another distance.

Demonstrate this, then ask participant to do it.

PZN-Overview

A good strategy for using this interface is to draw out long thin boxes. This will help you to see the larger tree in more detail.

Demonstrate this, then ask participant to do it.

Once you have zoomed in to the area around either the blue or the green node, you can count the number of nodes on the path that are close to it. Then you can slowly zoom out and, as you see more nodes on the path to the purple node, add them to your count.

Demonstrate this, then ask participant to do it.

Additionally, you can reset the interface when you have found one of the topological distances before you move on to another distance.

RSN+Overview

A good strategy for using this interface is to draw out long thin boxes. This will help you to see the larger tree in more detail. It's often helpful to draw long horizontal boxes to zoom into the details of the dataset, and to draw long vertical boxes to expand areas that are squished vertically.

Demonstrate this, then ask participant to do it.

Another useful strategy is to first zoom in to the area around either the blue or the green node using the small view. Then you can use either view to explore the path to the purple node. Note that you can count nodes along the path in either view. If you need to make small adjustments, you can pan; for larger movements, you can zoom in either view.

Demonstrate this, then ask participant to do it.

You can also reset the interface when you have found one of the topological distances before you move on to another distance.

Demonstrate this, then ask participant to do it.

We strongly suggest you use these strategies as you are answering the questions.

PZN+Overview

A good strategy for using this interface is to draw out long thin boxes. This will help you to see the larger tree in more detail.

Demonstrate this, then ask participant to do it.

Another useful strategy is to first zoom in to the area around either the blue or the green node using the small view. Then you can use either view to explore the path to the purple node. Note that you can count nodes along the path in either view. If you need to make small adjustments, you can pan; for larger movements, you can zoom in either view.

Demonstrate this, then ask participant to do it.

Additionally, you can reset the interface when you have found one of the topological distances before you move on to another distance.

Demonstrate this, then ask participant to do it.

All interfaces

We strongly suggest you use these strategies as you are answering the questions.

Appendix C: Study 1 Questionnaires
The University of British Columbia
Experimental Questionnaire: Evaluation of Information Visualization Techniques
Interface # 1    Subject #

Part 1

1. Age Group: O 19 and under  O 20-29  O 30-39  O 40-49  O 50+
2. Gender: O Male  O Female
3. Education: O Some high school  O Completed high school  O Some post-secondary education  O Completed undergraduate degree  O Some graduate or professional school  O Completed postgraduate degree
4. Computer Usage (hours per week): O 0-10  O 10-20  O 20-30  O 30-40  O 40-50  O 50+

Part 2

With respect to the visualization you worked with,

a) please indicate the extent to which you agree or disagree with the following statements (each rated on a five-point scale: SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree):

I found this visualization to be efficient for completing the tasks.
Navigating through the data was easy to do.
Locating coloured nodes was easy.
I found this visualization to be frustrating.
Comparing topological distances between nodes was easy.
I found it easy to get lost in this visualization.
Using two coloured focus boxes helped me to complete the task.
Being able to see compressed coloured nodes around the edges of the view made the task easier.
I enjoyed using this visualization.

b) What particular aspect(s) of this visualization did you like?

c) What particular aspect(s) of this visualization did you dislike?

d) Please use this space to describe/illustrate any alternative strategies (other than those you were shown at the beginning of the experiment) that you believe would have worked better for you.

e) Please use this space to make any other comments about the experiment or the visualization.

Thank you for your time!

The University of British Columbia
Experimental Questionnaire: Evaluation of Information Visualization Techniques
Interface # 2    Subject #

Part 1

1. Age Group: O 19 and under  O 20-29  O 30-39  O 40-49  O 50+
2. Gender: O Male  O Female
3. Education: O Some high school  O Completed high school  O Some post-secondary education  O Completed undergraduate degree  O Some graduate or professional school  O Completed postgraduate degree
4. Computer Usage (hours per week): O 0-10  O 10-20  O 20-30  O 30-40  O 40-50  O 50+

Part 2

With respect to the visualization you worked with,

a) please indicate the extent to which you agree or disagree with the following statements (each rated on a five-point scale: SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree):

I found this visualization to be efficient for completing the tasks.
Navigating through the data was easy to do.
Locating coloured nodes was easy.
I found this visualization to be frustrating.
Comparing topological distances between nodes was easy.
I found it easy to get lost in this visualization.
Using two coloured focus boxes helped me to complete the task.
The coloured arcs made navigation easier.
I enjoyed using this visualization.

b) What particular aspect(s) of this visualization did you like?

c) What particular aspect(s) of this visualization did you dislike?

d) Please use this space to describe/illustrate any alternative strategies (other than those you were shown at the beginning of the experiment) that you believe would have worked better for you.

e) Please use this space to make any other comments about the experiment or the visualization.

Thank you for your time!

The University of British Columbia
Experimental Questionnaire: Evaluation of Information Visualization Techniques
Interface # 3    Subject #

Part 1

1. Age Group: O 19 and under  O 20-29  O 30-39  O 40-49  O 50+
2. Gender: O Male  O Female
3. Education: O Some high school  O Completed high school  O Some post-secondary education  O Completed undergraduate degree  O Some graduate or professional school  O Completed postgraduate degree
4. Computer Usage (hours per week): O 0-10  O 10-20  O 20-30  O 30-40  O 40-50  O 50+

Part 2

With respect to the visualization you worked with,

a) please indicate the extent to which you agree or disagree with the following statements (each rated on a five-point scale: SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree):

I found this visualization to be efficient for completing the tasks.
Navigating through the data was easy to do.
Locating coloured nodes was easy.
I found this visualization to be frustrating.
Comparing topological distances between nodes was easy.
I found it easy to get lost in this visualization.
The presence of the smaller view made the task easier.
Using two coloured focus boxes helped me to complete the task.
Being able to see compressed coloured nodes around the edges of the view made the task easier.
I enjoyed using this visualization.

b) What particular aspect(s) of this visualization did you like?

c) What particular aspect(s) of this visualization did you dislike?

d) Please use this space to describe/illustrate any alternative strategies (other than those you were shown at the beginning of the experiment) that you believe would have worked better for you.

e) Please use this space to make any other comments about the experiment or the visualization.

Thank you for your time!

The University of British Columbia
Experimental Questionnaire: Evaluation of Information Visualization Techniques
Interface # 4    Subject #

Part 1

1. Age Group: O 19 and under  O 20-29  O 30-39  O 40-49  O 50+
2. Gender: O Male  O Female
3. Education: O Some high school  O Completed high school  O Some post-secondary education  O Completed undergraduate degree  O Some graduate or professional school  O Completed postgraduate degree
4. Computer Usage (hours per week): O 0-10  O 10-20  O 20-30  O 30-40  O 40-50  O 50+
Part 2

With respect to the visualization you worked with,

a) please indicate the extent to which you agree or disagree with the following statements (each rated on a five-point scale: SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree):

I found this visualization to be efficient for completing the tasks.
Navigating through the data was easy to do.
Locating coloured nodes was easy.
I found this visualization to be frustrating.
Comparing topological distances between nodes was easy.
I found it easy to get lost in this visualization.
The presence of the smaller view made the task easier.
Using two coloured focus boxes helped me to complete the task.
The coloured arcs made navigation easier.
I enjoyed using this visualization.

Appendix D: Study 2 Questionnaires

The University of British Columbia
Experimental Questionnaire: Evaluation of Information Visualization Techniques
Interface # 1    Subject #

Part 1

1. Age Group: O 19 and under  O 20-29  O 30-39  O 40-49  O 50+
2. Gender: O Male  O Female
3. Education: O Some high school  O Completed high school  O Some post-secondary education  O Completed undergraduate degree  O Some graduate or professional school  O Completed postgraduate degree
4. Computer Usage (hours per week): O 0-10  O 10-20  O 20-30  O 30-40  O 40-50  O 50+

Part 2

With respect to the visualization you worked with,

a) please indicate the extent to which you agree or disagree with the following statements (each rated on a five-point scale: SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree):

I found this visualization to be efficient for completing the tasks.
Navigating through the data was easy to do.
Locating coloured nodes was easy.
I found this visualization to be frustrating.
Comparing topological distances between nodes was easy.
I found it easy to get lost.
Being able to see compressed coloured nodes around the edges of the view made the task easier.
I enjoyed using this visualization.

b) With respect to the visualization you worked with, please answer the following questions by marking an 'X' along the scale beside the corresponding question.

MENTAL DEMAND (Low to High): How much mental and perceptual activity was required to complete the task (e.g., looking, searching, thinking, deciding, calculating, remembering, etc.)?
PHYSICAL DEMAND (Low to High): How much physical activity was required to complete the task (e.g., moving the mouse, dragging, clicking, pressing keys, etc.)?
TEMPORAL DEMAND (Low to High): How much time pressure did you feel due to the rate or pace at which the tasks or task elements occurred?
EFFORT (Low to High): How hard did you have to work (mentally and physically) to accomplish your level of performance?
PERFORMANCE (Good to Poor): How successful do you think you were in accomplishing the goals of the task set by the experimenter (or yourself)?
FRUSTRATION (Low to High): How insecure, discouraged, irritated, stressed, and annoyed versus secure, gratified, content, relaxed, and complacent did you feel during the task?

c) What particular aspect(s) of this visualization did you like?

d) What particular aspect(s) of this visualization did you dislike?

e) Please use this space to describe/illustrate any alternative strategies (other than those you were shown at the beginning of the experiment) that you believe would have worked better for you.

f) Please use this space to make any other comments about the experiment or the visualization.

Thank you for your time!

The University of British Columbia
Experimental Questionnaire: Evaluation of Information Visualization Techniques
Interface # 2    Subject #

Part 1

1. Age Group: O 19 and under  O 20-29  O 30-39  O 40-49  O 50+
2. Gender: O Male  O Female
3. Education: O Some high school  O Completed high school  O Some post-secondary education  O Completed undergraduate degree  O Some graduate or professional school  O Completed postgraduate degree
4. Computer Usage (hours per week): O 0-10  O 10-20  O 20-30  O 30-40  O 40-50  O 50+

Part 2

With respect to the visualization you worked with,

a) please indicate the extent to which you agree or disagree with the following statements (each rated on a five-point scale: SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree):

I found this visualization to be efficient for completing the tasks.
Navigating through the data was easy to do.
Locating coloured nodes was easy.
I found this visualization to be frustrating.
Comparing topological distances between nodes was easy.
I found it easy to get lost.
The coloured arcs made navigation easier.
I enjoyed using this visualization.

b) With respect to the visualization you worked with, please answer the following questions by marking an 'X' along the scale beside the corresponding question.

MENTAL DEMAND (Low to High): How much mental and perceptual activity was required to complete the task (e.g., looking, searching, thinking, deciding, calculating, remembering, etc.)?
PHYSICAL DEMAND (Low to High): How much physical activity was required to complete the task (e.g., moving the mouse, dragging, clicking, pressing keys, etc.)?
TEMPORAL DEMAND (Low to High): How much time pressure did you feel due to the rate or pace at which the tasks or task elements occurred?
EFFORT (Low to High): How hard did you have to work (mentally and physically) to accomplish your level of performance?
PERFORMANCE (Good to Poor): How successful do you think you were in accomplishing the goals of the task set by the experimenter (or yourself)?
FRUSTRATION (Low to High): How insecure, discouraged, irritated, stressed, and annoyed versus secure, gratified, content, relaxed, and complacent did you feel during the task?

c) What particular aspect(s) of this visualization did you like?

d) What particular aspect(s) of this visualization did you dislike?
e) Please use this space to describe/illustrate any alternative strategies (other than those you were shown at the beginning of the experiment) that you believe would have worked better for you.

f) Please use this space to make any other comments about the experiment or the visualization.

Thank you for your time!

The University of British Columbia
Experimental Questionnaire: Evaluation of Information Visualization Techniques
Interface # 3    Subject #

Part 1

1. Age Group: O 19 and under  O 20-29  O 30-39  O 40-49  O 50+
2. Gender: O Male  O Female
3. Education: O Some high school  O Completed high school  O Some post-secondary education  O Completed undergraduate degree  O Some graduate or professional school  O Completed postgraduate degree
4. Computer Usage (hours per week): O 0-10  O 10-20  O 20-30  O 30-40  O 40-50  O 50+

Part 2

With respect to the visualization you worked with,

a) please indicate the extent to which you agree or disagree with the following statements (each rated on a five-point scale: SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree):

I found this visualization to be efficient for completing the tasks.
Navigating through the data was easy to do.
Locating coloured nodes was easy.
I found this visualization to be frustrating.
Comparing topological distances between nodes was easy.
I found it easy to get lost.
The presence of the smaller view made the task easier.
Being able to see compressed coloured nodes around the edges of the view made the task easier.
I enjoyed using this visualization.

b) With respect to the visualization you worked with, please answer the following questions by marking an 'X' along the scale beside the corresponding question.

MENTAL DEMAND (Low to High): How much mental and perceptual activity was required to complete the task (e.g., looking, searching, thinking, deciding, calculating, remembering, etc.)?
PHYSICAL DEMAND (Low to High): How much physical activity was required to complete the task (e.g., moving the mouse, dragging, clicking, pressing keys, etc.)?
TEMPORAL DEMAND (Low to High): How much time pressure did you feel due to the rate or pace at which the tasks or task elements occurred?
EFFORT (Low to High): How hard did you have to work (mentally and physically) to accomplish your level of performance?
PERFORMANCE (Good to Poor): How successful do you think you were in accomplishing the goals of the task set by the experimenter (or yourself)?
FRUSTRATION (Low to High): How insecure, discouraged, irritated, stressed, and annoyed versus secure, gratified, content, relaxed, and complacent did you feel during the task?

c) What particular aspect(s) of this visualization did you like?

d) What particular aspect(s) of this visualization did you dislike?
e) Please use this space to describe/illustrate any alternative strategies (other than those you were shown at the beginning of the experiment) that you believe would have worked better for you.

f) Please use this space to make any other comments about the experiment or the visualization.

Thank you for your time!

The University of British Columbia
Experimental Questionnaire: Evaluation of Information Visualization Techniques
Interface # 4    Subject #

Part 1

1. Age Group: O 19 and under  O 20-29  O 30-39  O 40-49  O 50+
2. Gender: O Male  O Female
3. Education: O Some high school  O Completed high school  O Some post-secondary education  O Completed undergraduate degree  O Some graduate or professional school  O Completed postgraduate degree
4. Computer Usage (hours per week): O 0-10  O 10-20  O 20-30  O 30-40  O 40-50  O 50+

Part 2

With respect to the visualization you worked with,

a) please indicate the extent to which you agree or disagree with the following statements (each rated on a five-point scale: SD = Strongly Disagree, D = Disagree, N = Neutral, A = Agree, SA = Strongly Agree):

I found this visualization to be efficient for completing the tasks.
Navigating through the data was easy to do.
Locating coloured nodes was easy.
I found this visualization to be frustrating.
Comparing topological distances between nodes was easy.
I found it easy to get lost.
The presence of the smaller view made the task easier.
The coloured arcs made navigation easier.
I enjoyed using this visualization.

b) With respect to the visualization you worked with, please answer the following questions by marking an 'X' along the scale beside the corresponding question.

MENTAL DEMAND (Low to High): How much mental and perceptual activity was required to complete the task (e.g., looking, searching, thinking, deciding, calculating, remembering, etc.)?
PHYSICAL DEMAND (Low to High): How much physical activity was required to complete the task (e.g., moving the mouse, dragging, clicking, pressing keys, etc.)?
TEMPORAL DEMAND (Low to High): How much time pressure did you feel due to the rate or pace at which the tasks or task elements occurred?
EFFORT (Low to High): How hard did you have to work (mentally and physically) to accomplish your level of performance?
PERFORMANCE (Good to Poor): How successful do you think you were in accomplishing the goals of the task set by the experimenter (or yourself)?
FRUSTRATION (Low to High): How insecure, discouraged, irritated, stressed, and annoyed versus secure, gratified, content, relaxed, and complacent did you feel during the task?

c) What particular aspect(s) of this visualization did you like?

d) What particular aspect(s) of this visualization did you dislike?
e) Please use this space to describe/illustrate any alternative strategies (other than those you were shown at the beginning of the experiment) that you believe would have worked better for you.

f) Please use this space to make any other comments about the experiment or the visualization.

Thank you for your time!

Cite

Citation Scheme:

        

Citations by CSL (citeproc-js)

Usage Statistics

Share

Embed

Customize your widget with the following options, then copy and paste the code below into the HTML of your page to embed this item in your website.
                        
                            <div id="ubcOpenCollectionsWidgetDisplay">
                            <script id="ubcOpenCollectionsWidget"
                            src="{[{embed.src}]}"
                            data-item="{[{embed.item}]}"
                            data-collection="{[{embed.collection}]}"
                            data-metadata="{[{embed.showMetadata}]}"
                            data-width="{[{embed.width}]}"
                            async >
                            </script>
                            </div>
                        
                    
IIIF logo Our image viewer uses the IIIF 2.0 standard. To load this item in other compatible viewers, use this url:
http://iiif.library.ubc.ca/presentation/dsp.831.1-0051590/manifest

Comment

Related Items