UBC Theses and Dissertations

3D task performance using head-coupled stereo displays. Arthur, Kevin W. (1993)

3D TASK PERFORMANCE USING HEAD-COUPLED STEREO DISPLAYS

By
Kevin Wayne Arthur
B. Math., University of Waterloo, 1991

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, DEPARTMENT OF COMPUTER SCIENCE

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
July 1993
(c) Kevin W. Arthur, 1993

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Computer Science
The University of British Columbia
Vancouver, Canada
Date: July 31, 1993

Abstract

"Head-coupled stereo display" refers to the use of a standard graphics workstation to display stereo images of three-dimensional scenes using perspective projections defined dynamically by the positions of the observer's eyes. The user is presented with a virtual scene located within or in front of the workstation monitor and can move his or her head around to obtain different views.

We discuss the characteristics of head-coupled stereo display, the issues involved in implementing it correctly, and three experiments that were conducted to investigate the value of this type of display.

The first two experiments tested user performance under different viewing conditions. The two variables were (a) whether or not stereoscopic display was used and (b) whether or not head-coupled perspective was used.
In the first experiment, subjects were asked to rank subjectively the quality of the viewing conditions through pairwise comparisons. The results showed a strong acceptance of head-coupled stereo and a preference for head-coupling alone over stereo alone. Subjects also showed a positive response to head-coupled stereo viewing in answers to questions administered after the experiment.

In the second experiment, subjects performed a task that required them to trace a path through a complex 3D tree structure. Error rates for this task showed an order of magnitude improvement with head-coupled stereo viewing compared to a static display, and the error rates achieved under head-coupling alone were significantly better than those obtained under stereo alone.

The final experiment examined the effects of temporal artifacts on 3D task performance under full head-coupled stereo viewing. In particular, the effects of reduced frame rates and lag in receiving tracker data were investigated. The same path tracing task was performed under a set of simulated frame rate and lag conditions.
The results show that response times of subjects increased dramatically with increasing total lag time, and suggest that frame rate likely has less impact on performance than does tracker lag.

Table of Contents

Abstract
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Historical Context
  1.2 Overview
2 Related Work
  2.1 Head-Coupled Stereo Display
  2.2 Comparison of Depth Cues
  2.3 Temporal Accuracy
3 Head-Coupled Stereo Display
  3.1 Stereo Display
  3.2 Head-Coupled Perspective
  3.3 Six Degree-of-freedom Tracking
  3.4 Factors Affecting Performance
    3.4.1 Calibration
    3.4.2 Temporal Accuracy
    3.4.3 Auxiliary Cues
4 Experiments
  4.1 Experiment System Configuration
  4.2 General Experimental Procedure
    4.2.1 Experiment Scenes
  4.3 Experiment 1: Subjective Impression of Three-dimensionality
    4.3.1 Procedure
    4.3.2 Design
    4.3.3 Results
  4.4 Experiment 2: Performance on a 3D Tree Tracing Task
    4.4.1 Procedure
    4.4.2 Design
    4.4.3 Results
  4.5 Experiment 3: Effects of Lag and Frame Rate
    4.5.1 Procedure
    4.5.2 Design
    4.5.3 Results
5 Discussion
  5.1 Subjective Evaluation of Head-Coupled Stereo Display
  5.2 3D Task Performance
  5.3 Effects of Lag and Frame Rate
  5.4 Applications
6 Conclusions
  6.1 Future Work
    6.1.1 Experimental Studies
    6.1.2 Extensions to Head-Coupled Stereo Display
Appendices
A Experiment Results
B Head-Coupled Stereo Display Software
Bibliography

List of Tables

4.1 The five possible viewing conditions used in the experiments.
4.2 Pairwise comparison results from Experiment 1.
4.3 Summary by viewing condition of the results from Experiment 1.
4.4 Experiment 2 timing and error results.
4.5 Experiment 3 conditions.
A.1 Experiment 1 subject comments from Question 1.
A.2 Experiment 1 subject comments from Question 2.
A.3 Experiment 1 subject comments from Question 3.
A.4 Experiment 1 subject comments from Question 4.
A.5 Experiment 1 subject comments from Question 5.
A.6 Experiment 1 subject comments from Question 6.
A.7 Experiment 2 errors by subject.
A.8 Experiment 2 response times by subject (for correct responses only).
A.9 Experiment 3 response times by subject (for correct responses only).

List of Figures

3.1 The effect of induced stereo movement.
3.2 An illustration of the effects of head-coupled perspective.
4.1 The head-coupled stereo display system.
4.2 The five viewing conditions used in Experiments 1 and 2.
4.3 The left eye and right eye images for the stereo test scene.
4.4 The sphere and bent tube displays used in Experiment 1.
4.5 An example of the tree display used in Experiments 2 and 3.
4.6 Plot of response time versus total lag for Experiment 3.
4.7 Plot of error rates versus total lag for Experiment 3.

Acknowledgements

I am indebted to my supervisor, Kellogg Booth, for the considerable guidance, support, and encouragement he has provided me throughout the duration of my stay at the University of British Columbia.
I also wish to thank Colin Ware, who co-supervised much of this research, for his time and for teaching me many things.

Dave Forsey and Christopher Healey took the time to read the thesis and provided many helpful comments, and for this I'm very grateful.

Thanks to Alain Fournier for several useful discussions regarding this work, and to many members of the Imager Lab for trying the system and offering their thoughts on the display and the experiments.

I'm grateful to Michael Deering of Sun Microsystems for his comments on early versions of the experiments and the history of head-coupled stereo displays.

Finally, I'd like to thank my family for their continuing support and encouragement of my endeavours, and my friends, for making these past two years so enjoyable.

Chapter 1
Introduction

The goal of creating truly three-dimensional displays has long been pursued by scientists and engineers. With true 3D displays we could take advantage of our natural abilities to interact in three dimensions and avoid having to interpret 3D scenes using intermediate 2D displays.

In recent years, much progress has been achieved towards this goal through the use of computer graphics displays and real-time tracking technology. This thesis deals with one such type of 3D display technique, which we refer to as head-coupled stereo display. We define this technique as the use of a computer graphics workstation to display stereoscopic images of a 3D scene on a standard workstation monitor, with the images updated in real time according to an observer's eye positions. The scene appears stable and 3D in the sense that the observer can move around to obtain different views, with binocular parallax and head-controlled motion parallax cues aiding depth perception.

Initial implementations of head-coupled stereo display systems using conventional workstation monitors were reported in the early 1980s, and several research implementations have been discussed since then.
However, this type of display has yet to be widely adopted and put to practical use in general settings.

We describe previous work with this type of display and compare it with other 3D display techniques. The technical and human factors issues involved in implementing and using head-coupling and stereo are outlined. In addition, we describe three experimental studies that were conducted to evaluate the effectiveness of head-coupled stereo, to compare the relative effectiveness of head-coupling and stereo as 3D depth cues, and to investigate the effects of temporal artifacts in the display.

The results from the first experiment show a high degree of user acceptance for the technique, as measured by subjective user preference tests comparing different viewing conditions. The second experiment shows, through objective measurements of subject performance on a 3D tree tracing task, that head-coupled stereo provides a significant improvement over a static workstation display, and that the depth cues from head-coupling are superior to those from stereo for tasks of this type. The third study provides an indication of how temporal artifacts in the tracking and display affect user performance. In particular, the results show a serious degradation in response times as lag is increased, even at relatively low lags of approximately 200 milliseconds.

1.1 Historical Context

Various techniques have been developed to provide 3D display, ranging from those using optical and mechanical elements to those using computer graphics.

Optical techniques such as holography are best suited to creating static 3D images of objects. While some progress has been made recently in generating holograms using computers [14], the goal of updating high-resolution holograms in real time is not expected to be attained for several years.
The computational expense in doing so will most likely make the technique much less attractive, even though holography would not require users to wear special glasses or tracking devices.

Techniques using complex electrical and mechanical components have also been developed for 3D display. The varifocal mirror technique displays volumetric images using a vibrating mirror synchronized with the video output of a computer to display objects or pixels at varying depths [26][42]. The image is viewable from any angle within a reasonably large range. However, the technique has some serious drawbacks: no occlusion effects are possible and all objects appear semi-transparent. Rotating screen devices provide true 3D display in a cylindrical volume through the use of a rotating LED array or a rotating passive screen projected onto from below by a laser [9]. This technique suffers from similar drawbacks, such as the lack of occlusion effects.

One of the most popular methods for adding three-dimensionality to images is to display stereoscopic images, that is, to provide separate images to the left and right eyes [29][30]. Usually, special glasses are worn, containing either coloured filters (anaglyphic) or polarized filters to direct the proper image to each eye. Stereo images can be presented without glasses by employing lenticular arrays; this technique effectively displays different images depending on the angle at which the screen is viewed, using optical elements placed between the viewer and the screen. Stereoscopic imagery alone, however, suffers from some artifacts. In particular, when the image is viewed from an incorrect viewpoint (and especially if the observer is moving) the image appears to distort.

The advance of interactive 3D graphics has provided numerous techniques for simulating three-dimensionality and providing depth cues.
It is possible to generate images of scenes using perspective, shading, shadows, and motion, among other techniques, to indicate depth. Interactive 3D graphics is used widely today in various application domains, such as scientific visualization and computer-aided design.

Traditional graphics displays employ a very simplified geometric model of the user. When displaying a 3D scene, the user's eyes are effectively modeled as a single point located at some arbitrary distance directly out from the center of the screen. Hence the display is really only correct if it is viewed from this one position. Of course, people are accustomed to viewing 3D scenes from a physically incorrect angle through experience watching television or movies, but the effect is still one of viewing a 3D scene through a 2D medium. When the computer takes into account the positions of the observer's eyes it becomes possible to present a stable 3D scene which behaves correctly as the observer's eyes move around.

The idea of immersing the user in computer-generated 3D environments led to the concept of virtual reality, which was first introduced by Ivan Sutherland in the 1960s [55][56]. In a virtual reality system, the user typically wears a head-mounted display that contains two small screens and optics to stretch the images over a wide field of view. The user is separated visually from the real world and is immersed in a virtual world.
The head-mounted display is connected to a head tracker, and the host computer generates images for the two eyes depending on the position of the user's head and eyes.

Research in virtual reality progressed at various research labs [5][22][27], and by the mid-1980s the technology had advanced far enough that off-the-shelf systems started to appear on the market [3][32][46][57].

While most uses of head tracking technology have concentrated on head-mounted display systems, a few researchers have experimented with using head tracking with monitor-based graphics displays. Head tracking (and in effect eye position tracking) allows the computer to generate what we will call head-coupled perspective, meaning that the images displayed on the screen are computed with perspective projections defined by the positions of the observer's eyes.

We will use the term head-coupled display to refer to monitor-based systems using head-coupled perspective. Many such systems are also stereoscopic, or simply stereo, meaning that two images are presented to the observer, one for each eye. Combining the two techniques gives us what we will call head-coupled stereo display, meaning a display system employing both head-coupled perspective and stereoscopic images.

Another term sometimes used for head-coupled stereo display is fish tank virtual reality, because the effect of the display is to present a small (fish-tank-sized) virtual world to the user. For clarity, we will use the term immersive virtual reality when referring to systems that use head-mounted displays.
Another, more descriptive, term for this would be head-mounted stereo display.

1.2 Overview

In the next chapter we outline previous work directly related to monitor-based head-coupled stereo display, studies of depth perception in computer graphics, and experiments to measure and evaluate temporal accuracy in virtual reality systems.

In Chapter 3, the requirements for implementing head-coupled perspective and for drawing stereoscopic images are discussed, together with the factors affecting performance under head-coupled stereo viewing, issues related to accuracy and calibration, and the use of auxiliary depth cues.

Chapter 4 describes the experimental procedures and results from the three experiments that were conducted. Chapter 5 discusses the relevance of the experiments to applications of interactive 3D graphics. In the final chapter we summarize the contributions of this work and discuss future extensions.

Chapter 2
Related Work

Monitor-based head-coupled stereo display is a technique that has been implemented in the past by various researchers. This chapter surveys their work, as well as work related to studies of depth cues in head-coupled displays and issues of temporal accuracy and artifacts.

2.1 Head-Coupled Stereo Display

Head-coupled stereo display shares common elements with immersive virtual reality systems. In particular, both techniques employ head tracking and stereopsis with the goal of presenting a realistically stable computer-generated scene to the user.

Various terms have been used to describe monitor-based head-coupled stereo display, among them fish tank virtual reality, viewpoint dependent imaging, and virtual integral holography.

The earliest reported display of this type was the "Stereomatrix" system developed by Kubitz and Poppelbaum [36]. The user viewed a large (3 foot by 4 foot) projection screen illuminated from behind by lasers.
Head tracking was performed by using photodetectors to track an infrared source worn on the user's head. A similar display, using a computer monitor, was reported by Diamond et al. [17]. Molecular data made up of line segments were viewed with head-coupled perspective (without stereo). A video camera was used to track a small light bulb worn on the user's forehead.

Similar early systems that employed head-coupling and stereo are described by various researchers [21][45][54][61]. These systems were typically limited to displaying wire-frame images because of the computational cost of displaying shaded objects. An alternative is to display from a set of precomputed perspective images, as suggested by Fisher [21]. Venolia and Williams proposed a system using precomputed perspective images accounting for only the side-to-side horizontal movements of the user, and not vertical movements or movements in depth, so as to minimize the number of images that need to be precomputed [61].

Codella et al. describe the multi-person "Rubber Rocks" simulator, which uses a head-coupled stereo display interface [10]. Users wear a tracker attached to a baseball cap, as well as a glove, to interact with objects displayed stereoscopically on monitors or large projection screens. Multiple users, each viewing a different screen, can participate over a network to interact with the same scene.

Most early reports of work with head-coupled stereo displays focus entirely on implementation issues of performing the tracking and image generation correctly. Recent advances in tracking hardware have made it feasible and quite straightforward to create head-coupled stereo displays, and hence more recent research has begun to deal with evaluating the effectiveness of the technique and the level of realism achieved by it.

McKenna reports on three experimental real-time graphics display systems using head-coupling [40].
His goal was to determine how effective head-coupling was with a monitor-based display, either fixed or movable. The first display system used a fixed high-resolution monitor with the perspective projection coupled to the user's head position. The second display used the same monitor, but this time the monitor's position and orientation were also tracked so that it could be tilted or swiveled to obtain different views (head movements were tracked as well). The third display was a handheld LCD screen that could be freely moved. Both screen position and head position were tracked and used in computing the images. Stereo was not used in any of the three displays.

McKenna describes the results of an informal target selection experiment undertaken to evaluate the first of the three displays (the fixed-monitor, head-coupled display). Subjects controlled a 3D cursor using a handheld tracker. In each trial, a cube was displayed in the scene and the subject was asked to align the cursor with the cube. Three viewing conditions were employed in the experiment: fixed view, view controlled by mouse movements, and view coupled directly to head position. The results showed that under the head-tracked condition, subjects could match the target more rapidly, and that the mouse control was of virtually no benefit over the fixed view display for this task. No studies with the other two movable displays were reported.

Deering presents the most complete analysis to date of the issues that must be addressed to correctly implement head-coupled stereo display [15][16]. He discusses the importance of several factors, including fast and accurate head tracking, a correct model for the optics of the human eye, the use of physically correct stereo viewing matrices, and corrections for refraction and curvature distortions of CRT displays.
Deering describes an implementation that achieves sub-centimeter registration between the virtual scene and the real environment surrounding it.

The refraction and curvature distortions inherent in most CRT displays make the image plane appear curved in 3-space, and the distortion changes depending on the location of the observer. Implementing the full correction for the distortion is not practical due to the high computational expense. The method suggested by Deering provides a first-order approximation to the full correction by adjusting only the four points at the corners of the viewport (as opposed to adjusting the entire image). Another possibility is to correct for a particular point in the image where the observer is looking. This point might correspond to, for example, the location of a 3D mouse being used to interact with the scene.

Deering also describes the issues involved in implementing an immersive head-coupled display that doesn't use a head-mounted display [16]. The virtual portal system displays head-coupled stereo images on three large projection screens covering most of the observer's field of view. The level of realism achieved with the system exceeds current head-mounted displays in resolution and registration and is less physically intrusive.

2.2 Comparison of Depth Cues

Aside from evaluating the effectiveness of full head-coupled stereo viewing, our research was aimed at assessing the relative merits of stereopsis and head-coupled perspective as depth cues. Several studies comparing depth cues have been reported in the literature, although none directly comparing head-coupling and stereo are known at this time.

Sollenberger and Milgram compared the relative effectiveness of stereo and rotational depth cues [53]. They conducted two experiments using a 3D tree tracing task (the same task that we employed for our experiments).
The first experiment was a 2 x 2 study with the variables being the presence or absence of stereo and the presence or absence of rotational motion of the scene. The rotation, about a vertical axis in the center of the screen, was controlled by the user holding down a mouse button, with the direction of rotation defined by the mouse's position: forward rotation if on the right half of the screen, backwards if on the left. The results showed a greater benefit from motion alone than from stereo alone, and the best results were obtained with both depth cues combined. (Subject performances were: 56.9% correct for neither cue, 74.6% for motion, 66.6% for stereo, and 89.1% for both.) Their second experiment compared continuous motion (rotation) with viewing from multiple static views (rotations of the scene about a vertical axis), with the view changing automatically every 5 seconds. Display with stereo and multiple viewing angles was found to be no more effective than rotational motion alone, but less effective than rotational stereo display.

The areas of telerobotics and remote manipulation have benefitted from the techniques of stereo and head-coupling. Pepper et al. [47] describe studies of the use of stereo in telepresence applications, showing a clear advantage of stereo display over fixed-viewpoint non-stereo display. In addition, they describe studies to compare the effectiveness of stereo under different conditions of motion: a study was conducted to investigate whether any depth perception is provided by the induced stereo movement seen when the viewer moves his or her head while viewing a stereo image. The results showed no change in the subjects' perception of 3D and no change in the ability to perform 3D tasks. A preliminary report is given on true head-coupled display using a head-mounted display coupled isomorphically with the cameras recording the scene, and hence providing true motion parallax cues.
The results under this viewing condition show a significant improvement over the stereo non-moving condition in measurements of stereoacuity.

Cole et al. conducted experiments to evaluate the benefits of motion parallax (through head-coupling) in a teleoperation task, performed with and without stereo [11]. Subjects viewed a real scene recorded through video cameras that moved according to their head movements. The video images were displayed on a monitor. The results showed a significant increase in performance when motion parallax was added to monocular views, but not when motion parallax was added to stereoscopic views. The reason cited for this lack of improvement with head-coupling and stereo is that for their experiments, the subjects' performance probably peaked with stereo alone and no further improvement was possible.

In addition to the two depth cues of binocular and motion parallax, there are several other well-known cues which aid in the perception of depth, such as occlusion, shading, and shadows [28]. Wanger et al. report on the relative effectiveness of several cues, including shading, textures, perspective projections, and different types of shadows, on 3D perception and object placement tasks [63][64].

2.3 Temporal Accuracy

Of the various deficiencies present in current virtual reality interfaces, the issue that is perhaps most often raised is the problem of lag in acquiring tracking data. Lag is usually cited as being more severe than other problems such as low spatial resolution and low frame rates (temporal resolution). Although most related work on the temporal accuracy of trackers has been done in the context of immersive virtual reality, the issue is the same for trackers used in monitor-based head-coupled displays, so results concerning lag probably apply to both immersive and non-immersive displays.

With respect to tracking, lag or latency is by definition the delay between movement of the tracker and the resulting change in the display.
Lag can be classified as arising from three primary sources [25]. The first is the lag in receiving and processing tracker records, including performing smoothing algorithms on the data. The second component is the lag in the display loop (the time taken to compute and display a frame). The final source of lag arises from minor delays introduced by variations in system load (caused, for example, by network or operating system activity).

The problem of lag has been studied by various researchers, primarily for the purpose of measuring and counteracting the lag in a system.

Liang, Shaw and Green measured the lag in the Polhemus IsoTrack magnetic tracker to be approximately 110 ms when receiving records at 20 Hz [37]. Their method used a video camera to record a swinging pendulum with a tracker attached, placed in front of a display screen showing a time stamp. The computer kept a log of tracker records with times, so that the lag between these values and the time stamps seen in the video could be determined by comparing the delay for a specific reference position. They discuss the use of Kalman filtering, an established technique in signal processing for smoothing and prediction [34]. Kalman filtering for virtual reality systems has also been discussed by other researchers [25].

Similar experiments to measure lag are reported by Bryson and Fisher [6]. They define two types of lag, transmission lag and position lag, and describe experiments to model the lag characteristics of several trackers. Transmission lag time is defined as the time between the first movement of a tracker at rest and the first movement of the cursor (or 3D object) being controlled by the tracker. The other type of lag, position lag, is the difference between the distance the tracker has moved and the distance the cursor controlled by the tracker has moved (measured in the same coordinate system). Hence position lag depends on the velocity at which the tracker is moved.
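This dependence can be made concrete with a small numeric sketch. The following helper is hypothetical (it is a first-order estimate, not Bryson and Fisher's fitted model): if the cursor is drawn from tracker data that is older by the transmission lag plus the display's update time, a tracker moving at constant velocity leads its cursor by roughly velocity times that total delay.

```python
def position_lag(velocity, transmission_lag, update_time):
    """First-order estimate of position lag.

    The display shows tracker data that is (transmission_lag + update_time)
    seconds old, so a tracker moving at a constant `velocity` is ahead of
    its on-screen cursor by roughly velocity * total_delay. Units of the
    result are whatever velocity * time gives (e.g. m/s * s = m).
    """
    total_delay = transmission_lag + update_time
    return velocity * total_delay

# A tracker moving at 0.5 m/s, with 110 ms transmission lag and a 50 ms
# frame time, trails its cursor by about 8 cm; a stationary tracker shows
# no position lag at all, which is why position lag depends on velocity.
print(position_lag(0.5, 0.110, 0.050))
print(position_lag(0.0, 0.110, 0.050))
```

The estimate also shows why transmission lag and graphic update time are interchangeable in their effect on position lag: only their sum enters the calculation.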
Experiments were conducted using video recording to measure and establish a relationship between these two lags and the graphic update time. A model is proposed for the dependence of position lag on transmission lag time, velocity, and graphic update time.

A more precise general testbed for measuring lag in trackers is described by Adelstein et al. [1]. They used a large motorized rotary swing arm to move a tracker through a known motion at a controlled frequency, and measured the lags of Ascension, Logitech, and Polhemus trackers.

Although careful measurements of lag have been reported and methods have been developed to help reduce the amount of lag in systems, little is known about the extent to which lag is a problem; few systematic studies have been undertaken to characterize the effects of lag on user perception and performance.

Recently, MacKenzie and Ware reported on an experiment to study the effect of lag on a 2D Fitts' law target selection task [39]. Subjects performed the usual task, which involves moving a cursor as quickly as possible to a target area on the screen. To simulate lag coming from the mouse tracker, mouse records were buffered for different numbers of frame times to generate different experimental conditions. The resulting response times were analyzed with respect to this lag. The results showed that a model in which lag has a multiplicative effect on Fitts' index of difficulty accounted for 94% of the variance in their data. This is better than alternative models that propose only an additive effect for lag.

Aside from lag, the other primary temporal artifact to consider is that of reduced frame rate.
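Both artifacts are straightforward to simulate in software for experiments of this kind: extra lag by holding tracker records in a FIFO queue for a fixed number of frame times, as in the buffering approach described above, and a reduced frame rate by redrawing from the same record for several consecutive frames. A minimal sketch (hypothetical names; it assumes exactly one tracker record arrives per displayed frame):

```python
from collections import deque

class LagSimulator:
    """Add `delay_frames` frames of artificial lag by buffering records."""

    def __init__(self, delay_frames):
        self.buffer = deque()
        self.delay_frames = delay_frames

    def update(self, record):
        """Feed one tracker record per frame; return the delayed record
        (the oldest available one while the buffer is still filling)."""
        self.buffer.append(record)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return self.buffer[0]

def subsample(records, hold):
    """Simulate a reduced frame rate: the display redraws from the same
    record for `hold` consecutive frames before picking up a new one."""
    return [records[(i // hold) * hold] for i in range(len(records))]

sim = LagSimulator(delay_frames=2)
print([sim.update(r) for r in [0, 1, 2, 3, 4]])  # [0, 0, 0, 1, 2]
print(subsample([0, 1, 2, 3, 4, 5], hold=3))     # [0, 0, 0, 3, 3, 3]
```

Note that holding records to lower the frame rate also makes the displayed data up to one hold interval old, so reducing the frame rate necessarily adds some lag of its own; this is one reason the two artifacts are hard to separate experimentally.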
Various researchers have suggested that frame rate is relatively unimportant in comparison to lag [1][59], although no direct studies are known that confirm or refute this hypothesis.

Chapter 3
Head-Coupled Stereo Display

The technique of head-coupled stereo display shares common characteristics with traditional graphics workstation displays and with immersive virtual reality systems. Head-coupling and stereo provide extra depth cues not available with traditional displays, and at the same time do not suffer from the problems of current-technology head-mounted displays. There are a number of reasons why, at least in the near term and for many applications, monitor-based head-coupled stereo display is more practical than immersive virtual reality using head-mounted displays.

One obvious reason for considering head-coupled display is that head-mounted display technology is currently very primitive and limiting compared with workstation monitors, as well as being expensive. Current head-mounted displays typically have low resolution, on the order of 400 x 600 pixels, and the optics that are used to stretch the image over a wide field of view create distortions in the image that are difficult to correct for efficiently [32][49]. While the technology has been improving rapidly, one can expect that it will be several years before the visual acuity afforded by monitor-based head-coupled display is matched by that of head-mounted displays.

A more fundamental argument, and one that will not change as technology improves, is that the property of immersion is probably not necessary for many applications. For instance, in the area of medical visualization, scientists viewing 3D scans of patients would probably not want to be immersed in the data. There are cases where immersion is desirable, such as architectural walkthroughs, but it can be argued that these form a small subset of the applications which use 3D computer graphics.
Immersion has the disadvantage that it disconnects the user from the real world; he no longer has easy access to his keyboard or other standard input devices and cannot easily interact with his work environment and colleagues. See-through head-mounted displays and computer augmented reality interfaces alleviate this problem somewhat [2][20].

The following sections outline the components of head-coupled stereo display and the issues involved in implementing and using it correctly.

3.1 Stereo Display

The basic concept underlying stereo display is to provide the user with a sense of depth through binocular parallax. In the real world, the disparity between the two views seen by our two eyes allows us to judge distances to objects. In computer generated scenes, the same effect can be created by presenting different images to the two eyes, computed with an appropriate disparity. To correctly view a 3D scene stereoscopically, each image should be created using a perspective projection that corresponds exactly to the position of the user's eye when he is viewing the scene. Of course, without any head tracking capability, an assumption must be made about where the user is located. Hodges describes the software requirements for displaying computer generated stereo images using correct off-axis projections [30]. For simplicity, many stereo applications (that are not head-coupled) generate the left and right images by simply rotating or translating the scene (and displaying with two on-axis projections) [29][43][58]. While such approaches do provide a reasonable set of disparities for stereo images and can provide stereopsis effects, the images are not physically correct in general.

When viewed from a position other than the intended one, stereo images will appear distorted. Stereo imagery suffers from an interesting artifact when the user's head moves, often referred to as induced stereo movement [60]. The scene will appear to bend about the image plane, following the user's eyes (see Figure 3.1). This effect can be distracting and degrades the illusion that the virtual scene is stable and three-dimensional. This artifact is also present when viewing non-stereo images but is not as distracting.

Figure 3.1: The effect of induced stereo movement on an image computed with fixed perspective projections. The solid line represents the image plane and the dotted lines represent the projection from the eyes to the image. The object being drawn, a box, appears to distort as the user moves to an off-axis viewpoint.

Induced stereo movement is only present when viewing a static (fixed viewpoint) stereo display and the effect disappears when head position is taken into account and the perspective projection is correctly coupled to head position, as will be discussed in the next section.

Another problem with stereo displays is the conflict between accommodation and convergence. When our eyes fixate on an object in the real world, they converge inward and the focal length of the lenses adjusts so that the fixated object is in focus, while objects at other depths are not in focus. This effect is known as depth-of-field. When viewing stereoscopic displays the eyes will converge according to a fixation point in the scene, but they must always focus on the image plane, which is at a fixed depth. Hence all objects in the scene will appear in focus, regardless of depth. This effect can be distracting and degrades the level of realism exhibited by the virtual scene. It may also be physiologically harmful to the eyes, although little is known about this. This problem is less severe the farther away the display screen is situated from the user, and the closer the virtual objects are to the screen.
In a head-mounted display, the optics make the image plane appear approximately 40 cm from the user's eyes [49]; when viewing a workstation monitor, a user's eyes are typically 80 cm from the screen surface.

3.2 Head-Coupled Perspective

A necessary part of the geometry pipeline for rendering 2D images of 3D scenes is the projection that maps the graphic primitives in 3-space onto the view plane in 2-space. Most commonly in computer graphics, orthographic or single-point perspective projections are employed. For a comprehensive discussion of the different types of projections the reader is referred to the survey paper by Carlbom and Paciorek [8] or to standard computer graphics textbooks [23][24][43][44][50].

A parallel projection (usually orthographic, or sometimes oblique) maps points in 3-space directly onto the view plane along a perpendicular direction. Alternatively, a perspective projection is often used to scale the scene's horizontal and vertical coordinates with depth, thus providing a sense of depth. The projection is called on-axis because the viewpoint is chosen to be a point along the z-axis (where the z-axis is perpendicular to the screen plane). The projection is given by a viewing pyramid defined by a viewpoint and the four corners of the screen (see Figure 4.2). The image is physically correct if it is viewed monocularly from the particular viewpoint used to create it. Viewing an image from the incorrect viewpoint causes the virtual scene to distort in various ways [51].
This is the same problem encountered with induced stereo movement, although it is not as severe without stereo, and in fact the human visual system has evolved to compensate for incorrect viewpoints, as demonstrated by our willingness to view cinema or television from incorrect viewpoints [13].

In a head-coupled display system, the perspective projection is dynamically coupled to the current positions of the user's eyes and the projection is necessarily an off-axis projection in the general case (see Figure 4.2).

The effect of head-coupled perspective is illustrated by the four screen photographs in Figure 3.2. In all cases the program is displaying the same 3D model of an automobile positioned at the center of the screen, level with respect to the monitor. Two different perspective projections and two corresponding camera angles are employed (resulting in the four photographs). Only in the two photographs where the camera position matches the perspective projection does the object appear three-dimensional and undistorted. In the other two photographs, where the camera position does not match the perspective projection, the object appears distorted.

Most graphics libraries contain functions for generating perspective projections, although not all support off-axis perspective. Our implementation uses the Silicon Graphics GL library (see Appendix B). Alternatively, one could directly compute the viewing matrix required to transform points according to the projection defined by an arbitrary viewpoint and viewing plane [15].

To provide correct head-coupled perspective, the system must know where the user's eyes are located. The eye positions are typically found by tracking head position and orientation and estimating the positions of the eyes with respect to a reference point on the tracker. We assume that eye position is sufficient and that rotation of the eye has no effect. This assumption is incorrect, however.
The effective viewpoint of the eye, the first nodal point in the eye's optical system, is located approximately 0.6 cm in front of the eye's center of rotation, for an adult with normal vision, and so it moves as the eye rotates [15]. Thus to provide the correct perspective, one would have to track the user's eye movements and adjust the computed viewpoint accordingly. While eye-tracking equipment is available and has been used in human-computer interaction [33], it is generally too costly and awkward to build into a head-tracking system. In actuality, the inaccuracy caused by having a slightly incorrect eyepoint for the perspective computations is unlikely to be larger than the errors arising from inaccuracies in the head-tracking and from distortions produced by the CRT screen. The effect may be worse for head-mounted displays since the screens are much closer to the eyes than a conventional monitor would be.

Figure 3.2: An illustration of the effects of head-coupled perspective. The program is displaying images of a car positioned in 3-space at the center of the screen. In the top row the image is computed with an on-axis perspective projection and in the bottom row with an off-axis projection. The left column shows the screen when viewed from a position on-axis and the right column shows the screen when viewed from a position off-axis. Only in the top-left and bottom-right photographs does the perspective projection match the viewing position, resulting in a realistic image that does not appear distorted.

Note that head-mounted displays (as well as head-coupled displays) require off-axis perspective projections even though the relationship between the eyes and the screens is fixed. The screens in head-mounted displays are usually not perpendicular to the line of sight, but are usually angled away from the face [49].
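To make this concrete, the extents of such an off-axis frustum can be derived from the eye position and the physical screen rectangle by similar triangles. The following sketch is purely illustrative (it is not the thesis implementation, and the function name and coordinate convention are assumptions): the screen is taken to lie in the z = 0 plane, centred at the origin, with x pointing right, y up, and the eye at positive z.

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Frustum extents (left, right, bottom, top) at the near plane for a
    screen centred at the origin in the z = 0 plane (x right, y up), viewed
    from eye = (ex, ey, ez) with ez > 0."""
    ex, ey, ez = eye
    s = near / ez                       # similar triangles: scale the screen
    left   = (-screen_w / 2.0 - ex) * s  # edges onto the near plane
    right  = ( screen_w / 2.0 - ex) * s
    bottom = (-screen_h / 2.0 - ey) * s
    top    = ( screen_h / 2.0 - ey) * s
    return left, right, bottom, top
```

With the eye on the screen's central axis the extents are symmetric and the projection reduces to the familiar on-axis case; as the eye moves off-axis the frustum skews accordingly. The four extents correspond to the parameters of the glFrustum-style off-axis projection calls found in most graphics libraries.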
There are also other distortion corrections that must be implemented in a head-mounted display and these can be quite expensive and difficult to perform in real time.

3.3 Six Degree-of-freedom Tracking

In order to employ head-coupled perspective, the system must track the user's eye positions, or at least the user's head position. Various technologies have been developed to perform 6 degree-of-freedom (position and orientation in 3D) tracking [41]. There are four main categories: magnetic, acoustic, optical, and mechanical.

Magnetic trackers, such as those sold by Polhemus and Ascension, are typically small and lightweight. However, they suffer from interference problems when operated in the vicinity of electrical objects.

Acoustic trackers transmit ultrasonic frequencies between one or more transmitters, usually worn on the user, and a common receiver. An example of an acoustic tracker is the Logitech 3D mouse. One drawback of acoustic trackers is that a clear line of sight must be maintained between the transmitter and receiver.

A variety of optical tracking methods have been experimented with, although optical trackers are not yet commercially available. Optical tracking holds much promise due to its unintrusive nature, although it does have the same line-of-sight restriction as acoustic tracking. Optical tracking is common for applications requiring less than 6 degrees of freedom, such as Krueger's Videoplace, which uses simple bitmap operations to trace the silhouette of a participant's head [35].

Mechanical trackers, such as the ADL-1 from Shooting Star Technology and the tracker used in the BOOM (Binocular Omni-Oriented Monitor) display from Fake Space Labs [38], use mechanical linkages and electronics for measuring angles (goniometers) to obtain very fast and accurate measurements.
While mechanical trackers typically suffer negligible lag compared to other types of trackers, a disadvantage is that they can be heavier and less comfortable to use.

Implementing head-coupled display is independent of the tracking technology employed. With our system we have been using the ADL-1 mechanical tracker. This tracker uses potentiometers in each of 6 joints to measure angles, and returns the position and orientation of the end joint. The rated absolute positional accuracy of the ADL-1 is 0.51 cm and its repeatability is better than 0.25 cm. This device has a lag of less than 3 ms, which is shorter than the lag introduced by other factors such as the time taken for reading the input buffer through the RS-232 port. We have made no precise measurements of the tracker accuracy ourselves; the cited values are from the manufacturer. During our studies we used the raw data provided by the tracker and performed no prediction or smoothing. A simple backwards averaging method for smoothing was implemented for test purposes, although it was not enabled during the experiments so as not to introduce additional lag into the system.

3.4 Factors Affecting Performance

In addition to the primary functional components necessary for implementing head-coupled stereo display, there are a number of issues that affect the quality of the system and user performance. For general references on the topic of human factors for virtual reality systems, the reader is referred to the survey article by Ellis [18] and a collection of papers he edited [19].

3.4.1 Calibration

To maintain a high degree of realism with the display, the system should be calibrated carefully to take into account the physical locations and characteristics of the monitor, the head tracker and the user. With regard to the monitor, the primary parameters are the positions (in the real world) of the corners of the viewport.
The glass in most monitors is not flat, but is shaped like a section of a sphere or cylinder. The image plane is distorted by this curvature, and also by refraction effects from the glass. Deering derives equations describing the distortions for any point in the image [15]. Given this function, he adjusts the four corners of the viewport to appear physically correct for the current eye position. This produces in effect a linear approximation to the distortion correction over the image. Deering's approximation has been implemented in our system, although it was not employed during the experiments, in order to reduce computational costs.

Another important factor to correct for is any distortion in the tracker's measurements. Most trackers exhibit some distortion and this can be corrected for using a number of methods, such as approximating the inverse distortion with polynomials or using lookup tables [7]. Although we have not implemented any correction schemes for our ADL-1 tracker, some distortion has been observed as the tracker nears the outer range of its operating volume, and in particular as the user moves very close to the screen.

There are also user-dependent parameters to adjust for, such as the spacing between the eyes and the location of the eyes with respect to the tracker (when it is worn on the user's head). In our current system, no provision for interactive calibration for new users is made; "average" values of eye spacing and position have been chosen and hence there will be some inaccuracies for different users.

3.4.2 Temporal Accuracy

Inaccuracies resulting from timing delays due to communication and processing time, either in the graphics pipeline or prior to that in the computation of the viewing parameters for the virtual world, must be dealt with in any system. The two primary temporal variables are lag and frame rate.

Lag is inherent in all instances of human-machine interaction, including virtual reality and telerobotics.
Lag in head-coupled stereo display systems arises from delays in the tracker transmitting data records, hardware or software delays while processing the tracker data to perform smoothing or prediction, and additional processing time spent in the main display loop prior to displaying the scene. Although lag is recognized as an important factor in all VR interfaces and work has been done on techniques to compensate for it, there has been little experimental study of the perceptual and performance effects of lag in virtual reality systems reported in the literature.

The frame rate in an interactive graphics system is defined as the number of frames displayed per second. A low frame rate, and the resulting delay between frames, will not only contribute to the total lag in the system, but very low frame rates of around 10 frames per second or less will make the scene appear very jittery and the effect of head-coupling will be less natural. Low frame rates are a standard problem in computer graphics when displaying complex scenes. For virtual reality systems, the problem may be more significant as the image changes in response to the user's natural movements and thus artifacts in the continuity may be more disturbing than temporal artifacts in traditional interactive graphics applications.

3.4.3 Auxiliary Cues

As with conventional graphics display, there are a number of techniques available for providing convincing depth cues. Particular techniques include shading, both Lambertian and specular, and the use of shadows to suggest the shape and relative positions of objects. A perspective projection provides depth information by scaling the extent of objects in the x and y directions according to z (depth). These and other well known cues are described in most standard graphics texts [23][24][43][44].

Chapter 4

Experiments

Three experiments were carried out to investigate different aspects of head-coupled stereo display.
The primary purposes of these experiments were:

1. evaluate the effectiveness of head-coupled stereo display in general;

2. compare the relative performance of head-coupling and stereopsis as depth cues;

3. investigate the effects of temporal artifacts when using head-coupled stereo display.

This chapter describes the experiment system and general experimental procedure, followed by the details of the three studies. The next chapter will discuss the results and their implications.

4.1 Experiment System Configuration

Figure 4.1 shows a photograph of the system used to conduct the experiments. The program is running on a Silicon Graphics Iris 4D 240/VGX workstation. The subject is wearing StereoGraphics Crystal Eyes glasses and a Shooting Star Technology ADL-1 head tracker. The glasses are synchronized with an interlaced video monitor which is refreshing at 120 Hz, and the LCD shutters in the glasses alternate to provide an effective 60 Hz update to each eye. The head tracker is mounted above the screen by a wooden frame attached to the sides of the monitor. We would have preferred to attach the tracker directly to the top of the monitor but this was not possible due to the range of operation of our tracker; it was necessary to mount the tracker approximately 40 centimeters above the monitor. The monitor was raised so that the center of the screen was level with the subjects' eye positions and the mouse and pad were positioned comfortably for the subject. The distance from the screen to the subjects' eyes was approximately 50 cm.

Figure 4.1: The head-coupled stereo display system. The subject's head position is measured by the ADL-1 mechanical tracker. StereoGraphics glasses are worn to provide different images to the left and right eyes, and the display monitor is synchronized with the glasses to provide an effective 60 Hz to each eye.

4.2 General Experimental Procedure

Five basic viewing conditions were employed in the experiments.
These are listed in Table 4.1 with the labels that are used to refer to them subsequently. They are shown schematically in Figure 4.2. For "non head-coupled" conditions, the perspective image was computed once according to the subject's initial head position, whereas for head-coupled conditions the image changed dynamically as the user moved his or her head. In stereo conditions (conditions STE and HCS), different images were displayed according to the estimated left and right eye positions of the viewer. In the "non stereo" conditions (conditions PIC, HCM, and HCB), the same image was presented to both eyes. In conditions PIC and HCB the image was computed for the "cyclopean" eye position, that being the position midway between the two eyes. For the other non stereo condition, the "head-coupled monocular" condition (HCM), the image was computed correctly for the right eye, and the subjects were asked to close or cover the left eye (with their hand or a piece of paper over the stereo glasses).

In Experiments 1 and 2, the viewing condition varied randomly among the five conditions. In Experiment 3, the full head-coupled stereo condition was always employed and temporal artifacts were introduced by simulating tracker lag and reduced frame rates.

PIC   Picture
STE   Stereo only
HCM   Head coupled monocular
HCB   Head coupled binocular
HCS   Head coupled with stereo

Table 4.1: The five possible viewing conditions used in the experiments.

For each of the experiments we ensured that each subject could perceive depth using stereopsis, and that each subject moved his or her head around throughout the experiment so that the effect of head-coupled perspective could be experienced. It is estimated that a small proportion of people cannot achieve the benefits of stereopsis, due to irregularities in the eyes. To confirm stereopsis, each subject was shown a stereo test scene prior to performing the experiment. Figure 4.3 shows the test scene.
The background of the scene was blue with two black squares, one above the other. Inside of the black squares were red squares, offset from the center slightly, either to the left or to the right depending on the eye. When the images are viewed stereoscopically, the red square on top appears to be located in front of the screen and the red square on the bottom appears to be located behind the screen. The only difference between the left and right eye images was this horizontal displacement, and so stereopsis was the only cue that could lead to depth perception in the image (other cues such as perspective were not used). The static image was shown with no head-coupling. Each subject was asked to describe what he or she saw, and in all cases the subjects responded correctly that the top rectangle appeared to be coming out of the screen and the bottom one appeared to be going into the screen.

Figure 4.2: The five viewing conditions used in Experiments 1 and 2 (see Table 4.1). In each of the diagrams the image plane is represented by a bold horizontal line, and virtual objects are shown in front of and behind the screen with the projection onto the image plane indicated by solid lines. The dotted lines indicate the perspective projections employed, each defined by an eyepoint and the corners of the screen.

Figure 4.3: The left eye and right eye images for the stereo test scene.

4.2.1 Experiment Scenes

In Experiment 1, two different scenes were shown to subjects to obtain their subjective evaluations of the value of stereo and head-coupling (see Figure 4.4). In both scenes we wanted to provide as much depth cueing information as possible and yet still maintain a 60 Hz update rate. The first scene contained an approximated sphere casting a pre-computed fuzzy shadow drawn on a striped ground plane. The scene was smooth shaded with specular highlights.
The second scene consisted of a bent tube object, similar in shape to the Shepard-Metzler mental rotation objects [4][52]. Again, the scene was rendered with smooth shading and specular highlights; however, a shadow and ground plane were not included for the tube scene as it was not possible to render it reliably at 60 Hz. Colours were chosen to minimize ghosting effects due to slow phosphor decay times of the monitor. In particular, we chose colours with a relatively small green component.

Figure 4.4: The sphere and bent tube displays used in Experiment 1. Hardware lighting was used to achieve the specular reflection. The blurry cast shadow was pre-computed and subsequently texture-mapped onto the floor. The colours and background have been modified for black and white reproduction.

The background is a vection background and was composed of a random field of fuzzy discs drawn on a blue background. The term "vection" is usually used to refer to the feeling of self movement when a large field display is moved with respect to an observer. Recent evidence indicates that the effect can be achieved with even a small field of view [31]. Howard and Heckman suggest that one of the important factors in eliciting vection is the perceived distance of a moving visual image, with images that are perceived as furthest away contributing the most. In the experiments, we desire the observer to perceive the monitor as a window into an extensive space. We created the background out of discs displayed as though they were an infinite distance from the user (with respect to their position, not their size). The edges of the discs were blurred to give the illusion of depth-of-field. The discs are not intended to be focussed on; they are intended to give a feeling of spaciousness when objects in the foreground are fixated.

For Experiments 2 and 3, a 3D tree tracing task was employed.
The scene contained the same vection background as in Experiment 1, and two purple trees consisting of straight line segments in the foreground. The construction of the trees will be discussed in the section describing the Experiment 2 procedure. A sample pair of trees is shown in Figure 4.5.

4.3 Experiment 1: Subjective Impression of Three-dimensionality

The goal of the first two experiments was to obtain subjective user preferences and performance measurements under various viewing conditions (shown in Figure 4.2), in an effort to evaluate head-coupled stereo display and compare the relative merits of stereo and head-coupling as depth cues. Experiment 1 obtained subjective rankings of the different viewing conditions using two arbitrary scenes. The two scenes were the sphere and bent tube displays discussed earlier (see Figure 4.4). An experimental protocol that involves comparison of randomly selected pairs of conditions was implemented to obtain the rankings.

4.3.1 Procedure

In a given trial, a subject compared the impression of three-dimensionality given by two viewing conditions randomly selected from the five conditions shown in Figure 4.2 and Table 4.1. Two icons, a triangle and a square, were shown in the top left corner of the screen, representing the two conditions, with a circle around the icon representing the current condition. The triangle and square icons were used to make it easier to keep track of which condition was active. By pressing the space bar, the subject could change the viewing condition (and the highlighted icon). The subjects were asked to continue toggling between the two conditions until they made a decision as to which condition gave them a better sense of three-dimensionality. At this point they would click on either the left or the right mouse button (marked with a triangle and a square respectively) to indicate the preferred condition. During the trials, the conditions were not identified by name to the subjects (they were identified only by icon). However, the conditions were described to subjects prior to the experiment.

To further judge the subjects' feelings about head-coupling and stereo, each was asked a set of questions after completion of the experiment.

4.3.2 Design

The construction of the experiment blocks was as follows. Each of the 5 viewing conditions was compared with all the others, making a total of 10 different pairs. The assignment of the two conditions to either the triangle or square icon was random. The 10 pairs were shown once for the sphere scene and once for the bent tube scene. A trial block consisted of these 20 trials in random order. The experiment consisted of two blocks of 20 trials for each subject (a different ordering was used for each block).

Following the comparison trials, each subject was presented with the following set of questions.

All of the following questions relate to the quality of the 3D spatial impression.

Is head-coupling as important, more important or less important than stereo?
Is the combination of head-coupling and stereo better than either alone?
Is head-coupling alone worthwhile? (If you had the option would you use it?)
Is stereo alone worthwhile? (If you had the option would you use it?)
Is head-coupling with stereo worthwhile? (If you had the option would you use it?)
Do you have other comments on these methods of displaying 3D data?

Seven subjects performed the experiment. The subjects were graduate or undergraduate students at the University of British Columbia. All of the subjects were male, and four of the subjects were familiar with high performance graphics systems.

4.3.3 Results

There were no systematic differences between the data from the sphere scene and the data from the tube scene and so these two sets of data were merged.

Tables 4.2 and 4.3 summarize the combined results from all subjects. Each entry in Table 4.2 corresponds to a pair of viewing conditions.
The value is the percentage of the trials in which the row condition was preferred over the column condition. Hence corresponding percentages across the diagonal sum to 100%. For example, the value 89% in row 4 and column 2 means that condition HCB was preferred to condition STE in 25 out of all 28 comparisons (4 responses from each of the 7 subjects). The value of 11% in row 2, column 4 accounts for the other 3 responses in which condition STE was preferred over condition HCB.

Viewing Condition         PIC    STE    HCM    HCB    HCS
PIC  Picture               --    43%     4%     0%     7%
STE  Stereo only          57%     --     7%    11%     0%
HCM  HC monocular         96%    93%     --    29%    61%
HCB  HC binocular        100%    89%    71%     --    68%
HCS  HC & stereo          93%   100%    39%    32%     --

Table 4.2: Pairwise comparison results from Experiment 1. The values in each row correspond to the frequency with which a particular condition was preferred over each of the other conditions.

The most interesting result apparent from the data is that head-coupling without stereo was preferred over stereo alone by a wide margin of 91% to 9% (averaging the monocular and binocular results).

Table 4.3 shows for each viewing condition the percentage of times it was preferred over all the trials in which that condition was present. The values in the second column sum to n/2 x 100% = 250%, where the number of viewing conditions n is 5 in our experiment. Head-coupled display without stereo (both monocular and binocular) was preferred somewhat more than head-coupled display with stereo (although this preference is likely not statistically significant).

Viewing Condition         Frequency
PIC  Picture                 13%
STE  Stereo only             19%
HCM  HC monocular            70%
HCB  HC binocular            82%
HCS  HC & stereo             66%

Table 4.3: Summary by viewing condition of the results from Experiment 1. The value in each row corresponds to the frequency with which a condition was preferred in all of the trials in which it was present.

The responses to the questions also showed a strong preference for head-coupling. All users said they would use it if it were available.
In response to the first question ("Is head-coupling as important, more important or less important than stereo?"), two of the seven subjects stated that they thought stereo was more important than head-coupling. However, these same subjects preferred head-coupling over stereo in the direct comparison task. One subject complained about the awkwardness of the apparatus and pointed out that this would be a factor in how often it would be used. The complete set of responses is included in Appendix A as Tables A.1 through A.6.

4.4 Experiment 2: Performance on a 3D Tree Tracing Task

The second experiment compared the same viewing conditions used in Experiment 1 as measured by performance on a 3D task. This task is based on one used by Sollenberger and Milgram [53] to study the ability of subjects to trace arterial branching in brain scan data under different viewing conditions. Subjects were asked to answer questions that required tracing leaf-to-root paths in ternary trees in 3-space. The stimulus trees were generated randomly by computer.

4.4.1 Procedure

The experiment stimulus consisted of a scene constructed as follows. Two ternary trees consisting of straight line segments were constructed in 3-space and placed side-by-side so that a large number of the branches overlapped (see Figure 4.5). One leaf of one of the trees was highlighted and the subject was asked to respond as to whether the leaf was part of the left tree or part of the right tree. For each trial, we chose as the highlighted leaf the one whose x coordinate was nearest the center of the screen. The reason for this was to ensure that the task would be reasonably difficult under all viewing conditions.

In each experimental trial, the subject was presented with a scene and asked to click on the left or right mouse button depending on whether the distinguished leaf appeared to belong to the left or the right tree.
The bases of the two trees were labeled on the screen with a triangle (the left tree) and a square (the right tree). The corresponding left and right mouse buttons were similarly labeled with a triangle and a square as an additional aid to help subjects remember the labeling.

The trees were recursively defined ternary trees. A trunk of 8.0 cm was drawn at the base of the tree, connected to the root node. Nodes above the root were defined recursively, with the horizontal and vertical positions of the children placed randomly relative to the parent. There were three levels of branches above the root, resulting in 27 leaves for each tree. The following recurrence relation gives a precise specification for one tree. This assumes a right-handed coordinate system with y pointing upwards and x pointing right.

X_base = Y_base = Z_base = 0.0
VerticalSpacing_root = 8.0 cm
HorizontalSpacing_root = 8.0 cm
X_root = X_base
Y_root = Y_base + VerticalSpacing_root
Z_root = Z_base
VerticalSpacing_child = 0.7 x VerticalSpacing_parent
HorizontalSpacing_child = 0.7 x HorizontalSpacing_parent
X_child = X_parent + HorizontalSpacing_child x Rand()
Y_child = Y_parent + VerticalSpacing_child x (1.0 + 0.25 x Rand())
Z_child = Z_parent + HorizontalSpacing_child x Rand()

The function Rand() returns a uniform random number in the range [-1, +1]. The two trees constructed for each trial were displayed side-by-side separated by a distance of 1.0 cm.

The visual complexity of the trees was tested beforehand, with the goal of making the task difficult enough that depth perception was a factor, but not so difficult that an extreme number of errors would be made by a typical subject. This resulted in the specific parameters that were selected. Figure 4.5 shows an example of the experiment stimuli for one trial.

The experiment tested the same five viewing conditions as in Experiment 1 (see Table 4.1) and subjects wore the stereo glasses and head tracking equipment throughout the experiment.
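The recurrence above for generating the stimulus trees translates directly into code. The following is a minimal Python sketch (the function and variable names are ours, not from the experiment software); it returns a tree as a list of straight line segments, one per branch.

```python
import random

def grow(parent, v_spacing, h_spacing, levels):
    """Recursively place three children per node, following the recurrence:
    each child's spacing is 0.7 times its parent's, and Rand() is uniform
    on [-1, +1]. Returns a list of (parent_xyz, child_xyz) segments."""
    if levels == 0:
        return []
    v = 0.7 * v_spacing  # VerticalSpacing_child
    h = 0.7 * h_spacing  # HorizontalSpacing_child
    segments = []
    px, py, pz = parent
    for _ in range(3):
        child = (px + h * random.uniform(-1.0, 1.0),
                 py + v * (1.0 + 0.25 * random.uniform(-1.0, 1.0)),
                 pz + h * random.uniform(-1.0, 1.0))
        segments.append((parent, child))
        segments.extend(grow(child, v, h, levels - 1))
    return segments

def make_tree():
    """One stimulus tree: an 8.0 cm trunk from the base to the root,
    then three levels of ternary branching, giving 27 leaves."""
    base, root = (0.0, 0.0, 0.0), (0.0, 8.0, 0.0)
    return [(base, root)] + grow(root, 8.0, 8.0, 3)
```

Since every branch contributes one segment, a full tree has 1 + 3 + 9 + 27 = 40 segments, and because the vertical term is always positive, every child sits above its parent.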
Ten undergraduate and graduate students (nine male and one female), most of whom had experience with computer graphics workstations, served as subjects for the experiment. They were instructed that their error rates and response times were being recorded and that they should be most concerned with making as few errors as possible.

Figure 4.5: An example of the tree display used in Experiments 2 and 3. The colours and background have been modified for black and white reproduction.

4.4.2 Design

A new pair of trees was randomly generated for each trial. The viewing condition was held constant for each group of 22 random trials. The first two trials of each group were designated as practice trials to familiarize the subject with the condition. A trial block consisted of all 5 groups given in a random order, and the entire experiment consisted of 3 such blocks, resulting in a total of 60 trials in each of the 5 experimental conditions. A practice group of 10 trials (two in each condition) was given at the start of the experiment. The stereo test scene (see Figure 4.3) was presented to each subject prior to the experiment to verify the subject's ability to use stereopsis to perceive depth.

4.4.3 Results

The results from Experiment 2 are summarized in Table 4.4. The timing data show that the head-coupled stereo condition was the fastest, but that head-coupling without stereo was slow. There are significant differences at the 0.05 level between condition HCM and condition HCS and between condition HCB and condition HCS, by the Wilcoxon Matched Pairs Signed Ranks Test.
The only other difference that is significant is between conditions HCB and PIC.

Viewing Condition        Time (sec)   % Errors
PIC  Picture                7.50        21.8
STE  Stereo only            8.09        14.7
HCM  HC monocular           8.66         3.7
HCB  HC binocular           9.12         2.7
HCS  HC & stereo            6.83         1.3

Table 4.4: Experiment 2 timing and error results

The error data in Table 4.4 provide more significant results, with errors ranging from 21.8% in the static non-stereo condition without head-coupling to 1.3% for the head-coupled stereo condition. All of the differences are significant in pairwise comparisons except for the difference between conditions HCM and HCB, the two head-coupled conditions without stereo.

4.5 Experiment 3: Effects of Lag and Frame Rate

In a head-coupled display system the delay in the display update arises from two primary sources. The first is the delay in receiving and processing physical measurements from the tracker to produce eye position and orientation data. The processing delay is typically due to communication delay and smoothing algorithms, implemented either within the tracker hardware itself, or on the host computer. The second lag is the delay between receiving the eye positions and updating the display, that is, the time required to compute and render the scene using a perspective projection that takes into account the latest tracker measurements (or two perspective projections when displaying in stereo). This second lag is hence directly related to the frame rate. There is usually a third lag component present due to variations in system load. This component is more difficult to predict and measure, and for our purposes we effectively eliminated it as a factor by restricting network access to the workstation during the experiment.

Experiment 3 was designed to investigate the effects of lag and reduced frame rates on performance of a 3D task under head-coupled stereo viewing.
In particular we wanted to determine how response times were affected by increasing lag and to compare the relative importance of lag and frame rate.

4.5.1 Procedure

The 3D tree tracing task was used again for this experiment. All of the experimental trials were conducted under the full head-coupled stereo viewing condition (condition HCS). Subjects were informed that the accuracy of their responses and their response times would be recorded. They were instructed to perform the task as quickly as they could without seriously sacrificing the accuracy of their responses. Note that this is different from the instructions given to subjects in Experiment 2, where error rate was considered most important. The reason for this change of focus is due primarily to the low level of difficulty of our task and the fact that the trials were always performed under head-coupled stereo viewing. We reasoned that measuring response times would be most relevant when dealing with the addition of temporal artifacts and that error rates would not vary significantly, as presumably a large degree of depth perception can still be obtained through stereopsis and motion (even in high lag conditions where the motion is not coupled accurately with head movements).

The subjects were ten male graduate students, all of whom had some prior experience using graphics workstations.

4.5.2 Design

The two variables in the experiment were frame rate and simulated tracker lag. Frame rates of 30 Hz, 15 Hz, and 10 Hz were used, and tracker lags of 0, 1, 2, 3, or 4 frame times were simulated. Hence there were 3 x 5 = 15 conditions in total. Table 4.5 shows the total lag times resulting from these values. Total lag is defined as

TotalLag = TrackerLag + 1.5 x FrameInterval.

The program was synchronized with the internal system clock to run at a maximum frame rate of 30 Hz. The frame rates of 15 Hz and 10 Hz were generated by redrawing frames once or twice, respectively.
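The total-lag relation can be checked with a few lines of arithmetic. This hypothetical helper (not part of the experiment software) reproduces the per-condition values from the frame rate and the number of buffered frame times, agreeing with Table 4.5 up to rounding.

```python
def total_lag_ms(frame_rate_hz, buffered_frames):
    """TotalLag = TrackerLag + 1.5 * FrameInterval, with the simulated
    tracker lag equal to a whole number of frame times."""
    frame_ms = 1000.0 / frame_rate_hz
    tracker_lag_ms = buffered_frames * frame_ms
    return tracker_lag_ms + 1.5 * frame_ms

# The five 30 Hz conditions, in msec.
lags_30hz = [round(total_lag_ms(30, n), 1) for n in range(5)]
# -> [50.0, 83.3, 116.7, 150.0, 183.3]
```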
Tracker lags were simulated by buffering tracker records for a number of frame times. The actual frame rates and lags that were achieved were measured during the experiment to verify the accuracy of the software. The measured times were found to be within 3 milliseconds of the predicted values in all cases.

Subjects were presented with 15 blocks of 22 trials, with lag and frame rate kept constant within blocks. The first two trials in each block were designated as practice trials to enable the user to become familiar with the block's lag and frame rate. The blocks were presented in random order, and an additional block of 22 practice trials with moderate lag and frame rate (15 Hz and 233.3 msec lag) was given at the start of the experiment. The stereo test scene (shown in Figure 4.3) was presented to each subject prior to the experiment to verify the subject's ability to use stereopsis to perceive depth.

FR (Hz)   Frame time   Base Lag   # frames   Tracker Lag   Total Lag
30          33.3         50.0        0            0.0          50.0
30          33.3         50.0        1           33.3          83.3
30          33.3         50.0        2           66.6         116.6
30          33.3         50.0        3          100.0         150.0
30          33.3         50.0        4          133.3         183.3
15          66.6        100.0        0            0.0         100.0
15          66.6        100.0        1           66.6         166.6
15          66.6        100.0        2          133.3         233.3
15          66.6        100.0        3          200.0         300.0
15          66.6        100.0        4          233.3         333.3
10         100.0        150.0        0            0.0         150.0
10         100.0        150.0        1          100.0         250.0
10         100.0        150.0        2          200.0         350.0
10         100.0        150.0        3          300.0         450.0
10         100.0        150.0        4          400.0         550.0

Table 4.5: Experiment 3 conditions (all times are in msec)

4.5.3 Results

Figure 4.6 shows a plot of average response times over all trials and subjects for each of the 15 experimental conditions. The horizontal axis measures the total lag time, and the points are marked according to the different frame rates. Response times ranged from 3.14 to 4.16 seconds.
On average, subjects responded incorrectly in 3.4% of the trials. The distribution of errors across conditions showed no distinguishable pattern; there was no significant correlation between errors and total lag (F(1,13) = 2.91, hypothesis is not rejected at p = 0.10). A plot of error rates is shown in Figure 4.7.

Figure 4.6: Plot of response time versus total lag for Experiment 3. Each point corresponds to an experimental condition with a particular lag and frame rate (see Table 4.5). The line is the best fit to the linear regression model involving total lag only.

Figure 4.7: Plot of error rates versus total lag for Experiment 3. Each point corresponds to an experimental condition with a particular lag and frame rate (see Table 4.5).

A regression analysis was performed to compare the effect of total lag and frame rate, and in particular to determine whether total lag, frame rate, or a combination of both would best account for the data. Three models were tested using linear regression on the 15 averaged points. The models were

Model 1: log time = c1 + c2 x TotalLag
Model 2: log time = c1 + c2 x FrameInterval
Model 3: log time = c1 + c2 x TotalLag + c3 x FrameInterval

c1, c2, and c3 are constants. In Models 2 and 3, frame interval was used instead of frame rate (frame interval = 1/frame rate) since both lag and frame interval measure time (whereas frame rate has dimensions of 1/time). Model 3 is an additive model which takes both total lag and frame interval into account. The regression line for Model 1 is plotted along with the timing data in Figure 4.6.

Linear regression was performed and the regression constants were found to be the following.
Model 1: log time = 0.51 + 0.20 x TotalLag
Model 2: log time = 0.49 + 0.98 x FrameInterval
Model 3: log time = 0.49 + 0.13 x TotalLag + 0.52 x FrameInterval

The effectiveness of the regression fit to the data can be measured by the coefficient of determination r^2, which measures the fraction of the variance which is accounted for by the regression model. The r^2 values for each of the three models are listed below. The F-test statistics that are given are for the test of significance of the regression (specifically, the test that the correlation coefficient differs significantly from zero). All three regression tests showed significant correlation.

Model 1: r^2 = 0.50, F(1,13) = 13.0, p < 0.005
Model 2: r^2 = 0.45, F(1,13) = 10.6, p < 0.01
Model 3: r^2 = 0.57, F(2,12) = 7.95, p < 0.01

Model 3 involves multiple linear regression with two variables. A test of significance was performed to determine the strength of this model over Models 1 and 2, which each involve only one variable. Model 3 shows no significant improvement over Model 1 (F(1,12) = 1.95, hypothesis is not rejected at p = 0.10), whereas Model 3 does show a moderately significant improvement over Model 2 (F(1,12) = 3.35, p < 0.10). Thus the model which incorporates both total lag and frame interval does not perform significantly better than the model with total lag alone, although it is probably better than the model with frame interval alone.

The three models can be rewritten in terms of tracker lag and frame interval instead of total lag and frame interval, using the relation that

TotalLag = TrackerLag + 1.5 x FrameInterval.
The result is the following.

Model 1: log time = 0.51 + 0.20 x TrackerLag + 0.30 x FrameInterval
Model 2: log time = 0.49 + 0.98 x FrameInterval
Model 3: log time = 0.49 + 0.13 x TrackerLag + 0.72 x FrameInterval

Chapter 5

Discussion

The results from the three experiments suggest a number of interesting conclusions with respect to the relative merits of head-coupling and stereopsis, and the effects of temporal artifacts on 3D task performance.

5.1 Subjective Evaluation of Head-Coupled Stereo Display

Both the comparison results and the positive response of the subjects to the concept of head-coupled display provide evidence for the value of the technique and suggest that applications that use computer graphics to display 3D data could benefit from its use. An unexpected result from the comparison trials is that on average subjects preferred head-coupled non-stereo display over head-coupled stereo display. This is likely due to the ghosting present when displaying stereo images. The monitor we used, which is typical of common monitors used for computer graphics, is not optimized for stereo display: the phosphor decay times are longer than is desirable and hence there is some cross-talk between the left and right eye images. When objects are displayed without stereo, the image tends to appear sharper than stereo images because there is no ghosting. Subjects might have preferred the sharpness of the non-stereo images to the ghosted stereo images, despite the added advantage of stereopsis.

Subjects also mentioned the discomfort of the head tracker, and this may have affected subjects' responses to the questions following the comparison trials. Two of the subjects said that they thought stereo was more important than head-coupling, yet the same subjects preferred head-coupling in the comparison task.
Aside from the tracker discomfort, another possible reason for this apparent bias towards stereo is the fact that stereoscopic 3D is a technique which is already well known to most people, either from similar use with graphics workstations or through 3D movies. In comparison, the head-coupling technique is much less well known. The awkwardness of head tracking is likely to become less of a problem with advances in tracking, and in fact many other currently available trackers are less intrusive than the mechanical tracker we used [41].

5.2 3D Task Performance

Experiment 2 provides objective evidence of the value of head-coupled stereo display. Error rates in the tree tracing task using head-coupled stereo were significantly lower than the error rates obtained under any of the other viewing conditions. The results also show that head-coupling alone is significantly better for this type of task than stereo viewing alone. Hence the results suggest that, if possible, head-coupling and stereo should both be implemented, but if only one of the two techniques must be chosen, then head-coupling should be given preference for this type of task.

Another factor to consider when choosing between head-coupling and stereo is the relative computational expense of the two techniques. To implement head-coupling in an interactive graphics application, all that is required is that the program change the perspective projection with each frame update, and hence the frame rate of the program is not reduced appreciably. Stereo requires that two images be generated and drawn for each frame and thus halves the frame rate of the program. Many techniques for creating stereo images, including using shutter glasses and field sequential display, also have the effect of reducing the vertical resolution of the frame buffer by half. As display hardware
becomes faster, this same factor of two will remain, but the time required to adjust the perspective projection will almost vanish for a given investment in hardware cost.

The fact that motion parallax (through head-coupling) outperforms binocular parallax (stereo) is not surprising and is supported by theories of visual perception promoted by Gibson and others [28]. The results are also similar to those obtained by Sollenberger and Milgram in their comparison of stereo and rotational depth cues [53]. Overall, the error rates obtained in the tree tracing task are lower than those obtained by Sollenberger and Milgram, but the pattern is very similar despite the differences in the stimulus trees, the viewing conditions and the experimental protocols. Both studies found motion to be more important than stereo, even though our motion was due to head-coupling rather than simple rotations of the object, as was the case in the study by Sollenberger and Milgram. Both studies found combined motion and stereo to be more effective than either in isolation. However, our data does not provide very much information about the extent of the benefit of combined stereo and head-coupling. Because the error rates from Experiment 2 were too close to zero in the head-coupled and head-coupled stereo conditions, subjects' performance could not increase much further.

It can be argued that the improvements seen with head-coupling in the tree tracing task are not due to the head-coupling as such, but rather to motion-induced depth perception [62]. Our current evidence does not counter this objection.
However, it is likely that the image motion produced by dynamic head-coupled perspective is less distracting than techniques such as rocking the scene back and forth about a vertical axis, which is commonly done in commercial molecular modelling and volume visualization packages. A more complete study would compare the benefits of dynamic head-coupled perspective with the benefits of motion, rotational or otherwise, under stereo and non-stereo conditions. One would expect the performance of stereo viewing to improve with the introduction of motion, and that the difference between head-coupled with motion and stereo with motion would not be as pronounced as it was in our study.

Our timing data from Experiments 2 and 3 show an apparent inconsistency. Because the head-coupled stereo condition of Experiment 2 is comparable to the best lag and frame rate condition of Experiment 3, one would expect the response times to be comparable. However, the best response times from Experiment 3 are approximately half of the best response times from Experiment 2. This discrepancy is likely due to the fact that subjects were given slightly different instructions for Experiment 3, because response time was more important than error rate. In Experiment 2, subjects tended to be more careful to minimize errors at the expense of response time. This is supported by the fact that in Experiment 2 with head-coupled stereo the average error rate was 1.3%; in Experiment 3 this grew to 3.4%.

Another problem with Experiment 2 concerns the selection of stimulus trees. For each trial, the program generated a new random tree and thus the set of trees used under one condition would be different from the set used for another, although on average they should be roughly comparable in difficulty. There are two primary reasons why a more careful selection procedure was not used.
When conducting the first two experiments we were having difficulties with communications with the tracker: in a small percentage of the trials, the records would become corrupted and the display would be unpredictable. When this occurred, the tracker was reset and the trial was restarted, with new randomly generated trees. Hence some of the trees would be thrown out and we could not rely on a precomputed set of trees. The problem with the tracker was solved before Experiment 3 was conducted and so the difficulties did not occur there. The second reason for using randomly selected trees was the large amount of time that would have to be spent to select "good" trees, and also the difficulty of deciding what in fact are good trees, without introducing any bias towards trees that were "better" in some conditions compared to the others. In our randomly selected trees, occasionally the solution would be almost immediately recognizable, with very few overlapping branches, but on the other hand, some scenes would be very difficult, with a very dense overlapping of branches. This is not to say that a sophisticated method for selecting trees would be impossible, but rather that we decided that the random selection method was most appropriate for the scope of this study.

5.3 Effects of Lag and Frame Rate

Experiment 3 provides information about the importance of lag and frame rate on 3D task performance.
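For readers who want to reproduce this kind of analysis, the three models are ordinary least-squares fits. The sketch below regenerates the 15 condition values from the lag relation and fits Models 1 and 3 to synthetic log response times, manufactured from Model 1's published constants plus noise, since the raw per-condition times are not reproduced here; it also assumes base-10 logarithms with time and lag in seconds, which appears consistent with the fitted constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# The 15 conditions, in seconds: 3 frame rates x 5 buffered-frame lags.
frame_interval = np.repeat([1 / 30, 1 / 15, 1 / 10], 5)
tracker_lag = np.tile(np.arange(5.0), 3) * frame_interval
total_lag = tracker_lag + 1.5 * frame_interval

# Synthetic data generated from Model 1's constants (log10 time).
log_time = 0.51 + 0.20 * total_lag + rng.normal(0.0, 0.01, 15)

# Model 1: log time = c1 + c2 * TotalLag
X1 = np.column_stack([np.ones(15), total_lag])
c1, c2 = np.linalg.lstsq(X1, log_time, rcond=None)[0]

# Model 3: log time = c1 + c2 * TotalLag + c3 * FrameInterval
X3 = np.column_stack([np.ones(15), total_lag, frame_interval])
c31, c32, c33 = np.linalg.lstsq(X3, log_time, rcond=None)[0]
```

With the noise level used here, the Model 1 fit recovers constants close to the published 0.51 and 0.20; the same machinery, applied to the real averaged response times, yields the constants reported in Chapter 4.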
Of the three regression models, the model that accounts for the most variance is Model 3, which takes both lag and frame rate into account, as expected. The fact that this accounts for only 57% of the variance is likely due to the random nature of the stimulus trees leading to a wide variance in response times within conditions.

The comparison of the relative importance of the two variables of lag and frame rate showed no significant difference between the strength of the model which took only lag into account (Model 1) and the model which took both lag and frame rate into account (Model 3), whereas there was a moderately significant difference in the relative strength of Model 2, which involved frame rate only, and Model 3. This suggests that lag itself accounts reasonably well for the performance degradations observed, and that lag is probably a more important temporal artifact than frame rate in its effect on the performance of similar tasks.

Given the data describing performance fall-off as lag increases, it is useful to obtain some measure of what level of lag becomes prohibitive. Specifically we would like to know the lag value that makes performance under head-coupled stereo viewing worse than the performance would be without head-coupling or stereo. We can compare the results from Experiments 2 and 3 to obtain an approximate cut-off value by finding a point on the regression line in Figure 4.6 where response time is the same as for the static viewing condition.

The analysis becomes complicated because our range of response times for Experiment 3 was lower than that for Experiment 2, due to the differing instructions given to subjects, as was discussed in the previous section. In Experiment 2 we found a best case response time of 6.83 seconds, whereas in Experiment 3 under the same conditions the response time was 3.25 seconds, which is a factor of 2.10 less. The Experiment 2 average response time for static viewing was 7.50 seconds.
If we scale this by the same factor of 2.10, we find that it corresponds to a response time of 3.58 seconds under the conditions of Experiment 3. From the plot of the first regression model (Figure 4.6), this corresponds to a total lag of 210 milliseconds. This suggests that for tasks similar to the tree tracing task, lag above 210 milliseconds will result in worse performance, in terms of response time, than static viewing. Note that due to the large variance in our data it is difficult to say how accurate or significant this lag cutoff value is, and how relevant it is for other tasks.

The error rates for Experiment 3 remained low under all conditions, averaging 3.4%, in contrast to Experiment 2, where the number of errors rose significantly in the non-head-coupled conditions. This suggests that even in the presence of large lags and low frame rates, head-coupling provides some performance improvement. This is not surprising, however, because the effects are likely due to motion-induced depth; while we do not have the data to verify this, we suspect that performance is similar to what it would be if the scene were moving independent of the user, even without head-coupled viewing.

Systems that use predictive methods such as Kalman filtering must make a compromise between the size of the prediction interval and noise artifacts that become worse as this interval increases. Introducing prediction into a system will effectively flatten out the low-lag portion of the curve in Figure 4.6 and hence there will be a cut-off point beyond which the lag caused by filtering artifacts becomes unacceptable.

The level of lag in commercial trackers can be expected to improve in the future through hardware improvements and improved prediction techniques. However, our results and estimates given by other researchers suggest that lags as low as 100 milliseconds can be disruptive.
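The cut-off estimate derived above can be reproduced numerically. The snippet below assumes, as the fitted constants suggest, that Model 1 uses base-10 logarithms with time and lag in seconds; inverting the model then lands close to the 210 msec figure read off the regression line.

```python
import math

scale = 6.83 / 3.25       # Experiment 2 vs 3 speed ratio, roughly 2.10
t_static = 7.50 / scale   # Experiment 2 static time rescaled, roughly 3.57 s

# Invert Model 1: log10(time) = 0.51 + 0.20 * TotalLag  (lag in seconds)
lag_cutoff = (math.log10(t_static) - 0.51) / 0.20
# lag_cutoff is roughly 0.21 s, i.e. about 210 msec
```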
While the quality of commercial trackers can be expected to improve in the future with respect to lag, the current technology is such that many trackers introduce on the order of 75 ms or more. Even a very fast system by today's standards, with a 20 Hz frame rate and a tracker with 25 ms lag, will generate a total lag of at least 100 ms. Studies of lag are even more relevant for applications such as telerobotics, where long communication lines can create delays of several seconds.

Lag tends not only to have an effect on task performance but can also contribute to motion sickness. This is a problem that is very important for the practical use of virtual reality, in particular for immersive systems where the conflict between the senses is more apparent.

5.4 Applications

The tree tracing task used in our experiments maps directly to several current applications of interactive 3D graphics. The first, and in fact the application out of which the task arose, is medical visualization of brain scan data, where doctors may wish to trace paths through complex networks of blood vessels [53]. The tree structure is also similar to software visualization techniques that display modules with nodes connecting them to represent object dependencies [48]. Our results are also likely to be applicable to many other 3D tasks that can benefit from an extra sense of depth.

The head-coupling technique is not limited exclusively to systems that use computer graphics. As reported by various researchers [11][47], there is potential for teleoperation using cameras that are isomorphically coupled to the user's head.
In fact, teleoperation is one application where techniques such as rotating the scene may be difficult or even impossible to implement if dealing with a large working scene (in the real world), whereas head-coupling is relatively easy since it only involves moving cameras.

Chapter 6

Conclusions

This thesis presents a discussion of the technique of head-coupled stereo display, an examination of the issues involved in implementing it correctly, and experimental studies to investigate its effectiveness.

Head-coupled stereo display, or "fish tank virtual reality", is similar to immersive virtual reality systems in that head tracking and stereoscopy are employed to provide an added sense of three-dimensionality. A number of reasons, including the state of current head-mounted display technology and the impracticality of immersion in some situations, make monitor-based head-coupled display a more practical choice for many applications. While the technique can be implemented with commercially available hardware, a number of issues, including accurate calibration of the system and the minimization of temporal inaccuracies, must be addressed to implement it properly. The effect of lag is particularly relevant to both immersive and non-immersive virtual reality systems.

Three experiments were conducted to evaluate the effectiveness of head-coupled stereo display and to investigate the effects of temporal artifacts. The results of the first two experiments showed strong evidence for the value of the technique, both through subjective user preference tests and objective measures of 3D task performance. For the 3D tree tracing task we tested, the results suggest that the head-coupling technique alone is more beneficial than is stereo alone. Combined head-coupling and stereo provided the most improvement.

The third experiment provides an indication of the seriousness of the effect of lag on user performance.
Subjects' response times increased dramatically as lag increased, and compared with the effect of low frame rates, it appears that lag alone accounts reasonably well for the degradation. This would suggest that designers of virtual reality systems should make it a priority to employ very low-lag tracking devices and effective prediction methods. The advantage of faster tracking is likely more significant than the advantage of having a very fast graphics display.

6.1 Future Work

The studies presented here represent initial attempts at characterizing user performance using head-coupled stereo displays and hence are necessarily limited in scope. There are a number of issues regarding 3D performance using the technique that require further study.

6.1.1 Experimental Studies

The first two experiments compared the relative benefits of head-coupling and stereo. In most cases (in particular with the stimuli for Experiment 1) we endeavoured to provide as many depth cues from techniques other than head-coupling and stereo as possible, including specular highlights, shadows and a vection background. We neglected the depth cues possible from motion-induced depth, however, and the scenes we displayed were all static in space. This is somewhat unrealistic: typically an application that can display scenes at a high frame rate (high enough that head-coupling can be employed) will take advantage of motion and allow the scene to be moved by the user. A more complete analysis of the benefits of head-coupling would compare performance with motion and without.
It is likely that with a moving scene, the performance difference between head-coupling alone and stereo alone would not be as large as the difference seen in our study. The relative effects of motion will of course depend on how the motion is controlled, whether through automatic or manual control by the user, and also on what type of device is used to control the motion.

Another important area for future studies is to investigate the performance of other tasks under head-coupled stereo viewing, and how the effects vary with the size of the display (and whether the display is immersive).

Our third experiment gives some initial indication of the effects of lag and frame rate on user performance. However, the design of the experiment, and the selection of the experimental conditions, did not permit us to obtain a clear comparison between the effects of frame rate and lag. There is a need for closer investigation of the cut-off values where lag becomes prohibitive; this could be accomplished better by choosing a finer set of lag conditions to focus in on particular regions.

6.1.2 Extensions to Head-Coupled Stereo Display

An important problem that has not been fully addressed in our implementation is proper calibration of the system. Our system has been calibrated approximately and the display parameters are not adjusted individually for different users. There is a need for techniques to calibrate efficiently and accurately for distortions both from the tracker and from the screen, and to interactively calibrate for different users, adjusting for different eye spacings and eye positions relative to the head tracker. There would also be value in studies that evaluate just how accurate the display needs to be, so that an appropriate balance could be struck between the accuracy and the expense of different calibration methods.

There are many interesting possibilities for extending head-coupled stereo displays beyond the implementation described here.
With larger screens it is possible to obtain the effects of head-coupling and stereo and also provide a sense of immersion [12][16]. Large or multiple-screen displays have the advantages of immersion yet do not suffer from the problems of head-mounted displays, such as wide-angle distortions and physical discomfort.

One drawback to head-coupled display in comparison to traditional display is that the display can only be viewed effectively by one user at a time. The stereo images and perspective distortions become distracting to other people looking on from the side. A possible solution to this is to use the same technology as field-sequential stereo to effectively multiplex the display between different users (instead of different eyes), and track multiple head positions. While the cost may be too high to make this technique practical in all but very high-end applications, it may be feasible in limited situations where only two or three users are presented with non-stereo head-coupled display.

Another interesting technique, suggested by McKenna's work [40], is to use small movable displays that are tracked. A small LCD display in an augmented reality-type application might be preferable to see-through head-mounted displays, as the display could be shared between users and wouldn't have to be worn.

By definition, head-coupled display implies using head position to adjust the perspective projection used in displaying 3D scenes. Given that the application knows where the user's head is, it can make further use of this information. An interesting interaction technique which we have experimented with is head-coupled scene rotation. Objects are made to rotate in a manner analogous to 2D virtual trackball techniques, but with the rotation defined by head position rather than mouse position.
As the user moves his head, the scene can be made to rotate in an opposite direction to give him a more complete view of it.

Appendix A

Experiment Results

The following 6 tables list the answers given by the seven subjects in Experiment 1 in response to questions administered after the experiment.

Is head-coupling as important, more important or less important than stereo?

1  head coupling is more important
2  head coupling is more important
3  head coupling is less important than stereo
4  head coupling is more important
5  head coupling is more important
6  head coupling is more important
7  Less important

Table A.1: Experiment 1 subject comments from Question 1.

Is the combination of head-coupling and stereo better than either alone?

1  Yes
2  Head coupling and head coupling with stereo seem roughly the same
3  Yes
4  No, I prefer head coupling alone
5  Yes
6  It is a close call between head coupling alone and head coupling with stereo.
7  Yes definitely

Table A.2: Experiment 1 subject comments from Question 2.

Is head-coupling alone worthwhile? (If you had the option would you use it?)

1  Yes
2  Yes
3  Yes
4  Yes
5  Yes
6  Only problem is discomfort. For some visualization tasks head coupling would be worthwhile.
7  Yes

Table A.3: Experiment 1 subject comments from Question 3.

Is stereo alone worthwhile? (If you had the option would you use it?)

1  Yes
2  No
3  Yes
4  No
5  Yes
6  Yes. But only sparingly. The glasses are less of a problem than the head mount.
7  Yes

Table A.4: Experiment 1 subject comments from Question 4.

Is head-coupling with stereo worthwhile? (If you had the option would you use it?)

1  Yes
2  Yes
3  Yes
4  No — stereo is too much of a hassle, it dims the view and does not add much
5  Yes
6  Given head linking I would not bother with stereo. This is mainly because of the problem wearing both the glasses and the head mount.
7  Yes

Table A.5: Experiment 1 subject comments from Question 5.
Do you have other comments on these methods of displaying 3D data?

1
2  Motion is important for 3D
3  Hard to tell difference between stereo and head coupling
4  In general stereo made the images less crisp. When choosing between a crisp non-moving image and a fuzzy stereo image which was moving, the fuzzy stereo image was chosen.
5  Background gives a good feeling of space as did the shading
6  Found head coupling very effective. Very positive first impression. One eye was sometimes better than both. Ghosting is worse on the sphere scene.
7  did not notice a difference between one eye and two eye conditions

Table A.6: Experiment 1 subject comments from Question 6.

                Subjects
Condition        1      2      3      4      5      6      7      8      9     10
Picture         20     14     15     12     12     10     15      9      7     17
Stereo           7      9      5     11      6     13     11     13      3     10
HC Monocular     8      0      2      4      1      1      0      1      3      2
HC Binocular     6      3      1      2      0      3      0      0      2      2
HC Stereo        3      0      0      1      0      0      1      0      1      2

Table A.7: Experiment 2 errors by subject.

                Subjects
Condition        1      2      3      4      5      6      7      8      9     10
Picture        8.02   6.02   6.02   4.88   7.28   5.70   8.77   7.39  10.68   8.23
Stereo        12.32   5.56   5.22   7.04   6.49   4.83   7.07   8.87  10.47   7.03
HC Monocular  11.74   8.63   7.11   6.22   6.68   5.81   9.75   9.21   9.12  11.18
HC Binocular  14.52  10.21   7.51   6.00   8.62   5.95  10.19   7.20   8.08  11.82
HC Stereo     10.96   6.97   4.48   5.51   5.60   4.34   6.15   8.55   7.70   7.92

Table A.8: Experiment 2 response times by subject (for correct responses only).
             Subjects
FR   Lag      1     2     3     4     5     6     7     8     9    10
30    0     3.30  2.57  3.15  3.22  2.42  2.68  2.55  4.17  4.33  4.58
30    1     3.12  2.27  2.59  3.44  2.71  2.64  2.99  4.28  4.20  6.06
30    2     3.99  2.65  3.51  2.90  2.56  2.75  3.26  5.10  5.20  5.13
30    3     3.52  2.19  2.64  2.88  2.59  2.61  3.11  4.80  4.50  5.17
30    4     2.82  2.51  4.15  4.51  2.72  2.46  3.11  5.21  5.67  4.76
15    0     3.93  3.09  3.37  3.43  2.68  2.99  2.83  5.13  3.71  5.65
15    1     2.48  2.45  3.05  3.29  2.62  2.71  3.11  5.07  4.53  5.71
15    2     4.81  2.41  3.55  8.64  2.24  2.88  3.81  6.40  4.57  5.15
15    3     2.37  2.35  3.38  3.64  2.20  2.72  3.87  3.08  5.86  4.23
15    4     4.81  2.42  3.58  4.19  2.55  2.71  3.34  4.59  5.54  5.39
10    0     4.19  2.62  3.25  4.04  3.47  2.63  3.64  5.06  4.26  4.98
10    1     3.36  3.22  3.00  3.36  2.00  3.53  3.96  4.62  5.03  6.89
10    2     4.13  2.41  3.23  5.09  2.72  2.94  3.31  4.58  5.60  5.68
10    3     7.60  3.39  3.12  4.21  2.50  3.30  3.77  6.76  4.23  5.29
10    4     5.10  3.44  4.44  3.53  2.30  3.06  3.47  5.51  5.39  7.28

Table A.9: Experiment 3 response times by subject (for correct responses only).

Appendix B

Head-Coupled Stereo Display Software

The head-coupled stereo display experiment and demonstration software was implemented using the Silicon Graphics GL library. The software has been run on SGI workstations as well as IBM RS/6000 workstations. The C function draw_hc_stereo_scene() displays a scene in head-coupled stereo. It assumes that there is a function get_tracker_eyepos() that returns the positions of the user's eyes obtained through a head tracker, and that there is a function draw_scene() that draws the scene centered at the origin. The function draw_view() is called to draw a single head-coupled view for each eye, using the GL window() function.

#include <gl.h>
#include <stdio.h>

#define Lx 0
#define Ly 0
#define Hx 1280
#define Hy 1024

#define YMAXSTEREO 491
#define YOFFSET 532

extern void get_tracker_eyepos(float L_eye[3], float R_eye[3]);
extern void draw_scene(void);
void draw_view(float eye[3]);

/* Draw a scene using head-coupled stereo display.
   This assumes the monitor is already in stereo mode. */
void draw_hc_stereo_scene()
{
    float L_eye[3], R_eye[3];

    get_tracker_eyepos(L_eye, R_eye);
    /* draw right eye view (lower field) */
    viewport(0, XMAXSCREEN, 0, YMAXSTEREO);
    draw_view(R_eye);

    /* draw left eye view (upper field) */
    viewport(0, XMAXSCREEN, YOFFSET, YOFFSET + YMAXSTEREO);
    draw_view(L_eye);
}

/* Draw a view for a single eye position */
void draw_view(float eye[3])
{
    Coord left, right, bottom, top, near, far;
    static Matrix Identity =
        {{1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1}};

    /* set up the off-axis perspective projection, with the eye at
       the origin looking down the positive z-axis */
    left   = Lx - eye[0];
    right  = Hx - eye[0];
    bottom = Ly - eye[1];
    top    = Hy - eye[1];
    near   = eye[2];
    /* far clipping plane -- 10000 is arbitrary */
    far    = 10000.0 + eye[2];

    loadmatrix(Identity);
    window(left, right, bottom, top, near, far);

    /* draw the background */
    cpack(0x00000000);
    clear();
    zclear();

    /* move the clipping plane out of the screen */
    scale(4.0, 4.0, 4.0);

    /* move the view frustum according to eye position by doing the
       opposite translation; this moves the center of the viewport to
       the world origin */
    translate(-eye[0], -eye[1], -eye[2]);

    draw_scene();
}

Bibliography

[1] Adelstein, Bernard D., Eric R. Johnston, and Stephen R. Ellis. "A testbed for characterizing dynamic response of virtual environment spatial sensors". Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 15-22, 1992.

[2] Bajura, Michael, Henry Fuchs, and Ryutarou Ohbuchi. "Merging virtual objects with the real world: Seeing ultrasound imagery within the patient". Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 203-210, July 1992.

[3] Blanchard, Chuck, Scott Burgess, Young Harvill, Jaron Lanier, Ann Lasko, Mark Oberman, and Michael Teitel. "Reality Built For Two: A Virtual Reality Tool". Computer Graphics (1990 Symposium on Interactive 3D Graphics), Vol. 24, No. 2, pp. 35-36, March 1990.

[4] Booth, Kellogg S., M. Phillip Bryden, William B. Cowan, Michael F. Morgan, and Brian L. Plante.
"On the parameters of human visual performance: An investigation of the benefits of antialiasing". IEEE Computer Graphics and Applications, Vol. 7, No. 9, pp. 34-41, September 1987.

[5] Brooks, Frederick P., Jr. "Walkthrough — A dynamic graphics system for simulating virtual buildings". Proceedings of 1986 Workshop on Interactive 3D Graphics, pp. 9-21, October 1986.

[6] Bryson, S. and S. Fisher. "Defining, Modeling and Measuring System Lag in Virtual Environments". Proceedings of the 1990 SPIE Conference on Stereoscopic Displays and Applications, Vol. 1256, pp. 98-109, 1990.

[7] Bryson, S. "Measurement and Calibration of Static Distortion in Three-Dimensional Magnetic Trackers". Proceedings of the 1992 SPIE Conference on Stereoscopic Displays and Applications III, Vol. 1669, 1992.

[8] Carlbom, I. and J. Paciorek. "Planar geometric projections and viewing transformations". ACM Computing Surveys, Vol. 10, pp. 465-502, December 1978.

[9] Clifton, T.E. III and Fred L. Wefer. "Direct volume display devices". IEEE Computer Graphics and Applications, Vol. 13, No. 4, pp. 57-65, July 1993.

[10] Codella, Christopher, Reza Jalili, Lawrence Koved, J. Bryan Lewis, Daniel T. Ling, James S. Lipscomb, David A. Rabenhorst, Chu P. Wang, Alan Norton, Paula Sweeney, and Greg Turk. "Interactive simulation in a multi-person virtual world". Proceedings of CHI '92 Conference on Human Factors in Computing Systems, pp. 329-334, April 1992.

[11] Cole, Robert E., John O. Merritt, Richard Coleman, and Curtis Ikehara. "Teleoperator performance with virtual window display". Proceedings of the 1991 SPIE Conference on Stereoscopic Displays and Applications II, Vol. 1457, pp. 111-119, 1991.

[12] Cruz-Neira, Carolina, Daniel J. Sandin, Thomas A. DeFanti, Robert V. Kenyon, and John C. Hart. "The CAVE Audio Visual Experience Automatic Virtual Environment". Communications of the ACM, pp. 64-72, June 1992.

[13] Cutting, J.E. "On the efficacy of cinema, or what the visual system did not evolve to do".
Pictorial communication in virtual and real environments, pp. 486-495. Taylor and Francis, 1991.

[14] Dallas, W.J. "Computer-generated holograms". Topics in Applied Physics Volume 41, The Computer in Optical Research: Methods and Applications, pp. 291-366. Springer-Verlag, 1980.

[15] Deering, Michael. "High resolution virtual reality". Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 195-202, July 1992.

[16] Deering, Michael F. "Making virtual reality more real: Experience with the Virtual Portal". Proceedings of Graphics Interface '93, pp. 219-226, May 1993.

[17] Diamond, R., A. Wynn, K. Thomsen, and J. Turner. "Three-dimensional perception for one-eyed guys, or the use of dynamic parallax". Computational Crystallography, pp. 286-293, 1982.

[18] Ellis, Stephen R. "Nature and origins of virtual environments: A bibliographic essay". Computing Systems in Engineering, Vol. 2, No. 4, pp. 321-347, 1991.

[19] Ellis, Stephen R., M.K. Kaiser, and A.J. Grunwald, editors. Pictorial communication in virtual and real environments. Taylor and Francis, 1991.

[20] Feiner, Steven, Blair MacIntyre, and Doree Seligmann. "Annotating the real world with knowledge-based graphics on a see-through head-mounted display". Proceedings of Graphics Interface '92, pp. 78-85, May 1992.

[21] Fisher, Scott S. "Viewpoint dependent imaging: An interactive stereoscopic display". Processing and Display of Three-Dimensional Data, Proc. SPIE Int. Soc. Opt. Eng., Vol. 367, pp. 41-45, 1982.

[22] Fisher, S.S., M. McGreevy, J. Humphries, and W. Robinett. "Virtual environment display system". Proceedings of 1986 Workshop on Interactive 3D Graphics, pp. 77-87, October 1986.

[23] Foley, J.D. and A. van Dam. Fundamentals of Interactive Computer Graphics. Addison-Wesley Publishing Company, 1982.

[24] Foley, J.D., A. van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice.
Addison-Wesley Publishing Company, second edition, 1990.

[25] Friedmann, Martin, Thad Starner, and Alex Pentland. "Device synchronization using an optimal linear filter". Computer Graphics Special Issue (1992 Symposium on Interactive 3D Graphics), Vol. 26, pp. 57-62, March 1992.

[26] Fuchs, H., S.M. Pizer, L.C. Tsai, S.H. Bloomberg, and E.R. Heinz. "Adding a true 3-D display to a raster graphics system". IEEE Computer Graphics and Applications, Vol. 2, pp. 73-78, September 1982.

[27] Furness, T.A. "Harnessing virtual space". Proceedings of the 1988 SID International Symposium, pp. 4-7, 1988.

[28] Gibson, J.J. The ecological approach to visual perception. Houghton Mifflin, Boston, 1979.

[29] Grotch, S.L. "Three-dimensional and stereoscopic graphics for scientific data display and analysis". IEEE Computer Graphics and Applications, Vol. 3, No. 8, pp. 31-43, November 1983.

[30] Hodges, Larry F. "Tutorial: Time-multiplexed stereoscopic computer graphics". IEEE Computer Graphics and Applications, Vol. 12, No. 2, pp. 20-30, March 1992.

[31] Howard, I.P. and T. Heckman. "Circular vection as a function of the relative sizes, distances and positions of two competing visual displays". Perception, Vol. 18, No. 5, pp. 657-665, 1989.

[32] Howlett, Eric M. "Wide angle orthostereo". Proceedings of the 1990 SPIE Conference on Stereoscopic Displays and Applications, Vol. 1256, pp. 210-223, 1990.

[33] Jacob, Robert J.K. "What you look at is what you get: Eye movement-based interaction techniques". Proceedings of CHI '90 Conference on Human Factors in Computing Systems, pp. 11-18, April 1990.

[34] Kalman, R.E. and R.S. Bucy. "New results in linear filtering and prediction theory". Transactions of ASME (Journal of Basic Engineering), Vol. 83d, pp. 95-108, 1961.

[35] Krueger, M.W. Artificial Reality II. Addison-Wesley Publishing Company, 1991.

[36] Kubitz, W.J. and W.J. Poppelbaum. "Stereomatrix, an interactive three dimensional computer display".
Proceedings of the Society for Information Display, Vol. 14, No. 3, pp. 94-98, 1973.

[37] Liang, Jiandong, Chris Shaw, and Mark Green. "On temporal-spatial realism in the virtual reality environment". Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 19-25, 1991.

[38] MacDowall, I., M. Bolas, S. Pieper, S. Fisher, and J. Humphries. "Implementation and integration of a counterbalanced CRT-based stereoscopic display for interactive viewpoint control in virtual environment applications". Proceedings of the 1990 SPIE Conference on Stereoscopic Displays and Applications, Vol. 1256, 1990.

[39] MacKenzie, I. Scott and Colin Ware. "Lag as a determinant of human performance in interactive systems". Proceedings of INTERCHI '93 Conference on Human Factors in Computing Systems, pp. 488-493, April 1993.

[40] McKenna, Michael. "Interactive viewpoint control and three-dimensional operations". Computer Graphics Special Issue (1992 Symposium on Interactive 3D Graphics), Vol. 26, pp. 53-56, March 1992.

[41] Meyer, Kenneth and Hugh L. Applewhite. "A survey of position trackers". Presence, Vol. 1, No. 2, pp. 173-200, 1992.

[42] Muirhead, J.C. "Variable focal length mirrors". Review of Scientific Instruments, Vol. 32, pp. 210-211, 1961.

[43] Newman, William M. and Robert F. Sproull. Principles of Interactive Computer Graphics. McGraw-Hill, 1973.

[44] Newman, William M. and Robert F. Sproull. Principles of Interactive Computer Graphics. McGraw-Hill, second edition, 1979.

[45] Paley, W.B. "Head-tracking stereo display: Experiments and applications". Proceedings of the 1992 SPIE Conference on Stereoscopic Displays and Applications III, pp. 84-89, 1992.

[46] Pausch, Randy. "Virtual reality on five dollars a day". Proceedings of CHI '91 Conference on Human Factors in Computing Systems, pp. 265-270, April 1991.

[47] Pepper, R.L., R.E. Cole, and E.H. Spain.
"Influence of camera separation and head movement on perceptual performance under direct and TV-displayed conditions". Proceedings of the Society for Information Display, Vol. 24, No. 1, pp. 73-80, 1983.

[48] Robertson, G.G., J.D. Mackinlay, and S.K. Card. "Cone trees: animated 3D visualizations of hierarchical information". Proceedings of CHI '91 Conference on Human Factors in Computing Systems, pp. 189-194, April 1991.

[49] Robinett, Warren and Jannick P. Rolland. "A computational model for the stereoscopic optics of a head-mounted display". Presence, Vol. 1, No. 1, pp. 45-62, 1992.

[50] Rogers, D.F. and J.A. Adams. Mathematical Elements for Computer Graphics. McGraw-Hill, 1976.

[51] Sedgwick, H.A. "The effects of viewpoint on the virtual space of pictures". Pictorial communication in virtual and real environments, pp. 460-479. Taylor and Francis, 1991.

[52] Shepard, R.N. and J. Metzler. "Mental rotation of three-dimensional objects". Science, Vol. 171, pp. 701-703, 1971.

[53] Sollenberger, Randy L. and Paul Milgram. "A comparative study of rotational and stereoscopic computer graphic depth cues". Proceedings of the Human Factors Society 35th Annual Meeting, pp. 1452-1456, 1991.

[54] Suetens, P., D. Vandermeulen, A. Oosterlinck, J. Gybels, and G. Marchal. "A 3-D Display System with Stereoscopic, Movement Parallax and Real-time Rotation Capabilities". Proceedings of the SPIE, Medical Imaging II: Image Data Management and Display (Part B), Vol. 914, pp. 855-861, 1988.

[55] Sutherland, Ivan. "The ultimate display". Proceedings of IFIP Congress, pp. 506-508, 1965.

[56] Sutherland, Ivan. "A head-mounted three dimensional display". Fall Joint Computer Conference, AFIPS Conference Proceedings, Vol. 33, pp. 757-764, 1968.

[57] Teitel, Michael A. "The EyePhone, a head mounted stereo display". Proceedings of the 1990 SPIE Conference on Stereoscopic Displays and Applications, Vol. 1256, pp. 168-171, 1990.

[58] Tessman, Thant. "Perspectives on stereo".
Proceedings of the 1990 SPIE Conference on Stereoscopic Displays and Applications, Vol. 1256, pp. 22-27, 1990.

[59] Tharp, G., A. Liu, and L.W. Stark. "Timing considerations in helmet mounted display performance". Proceedings of the SPIE Conference on Human Vision, Visual Processing and Digital Display III, Vol. 1666, 1992.

[60] Tyler, William. "Induced stereo movement". Vision Research, Vol. 14, pp. 609-613, 1974.

[61] Venolia, D. and L. Williams. "Virtual integral holography". Proceedings of the SPIE, Extracting Meaning from Complex Data: Processing, Display, Interaction, Vol. 1259, pp. 99-105, 1990.

[62] Wallach, H. and D.H. O'Connell. "The kinetic depth effect". Journal of Experimental Psychology, Vol. 45, pp. 205-217, 1953.

[63] Wanger, Leonard. "The effect of shadow quality on the perception of spatial relationships in computer generated imagery". Computer Graphics Special Issue (1992 Symposium on Interactive 3D Graphics), Vol. 26, pp. 39-42, March 1992.

[64] Wanger, Leonard R., James A. Ferwerda, and Donald P. Greenberg. "Perceiving spatial relationships in computer-generated images". IEEE Computer Graphics and Applications, Vol. 12, No. 3, pp. 44-58, May 1992.

