UBC Theses and Dissertations
pCubee: Evaluation of a Tangible Outward-facing Geometric Display. Lam, Billy Shiu Fai, 2011.

pCubee: Evaluation of a Tangible Outward-facing Geometric Display

by

Billy Shiu Fai Lam

B.ASc., The University of British Columbia, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate Studies (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2011

© Billy Shiu Fai Lam 2011

Abstract

This thesis describes the evaluation of pCubee, a handheld outward-facing geometric display that supports high-quality visualization and tangible interaction with 3D content. Through reviewing existing literature on 3D display technologies, we identified and examined important areas that have yet to be fully understood for outward-facing geometric displays. We investigated the performance of a dynamic visual calibration technique to compensate for tracking errors, and we demonstrated four novel interaction schemes afforded by tangible outward-facing geometric displays, including static content visualization, dynamic interaction with reactive virtual objects, scene navigation through display movements, and bimanual interaction. Two experiments were conducted to evaluate the impact of display seams and pCubee's potential in spatial reasoning tasks, respectively. Two stimuli, a path-tracing visualization task and a 3D cube comparison task that was similar to a mental rotation task, were utilized in the experiments. In the first experiment, we discovered a significant effect on user performance in path-tracing that was dependent on the seam thickness. As seam size increased beyond a thickness threshold, subjects relied less on multiple screens and spent more time tracing paths. In the second experiment, we found that subjects had a significant preference for using the pCubee display compared to a desktop display setup when solving our cube comparison problem. Both time and accuracy using pCubee were as good as using a much larger, more familiar desktop display. 
This demonstrated the utility of outward-facing geometric displays for spatial reasoning tasks. Our analysis and evaluation identified promising potential but also current limitations of pCubee. The outcomes from our analysis can help to facilitate development and more systematic evaluations of similar displays in the future.

Preface

The pCubee display has been a collaborative research effort at the Human Communication Technologies Laboratory by Professor Sidney Fels, Dr. Ian Stavness, Master's student YiChen Tang, undergraduate student Ryan Barr and me. The work reported in this thesis has resulted in three publications.

1. Billy Lam, Ian Stavness, Ryan Barr, and Sidney Fels. 2009. Interacting with a personal cubic 3D display. In Proceedings of the 17th ACM International Conference on Multimedia (MM '09). ACM, New York, NY, USA, 959-960. Awarded Best Technical Demonstration.

2. Ian Stavness, Billy Lam, and Sidney Fels. 2010. pCubee: a perspective-corrected handheld cubic display. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA, 1381-1390.

3. Billy Lam, Yichen Tang, Ian Stavness and Sidney Fels. A 3D Cubic Puzzle in pCubee. In press. Symposium on 3D User Interfaces 2011, IEEE. Awarded 3DUI Contest second place.

Portions of publications 1 and 2 have been modified for Chapters 3 and 4 of this thesis. Publication 3 has been modified for Chapter 5 and Appendix C.

Professor Fels and Dr. Stavness contributed ideas on the development and evaluation of pCubee and provided editing and revisions for the publications. Dr. Stavness participated in writing the initial draft for publication 1. Mr. Tang assisted in conducting the evaluation of using pCubee to solve a 3D cubic puzzle as described in publication 3. Mr. 
Barr was responsible for developing portions of the pCubee software concerning the integration of the rendering and physics simulation engines that is presented in publication 1.

I made the following significant contributions to the research and writing for the three publications:

- ideas on the design and development of the pCubee display reported in publication 2;
- assembly of the pCubee display hardware and implementation of the pCubee software, including calibration and prototyping the applications that are reported in publications 1 and 2;
- design, implementation and execution of all of the experiments that are reported in publications 2 and 3;
- writing initial drafts for publications 2 and 3 and subsequent editing of all of the publications.

Professor Fels and Dr. Stavness actively supervised the research project and contributed ideas on the development and evaluation of the pCubee display. All content reported in this thesis was part of my research during my participation in the project, including the implementation of the calibration algorithm and prototyping of the interaction techniques, the design and execution of all pilot studies and formal user studies, and the analysis of all data that resulted from the experiments.

The pCubee research project was funded by the Networks of Centres of Excellence of Canada through GRAND, the Graphics, Animation and New Media NCE, and by the Natural Sciences and Engineering Research Council of Canada. All experiments reported in this thesis have been approved by the UBC Behavioural Research Ethics Board (Certificate Number H08-03005).

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Code Snippets
Acknowledgements
1 Introduction
  1.1 Contributions
  1.2 Thesis Structure
2 Related Work
  2.1 Classification of 3D Displays
    2.1.1 Volumetric Displays
    2.1.2 Geometric Displays
  2.2 Evaluation of 3D Displays
    2.2.1 Tracking Calibration
    2.2.2 Interaction Techniques
    2.2.3 Visual Discontinuity
    2.2.4 3D Task Performance
  2.3 Summary
3 Analysis on Design, Calibration and Interaction Techniques
  3.1 Display Hardware
  3.2 Software Components
    3.2.1 Rendering Software
    3.2.2 Physics Software
    3.2.3 Integration and Simulation
  3.3 System Specifications
  3.4 Tracking Calibration
    3.4.1 Dynamic Visual Calibration
    3.4.2 Calibration Results
  3.5 Interaction Techniques
    3.5.1 Static Visualization
    3.5.2 Dynamic Interaction
    3.5.3 Large Scene Navigation
    3.5.4 Bimanual Stylus Interaction
  3.6 Summary
4 Evaluation of Visual Discontinuity: Effect of Seams
  4.1 Path-tracing Tasks
  4.2 Apparatus
  4.3 User Study: Effect of Seam Size
    4.3.1 Condition Design
    4.3.2 Method
    4.3.3 Results
    4.3.4 Discussions of Results
  4.4 Pilot Study: Radial Spanning Tree
    4.4.1 Condition Design
    4.4.2 Method
    4.4.3 Results
    4.4.4 Discussions of Results
  4.5 Summary
5 Evaluation of Task Performance: Spatial Reasoning
  5.1 Mental Rotation Tasks
  5.2 User Study: 3D Cube Comparison
    5.2.1 Apparatus
    5.2.2 Condition Design
    5.2.3 Method
    5.2.4 Results
    5.2.5 Discussions
  5.3 Applications in Spatial Reasoning Tasks
  5.4 Summary
6 Conclusions
  6.1 Contribution Summary
  6.2 Future Directions
  6.3 Concluding Remarks
Bibliography
Appendices
A CubeeModel API
  A.1 Mutators
  A.2 Accessors
  A.3 Selection Support
  A.4 Sound Support
B Sample Hello World Scene
C A 3D Cubic Puzzle in pCubee
  C.1 Interaction
    C.1.1 Direct Selection and Manipulation
    C.1.2 Large Rotation
    C.1.3 Placement
    C.1.4 Correction
  C.2 Experiments
    C.2.1 User Study 1: Standard Puzzle
    C.2.2 User Study 2: Google Puzzle
    C.2.3 Overall Results

List of Tables

3.1 Specifications of the pCubee system
4.1 Statistics of mean response times in the seam size study
4.2 Statistics of mean error rates in the seam size study
4.3 Per-screen usage pattern in the seam size study
4.4 Multi-screen usage pattern in the seam size study
4.5 Test conditions in the path-tracing pilot study
4.6 Preference ranking in the path-tracing pilot study
5.1 Test conditions in the cube comparison study
5.2 Mean error rates and response times in the cube comparison study
5.3 Correlation coefficients (r²) in the cube comparison study
5.4 Questionnaire responses regarding the cube comparison task
5.5 Mean per-screen usage pattern in the cube comparison study
5.6 Mean multi-screen usage in the cube comparison study

List of Figures

1.1 pCubee cubic display
2.1 Holographic displays
2.2 Static-volume displays
2.3 Swept-volume displays
2.4 Multi-view displays
2.5 Geometric display implementations
2.6 gCubik autostereoscopic cubic display
3.1 Hardware components of pCubee
3.2 Screen arrangement of pCubee
3.3 pCubee electronics
3.4 View frustum calculation
3.5 Skewed images generated with off-axis projections
3.6 Magnitudes of calibration correction vectors
3.7 Results using dynamic visual calibration
3.8 Static visualization inside pCubee
3.9 Dynamic interaction through display movements
3.10 Scene navigation using display motion
3.11 Bimanual interaction using the Polhemus stylus
4.1 Seam collisions on cubic displays
4.2 A path-tracing task sample
4.3 Virtual frame occlusions used in the seam size study
4.4 Spherical path stimuli used in the seam size study
4.5 Mean response times in the seam size study
4.6 Mean error rates in the seam size study
4.7 Viewpoint visualization for the seam size study
4.8 Radial spanning tree stimulus
4.9 Experimental setup for path-tracing pilot study
4.10 Mean response times in the path-tracing pilot study
4.11 Mean error rates in the path-tracing pilot study
5.1 A sample mental rotation stimulus
5.2 A cube comparison stimulus used in the mental rotation study
5.3 Experimental setup for the cube comparison study
5.4 Mean error rates in the cube comparison study
5.5 Mean response times in the cube comparison study
5.6 Mean error rates in the cube comparison pilot study
5.7 Mean response times in the cube comparison pilot study
5.8 Cube comparison viewpoint movement visualization
5.9 3D docking task
5.10 Solving a 3D cubic puzzle in pCubee
5.11 Two puzzles used in the cubic puzzle task
C.1 Solving a 3D cubic puzzle in pCubee
C.2 Two puzzles evaluated in the cubic puzzle experiment
C.3 Measured interaction times for the standard puzzle
C.4 Completion times for physical and virtual Google puzzles

List of Code Snippets

B.1 Code snippet of a sample pCubee scene

Acknowledgements

Thank God for all the opportunities, successes and failures given to me throughout the course of working on this thesis. It has been my privilege to take part in the pCubee project and be guided by some of the brightest minds I have come to know in the field.

To my supervisor Sid: thank you for your supervision and guidance. Your insights have greatly broadened my limits and perspectives.

To Ian: thank you for backing me up and keeping me out of trouble so many times. 
You have been invaluable to everything written in here.

To my HCT and MAGIC friends: you guys made the lab time that much more slack and enjoyable. Best wishes to your ongoing research and future endeavors.

Thank you mom and dad for your love and patience. Love you all. (Bro, thank you for not being here, I needed your room and the quiet time.)

A big thank you to all my friends for journeying with me, supporting me, praying for and with me. I would not be here without you guys, seriously.

Chapter 1

Introduction

Understanding the design and performance of three-dimensional (3D) display technologies has become increasingly important and relevant to our interaction with digital 3D information. Due to rapid advances in computer graphics and capture systems in recent years, 3D data sources are growing more abundant and accessible, along with an increasing number of display technologies that allow us to visualize and interact with them.

In addition to commercially available stereoscopic displays, different viable 3D display technologies have been proposed, including volumetric displays and head-tracked, perspective-corrected displays. While there has been significant focus on achieving implementations of the highest possible technical standards, less emphasis has been placed on formal evaluation of these displays. This prevents us from fully understanding and comparing them with respect to their designs and how well they support 3D perception and interaction in specific tasks.

This is especially true for a class of multi-screen 3D displays, which we refer to as outward-facing geometric displays. They extend the concept of traditional Fish Tank Virtual Reality (FTVR) displays by arranging multiple flat-panel screens outwardly to form a tiled geometric shape (hence the name geometric), such as a cube. 
By tracking the user's head position and correcting the perspective of each screen correspondingly, outward-facing geometric displays can render 3D scenes to make virtual content appear as if real objects were contained within the boundary enclosed by the display. Unlike the more sophisticated hardware requirements of other 3D display techniques, outward-facing geometric setups take advantage of existing high-quality flat-panel screens, making the hardware mostly self-contained and usually compact enough to be directly manipulated while held in a user's hands.

Figure 1.1: The pCubee cubic display

The capability to couple manipulation and visualization together in a tangible system provides an interaction experience that is similar to how we interact with physical objects in our hands. As 3D visualization becomes more prevalent and more interactive technologies such as touchscreen and gesture-based controls are made available, we see great potential for outward-facing geometric displays to become effective tools in a variety of application domains involving 3D content, such as scientific data visualization, computer-aided design (CAD), biomedical applications, and other virtual reality tasks such as artistic virtual painting and animation.

The introduction of outward-facing geometric displays has brought forth a number of research problems yet to be thoroughly explored. One of the main design challenges with these geometric displays is the presence of physical seams, or bezels, at the joining edges between display panels, which can disrupt the visualization experience when a user's gaze moves across multiple screens. Another problem is that head-tracked displays require accurate tracking of the user's viewpoint to generate perspective-corrected images. Distorted or slow tracking leads to visual mismatches that are readily apparent when viewing multiple screens.
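As an illustration of the perspective correction described above, the off-axis view frustum for a single screen can be derived from the tracked eye position and the screen's corner positions. The sketch below is a minimal, self-contained version of this standard construction (a generalized perspective projection); the type and function names are ours, not taken from the pCubee software.

```cpp
// Illustrative sketch: off-axis view frustum for one screen from a tracked
// eye position. The screen is described by three of its corners, all in the
// same tracker coordinate frame as the eye.
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
  return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(Vec3 v) {
  double len = std::sqrt(dot(v, v));
  return {v.x / len, v.y / len, v.z / len};
}

struct Frustum { double left, right, bottom, top, near_, far_; };

// pa = screen lower-left, pb = lower-right, pc = upper-left corner.
Frustum offAxisFrustum(Vec3 eye, Vec3 pa, Vec3 pb, Vec3 pc,
                       double nearPlane, double farPlane) {
  Vec3 vr = normalize(sub(pb, pa));    // screen-space right axis
  Vec3 vu = normalize(sub(pc, pa));    // screen-space up axis
  Vec3 vn = normalize(cross(vr, vu));  // screen normal, pointing at the eye
  Vec3 va = sub(pa, eye);              // eye to lower-left corner
  Vec3 vb = sub(pb, eye);              // eye to lower-right corner
  Vec3 vc = sub(pc, eye);              // eye to upper-left corner
  double d = -dot(va, vn);             // eye-to-screen-plane distance
  double s = nearPlane / d;            // scale corner extents onto near plane
  return { dot(vr, va) * s, dot(vr, vb) * s,
           dot(vu, va) * s, dot(vu, vc) * s, nearPlane, farPlane };
}
```

The four extents map directly onto an asymmetric projection such as OpenGL's glFrustum. As the eye moves off-center, left/right and bottom/top become asymmetric, which is what keeps virtual content registered behind each physical screen.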
In terms of interaction techniques and 3D task performance, the tangible nature of outward-facing geometric displays offers promising potential to support more intuitive perception of and interaction with 3D objects. Exploring all of these issues is a crucial step towards fully understanding the capabilities and supporting the future adoption of outward-facing geometric displays in real-world 3D applications.

To address the lack of empirical work on these issues, we conduct an evaluation of a tangible outward-facing geometric display, pCubee, which arranges five small flat-panel screens into a cubic shape without the bottom (Figure 1.1). Through an exploratory analysis of pCubee, we explore the strengths and weaknesses of the display, including its system design and tracking calibration, and we demonstrate different interaction techniques that it affords. We perform two controlled user studies in which we investigate (i) the effect of physical seam occlusions in pCubee on user performance in a path-tracing visualization task, and (ii) how users perform a 3D cube comparison task, which was similar to mental rotation, using pCubee compared to a conventional desktop setup. In this thesis, we report the outcomes and findings from our analysis and evaluation of pCubee.

1.1 Contributions

The research presented in this thesis provides the following contributions with respect to the evaluation of outward-facing geometric displays.

Evaluation of the Impact of Seam Size

Using a 3D path-tracing experiment, we discovered that user performance and interaction behaviors were dependent on the level of visual discontinuity caused by physical seam occlusions of the pCubee display. Our results revealed that path-tracing is an unsuitable task for current outward-facing geometric displays because of both seam occlusions and the task's apparent inability to take advantage of a wide range of perspectives.
Evaluation of Spatial Reasoning using pCubee

Using a 3D cube comparison experiment based upon existing mental rotation literature, we found that users significantly preferred using pCubee compared to a desktop-and-mouse setup to perform the task. While physical seam occlusions remained an issue in the current pCubee hardware, our results confirmed the usability of tangible outward-facing geometric displays in this type of spatial reasoning and perception task.

Novel 3D Interaction Schemes

We proposed and showcased four interaction schemes in pCubee, including static visualization, dynamic interaction, large scene navigation and bimanual stylus-based interaction. These techniques are unique to tangible outward-facing geometric displays and show the capabilities of this new display technology to support novel methods for interacting with 3D content.

1.2 Thesis Structure

The remainder of this thesis is structured as follows: Chapter 2 surveys previous literature on existing 3D display technologies closely related to pCubee; Chapter 3 provides an analysis of the pCubee system with respect to its design, tracking calibration and interaction techniques; Chapter 4 reports a first experiment evaluating the effect of physical seam occlusions in pCubee on path-tracing tasks; Chapter 5 reports a second experiment evaluating spatial reasoning using pCubee for 3D shape comparison tasks; Chapter 6 concludes with future directions our research suggests.

Chapter 2

Related Work

Research and development of 3D display technologies date back to as early as the 1960s (I. E. Sutherland developed what was known to be the first head-mounted immersive display in 1968 [54]); however, it was not until the last two decades that a greater variety of concepts could be realized. Due to rapid improvements in computer graphics and display technologies, recent 3D display development boasts both significantly higher fidelity and more sophisticated design. 
These emerging high-quality implementations have allowed researchers to better explore and evaluate various aspects of interaction techniques and task performance.

In this chapter, we review previous literature on 3D display technologies that are similar to pCubee. The first part of the chapter surveys existing 3D displays to provide an analysis of different implementation techniques and the strengths and weaknesses of each. Because formal evaluations of outward-facing geometric displays are few, our goal for the review is to extract important issues for 3D displays identified by other researchers in the past, which we can reference for our evaluation of pCubee. In the second part of the chapter, we summarize empirical findings associated with 3D displays in the context of a number of topics important for outward-facing geometric displays.

2.1 Classification of 3D Displays

3D display technologies convey 3D information by mimicking one or a combination of depth cues used by the human visual perception system. These include monocular cues such as motion parallax, occlusion, perspective, lighting and shadows, and binocular cues such as stereopsis and convergence (see chapters 2 and 3 of the report by Wickens et al. [60] for an in-depth survey of these visual cues and their interactions). While conventional 2D displays support a number of depth cues such as perspective, occlusion, lighting and shadows, 3D displays utilize more salient depth cues, including stereopsis and motion parallax, to deliver compelling 3D effects to their users. Existing 3D display implementations that are similar to pCubee can be classified into two categories: (i) volumetric displays, in which 3D information is rendered at its corresponding physical location; and (ii) geometric displays, in which visualization is dependent on the arrangement of multiple screens and a correct perspective projection onto each.

2.1.1 Volumetric Displays

Volumetric displays, also known as "true 3D" displays, illuminate 3D points in their physical spatial locations, which allows them to render perceptually rich 3D content by satisfying all visual depth cues just as real-world objects do. 3D effects generated by volumetric displays are achieved without requiring users to wear special glasses or other hardware, a particularly desirable property for 3D displays which is referred to as autostereoscopic. The fact that these displays can be viewed by a multitude of users with independent viewing perspectives makes them ideal tools for collaborative 3D tasks. However, volumetric displays are difficult to realize at the present time because of a number of technical challenges that have to be overcome. Volumetric displays are in general limited in resolution, brightness, and compactness due to constraints with the optical elements used or the large number of simultaneous views they have to render. Opacity can also be a problem for translucent volumetric systems that diffuse light in all directions.

Existing implementations of volumetric displays include holographic [56], static-volume [17], swept-volume [23, 53] and multi-view [33, 34] techniques. From the perspective of output, these displays all provide true 3D image visualization, but their underlying designs and strategies vary widely.

Figure 2.1: Visualization and schematic for a holographic display. Figures adapted from [56].
Holographic Displays. Holographic displays record the light scattering patterns of physical objects and reconstruct them in a display medium. The viewer can view these 3D recordings within a certain viewing range as if they had been imprinted inside the medium. Traditionally, holographic images are recorded in permanent materials such as silver halide films, dichromated gelatin or photopolymers. Displaying dynamic 3D content with holographic displays remains technologically challenging: interactive holographic displays are still far from achieving real-time speed. The most recent implementation by Blanche et al. [56] (Figure 2.1) requires a minimum 1-minute recording time on a 4 in. x 4 in. hologram.

Static-volume Displays. Another class of volumetric displays, referred to as static-volume or cross-beam displays, shares similarities with holographic displays in the sense that they too render volumetric data within solid-state display media. Instead of recording light scattering patterns, however, static-volume displays utilize active optical elements that can be excited to emit light. Using dual intersecting infrared laser beams at locations representing the voxel data, these systems can display 3D objects by drawing out their shapes rapidly within the display medium. Figure 2.2 illustrates a cross-beam, static-volume display by Downing et al. [17]

Swept-volume Displays. These displays rely on the persistence of vision to allow users to stitch together 3D images from a sequence of 2D slices. Different architectures, including rotational and translational systems, have been implemented to display these 2D slices using high-speed projectors or displays. A rotational system is described by Grossman and Balakrishnan [23] (Figure 2.3). It utilizes a rotating, omnidirectional diffuser screen to project light from 2D slices in all directions in 3D space. 
On the other hand, a translational system, such as the DepthCube [53], uses multi-planar optical elements that can rapidly shut off and let through light to project stacks of 2D slices at their corresponding planes to achieve depth.

Figure 2.2: Visualization and schematic for a static-volume display. Figures adapted from [17].

Figure 2.3: Visualization and schematic for a swept-volume display. Figures adapted from [19].

Figure 2.4: Visualization and schematic for a multi-view display. Figures adapted from [34].

Multi-view Displays. Multi-view displays are a special class of volumetric displays. Similar to swept-volume displays, they reconstruct 3D images by displaying a large number of 2D projections through high-speed projection, such as in the light field system by Jones et al. [34] (Figure 2.4), or "special-purpose" LED arrays such as in the RayModeler by Ito et al. [33]. However, instead of rendering and diffusing the data points at their physical spatial locations, multi-view displays project the 2D images through a surface directed at each image's corresponding viewing direction. These configurations provide autostereoscopic horizontal-parallax views in 360 degrees, but in general they neglect 3D information in the vertical direction and also do not take into account the user's viewing distance (the light field system by Jones et al. required head-tracking for per-user vertical parallax effects).

2.1.2 Geometric Displays

Alternative approaches to volumetric displays are geometric displays, which rely more heavily on motion parallax to deliver 3D effects to their users. 
Byrendering images on multiple 2D screens with perspectives corrected to theuser’s point of view, these displays can establish the illusion of 3D on a 2Dsurface, a technique we refer to as head-coupled perspective rendering.Geometric displays are extended from the original concept of head-tracked desktop virtual reality displays, also Fish Tank Virtual Reality(FTVR) displays as described by Arthur et al[2]. Traditionally, FTVRdisplays often consist of a single screen coupled with a head tracker andLCD shutter glasses to generate stereo images at the users perspective.These include small-scale desktop systems as described by Deering [13] andMcKenna [42], and also large-scale systems that support multiple regionsof user-speci c perspectives for better collaboration as described recentlyby Maksakov et al. [41] While simple and fairly e ective, these systemso ers a limited viewing angle and are hindered by occlusion mismatcheswhen virtual objects rendered in front of the screen are cut o by the screenboundary. By arranging multiple screens together, geometric displays e ec-112.1. Classi cation of 3D Displaystively overcome the viewing angle limitations, allowing objects to be in frontand behind the screens with proper occlusion cues depending on the screencon gurations.An advantage of geometric displays over volumetric displays is in theongoing advances in projection and display technologies to create bright,high-resolution images in increasingly lighter form factors. The arrange-ment of multiple screens into di erent geometric shapes can establish acompelling illusion similar to volumetric displays but show more detailedimagery. However, geometric displays are typically only valid for one per-spective, which signi cantly hampers collaboration tasks, as opposed to vol-umetric displays that by de nition provide multiple simultaneous correctviews. 
While special eyewear such as shutter or polarized lenses can be used for stereo viewing, or potentially for multiplexing perspectives between two or more users, the additional hardware presents its own issues such as reduced brightness, ghosting and discomfort. We distinguish geometric displays into two categories: inward-facing and outward-facing setups.

Inward-facing Displays. Inward-facing geometric displays utilize different combinations of projector and projection-screen arrangements to generate correct 3D perspectives to the users on otherwise flat surfaces. One of the earliest displays in this category is the CAVE system [11] (Figure 2.5a), which uses the walls of a room as inward-facing back-projection screens. The system allows the tracked user to walk around and receive correct perspectives of a surrounding scene, in addition to providing stereoscopic views through shutter glasses. CAVE systems provide strong 3D effects and an immersive experience because users are located inside the virtual reality projected from all sides; however, these systems are both large and expensive to set up.

A number of other inward-facing geometric displays have also been proposed. Cubby [15] (Figure 2.5b) uses three small rear-projection screens and shows a compelling monocular head-coupled 3D effect through the large motion parallax afforded by the multi-screen setup. The "virtual showcases" described by Bimber et al. [5] (Figure 2.5c) demonstrate cubic or cylindrical arrangements of rear-projection systems and transflective surfaces that can produce a similar effect to outward-facing displays.

Figure 2.5: Visualizations in different geometric display arrangements: (a) CAVE, (b) Cubby, (c) Virtual Showcase, (d) Cubee.

As with the CAVE system, one of the biggest advantages of projection-based geometric displays is that they can be made seamless; however, it is challenging to make them tangible given their large size compared to their outward-facing counterparts.
Outward-facing Displays. Outward-facing geometric displays also extend the FTVR concept and arrange flat-panel screens to face outwards. The effect is exactly opposite to that generated by the CAVE: instead of having a virtual environment that surrounds the users, or objects that are in front of the screens, outward-facing displays enclose the virtual objects within a physical volume to be viewed from outside. Inami [30] showcased the first outward-facing geometric display prototype, which he described as an "object-oriented" display, called MEDIA CUBE. The system renders correct perspectives onto four display panels representing four sides of a cubic shape. This portrays a compelling 3D effect of objects contained inside.

A number of other outward-facing geometric displays similar to pCubee have been implemented in the past, all of which draw many parallels with MEDIA CUBE. Cubee [52] (Figure 2.5d) is a large-scale version of pCubee, assembled from five desktop-monitor-sized screens compared to the five-inch screens used in the current system. Cubee was supported with ropes from an overhead truss to allow for direct user manipulation. A more sophisticated design, the five-screen cubic prototype gCubik [39], utilizes special lens arrays to precisely divide integral images containing multiple perspectives to a wide range of viewing angles. The lenses achieve an autostereoscopic effect similar to the multi-view 3D displays described previously in Section 2.1.1, although they significantly degrade the resolution at each perspective, as shown in Figure 2.6. As well, real-time interaction currently remains a problem due to the large number of simultaneous views the system needs to render.

2.2 Evaluation of 3D Displays

Various explorations and evaluations of 3D displays have been reported in the past. Many of these focused on topics that are common to most 3D displays, such as different approaches for interacting with 3D content and performing 3D tasks.
There are also less-explored areas that we consider to be especially significant for pCubee and other geometric displays, such as the requirements of tracking and calibration, and the discontinuity of visualization created by the display seams or bezels.

Figure 2.6: gCubik autostereoscopic cubic display showing limited resolutions. Figure adapted from [39].

Given the lack of formal evaluations on outward-facing geometric displays, we survey existing empirical findings on 3D display technologies in the context of these various issues in order to support a systematic approach to our analysis and evaluation of pCubee.

2.2.1 Tracking Calibration

Accurate tracking of the user's viewpoint relative to the display is essential for head-tracked systems to render perspective-corrected images to the user. Especially for multi-screen geometric displays, the 3D effect is compromised when there are mismatches between the virtual scene and the physical arrangement of the screen panels due to tracking errors. Therefore, it is important to understand and characterize the tracking and calibration techniques employed to achieve the performance desired for these particular systems.

A number of alternatives for position tracking have been utilized for head-tracked displays in the literature, including mechanically linked systems such as the Shooting Star Technology ADL-1 tracker [2], ultrasonic systems as described by Deering [13], and electromagnetic systems such as the Polhemus Fastrak used in a number of geometric displays [11, 30, 52]. Our review focuses specifically on issues surrounding electromagnetic tracking systems, which are most often used, as is the case for pCubee. We categorize these issues into spatial and temporal calibration problems.

Spatial Calibration.
In general, electromagnetic trackers have many favorable characteristics, such as acceptable resolution without line-of-sight problems, but they are notorious for their sensitivity to magnetic field distortions resulting from metal objects and electronic equipment in the environment. Further, their spatial accuracy falls off rapidly as the distance between the transmitter and the sensors increases. These factors add up to significant errors in position and orientation data (up to more than 50 cm in location errors and 15 degrees in orientation errors, as observed in [27]). These tracking errors can be corrected through a two-step calibration process: (i) data acquisition to characterize the distortions in the workspace, and (ii) error compensation using different numerical methods during system usage to correct for the distortions.

Numerical methods used in the error compensation stage can be classified into two categories: global methods that consider all data points to derive the most accurate global mapping function, and local methods that only use neighboring data points to obtain a localized compensation. Global methods described in previous work include high-order polynomial fits [7, 35], Hardy's multi-quadric method [61] and neural network-based methods [36]. While global techniques provide continuous mapping over the entire tracked space, and thus better error compensation than local methods, they are more challenging to implement. On the other hand, local methods, such as tri-linear [38] and tetrahedral [18] interpolation, are simpler to adopt but suffer from discontinuities (of zeroth order) when the interpolated function crosses from one data grid to another. This deficiency can be improved upon by using a higher number of surrounding data points to provide a more continuous gradient throughout the tracked space [7].

The different calibration approaches mentioned above have been shown to reduce tracking errors to the order of less than 2 cm for position and 3
degrees for orientation for electromagnetic trackers. In most scenarios, an assumption these techniques make is that the display configuration remains constant, and therefore only one cycle of data measurements is necessary to characterize the distortions. This assumption can be a problem for pCubee and other tangible head-tracked displays, which we describe in further detail in the following chapter.

Temporal Calibration. Besides spatial distortions, temporal artifacts such as latency in the tracking, software and hardware systems can be problematic for head-coupled perspective displays. As shown previously by Arthur et al. [2], lag can disrupt motion-based 3D effects and task performance (lag over 210 msec was found to be worse than static viewing for a simple path-tracing task). Deering [13] also suggested that critical values for perceived lag are similar to those for motion fusion and should be no more than 50-100 msec. Previously, head-tracked systems mitigated the effect of latency by using predictive tracking techniques, such as simple linear extrapolation (Deering [13]) or higher-order predictors such as the Kalman filter (Friedmann et al. [?]).

It is important to note that predictive tracking introduces additional undesirable artifacts into the tracking data, including overshooting during high acceleration and amplification of sensor noise. Overshooting can be notorious especially for tangible outward-facing geometric displays like pCubee because of the high degree of movement available to the display compared to the user's relatively limited head motion. Recent head-tracked system developments are less concerned with temporal calibration because of improvements in tracking and graphics technologies that reduce lag to well below the threshold values identified above. For our evaluation of pCubee, we focus only on spatial calibration to correct for distortions in the tracking system, as lag was rarely noticed to be an issue in the system.
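To make the simplest of these predictors concrete, the sketch below extrapolates a tracked position forward by the expected end-to-end lag using a constant-velocity (linear) model. The function name and data layout are ours, for illustration only; they do not come from any of the cited systems.

```python
def predict_position(samples, latency):
    """Constant-velocity prediction: extrapolate the newest tracker
    sample forward by the expected end-to-end latency (in seconds).

    samples: list of (timestamp, (x, y, z)) pairs, newest last.
    """
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    dt = t1 - t0
    # per-axis velocity estimated from the last two samples
    v = [(b - a) / dt for a, b in zip(p0, p1)]
    # linear extrapolation; note how this amplifies sensor noise and
    # overshoots during high acceleration, as discussed above
    return tuple(p + vi * latency for p, vi in zip(p1, v))
```

A Kalman filter replaces the crude two-sample velocity estimate with a statistically filtered state, trading implementation complexity for robustness to sensor noise.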
2.2.2 Interaction Techniques

Interacting with 3D content requires mechanisms different from conventional desktop-and-mouse setups. Actions such as pointing and selection remain challenging problems that need to be investigated for different display configurations. Various interaction schemes and mappings have been proposed for existing prototypes and for high-fidelity mock-up displays. They have been shown to offer a more engaging and intuitive means of interaction with 3D content compared to traditional 2D display setups. Here we summarize these proposed interaction schemes with respect to volumetric, inward-facing and outward-facing geometric displays.

Volumetric Display Interaction. For volumetric displays, interaction requires indirect manipulation techniques performed from outside the visualization space, because users cannot directly reach into the bounded virtual content. Balakrishnan et al. [3] explored a variety of possible interaction schemes using Wizard-of-Oz prototypes. Implementations that extend from these ideas include ray-tracing-based selection using a 3D ray cursor (Grossman et al. [21]) and gesture-based rotation using over-the-surface interaction (Ito et al. [33]). In particular, Grossman and Balakrishnan identified a 3D ray cursor metaphor as a better design choice than a point cursor in 3D space for selection, improving movement time, error rates and input device footprint [21]. With touchscreen technology becoming commonplace, multi-touch interaction can also be incorporated with volumetric displays. Selection and manipulation with multi-touch gestures, such as zooming, offer interesting possibilities that have been explored for volumetric displays [23].

Inward-facing Geometric Display Interaction. For interaction with inward-facing geometric displays, one challenge is the occlusion problem, similar to how stereo images can be blocked by screen borders.
As the virtual objects are "floating" in front of the displays, as opposed to being behind them, reaching into the display space can lead to visual mismatches when users attempt to move behind the virtual objects. A virtual tip extension on a physical stylus reaching into the visualization space has been demonstrated with Cubby [16], which partially alleviates the occlusion issues. A one-to-one mapping of a 3D input device, such as a wand, is also often used to allow more direct manipulation of virtual objects (Vickers et al. [57], Demiralp et al. [14]).

Outward-facing Geometric Display Interaction. For outward-facing geometric setups, a tangible interaction scheme can be employed to allow users to directly manipulate the display hardware, which is not possible with static systems. While tangible 3D input devices have been explored in previous evaluations, as was done by Hinckley et al. [29] and Ware and Rose [59], outward-facing geometric displays couple manipulation and visualization in a unified workspace. This enables a novel interaction scheme in which simulated physics for virtual objects responds to the tracked movement of the display, as was shown in Cubee [52]. Touch-screen rotation proposed for volumetric displays has also been demonstrated with gCubik [39], showing potential interaction development in over-the-surface, gesture-based control for outward-facing setups. Further, other interaction techniques available to volumetric displays can be applied to outward-facing setups for a single user seeing the correct viewpoint.

2.2.3 Visual Discontinuity

Physical artifacts inherent in different 3D display designs cause disruptions to the experience that we refer to as visual discontinuity. For tasks in which the continuity of the rendering is important, such as scientific data visualization, these artifacts are particularly undesirable.
Similar to understanding temporal artifacts such as frame rates and lag, identifying performance limits due to physical spatial artifacts has important implications for the future development of 3D display technologies.

While visual discontinuity is a less-explored area, it is present in many of the 3D displays discussed earlier in this chapter. For outward-facing geometric displays such as pCubee, a large portion of the virtual scene can be occluded by the presence of physical seams at the joining edges between screens. Seams between multiple monitors have been investigated in the past for 2D information such as text and lines (Mackinlay and Heer [40]), though their effects have not been examined in multi-screen geometric 3D displays. Understanding the impact of seams can lead to insights regarding content or tasks that are suitable for these displays. For other displays, such as swept-volume or multi-view systems, the effect of different levels of sacrificed resolution or intervals between 2D slice sequences can shed light on the capability and usability of those systems.

2.2.4 3D Task Performance

One of the most important areas in 3D display evaluation is understanding whether and how visualization and interaction schemes can better support users performing 3D tasks. Past evaluations of task performance focused on comparisons between different 3D displays and the depth cues they afford in a number of task domains. We categorize these into visualization and reasoning tasks.

3D Visualization. The extra depth information provided by 3D displays allows users to better explore 3D data for tasks such as scientific visualization, where the spatial relationships between the data are important. Graph visualization, path tracing and visual search are tasks that have been investigated with 3D displays in the past. Arthur et al.
[2] compared a one-screen FTVR setup to a monitor-based desktop system and reported that the FTVR setup significantly improved performance in a path-tracing task. They found that the benefits gained by head coupling were greater than those gained from the stereoscopic effect alone. Ware et al. [58] reaffirmed these trends in a graph visualization study and also noted that any structured motion cues in general, including head-coupled rendering, hand-guided motion or automatic rotation, led to similar performance improvements. In a more recent comparison study, Demiralp et al. [14] compared the performance of the CAVE and a one-screen FTVR display using visual search tasks and concluded that users performed better with, and preferred, FTVR displays. However, Prabhat et al. [47] reported contrasting results in their comparison of the same virtual environments in more complex scientific visualization tasks. Further, comparisons between CAVE-like setups and standard desktop workstations in statistical visualization [1] and oil well path planning [24] have revealed users' preference for the immersive system. The divergent results from these studies suggest that user performance and preference on different 3D display implementations can be very task-dependent.

3D Reasoning. An alternative to pure scientific visualization tasks are 3D reasoning tasks, which involve the perception and understanding of 3D space and shapes. These include tasks such as collision judgment and 3D rotation, which have also been evaluated with 3D displays in the past. Grossman and Balakrishnan [22] conducted a comparison between a traditional stereo FTVR system and a swept-volume display and found that the latter allowed better perception in both depth and collision judgment tasks. Prabhat et al. [48] evaluated a desktop setup and a CAVE system for learning hypercube rotations and showed that users were more accurate and learned more about the geometries using the CAVE.
Other researchers have used or proposed mental rotation as a task stimulus for 3D display or input system evaluation in the past (Booth et al. [6], Hinckley et al. [29]), which is a natural fit in the 3D reasoning task space. We see mental rotation as an interesting area to explore because 3D rotation has been shown to be a difficult task [44], and few empirical evaluations have been done that utilized 3D displays in this area.

2.3 Summary

In this chapter, we reviewed a number of existing 3D display configurations that are similar to pCubee, including volumetric and geometric displays. Given that there has been no reported empirical work on outward-facing geometric displays, we presented a review of evaluations with other existing implementations to provide insight into the current research landscape of 3D display technologies.

Through our review, it becomes apparent that no one existing display implementation is best for all 3D interaction techniques and tasks, given all the design trade-offs that need to be taken into consideration. While volumetric displays such as holographic and swept-volume displays offer true 3D rendering and are suitable for multi-user collaboration tasks, they have limitations in terms of resolution, brightness, compactness and opaqueness. On the other hand, geometric displays can offer higher quality visualization but are limited to a single user's perspective. These trade-offs are reflected in the diverse results and preferences we identified from previous studies. Further, given the specificity of some tasks involved in past evaluations, such as oil well path planning and complex scientific visualization, existing results are not easily generalizable to allow fair comparisons between different display technologies.
These findings further strengthen our desire for a systematic approach towards the evaluation of various 3D displays.

We categorized previous empirical results into a number of issues important for outward-facing geometric displays, including tracking calibration, interaction techniques, visual discontinuity and task performance. In our investigation in subsequent chapters, we examine these four issues for the pCubee system, either through exploratory analysis or formal, controlled user studies.

Chapter 3

Analysis on Design, Calibration and Interaction Techniques

pCubee is designed to be a compact cubic 3D display. The system integrates input manipulation and output visualization to support tangible interaction, such as tilting and shaking the display to make virtual objects react. By coupling an additional input device, such as a 3D stylus, pCubee enables bimanual manipulation of both the display and the stylus. These factors make pCubee a compelling 3D display that affords novel interactions and tasks that are closer to how users interact with physical objects.

In this chapter, we provide an exploratory analysis of the pCubee display to understand its system design and issues regarding tracking calibration and interaction techniques. For tracking, we examine a dynamic visual calibration technique for pCubee, through which we provide an analysis of its performance, strengths and limitations. Regarding interaction techniques, we showcase a number of novel interaction schemes that take advantage of the unique, tangible nature of pCubee and other outward-facing geometric displays.

3.1 Display Hardware

The hardware design of pCubee is diagrammed in Figure 3.1. The display consists of five 5-inch VGA-resolution (640x480 pixels) LCD panels [32] that are mounted onto a wooden box-shaped frame. The panels are arranged and
aligned to create even seams on all sides of the display; two side panels are oriented vertically to fit them evenly with the top screen. The bottom portions of the vertical panels are covered up to create even bottom seams on all sides, leaving an empty area that can also be used to grasp the display without blocking the screens. The bottom side of the box is left open for ventilation and cables, with a small 120x96x36 mm base to make the entire display box easier to grasp. The total weight of the frame, base and screens is measured to be 1.3 kg (2.87 lbs). Excluding the base, the display box measures 145x120x145 mm.

Figure 3.1: Hardware components of pCubee.

Figure 3.2 illustrates the screen configuration we used in pCubee. Small physical seams are difficult to realize with LCD panels because the border is dependent both on the thickness of the panel and the width of the bezel. The virtual arrangement shown in the figure, with the screen edges actually connected, measures 133.90x110.15x133.90 mm, compared to the prototype's actual dimensions reported above. Brightness and color consistency across the viewing range of the screen panels are also important factors to take into consideration because users can view from all around.

Figure 3.2: Screen arrangement of the outward-facing geometric display.

While screen brightness is consistent with the current LCD panels, there are noticeable color distortions at oblique viewing angles, producing either a blue or yellow tint depending on the side of the screen users are viewing from.

Three graphics signal outputs are used to drive pCubee because users can only see three sides of the box at any given time. A host computer (Intel Quad Core 3.0 GHz processor, Windows XP) with two dual-output Nvidia GeForce 9800 GX2 graphics cards generates three VGA signals. The distribution of separate rendering contexts to graphics card outputs is done using multi-monitor support in the Nvidia graphics driver.
The VGA signals for opposite-sided screens (front and back, left and right) are routed through signal splitters to produce five video signals in total. Each VGA signal is converted to low-voltage differential signaling (LVDS) video with an analog-to-digital (A/D) control board [31] and connected to a timing control board on the backside of the LCD panel, as shown in Figure 3.3. The five A/D control boards are housed in a pedestal and connected to pCubee with a bundle of five 1-meter LVDS cables. A stylus is incorporated to allow for precise manipulation of content inside the display.

Figure 3.3: pCubee electronics, showing the LCD panel, the controller board and the LVDS and VGA cables.

3.2 Software Components

The pCubee software is built upon existing engines and toolkits, including OpenSceneGraph (OSG) [43] for rendering, Nvidia PhysX [10] for physics simulation, and the FMOD toolkit [20] for sound simulation.

3.2.1 Rendering Software

The OSG engine is used to render high-quality graphics in pCubee, including shadows, multi-texturing and shading, for compelling depth cues and realism. To generate perspective-corrected images on each screen of pCubee, a standard off-axis projection scheme as described by Deering [13] is implemented. This is done in OSG by creating three View objects that correspond to the three visible screens on pCubee. The camera for each View is located at the user's real-world eye position, oriented perpendicular to its corresponding virtual screen, and given a view frustum that passes through the screen corners, as shown in Figure 3.4. The near-clip plane is set to be coincident with the screen plane in order to prevent rendering of virtual objects that are outside of the pCubee boundary, which would cause occlusion issues at the screen edges (i.e., objects in front of the display seams would not be seen). Figure 3.5 illustrates how the skewed images
generated with off-axis projections fuse when viewed obliquely on the sides of the cubic display.

Figure 3.4: View frustum calculation for each pCubee screen.

The multiple rendering contexts are contained within a single CompositeViewer object, and the camera parameters in each View are updated before a single call is made to the CompositeViewer to update all the Views simultaneously.

A virtual pCubee frame is added to the 3D scene to enhance occlusion cues and the illusion of looking into a box. At oblique viewing angles, the real seams along the cube edges occlude virtual objects within the cube, and the virtual objects occlude the virtual frame that is rendered behind them. pCubee shows only monocular views due to current synchronization limitations with the LCD panels, which is most noticeable at perspectives where the virtual frame aligns with one eye but not the other. Stereoscopic rendering could be added to pCubee with stereo-capable flat-panel displays and synchronized shutter glasses that alternate between left-eye and right-eye views for more accurate occlusion cues. However, rendering objects to appear outside of the display boundary, especially across multiple panels, remains a problem and should be avoided.

Figure 3.5: Images generated with off-axis projections. Note how each image looks skewed when viewed directly but produces the proper 3D effect when arranged geometrically.

3.2.2 Physics Software

A physics simulation engine is integrated with the OSG renderer to create different ways for users to interact with 3D content in pCubee. In the current pCubee software, the Nvidia PhysX engine is used for real-time simulation of rigid body, deformable body, and particle system dynamics. Each virtual object in pCubee is represented both in the rendering scene as an OSG Geode object and in the physics simulation scene as a PhysX Actor.
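The off-axis frustum described above can be sketched as follows, working in screen-local coordinates where the screen is centred at the origin in the z = 0 plane and the eye lies at z > 0 in front of it. This is a minimal illustration of the standard scheme, not the actual OSG View code; the function name and parameters are ours.

```python
def offaxis_frustum(eye, screen_w, screen_h, far=10.0):
    """Asymmetric view frustum passing through the corners of one
    screen.  The near-clip plane is placed on the screen itself, as
    in the text, so objects outside the box are never drawn.

    eye:      (x, y, z) eye position in screen-local coordinates.
    returns:  (left, right, bottom, top, near, far), the same
              parameter form as glFrustum, given at the near plane.
    """
    ex, ey, ez = eye
    assert ez > 0, "eye must be in front of the screen plane"
    # frustum bounds are the screen corners shifted by the lateral
    # eye offset; no scaling is needed since near plane == screen
    left = -screen_w / 2 - ex
    right = screen_w / 2 - ex
    bottom = -screen_h / 2 - ey
    top = screen_h / 2 - ey
    return (left, right, bottom, top, ez, far)
```

As the eye moves off-centre, the frustum becomes asymmetric (left and right magnitudes differ), which is what produces the skewed per-screen images shown in Figure 3.5.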
For rigid-body models, the two representations are often the same polygonal mesh; however, for more detailed 3D objects, a high-resolution polygonal mesh can be used for the OSG Geode while its convex hull is used as the PhysX Actor to achieve faster simulation. For soft-body models, a coarse tetrahedral mesh can be used as the physics Actor, linked to a higher resolution polygonal mesh for rendering. Objects in the scene can be either static or dynamic. Static objects appear "attached" to the display because their positions are updated based on the display tracking sensor before each simulation step. The virtual pCubee frame, the virtual transparent walls surrounding the frame and the ground plane are static objects and move with the physical display. Dynamic objects appear to move freely within the box, i.e., they fall downward under gravity relative to the real world, because their positions are updated by the physics engine after each simulation step. Collisions are computed between dynamic objects and between dynamic and static objects, making dynamic objects appear to bounce off the virtual inner walls as if pCubee were a glass box with real objects inside.

The FMOD toolkit is used to generate collision sound effects and ambient sounds that blend with the virtual scenes in pCubee. Currently, sound effects are pre-recorded and played at a volume corresponding to the magnitude of the collision events. More realistic collision sounds could be synthesized directly from the colliding objects.

3.2.3 Integration and Simulation

The pCubee software uses an object-oriented design. The CubeeModel class, which is the base class of all objects rendered inside the display, integrates the functionalities of the rendering, physics and sound engines to facilitate agile scene development. Classes extending CubeeModel are used to generate models with different properties, including convex meshes, triangle meshes as well as soft-body models.
Both the OSG Geode and PhysX Actor representations of virtual objects are managed within inherited or derived classes of CubeeModel. Functions to set various model properties, including collision sounds, materials and textures, are also implemented. Appendix A documents the application programming interface (API) of the CubeeModel class, and Appendix B illustrates the code required to create a simple virtual scene containing a dynamic soccer ball.

The pCubee system can achieve a 60 Hz update rate for dynamic scenes with a small number of rigid bodies (e.g., 50 rigid-body cow models, each with a 5800-triangle Geode for rendering and a 125-triangle convex hull as its PhysX Actor). For more complex physics simulation, such as soft bodies and particle systems, the system achieves a 40 Hz update rate for modest-sized scenes appropriate for the scale of the display (e.g., two soft-body cow models with 1700 tetrahedra each). The simulation loop for pCubee is a six-step process, as outlined below:

1. Obtain the latest display, head and stylus (if used) positions;
2. Update the positions of static objects in both their OSG Geode and PhysX Actor representations;
3. Perform a single time-step simulation to update the positions of dynamic objects in the physics simulation scene;
4. Update dynamic object positions in the OSG scene graph based on the physics simulation results;
5. Update the OSG View frustum parameters based on the position of the user's viewpoint relative to the display;
6. Render the scene and play collision sound effects (if any).

3.3 System Specifications

Table 3.1: Specifications of the pCubee system

  Display Dimensions             145x120x145 mm
  Weight                         1.3 kg (2.87 lbs)
  Total Resolution               5x640x480 pixels (5xVGA)
  Number of Simultaneous Views   1
  Display Color                  24-bit Full Color
  Max. Update Rate               40 Hz with stylus, 60 Hz without

We summarize the specifications of the pCubee system in Table 3.1, using similar metrics as used by Ito et al.
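The six-step loop above can be sketched as a single per-frame function. The objects below are illustrative stand-ins for the real tracker, PhysX and OSG components; their method names are ours, not the actual API.

```python
def pcubee_frame(tracker, scene, physics, renderer, dt):
    """One iteration of the six-step pCubee simulation loop."""
    # 1. obtain the latest display, head and stylus positions
    head, display, stylus = tracker.poll()
    # 2. move static objects (frame, walls, ground) with the display
    scene.update_static(display)
    # 3. advance the physics simulation by a single time step
    collisions = physics.step(dt)
    # 4. copy simulated poses of dynamic objects into the scene graph
    scene.update_dynamic(physics.poses())
    # 5. recompute off-axis frusta from the head position relative
    #    to the display
    renderer.update_frusta(head, display)
    # 6. draw the scene and play collision sounds, if any
    renderer.draw(scene)
    renderer.play_sounds(collisions)
```

The ordering matters: static objects must be moved before the physics step so that collisions against the box walls use the display's latest pose, and the frusta are recomputed last so that rendering reflects the newest head sample.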
[33].

3.4 Tracking Calibration

To render a perspective-corrected scene, the pCubee system requires the position of the user's eye relative to the display. In addition, it requires the position and orientation of the display in space to allow the physics engine to determine velocity and acceleration information to simulate a virtual scene that reacts to display movements.

pCubee relies on a wired electromagnetic tracking system (Polhemus Fastrak [46]) to achieve low-latency tracking. With two sensors attached, the tracking update rate is 60 Hz with a reported latency of 2-3 msec. To estimate the user's eye position in space, a head-tracking sensor (referred to henceforth as the head sensor) is embedded in the top of a pair of headphones, making the wired sensor less intrusive as users listen to sound effects and music while using pCubee. A tracking sensor is also embedded in the base of the box (referred to henceforth as the display sensor) to track the movement of pCubee in 6 degrees of freedom. A pre-computed offset and rotation for each LCD screen relative to the display sensor is used for calculating perspective-corrected view frustums. pCubee also incorporates a tracked 3D stylus (referred to henceforth as the stylus sensor) to allow users to directly interact with virtual objects inside the display. However, the tracking update rate is slowed to 40 Hz with the additional stylus sensor.

3.4.1 Dynamic Visual Calibration

As discussed in Chapter 2, electromagnetic systems have distortion issues that require calibration to improve the tracking accuracy.
Although in outward-facing displays such as pCubee the physical seams provide added occlusion cues, the effect of having a physical boundary around the virtual scene is compromised when the perspective is incorrect due to tracking errors, resulting in a rendered virtual frame that does not align with the physical frame.

We implemented and tested a dynamic visual calibration technique based on the line-of-sight method described by Czernuszenko et al. [12] to explore its effectiveness in calibrating the pCubee system. The line-of-sight method measures tracking errors as the amount of displacement between virtual and physical objects placed in front of the display that should appear superimposed when viewed from a particular perspective. We see the technique as a natural fit for pCubee because of the presence of physical seams that can be used "for free" to visually align with the virtual frame to perform perspective adjustments. Thus, it is unnecessary to superimpose additional physical objects over the display surface during the calibration process. The advantages of this technique are two-fold: i) users intuitively know they need to input additional data points when there are visual mismatches between the physical and virtual frames, and ii) since the physical and virtual frames are always present, the system allows for quick re-calibration during usage.

The calibration procedure begins with the initialization of offsets for both the display sensor relative to the center of pCubee and the head sensor relative to the user's eye. The display sensor's offset can be physically measured and remains constant with pCubee's design, while the head sensor's offset is user-dependent and needs to be visually obtained at the start of each system usage. This is achieved by asking the user to align the physical and virtual frames from any single perspective.
The obtained head sensor offset is added to the final interpolated correction vector to achieve a corrected perspective at any given point in space.

After the offsets are initialized, the user can interactively manipulate pCubee and generate correction vectors at locations where they see mismatches between the physical and virtual frames. The user adjusts the virtual perspective using keyboard commands until the virtual frame in the scene is aligned with the physical display, creating a new correction vector at that specific location. For each location P, the correction vector v(P) is obtained by the following equation (adapted from [12]):

    v(P) = V - V_o    (3.1)

where V_o is the head sensor offset initialized for each independent user, and V is the user-adjusted offset that provides the correct perspective at location P. Correction vectors are stored and used to create a lookup table (LUT), a rectilinear 3D grid of 200x200x200 units (1 unit is equivalent to 1 cm in real-world space) that characterizes the effect of these vectors, f(Q), at each grid point Q according to the following equations (adapted from [12]):

    f(Q) = v(P_i)    (3.2)

if there exists i such that dist(P_i, Q) = 0; or:

    f(Q) = Σ_{i=1}^{n} [ w_i / Σ_{j=1}^{n} w_j ] v(P_i)    (3.3)

if dist(P_i, Q) ≠ 0 for all possible i, 1 ≤ i ≤ n. Here n is the total number of correction vectors used, and w_j is a weight based on distance:

    w_j = 1 / dist²(P_j, Q)    (3.4)

We chose an exponential factor of two for the distance (dist) because we tested it to be adequate for the pCubee system setup; a higher or lower factor can be used to adjust the area of effect of each correction vector. Once the LUT is established, we perform linear interpolation with the closest eight data points surrounding the tracked location of the user's eye. The perspective of the virtual scene can then be rendered based on the corrected location, which is the summation of the initial head sensor offset and the interpolated correction vector.
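The weighting scheme in Equations 3.2-3.4 is an inverse-distance (Shepard-style) interpolation. The sketch below is a simplified, ungridded illustration with types of our own choosing, not the thesis implementation; evaluating it once per grid point Q would populate the LUT described above.

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

// One user-entered correction: location P and correction vector v(P).
struct Correction { Vec3 p; Vec3 v; };

// Inverse-distance-weighted correction f(Q), following Eqs. 3.2-3.4:
// if Q coincides with some P_i, return v(P_i) directly; otherwise each
// v(P_i) is weighted by w_i = 1/dist^2(P_i, Q), normalized by the sum
// of all weights.
Vec3 correctionAt(const std::vector<Correction>& cs, const Vec3& q) {
    double wsum = 0.0;
    Vec3 f = {0.0, 0.0, 0.0};
    for (const Correction& c : cs) {
        double d2 = 0.0;
        for (int k = 0; k < 3; ++k) d2 += (c.p[k] - q[k]) * (c.p[k] - q[k]);
        if (d2 == 0.0) return c.v;   // Eq. 3.2: query lies on a data point
        double w = 1.0 / d2;         // Eq. 3.4 with the exponent of two
        for (int k = 0; k < 3; ++k) f[k] += w * c.v[k];
        wsum += w;
    }
    for (int k = 0; k < 3; ++k) f[k] /= wsum;  // Eq. 3.3 normalization
    return f;
}
```

In the full system, the final per-frame correction is then obtained by trilinearly interpolating the eight LUT grid points around the tracked eye position, as described above.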
Every time a new correction vector is introduced, the LUT is updated in real time until the user obtains visually correct viewpoints all around the display.

3.4.2 Calibration Results

To understand the performance of the dynamic visual calibration technique, we asked two volunteer subjects to test the calibration process on pCubee. While our goal was not to precisely characterize the calibration outcomes with such a small number of subjects, we expected it to provide a quick validation of whether the error reductions achieved by the technique, when applied to pCubee, would be comparable to what was reported in the past.

Figure 3.6: Magnitudes of the 24 correction vectors each subject generated using the dynamic visual calibration technique. Note the decreasing trend lines indicating a decrease in residual errors.

Following a procedure similar to that described in [12], subjects generated correction vectors at pre-determined locations, and we measured the magnitudes of these vectors. These included locations about 30 cm away, extending from the centers of the four side screens and the four side edges at 3 different heights (i.e. above the display, parallel to the display and below the display); this resulted in eight locations surrounding pCubee at each height for a total of 24 correction vectors. To ensure the adjusted perspectives were consistent at each point, we asked subjects to perform the calibration using monocular-only viewing by covering one eye. Based on previously reported results, we had expected magnitudes to gradually decrease as the number of correction vectors increases, which would indicate a decrease in residual errors in the tracked space. Figure 3.6 illustrates the magnitudes of the 24 measurements each subject generated.
As shown, the magnitudes of the correction vectors gradually decreased throughout the calibration process, down to the order of less than 3 cm, which implies that the user-perceived errors were smaller than 3 cm. These results are very similar to the corrections achieved by Czernuszenko et al. in their experiment. Figure 3.7 shows the visualization on pCubee before and after calibration.

(a) Before Calibration (b) After Calibration
Figure 3.7: Visualization on pCubee before and after calibration. Note that before calibration the virtual contents were skewed across multiple screens and mismatched against the physical frame; after calibration the virtual contents aligned visually with the physical frame.

There are currently a number of limitations to the dynamic visual calibration approach when applied to pCubee, including a potentially inadequate assumption about the dynamic system, a lack of depth reference when calibrating from directly centered and in front of each screen, and uncorrected angular errors.

As discussed previously, an assumption most calibration techniques make is that the display system will remain static, which is true for most fish-tank or inward-facing geometric displays reported in the past. The main difference with tangible outward-facing geometric displays is that both the system and the user can be dynamic during usage. Display movements can cause additional distortions and make data acquisition and error compensation more challenging. In our calibration implementation, we made a similar assumption that the user's head position would remain static, because pCubee allows the user to perform most manipulations in their hands and to move their heads only slightly. While this is somewhat true, the assumption is violated when the user adjusts their head position, resulting in additional errors that are not accounted for in our current technique.
We believe these additional errors resulted in the fluctuations of a number of correction vectors shown in the figure, such as in the 1st and 15th measurements. A more comprehensive calibration approach would be to use lookup tables that represent the absolute positions of both the display and the head sensors. This would require a different and more complex calibration method, involving over a few hundred data measurements [27] compared to only 24 in our current approach.

Our volunteer subjects commented that calibrating from a front view of each screen was difficult due to a lack of depth reference. Given that pCubee only supports monocular viewing, subjects were unable to accurately judge depth using the physical frame when viewing only one screen. Subjects indicated that calibration was the easiest when viewing from the corners and edges of the cubic display, which allowed them to effectively compare whether the physical and virtual frames were parallel. Moreover, our calibration technique only corrects for position errors, not angular errors, which is another important factor in pCubee due to the orientation changes that could occur with a dynamic display. An additional correction table could be built by measuring orientation difference at each grid point to compensate for angular errors.

3.5 Interaction Techniques

pCubee enables a number of tangible interaction techniques that are distinct from static volumetric and geometric display systems. We explored four novel interaction schemes suitable for tangible outward-facing geometric displays.
These include static visualization of 3D models, dynamic interaction with virtual objects through simulated physics, walk-through and fly-through scene navigation, and bimanual stylus interaction to support selection and other interesting applications.

3.5.1 Static Visualization

A natural technique for viewing a 3D scene within pCubee is to rotate it to look into different sides of the box, a common interaction scheme that is also demonstrated in other outward-facing displays [30, 39, 52]. The interaction metaphor requires small-sized or miniature virtual objects that fit within the bounds of the physical box, but the visualization effect is as compelling as if the user were observing a physical object through a glass box.

In this case, while objects in the scene are static (i.e. stationary within the display), the perspective of the scene is constantly changing corresponding to the movements of the display and the user's head. Complex 3D shapes can be viewed from different sides in a tangible manner. High-quality real-time rendering and the visual quality of the LCD panels allow for highly detailed representations of different types of 3D data, including CAD, architectural, or anatomical models. Static information visualization can also be an important application for observing virtual artifacts, such as in museums or in schools, where the equivalent physical models cannot be presented. We demonstrate this interaction technique with a 3D model of a Japanese Noh mask¹, as shown in Figure 3.8. The artist's signature stamp on the backside of the Noh mask is clearly visible if the user looks into the backside screen.

Figure 3.8: Static visualization of a Japanese Noh Mask artifact.

3.5.2 Dynamic Interaction

Extending the metaphor of virtual objects inside the box, we can make them dynamically react to the movement of the display with simulated physics supported by the software implementation.
In this case, objects in the scene are dynamic and move within the display due to simulated forces, including gravity and collisions with the inner sides of the display box and with other virtual objects. This interaction scheme was also demonstrated in Cubee [52], but the display movement was restrained due to the large and heavy display. The pCubee prototype allows for finer and more responsive control due to its small size. The interaction between the user and the virtual objects is indirect: the user moves the box, and the box moves the objects through downward sliding under gravity or colliding with the walls of the box.

Reactive object interaction is well-suited for games or entertainment applications where dynamic 3D content can be fun to play with, as with many existing physics-based toys and games. A tangible outward-facing geometric display like pCubee can serve as a platform to implement similar toys and games in the virtual reality domain. We demonstrate dynamic interaction with virtual cows tipping and bouncing inside pCubee, as shown in Figure 3.9.

Figure 3.9: Dynamic interaction with cow models.

¹ We thank Xin Yin from Ritsumeikan University Computer Vision Lab for providing the Japanese Noh Mask model.

3.5.3 Large Scene Navigation

Similar to volumetric displays, larger virtual scenes that extend outside the bounds of the physical box present a problem in navigating to see distal parts of the scenes. We propose an interaction scheme for navigating 3D landscapes in pCubee in which the viewpoint translates in the direction that the display is tilted. This interaction technique is similar to using a joystick in virtual cockpit flight simulation games.

We achieve this effect by placing a ball with simulated gravity inside the scene that reacts to the user's tilting motion, serving as a virtual "navigator". By centering the virtual cameras on the "navigator", the user can explore around the scene as it rolls through the landscape.
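The "navigator" scheme described above amounts to letting the display tilt feed a gravity term that accelerates the ball, with the camera following it. A minimal one-dimensional sketch, with illustrative constants rather than pCubee's actual values:

```cpp
#include <cmath>

// Minimal 1-D sketch of the "navigator" scheme: tilting the display
// lets simulated gravity accelerate a ball along the terrain, and the
// camera is re-centered on the ball each step. Constants are
// illustrative, not the values used in pCubee.
struct Navigator {
    double position = 0.0;   // camera/ball position along the landscape
    double velocity = 0.0;
    double gravity  = 9.8;   // raising this increases traversal speed
    double damping  = 0.5;   // rolling resistance

    // tiltRadians: display tilt read from the tracking sensor.
    void step(double tiltRadians, double dt) {
        double accel = gravity * std::sin(tiltRadians) - damping * velocity;
        velocity += accel * dt;
        position += velocity * dt;   // the camera follows the ball
    }
};
```

Here the gravity constant plays the role of the control-to-display ratio discussed below: scaling it changes how strongly a given tilt translates into traversal speed.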
By adjusting simulated gravity (or other effects), we have control over the effect the display tilt has on the traversal speed, which acts like a control-to-display ratio. By using simulated earth gravity, the user feels they are adjusting the tilt of a hill for the ball to roll downward, which is quite natural for pCubee. Figure 3.10 depicts a desert navigation application we prototyped.²

Figure 3.10: Navigation through a virtual landscape using display motion.

² We thank Team Timeless from Simon Fraser University for providing the desert scene.

An alternative can be a "fly-through" style of navigation, in which the displacement of pCubee from its original position constitutes its velocity and the rotation constitutes its angular velocity. These types of navigation interactions may be useful for virtual museums, where the user can bring distal exhibits into their perspective, and also for gaming, where users need to go to different places on a large-scale map to accomplish different objectives. With outward-facing displays made wireless and portable, it is also possible to create an augmented reality (AR) application that unifies a virtual world within the real-world space, in which the user walks around a physical room to observe objects in museums or treasure-hunt-style games.

3.5.4 Bimanual Stylus Interaction

Direct selection and manipulation of objects are needed in applications that require fine-grained user control, such as 3D widget interaction, CAD design, virtual sculpting or painting. Traditionally, direct 3D manipulation techniques are implemented with either a stylus or tracked fingers to unimanually reach into the virtual space in static setups [16, 21].

(a) Object selection (b) Virtual painting
Figure 3.11: Bimanual interaction with pCubee using the Polhemus stylus.
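Reaching into the scene with a tracked stylus ultimately reduces to testing a ray from the stylus tip along its orientation against the objects in the scene. A minimal sketch using bounding spheres (hypothetical types, not the pCubee implementation):

```cpp
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

struct Sphere { Vec3 center; double radius; };  // object bounding sphere

// Returns the index of the nearest bounding sphere hit by a ray from
// the stylus tip along its pointing direction (assumed normalized),
// or -1 if nothing is in line with the stylus.
int pickAlongStylus(const Vec3& tip, const Vec3& dir,
                    const std::vector<Sphere>& objects) {
    int best = -1;
    double bestT = 1e30;
    for (std::size_t i = 0; i < objects.size(); ++i) {
        Vec3 oc;  // vector from the ray origin to the sphere center
        for (int k = 0; k < 3; ++k) oc[k] = objects[i].center[k] - tip[k];
        double t = oc[0]*dir[0] + oc[1]*dir[1] + oc[2]*dir[2];  // projection
        if (t < 0.0) continue;                     // behind the stylus
        double d2 = 0.0;                           // squared distance to ray
        for (int k = 0; k < 3; ++k) {
            double diff = oc[k] - t * dir[k];
            d2 += diff * diff;
        }
        if (d2 <= objects[i].radius * objects[i].radius && t < bestT) {
            bestT = t;
            best = static_cast<int>(i);
        }
    }
    return best;
}
```

The same routine serves both the virtual-extension metaphor (a short ray from the virtual tip) and the line-of-sight selection described below (a long ray cast from a distance).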
The tangible nature and small form factor allow the user to use pCubee and an additional 3D input device in tandem to support the theoretical model of bimanual motor behavior proposed by Guiard [25]. Guiard's Kinematic Chain (KC) model suggests that two hands work as a pair of asymmetric motors assembled in a serial linkage, based on observations in motor tasks such as page manipulation by the non-dominant hand in handwriting.

3D bimanual control has been explored with tangible interfaces (Hinckley et al. [28]) and dual-mouse setups (Balakrishnan and Kurtenbach [4]) to control the view with the non-dominant hand and cursor or cut-plane manipulation with the dominant hand. Results from these studies have demonstrated that bimanual interaction can be more natural and faster than unimanual mouse input in certain 3D tasks. Coupled manipulation and visualization in pCubee allows the user to hold and view the display in the non-dominant hand for tangible, contextual movements (i.e. rotating the display to the desired perspective), while the dominant hand can be used for finer movements to control and manipulate objects inside the display.

Using the Polhemus stylus, we experimented with multiple schemes to manipulate virtual content inside pCubee, such as by creating a virtual extension to the physical pointer, as was done with Cubby [16]: when the user positions the stylus near pCubee, the virtual tip appears inside the display to provide an extension. The virtual stylus allows the user to select objects that the virtual tip comes into contact with. Figure 3.11a shows a scenario where we demonstrate pointing into and interacting with the scene with the virtual extension, creating a bimanual interaction metaphor similar to using a physical pointer to pick and poke objects held in one's hand.

Another approach is the line-of-sight selection technique as was done by Grossman et al.
[21], which lets the user point the stylus into the pCubee space at a distance to select objects that are in line with the direction of the stylus. We demonstrate this interaction scheme with a 3D virtual painting application on the Japanese Noh Mask model, as shown in Figure 3.11b. The user can point into the scene and press the stylus button to spray different colors onto the 3D model, which updates the model's texture in real time. The area of effect of the spray can be controlled by the distance between the stylus and the painting surface. The interaction scheme draws parallels to holding and spray-painting a physical object.

The above stylus interaction schemes require precise stylus calibration for a consistent visual match between the physical and virtual styluses, which is currently a challenge for pCubee. Due to monocular-only viewing in the current system, it is difficult to achieve the desired sense of smooth transition between the physical and virtual domains. Given these limitations, we explored a third interaction scheme in which the user can manipulate virtual content within pCubee by using the stylus in a physical workspace that is decoupled from the visualization, much like how it is done currently with existing 3D input devices. We implemented this interaction scheme in a 3D cubic puzzle task that involves pointing, selection, manipulation and placement of 3D objects. We will describe the 3D cubic puzzle task and the interaction schemes in the context of spatial reasoning tasks in Chapter 5.

3.6 Summary

In this chapter, we reported our analysis of the pCubee display with respect to its system design, tracking calibration and interaction techniques. First, we described the hardware components of pCubee and discussed the design challenges that should be taken into consideration for future display development.
The current limitations in the hardware design of pCubee, including the thick seams and color distortions at oblique viewing angles, require better display technologies to overcome. We also described the software components of the system, including the OpenSceneGraph, PhysX and FMOD engines and how they are integrated in the CubeeModel class. The software architecture adheres to object-oriented design and facilitates agile content development for pCubee.

For tracking calibration, we explored a dynamic visual calibration technique adapted from the previous work by Czernuszenko et al. [12] and tested its performance on pCubee. The technique allows the user to naturally look around the display and adjust perspectives at locations where there are mismatches between the virtual content and the physical display frame. We validated that the calibration technique alleviates the visual mismatch problem using only 24 correction measurements, as compared to the much larger number of data points required by other calibration techniques.

We also described a number of novel interaction techniques that can be supported by pCubee, including static visualization, dynamic interaction, scene navigation and bimanual stylus interaction. The interaction techniques are novel because of the tangible nature of outward-facing geometric displays. We prototyped these interaction schemes in a number of virtual applications, which helped demonstrate interesting applications possible with this unique type of 3D display technology.

Chapter 4

Evaluation of Visual Discontinuity: Effect of Seams

Figure 4.1: Seam occlusion problem with existing outward-facing geometric displays.

The design of multi-screen configurations leads to thick seams at edges between the flat-panel screens. These seams result in disruptions of the 3D visualization when users switch views from one screen to another, which we refer to as visual discontinuity.
While the problem is apparent in all previously described outward-facing geometric displays, as illustrated in Figure 4.1, no formal evaluation has measured its effect on the user's viewing experience or 3D task performance.

We conducted an experiment to evaluate the effect of pCubee's seam size in a 3D path-tracing visualization task of the kind commonly used in classic evaluations of single-screen FTVR displays. Visualization tasks such as path-tracing are suitable stimuli because they can be very vulnerable to visual discontinuity; seam occlusions prevent users from maintaining continuous visibility and thus hinder their ability to resolve the paths. We confirmed this vulnerability in a pilot experiment prior to the user study, in which we compared path-tracing performance on pCubee and on a 2D desktop display. In this chapter, we first report our main study and related findings on the impact of seam size in path-tracing tasks; we then discuss our pilot comparison study and the lessons learned from it, which were valuable for identifying the design parameters of our main experiment.

4.1 Path-tracing Tasks

Path-tracing tasks require users to examine path or graph structures in space, such as to determine the root of a tree branch that is tangled with other branches, as illustrated in Figure 4.2. Investigating user behaviors and performance in path-tracing has implications for real-world applications, including visualization of scientific data such as vector fields and biomedical data such as blood vessels. Traditionally, evaluations relied on path structures of different designs, including top-down trees [2, 51] and information nets [58]. Trends identified in these previous studies are that both structured motion-based and stereo 3D views are better than conventional 2D views, but motion cues are more beneficial than stereo cues alone regardless of the structures that were observed. While head-coupled motion
in pCubee should lead to similar performance advantages, we used variations of these path stimuli to examine issues surrounding the multi-screen aspect and seam occlusions of the display.

Figure 4.2: A path-tracing task showing two trees interleaving one another. Subjects were asked to identify whether the circle belonged to the triangle or square root at the bottom. Figure adapted from [2].

4.2 Apparatus

Both the pilot and the main studies were conducted using the workstation and the pCubee device described in Chapter 3. For the desktop display, we used a non-head-tracked 24-inch LCD monitor (1920x1200 resolution, 0.27 mm pixel pitch) for the pilot study and a 20-inch ViewSonic VP201b display (1600x1200 resolution, 0.26 mm pixel pitch) for the main study. Neither of the displays was stereo-capable due to limitations with the LCD panels. The experiments were set up on a desk where the subjects performed the tasks while seated. We used a conventional keyboard interface for subjects to enter their responses. The mouse that was used for manipulation in the experiment was set up on the right side of the subject, while the pCubee display was set up on the left. Subjects were allowed to freely interact with the pCubee display as they chose (all subjects performed the pCubee conditions seated except one, who chose to walk around pCubee instead of picking it up in the pilot study, and we discarded that data).

(a) 0-unit frame (b) 0.3-unit frame (c) 1.3-unit frame (d) 2.3-unit frame
Figure 4.3: Virtual frame structures of varying seam sizes used in the seam size experiment.

Head-tracking for pCubee was done using a pair of headphones with the mounted head sensor, which subjects had to wear for the duration of the study. We used a fixed head-to-eye offset (10 cm below, 5 cm in front and 3 cm to the left, to approximate monocular viewing with the left eye) for head-coupled perspective rendering on pCubee.
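Head-coupled perspective rendering of this kind computes an asymmetric (off-axis) viewing frustum from the tracked eye position relative to each screen. The following is a minimal sketch of the standard construction in a screen-local coordinate frame, a simplification rather than pCubee's actual code:

```cpp
// Off-axis frustum extents for a screen rectangle lying in the plane
// z = 0 of its own coordinate frame, viewed from an eye at (ex,ey,ez)
// with ez > 0. This is the standard asymmetric-frustum construction
// used for head-coupled (fish-tank) rendering.
struct Frustum { double left, right, bottom, top; };

Frustum offAxisFrustum(double ex, double ey, double ez,
                       double scrLeft, double scrRight,
                       double scrBottom, double scrTop,
                       double nearPlane) {
    // Scale screen-edge offsets (relative to the eye) onto the near plane.
    double s = nearPlane / ez;
    Frustum f;
    f.left   = (scrLeft   - ex) * s;
    f.right  = (scrRight  - ex) * s;
    f.bottom = (scrBottom - ey) * s;
    f.top    = (scrTop    - ey) * s;
    return f;
}
```

The resulting extents feed a glFrustum-style projection; pCubee performs an equivalent computation per screen, using each panel's pre-computed offset and rotation to express the tracked eye position in that panel's frame.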
In the experiment, 1 "unit" in the virtual space was analogous to 1 cm in the real world. No calibration of the tracking data was performed in this study.

4.3 User Study: Effect of Seam Size

To investigate the effect seam size has on visualization tasks, we evaluated user performance in path-tracing under the presence of different frame occlusions. Because we do not have physical prototypes other than the current pCubee, the experiment was conducted on a desktop setup. We created four virtual frames to simulate the different versions of pCubee that would have been built using panels of different seam sizes. As shown in Figure 4.3, we included 0-unit, 0.3-unit, 1.3-unit and 2.3-unit virtual frames to create different occlusion levels around the path stimuli. The 0-unit frame represented the best possible scenario when no frame was present, and the other three frames were designed to be within the possible minimum and maximum ranges that could be physically constructed (the seam size on the current physical pCubee is 2.3 cm, and we anticipated that the next pCubee prototype would be constructed with thinner display panels that create 1.3-cm seams). The virtual frames were generated to have viewing windows analogous in size to the physical pCubee display.

(a) 1.3-unit segment (b) 2.6-unit segment (c) 3.9-unit segment
Figure 4.4: Spherical structures of varying segment lengths used in the seam size experiment.

We designed a set of spherical path structures (see Figure 4.4) to more accurately represent searching in 3D space compared to stimuli used in past evaluations. Each structure was constructed by spanning three paths from a center node (referred to henceforth as the root), with each path containing multiple segments. The paths were placed around the surface of a 3-unit radius "shell" centered at the root.
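Placing random, well-separated path nodes on such a shell can be done with simple rejection sampling. The sketch below is our own illustration rather than the thesis's generator (which also perturbed the radius and segment lengths); the minimum-separation constraint appears here as a parameter:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Node { double x, y, z; };

// Rejection-sampling sketch: place nodes on a sphere of the given
// radius, keeping a minimum separation between any two nodes. The
// normalized-cube direction sampling is a simplified stand-in for the
// thesis's actual generator.
std::vector<Node> sampleShellNodes(int count, double radius,
                                   double minSeparation, unsigned seed) {
    std::srand(seed);
    auto unit = []() { return std::rand() / (double)RAND_MAX * 2.0 - 1.0; };
    std::vector<Node> nodes;
    while ((int)nodes.size() < count) {
        // Random direction: sample the cube, reject points outside the
        // unit ball (and degenerate near-zero samples) for uniformity.
        double x = unit(), y = unit(), z = unit();
        double len = std::sqrt(x * x + y * y + z * z);
        if (len < 1e-6 || len > 1.0) continue;
        Node n{radius * x / len, radius * y / len, radius * z / len};
        bool ok = true;
        for (const Node& m : nodes) {
            double dx = n.x - m.x, dy = n.y - m.y, dz = n.z - m.z;
            if (std::sqrt(dx * dx + dy * dy + dz * dz) < minSeparation) {
                ok = false;                  // too close: reject and retry
                break;
            }
        }
        if (ok) nodes.push_back(n);
    }
    return nodes;
}
```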
In total, three different spherical structures were generated using varying segment lengths to create the paths, including 1.3-unit, 2.6-unit and 3.9-unit segments, as illustrated in Figure 4.4. We were interested in whether varying segment lengths alone at each frame occlusion level would affect users in their viewpoint selection behaviors when performing path-tracing. We suspected that users would switch across multiple screens more if segments were long enough to appear on more than one screen simultaneously. In our design, 3.9-unit segments were easily viewable across two screens under all virtual frame occlusions, 2.6-unit segments were mostly blocked by only the 2.3-unit frame, and 1.3-unit segments were completely blocked by both the 1.3-unit and 2.3-unit frames.

Nodes used to connect the segments were randomly generated on the 3-unit shell with a 0.8-unit minimal node separation distance. To create more irregular and random path structures, we varied the distance between a node and the 3-unit shell (±0.4-unit variance from the surface radius) and also the length of each individual segment (a random factor of 0.7 to 1.3). We also constrained the total length of each path to be approximately 30 units to balance the difficulty across stimuli of different segment lengths (i.e. 1.3-unit structures had 23 segments on each path; for 2.6-unit, 11 segments; and for 3.9-unit, 7 segments).

With traditional path structures, we discovered in our pilot study that subjects were able to narrow down the task by ignoring large portions of the stimuli where the trace target was not present. With spherical path structures, in contrast, each path occupied a large region around the spherical surface, and subjects would not be able to easily isolate the paths from a single perspective. Compared to the top-down tree used by Arthur et al.
[2] and the "radial-spanning tree" used in our pilot study, the spherical path stimulus was a more spatially 3D design, similar to the "information nets" of [58]. We expected that subjects would be inclined to stay on the side of the stimulus where occlusions to the path being traced were minimal, and thus they could take advantage of large viewpoint changes to rotate and maintain the most desirable views.

The task involved two spherical path structures placed side by side in the center of the scene in each trial, separated horizontally by 1 unit between the two roots, which were represented by a blue sphere and a yellow sphere respectively. A target node, represented by a white sphere of the same size, was randomly placed at the end of one of the paths in one structure. The goal of the task was to determine which root the target node was connected to. Subjects indicated their answers by pressing keys that were color-coded blue and yellow to correspond to the colors of the roots. To control for performance consistency, we used a fixed set of random seeds to generate the same set of path structures for each subject, but the order of presentation was randomized. We tested and iterated on the path generation algorithm to ensure the chosen design made the path-tracing task non-trivial, yet not so difficult that a typical subject would have more than a 10 to 15 percent error rate.

4.3.1 Condition Design

In total, we tested four conditions of subjects performing the spherical path-tracing task. Each condition involved a different virtual frame rendered over the spherical path structures, which we refer to as 0 Frame, 0.3 Frame, 1.3 Frame and 2.3 Frame. The four conditions were run on the desktop monitor, and we showed a fixed perspective projection of the virtual scene by placing the virtual camera 60 units from the path stimulus to simulate a typical desktop viewing distance.
Subjects were allowed to use the mouse to rotate the spherical path structures along with the virtual frame; we mapped horizontal mouse movements to yaw (z-axis) rotation and vertical mouse movements to roll (x-axis) rotation. We reset the mouse cursor to the center of the screen after each mouse click to avoid subjects losing track of the mouse at the monitor's edges.

4.3.2 Method

The experiment used a 4x3 within-subjects design to evaluate performance across the four frame conditions and the three segment lengths. Subjects were instructed to complete the task as fast as possible while keeping errors to a minimum. At the beginning of each condition, subjects were given five practice trials to familiarize themselves with the presence of different frames, during which the system provided auditory feedback on whether they answered correctly. No feedback was given during the actual experiment.

For each condition, there were 15 consecutive trials of the path-tracing task, for a total of 60 trials through the experiment. Within each condition block, we generated five spherical path structures for each of the three segment lengths and randomized their orders across the fifteen trials. Each trial was initialized with the viewpoint as if the subject was looking from the front side of pCubee. Upon completion of the 60 trials, we asked subjects to fill out a short questionnaire on their approaches and challenges with respect to the seam size and segment length variations. Prior to the experiment, an instruction sheet was given to each subject outlining the task and the procedures as described above.

Ten subjects (6 males, 4 females) were recruited to participate in the study with compensation. Due to the limited subject pool, the presentation of the four conditions was counterbalanced such that no condition would be in the same order more than 3 times between all subjects (i.e. each condition would be presented first no more than 3 times, and so forth).
The principal dependent variables for the experiment were response times and error rates: response times represented the durations from stimulus onset to a keyboard response, and error rates represented the percentages of incorrect responses. Throughout the experiment, we also recorded the locations from which subjects viewed the path structures, which was done by capturing the virtual camera location relative to the centre of the virtual scene in each frame (i.e. one camera location sample every 25 msec for our experiment software running at 40 frames per second). This allowed us to analyze how subjects manipulated their viewpoints under different conditions, as described below.

4.3.3 Results

Tables 4.1 and 4.2 show the statistics for the mean response times and error rates. Two-way repeated measures analysis of variance (ANOVA) was carried out on both variables across the twelve seam size and segment length combinations.³ We only considered response times from correctly answered trials, because incorrect responses could mean a missed trace that resulted in shortened or lengthened trace times.

For the mean response times, degrees of freedom (DOF) were corrected using Greenhouse-Geisser estimates of sphericity (epsilon = 0.652), and the results showed that response times across seam sizes differed significantly

³We removed outliers from the data if the response time of an individual trial was 3 times the inter-quartile range (IQR) away from the 1st and 3rd quartiles of the mean response times within the same condition. In total, we removed 15 outliers out of 600 data points, spread across four subjects in all four seam size conditions (0 Frame: 4 outliers; 0.3 Frame: 3 outliers; 1.3 Frame: 2 outliers; 2.3 Frame: 6 outliers).
Segment Length   0 Frame   0.3 Frame   1.3 Frame   2.3 Frame
1.3              17.85s    17.70s      25.28s      27.41s
2.6              20.16s    20.84s      31.08s      30.92s
3.9              17.14s    17.50s      32.32s      30.61s
Average          18.38s    18.68s      29.56s      29.65s

Table 4.1: Statistics of mean response times in the seam size study.

Segment Length   0 Frame   0.3 Frame   1.3 Frame   2.3 Frame
1.3              10.00%    6.00%       6.00%       8.00%
2.6              4.00%     8.00%       4.00%       10.00%
3.9              2.00%     0.00%       11.00%      8.00%
Average          5.30%     4.70%       7.00%       8.70%

Table 4.2: Statistics of mean error rates in the seam size study.

(F(1.956,17.603) = 9.171, p = 0.002). However, no significant differences were found between segment lengths (F(2,18) = 1.428, p = 0.266), nor any interaction between seam sizes and segment lengths (F(6,54) = 0.806, p = 0.570). Pairwise t-tests with Bonferroni adjustments revealed that mean response times were significantly different (p < .05) between the 0 Frame and 1.3 Frame conditions, between the 0.3 Frame and 1.3 Frame conditions, and between the 0 Frame and 2.3 Frame conditions.

For the mean error rates, the data showed no significant difference between the four seam sizes (F(1.536,13.826) = 0.821, p = 0.494), with DOF corrected using Greenhouse-Geisser estimates of sphericity (epsilon = 0.512), nor between the three segment lengths (F(2,18) = 0.604, p = 0.557). Further, no interaction effect was found between the two factors (F(2.706,24.366) = 1.569, p = 0.225).

Figures 4.5 and 4.6 show the plots for the mean response times and mean error rates, respectively, across the test conditions.

Figure 4.5: Plot of mean response times in the seam size experiment (mean response time in seconds vs. frame size in cm, one series per segment length). Note the clear gap between the 0.3 Frame and 1.3 Frame conditions.

Figure 4.6: Plot of mean error rates in the seam size experiment. Data shown was averaged across all segment lengths within each frame condition.
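The outlier criterion from footnote 3 can be sketched as follows. The helper is illustrative; the actual analysis was presumably carried out in a statistics package.

```python
# Sketch of the outlier rule: within each condition, a trial is discarded
# when its response time lies more than 3 * IQR below the 1st quartile or
# more than 3 * IQR above the 3rd quartile.
from statistics import quantiles

def filter_outliers(times):
    """Split a condition's response times into (kept, removed)."""
    q1, _, q3 = quantiles(times, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 3 * iqr, q3 + 3 * iqr
    kept = [t for t in times if lo <= t <= hi]
    removed = [t for t in times if t < lo or t > hi]
    return kept, removed
```

A 3 x IQR fence is deliberately conservative, which is consistent with only 15 of 600 trials being removed.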
4.3.4 Discussions of Results

From the statistically significant effects found on mean response times and the additional viewpoint data that was collected, we observed important trends regarding the effect of seam size in path-tracing tasks. Although the tasks were performed using a desktop interface, we suspect user behaviors on the physical pCubee display would be similar to these results given the same seam size conditions.

Mean Response Times

As illustrated in Figure 4.5, a significant division in the mean response times between the different seam size conditions can be observed. The 0 Frame and 0.3 Frame conditions produced similarly fast response times, while the two thicker 1.3 Frame and 2.3 Frame conditions produced similarly slower response times. The gap between the two condition groups was surprisingly large (10 seconds) despite a relatively small sample. A speed-accuracy trade-off does not appear to have been a factor, as subjects also made an increased (though not significantly) number of errors as the seam size increased. The fact that response times in the 0.3 Frame condition were not significantly different from those in the 0 Frame condition suggests that occlusion was not a problem with the thinnest virtual frame. In the 0.3 Frame condition, subjects commented that a spatial reference with minimal occlusion was helpful in the task; one subject felt that the 0.3 Frame "was the easiest" and "no frame was surprisingly hard" because they did not have a "concrete reference point".

These results indicate that seam sizes that are currently possible mechanically for pCubee (the 2.3 Frame condition) are too big to be effective, as was also pointed out by subjects in the questionnaire responses. However, the results also revealed promise for future display development, because small seams are mechanically feasible with better screen panels, whereas seamless geometric displays are difficult to achieve and require a different type of display technology.
From the data, it appeared that seams only have to be below a certain threshold point (in between the 0.3 Frame and 1.3 Frame conditions) for the display to be effective. There might also be a screen-to-seam ratio that tolerates thicker seams with increased display real estate. Further investigation into the performance gap will be important for deriving more precise design guidelines for outward-facing geometric displays.

Specific lengths of the path segments did not affect user performance. Mean response times were not significantly different, and no noticeable patterns could be observed in the error rates. Subjects indicated longer segment lengths appeared more difficult due to more self-occlusions within the path structures themselves, but not due to the different frame occlusion levels. We compensated for the difficulty difference by adjusting the number of segments, which was reflected in similar response times across all segment lengths used. In general, subjects did not exhibit any changes in the strategies used to solve the task with respect to different segment lengths. These results suggest that the seams' impact might be independent of the size of the virtual content involved.

Viewpoint Analysis

By analyzing the recorded virtual camera positions, we can better understand user interaction with the stimulus and whether there were any notable trends between the conditions.

We defined frustums within the virtual space that represent six viewing regions (i.e. front, back, left, right, top and bottom) in order to visualize how subjects observed the path structures throughout the trials. We binned the recorded virtual camera position data into these six regions using two metrics: i) the per-screen usage pattern, which represents the average number of virtual camera data points in each of the six viewing regions across all trials, and ii) the multi-screen usage pattern, which represents how many regions were viewed and how long each was viewed in an average trial. Tables 4.3 and 4.4
Tables 4.3 and 4.4summarize the virtual camera data using these two metrics; we calculatedboth as percentages of the total number of data points recorded in eachframe condition. We also explored screen usage patterns across di erentsegment lengths but found they were similar, which further con rmed thatsegment lengths had no e ect on how subjects interacted with the scene, as554.3. User Study: E ect of Seam SizeSeam Size Front Back Left Right Top Bottom0 Frame 45.61% 9.93% 13.97% 15.41% 8.61% 6.46%0.3 Frame 59.95% 8.38% 12.70% 12.20% 6.76% 0.01%1.3 Frame 56.51% 11.73% 14.62% 9.54% 7.60% 0.01%2.3 Frame 63.97% 10.54% 6.73% 11.73% 7.02% 0.02%Table 4.3: Per-screen usage pattern in the seam size studySeam Size First Second Third Forth Fifth Sixth0 Frame 63.52% 24.22% 9.34% 2.38% 0.47% 0.06%0.3 Frame 72.72% 20.43% 5.52% 1.08% 0.24% 0.00%1.3 Frame 75.02% 17.65% 5.60% 1.57% 0.15% 0.00%2.3 Frame 78.78% 16.49% 3.58% 0.95% 0.19% 0.00%Table 4.4: Multi-screen usage pattern in the seam size studywas discussed previously.From the per-screen usage pattern, subjects spent the majority of theirviewing from the front, which can be attributed to the fact that trials wereinitialized from the front view. All other viewing regions occupied no morethan 15% each of the total camera data, compared to the 45% to 65% thatwere spent on the front viewing region. Subjects took advantage of thebottom view in the 0 Frame condition when there was no occlusion fromthe bottom, which resulted in the more evenly spread viewpoint data acrossthe six viewing regions compared to all other conditions.From the multi-screen usage pattern, there was a steady increase in theamount of time subjects spent on one screen in the task as the seam sizeincreased. Close to 80% of the time in a single trial was spent on one screenin the 2.3 Frame condition. Furthermore, subjects devoted to only two ofthe six viewing regions close to 90% of their time even in the best scenario, 0Frame condition. 
These patterns revealed that even with the spherical path stimulus, which was designed specifically for multi-screen viewing, subjects were able to solve the task within small viewing regions with motion, especially in the presence of thick seams that discouraged subjects from switching views.

(a) 0 Frame  (b) 0.3 Frame  (c) 1.3 Frame  (d) 2.3 Frame
Figure 4.7: Viewpoint movement visualization for the seam size experiment. The black dots represent the virtual camera positions from which the subject viewed the scene around the pCubee frame throughout the trial.

The observed patterns were good indicators of why subjects took more time to solve the task in the thicker frame conditions: they had to resolve additional segment occlusions when viewing mostly from a single region.

By plotting the virtual camera positions in 3D space, we could see more clearly how subject behaviors changed corresponding to the seam size. Figure 4.7 illustrates how a subject's viewpoint moved around the scene in a single trial in each condition. As shown, in the extreme cases, most movements were in front of one side of the structure when frame occlusions were the largest (the 1.3 Frame and 2.3 Frame conditions); whereas when the virtual frame was minimal, there was little constraint on where the subject would view from, and the camera positions spread more evenly around the scene (the 0 Frame and 0.3 Frame conditions).

Our viewpoint analysis also provided insights regarding the usability of outward-facing geometric displays in path-tracing visualization tasks. As discussed above, our path-tracing task was solvable through motion in a single viewing region; this is consistent with the findings in Ware's study [58] that any structured motion cues, including head-coupled rendering, hand-guided or automatic motions, lead to similar performance improvements in path-tracing.
We conclude that there would be only small benefits to using pCubee for rotation compared to using a mouse for the task, even without any seam blockages. This also explains why using pCubee and the mouse bimanually offered about the same performance as using just the desktop and mouse in our pilot study, as described below. While path-tracing visualization tasks are still feasible with outward-facing geometric displays with a small enough seam size, we argue there are 3D tasks that can better take advantage of the large range of tangible manipulation supported by pCubee.

4.4 Pilot Study: Radial Spanning Tree

In a pilot study which preceded our main study, we explored different visualization and manipulation techniques afforded by pCubee in another path-tracing task, compared with a desktop display and mouse. Instead of the conventional desktop setup, which would be the current status-quo solution to solving the task, we were interested in whether pCubee can offer performance benefits to users.

We designed a path structure that spanned outwardly in all directions (see Figure 4.8), which we refer to as a radial spanning tree, to better utilize the visualization space inside pCubee. Each of the tree structures contained three levels of branching: the first level extended seven branches from the root; the two subsequent levels after the first-level branches each extended randomly either three or four branches, resulting in a total of 63 to 112 branches per tree. To avoid cluttering and ambiguity due to overlapping paths, each branch was of a random length from 0.5 to 1.5 units, with a minimal separation distance of 0.5 unit between the end points of the branches.

Figure 4.8: Radial spanning tree stimulus used in the path-tracing pilot study.

Condition            Visualization   Rotation Input
pCubee-only          pCubee          pCubee
pCubee-and-Mouse     pCubee          Mouse (bimanual)
Desktop-and-pCubee   Desktop         pCubee
Desktop-and-Mouse    Desktop         Mouse

Table 4.5: Test conditions in the path-tracing pilot study.
The spanning direction of each branch was randomly generated, as long as it satisfied the separation distance requirement.

Similar to the main study, the pilot study task was to search through two overlapping trees to determine connections between the roots and a target node, which was randomly placed at the tip of one of the last-level branches in one of the tree structures. The subjects indicated their answers by pressing keys that were color-coded blue and yellow. The tree structures were randomly generated for each trial.

Figure 4.9: Experimental setup for the path-tracing pilot study.

4.4.1 Condition Design

We tested four conditions involving different combinations of manipulation and visualization with pCubee, the desktop monitor and the mouse. In the first condition (pCubee-only), only pCubee could be used to visualize the trees; the subjects could pick up the display and look into different sides. In the second condition (pCubee-and-Mouse), visualization inside pCubee was coupled with the mouse, which could be used to rotate the trees relative to the display for a bimanual interaction scheme. In the third condition (Desktop-and-pCubee), pCubee was used as an input device, and the visualization of the trees was decoupled and displayed on the desktop monitor; the rotation of pCubee was mapped with a one-to-one ratio onto the visualization on the desktop monitor. In the fourth condition (Desktop-and-Mouse), the mouse was used to rotate the visualization on the desktop monitor. The four conditions are summarized in Table 4.5, and Figure 4.9 illustrates our experiment setup.

To ensure consistent task difficulty across the four conditions, we included a virtual pCubee frame on the desktop monitor visualization, as illustrated in Figure 4.8 previously, to provide the same occlusion as would be observed on pCubee.
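The tree generation described at the start of Section 4.4 (seven first-level branches, then three or four branches per node for two further levels, lengths of 0.5 to 1.5 units, directions re-sampled until the 0.5-unit end-point separation holds) might be sketched as follows. The rejection-sampling loop and its retry cap are my assumptions; the thesis does not state how the separation requirement was enforced.

```python
# Sketch of radial-spanning-tree generation with a minimum separation
# between branch end points enforced by rejection sampling.
import math
import random

MIN_SEP = 0.5  # minimum separation between branch end points (units)

def random_direction():
    """Uniform random unit vector: rejection-sample inside the unit ball."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if 1e-6 < n <= 1.0:
            return [c / n for c in v]

def grow(parent, points, tries=5000):
    """Attach one branch to `parent`, re-sampling direction and length
    (0.5-1.5 units) until the end point clears MIN_SEP from all others."""
    for _ in range(tries):
        d = random_direction()
        length = random.uniform(0.5, 1.5)
        p = [parent[i] + d[i] * length for i in range(3)]
        if all(math.dist(p, q) >= MIN_SEP for q in points):
            points.append(p)
            return p
    raise RuntimeError("could not satisfy the separation constraint")

def radial_spanning_tree(root=(0.0, 0.0, 0.0)):
    """Generate branch end points: 7 first-level branches, then 3 or 4
    branches per node for two further levels."""
    points = [list(root)]
    level1 = [grow(root, points) for _ in range(7)]
    level2 = [grow(p, points) for p in level1
              for _ in range(random.choice((3, 4)))]
    for p in level2:
        for _ in range(random.choice((3, 4))):
            grow(p, points)
    return points[1:]  # all branch end points, excluding the root
```

Because a branch length of at least 0.5 units already separates a child from its parent, the rejection loop mainly resolves conflicts with siblings and with branches of the other levels.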
For conditions involving the desktop monitor for visualization, we showed a fixed perspective projection of the virtual scene by placing the virtual camera 60 units away to simulate a typical desktop viewing position. We also adjusted the size of the visualization on the desktop monitor to match what would be physically seen on pCubee. Due to the pixel pitch difference between the LCD panels, visualization on pCubee is sharper and offers better image quality than the desktop monitor.

For conditions that involved pCubee for visualization, we turned off the desktop monitor to avoid disruption. For conditions that involved the mouse for rotation, the mapping was the same as what was used in the main experiment. Because the mouse cursor was reset to the center of the screen after each mouse click, subjects would not see the cursor appear inside pCubee in the pCubee-and-Mouse condition.

4.4.2 Method

We measured response time and accuracy to evaluate performance between the four conditions. All subjects were first given verbal instructions about the task and how they could interact with pCubee. They were instructed to be as accurate as they could while completing the task as fast as possible. After the briefing, all subjects performed 10 consecutive trials of the path-tracing task for each condition (for a total of 40 trials) and were allowed practice trials before each condition. Upon completion of the 40 trials, we conducted a short interview session with each subject, in which they were asked to rank their preferences for the different interaction schemes. We also asked some general questions regarding what the subject liked or disliked about pCubee. In total, ten subjects (7 males, 3 females) were recruited to participate in the study with compensation.
4.4.3 Results

Repeated measures ANOVA carried out on the two dependent variables⁴ showed a significant difference in response time (F(3,27) = 9.395, p = 0.004) but not in error rate (F(3,27) = 2.444, p = 0.081). Pair-wise t-tests with Bonferroni adjustments on mean response times revealed that the pCubee-only condition was significantly slower than the pCubee-and-Mouse condition and the Desktop-and-Mouse condition. Given the small sample size, no significant ordering effect was found between the conditions using two-factor (univariate) ANOVA (F(3,27) = 0.473, p = 0.704). Figures 4.10 and 4.11 show plots of mean response time and error rate for each condition.

For the preference ranking, nine out of ten subjects indicated they preferred the pCubee-and-Mouse condition most. Table 4.6 summarizes the number of votes each condition received for each rank (equal ranks were permitted). A chi-square test was unsuitable for our analysis because our subject pool was limited and the vote count in each category was small.

4.4.4 Discussions of Results

Despite the small sample size, a number of important lessons were learned about users interacting with the pCubee display which inspired our main study. Here we summarize the results with respect to the mean response times and error rates, and also the qualitative feedback and observations we obtained during the pilot study.

Mean Response Times and Error Rates

Subjects were the fastest in the pCubee-and-Mouse condition, which suggests that the bimanual interaction scheme with pCubee offers benefits in accurately choosing viewpoints and tracing information in 3D scenes. This accords with the findings of Balakrishnan and Kurtenbach [4], in which bimanual interaction techniques for camera control offered faster performance. A contributing factor

⁴Using the IQR rule, we removed 35 outliers out of 400 data points, spread across 7 subjects in all four conditions; no visible patterns could be drawn
(pCubee-only: 6 outliers; pCubee-and-Mouse: 8 outliers; Desktop-and-pCubee: 10 outliers; Desktop-and-Mouse: 11 outliers).

Figure 4.10: Plot of mean response times in the path-tracing pilot study.

Figure 4.11: Plot of mean error rates in the path-tracing pilot study.

to the faster response times may have been that the path structures could be rotated independently of the virtual frame in the pCubee-and-Mouse condition, which reduced the impact of virtual frame occlusions. A surprising result was that the pCubee-only condition was relatively slower versus all other cases, including the Desktop-and-pCubee condition, which was least preferred. A lack of user-specific calibration could have been a problem that hindered subjects from selecting their desired viewpoints. The extra keyboard acquisition time (1-2 seconds) in the pCubee-only condition could have biased the response time data, but the amount is small compared to the overall time differences. Additional occlusions may also have played a role: it was observed that subjects would commonly grasp the sides of the box despite the small base on pCubee's bottom that was intended for holding, and the hand movements could have blocked their view during the task.

Condition            First   Second   Third   Fourth
pCubee-only          2       5        2       1
pCubee-and-Mouse     9       1        0       0
Desktop-and-pCubee   0       1        8       1
Desktop-and-Mouse    0       3        1       6

Table 4.6: Preference ranking in the path-tracing pilot study.

In terms of accuracy, subjects made noticeably more errors when they were using the desktop display.
A speed-accuracy trade-off was not a dominant factor between the conditions, as subjects performed the fastest in the pCubee-and-Mouse condition while making the fewest errors (the Desktop-and-Mouse condition was almost as fast but with double the error rate). We attributed this difference in accuracy to the pixel pitch difference, which resulted in sharper images on pCubee. This was a deficiency we corrected for in our subsequent experimental designs by balancing the number of pixels the test stimuli would occupy.

Feedback and Observations

During the interview sessions, most subjects commented favorably on the high degree of control available to them in the pCubee-and-Mouse condition. The bimanual interaction scheme allowed them to first choose their viewpoints through rapid movement of their head and the display, and then fine-tune the rotation of the path structures with the mouse to get the most desired view. A number of subjects commented that the interaction was intuitive to them, as if they were holding real objects in their hands. On the contrary, subjects felt that the Desktop-and-pCubee condition was unintuitive and cumbersome.

While in general subjects disliked the weight and cables, which made pCubee difficult to manipulate, we observed the thick seams to be an especially problematic factor that affected how subjects interacted with the cubic display. Subjects indicated the thick display seams were a major impediment to performing the task. Initially, we were concerned that subjects would rely heavily on the mouse input in the pCubee-and-Mouse condition, as they were most accustomed to mouse interaction. While subjects did utilize both pCubee and the mouse bimanually, most manipulation with pCubee was performed at the early stage of a trial, when subjects were searching for the best initial viewpoint from which to perform the task with subsequent mouse rotations.
We seldom observed subjects switching across the multiple screens of pCubee after engaging the mouse; instead they used slight head movements to "wiggle" their viewpoint back and forth. This evidence suggests that the radial spanning tree task was solvable even with a single head-tracked display as long as subjects were allowed mouse control; but more importantly, the presence of the seams discouraged users from taking better advantage of the multi-screen aspect of pCubee.

4.5 Summary

In this chapter, we reported an experiment on the visual discontinuity of pCubee. We investigated the effect of seam size by testing user performance in path-tracing tasks under four different levels of seam occlusion. A division of user performance in the task was discovered: subjects were similarly fast when the seam size of the virtual pCubee frame was less than 0.3 unit and similarly slow when the seam size was greater than 1.3 units, which suggests a physical threshold that exists in between.

By analyzing viewpoint data on how subjects interacted with the scene, it was shown that thicker seams discouraged subjects from utilizing multiple screens to perform the task. This led to additional self-occlusions of the path structures that subjects had to resolve, thus resulting in increased response times with thicker seams. Further, it was also observed that varying segment lengths in the path structures had little effect on user performance and interaction behaviors. This revealed that subjects were mainly affected by how much information was blocked by the thick seams, as opposed to whether or not the information could be carried across multiple screens. These observations accord with our pilot study, in which we explored different visualization and manipulation techniques afforded by pCubee.
Although we identified benefits of using pCubee bimanually with a mouse, the tangible display was used only for initial coarse rotation, and subjects mostly focused on rotating with the mouse while using only one screen.

From the studies' results, we concluded that path-tracing is not a suitable task space for pCubee or other tangible outward-facing geometric displays in which seams remain a significant issue. Even in the ideal case with a seamless frame, we observed little benefit in the utilization of the otherwise compelling multi-screen aspect of the cubic display. On the other hand, small-range structured rotation with high quality visualization, such as a single-screen head-tracked display, would be sufficient for such tasks, which reaffirmed the findings of Ware and Franck [58]. However, our studies motivate future evaluations to identify the observed seam size threshold more precisely, as it remains unclear how much a screen-to-seam ratio played a role. Further, similar evaluations in other visualization task domains, such as change blindness tasks in 3D, would strengthen our understanding regarding the impact of visual discontinuity in pCubee and similar outward-facing geometric displays.

Chapter 5

Evaluation of Task Performance: Spatial Reasoning

Compared to existing 3D display technologies, pCubee allows users to interact with high quality virtual content in a unified workspace, similar to how we manipulate real-world physical objects. Despite a number of limitations we identified in previous chapters, we see potential in pCubee to better assist users in spatial perception and reasoning tasks, such as comparing shapes and performing 3D rotation.

We conducted an evaluation of pCubee to characterize user performance in 3D reasoning. Specifically, we compared pCubee to a desktop display in a shape comparison task similar to mental rotation, exploring interaction and manipulation techniques afforded by the different display setups.
This involved the design of a 3D cube stimulus based on previous literature and pilot studies. Building upon the experiment, we also implemented a more complex 3D task that required users to solve a cubic puzzle inside pCubee. In this chapter, we report the evaluation and related findings on the utility of pCubee in our shape comparison task and also in more practical problem solving such as the cubic puzzle task.

5.1 Mental Rotation Tasks

Mental rotation tasks generally require the comparison of two 3D shapes oriented at two different angles (see Figure 5.1), which demands that users mentally rotate and match the models in order to determine whether they are identical. Shepard and Metzler [50] pioneered studies in this domain and discovered that the response time for users to decide whether two 3D shapes matched was linearly proportional to the angular difference between the two shapes. In further research, Shepard and Cooper [49] identified that user behaviors for rotations within the depth plane (2D rotations) and in depth (3D rotations) were similar, concluding that the task was sensitive not to the axis of rotation but rather only to the angular difference between the shapes.

Figure 5.1: A mental rotation task sample in which subjects need to determine if two shapes are identical. Figure adapted from [50].

More recent research has been in the area of cognitive psychology, identifying trends in learning effects [55], where users were found to perform equally fast in familiar orientations with practice, and gender differences [37, 45], where a large performance advantage in favor of males was clearly shown.
There is a large body of work on mental rotation with 2D shapes and letters [8, 9]; however, our evaluation focuses on 3D shapes in virtual space.

Compared to the path-tracing visualization tasks evaluated in the previous chapter, shape comparison tasks such as mental rotation have a number of characteristics that make pCubee a suitable device for performing the task. First, mental rotation tasks involve stimuli that more closely resemble physical objects. Users can learn their features to facilitate comparisons through manipulation and viewpoint changes, which is compatible with the interaction scheme provided by pCubee. Second, stimuli in mental rotation tasks also have simpler geometries that we suppose are less vulnerable to display seam occlusions. Unlike path-tracing tasks, in which the continuity of the information has been shown to be crucial, the nature of mental rotation tasks requires users to learn and maintain good spatial models of 3D shapes throughout the comparison process. Given the tangible, coupled interaction provided by pCubee, we expect it to offer users a strong spatial reference and a more intuitive approach to solving similar tasks compared to conventional 2D display setups.

5.2 User Study: 3D Cube Comparison

We performed a comparison study to evaluate the performance of different visualization and manipulation combinations afforded by the pCubee display, a conventional desktop display and a mouse in a 3D cube comparison task that was similar to mental rotation. We used a standard desktop monitor and mouse setup as this would be the current approach users have to solve the task. We were also interested in using the desktop display for visualization and the pCubee device just as an input device for rotation, to investigate the impact of the tangible nature of a 3D device rather than a mouse for manipulating orientation.
Thus, we evaluated three different conditions: (i) Desktop-and-Mouse, (ii) Desktop-and-pCubee, and (iii) pCubee-only.

For the experimental stimulus, we used a pair of dice-like cubic shapes. Instead of standard casino dice faces, we rendered different symmetrical icons on the cube faces, including spheres, cylinders, squares, diamonds and stars (see Figure 5.2). As opposed to traditional mental rotation stimuli, which were usually evaluated from one fixed viewpoint, our cubic shapes were designed so that subjects could take advantage of large perspective changes.

Figure 5.2: A 3D cube comparison stimulus. The two cubes are rotated at different angles and could contain the same or some different face icons.

Given the limitations regarding the seams being thick relative to the screen size in the current pCubee hardware, our stimulus design forced users to rotate pCubee and use multiple screens to examine and compare the 3D cubes. This helped to mitigate the seams' impact from the viewpoints encountered when performing the task. While users could potentially learn the shapes better through interactive viewpoint changes as opposed to a fixed viewpoint, the nature of the task was similar to mental rotation, as users were not allowed to rotate each cube independently.

We centered the pair of 3-unit cubes, separated horizontally by 5.5 units, within the virtual scene (pCubee itself is 14.5 units wide). We used five icons to map onto the six faces of each cube, allowing for one duplicated icon, because we found the comparison to be trivial if all six faces were unique. The arrangements of the icons on the cubes were randomly generated across all trials. In total, there were four angular differences we tested between the cube pair: 45, 90, 135 and 180 degrees (we found the 0-degree comparison to be trivial).
Each cube pair was first rotated together to a random, arbitrary orientation (a random axis of rotation and a random angle), and then the target angular difference was applied to one of the cubes along the same random axis.

The goal of the cube comparison task was to identify whether the two cubes presented on the display were "identical" or "different"; subjects input their response on the keyboard by pressing either "y" for the identical cases or "n" for the different cases. For the "identical" cases, the icon arrangements were equal between the cube pair, and it would be possible to rotate one cube along some arbitrary axis to exactly match the icons on the other cube. For the "different" cases, we swapped the icons on two random faces of one of the cubes, and it would be impossible to rotate one cube along any axis to produce the same icon arrangement as the other cube. We ensured that the swapping in the "different" cases would not reproduce identical cubes (i.e. by swapping two faces containing the same icons).

5.2.1 Apparatus

Figure 5.3 shows our experimental setup. Both the workstation and the pCubee software used were similar to what was described in the experiment in the last chapter. For test conditions involving the desktop display for visualization, we used a non-head-tracked LCD monitor (a 20-inch ViewSonic VP201b display panel with 1600 x 1200 resolution). Again, we did not test any stereo conditions due to limitations with the LCD panels on both pCubee and the desktop setup.

Figure 5.3: Experimental setup for the cube comparison experiment.

Differing from the last experiment, in which we used a pair of headphones, we mounted the head sensor on a head gear that could be adjusted to fit different users' heads.

Condition            Visualization   Rotation Input
Desktop-and-Mouse    Desktop         Mouse
Desktop-and-pCubee   Desktop         pCubee
pCubee-only          pCubee          pCubee

Table 5.1: Test conditions in the cube comparison study.
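The stimulus generation described at the start of Section 5.2 (rotating the pair to a shared random orientation, offsetting one cube by the target angular difference about the same axis, and swapping two distinct-icon faces for the "different" trials) might be sketched as follows. The quaternion representation and all names are illustrative assumptions; the thesis does not specify how orientations were represented.

```python
# Sketch of cube-pair generation for the 3D cube comparison task.
import math
import random

ICONS = ["sphere", "cylinder", "square", "diamond", "star"]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    half = math.radians(angle_deg) / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Compose rotations: the result applies b first, then a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def make_cube_pair(angular_diff_deg, identical):
    """Build one trial: face icons and orientations for cubes A and B."""
    faces_a = ICONS + [random.choice(ICONS)]   # five icons plus one duplicate
    random.shuffle(faces_a)
    faces_b = list(faces_a)
    if not identical:
        while True:                            # swap two faces with distinct icons
            i, j = random.sample(range(6), 2)
            if faces_b[i] != faces_b[j]:
                faces_b[i], faces_b[j] = faces_b[j], faces_b[i]
                break
    axis = normalize([random.gauss(0.0, 1.0) for _ in range(3)])
    base = quat_from_axis_angle(axis, random.uniform(0.0, 360.0))
    offset = quat_from_axis_angle(axis, angular_diff_deg)
    return faces_a, base, faces_b, quat_mul(offset, base)
```

Because the angular offset is applied about the same axis as the shared base rotation, the two quaternions commute and the angular difference between the cubes is exactly the target value.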
We calibrated the head-sensor-to-eye offset for each subject at the beginning of the experiment to ensure they received the most accurate viewpoints possible. We applied error compensation to the head sensor data based on correction vectors generated using the dynamic visual calibration approach prior to the experiment. To keep all test conditions balanced, we fixed the head gear on the subjects for the duration of the experiment.

The experiment was set up on a desk where the subjects performed the cube comparison task while seated. We used a conventional keyboard interface for subjects to enter their responses. The mouse used in the experiment was set up on the right side of the subject, while the pCubee display was set up on the left.

5.2.2 Condition Design

The three conditions in our evaluation are summarized in Table 5.1.

In the Desktop-and-Mouse condition, subjects used a conventional mouse interface to rotate the two cubes together in the desktop display. We used a trackball implementation provided within OSG to allow 3D view manipulation, which was similar to the standard viewpoint manipulation interfaces used in existing 3D modeling tools. The rotation mapping of the mouse to the 3D scene was based on its movement on the desktop display. We reset the mouse cursor to the center of the desktop display after each mouse click to prevent the cursor from going outside of the rendering contexts when it was used for rotation.

In the Desktop-and-pCubee condition, subjects used the pCubee display to manipulate the rendered content in the desktop display. We used a one-to-one mapping between the rotation of pCubee and the visualization on the desktop display, so that the orientation of pCubee matched what was seen in the desktop visualization.
As with other tangible 3D input devices, this condition allowed subjects to perform rotation using pCubee in a workspace decoupled from the visualization.

In the pCubee-only condition, subjects used the pCubee display to manipulate and visualize the 3D cube stimulus rendered inside. In this case, visualization and manipulation were coupled in a unified workspace, which we believed would offer the most natural interaction scheme for the comparison task. The cube pair remained fixed relative to pCubee, so subjects could look around the display to compare them.

For conditions involving the desktop display, we showed a fixed perspective projection of the 3D cube stimuli by placing the virtual camera 60 units (analogous to 60 cm in real space) away to simulate a typical desktop viewing position. Compared to the previous experiment, in which we attempted to maintain the same physical size between display conditions, we corrected for the difference in display parameters between pCubee and the desktop display by scaling the models by a factor of the pixel pitch difference between the two displays. Thus, the cubes appeared bigger on the desktop display but occupied approximately the same number of pixels as in pCubee.

To further balance the visualization qualities of both displays, we rendered a virtual pCubee frame in the desktop visualization to maintain the same levels of occlusion in all conditions. We piloted a short study showing that seams remain a significant factor in our task, which we will briefly discuss in the context of our main evaluation results later in this chapter.

5.2.3 Method

The experiment used a 3x4 within-subjects design for the three display conditions and four angular differences. The experiment consisted of 20 consecutive trials of the cube comparison task per display condition for a total of 60 trials. Within the conditions, we generated five cube pairs for
each of the four angular differences and randomized their orders across the 20 trials. In between completion of the 20-trial blocks, we allowed subjects to rest and practice 5 trials for the next condition.

We instructed the subjects to be as accurate as possible when performing the task by providing additional compensation for more accurate results. The system provided auditory feedback on whether they answered correctly throughout the experiment, and it displayed the accumulated score (i.e. the accumulated number of errors made) to the subjects upon completion of each 20-trial block.

Post-task questionnaires were given to our participants to collect subjective feedback regarding their interaction experiences. Using 7-point semantic differential questions, we designed 11 questions with which subjects rated specific aspects of the interaction and the mental process involved in the task for each condition. We also asked open-ended questions on what subjects liked and disliked about each interaction scheme. The 11 questions are listed below:

Q1: Using condition to do the task was (Very Challenging / Very Easy)
Q2: Using condition to do the task was (Very Unintuitive / Very Intuitive)
Q3: Using condition to do the task was (Very Unenjoyable / Very Enjoyable)
Q4: Using condition to rotate to the view I wanted was (Very Challenging / Very Easy)
Q5: Using condition to rotate to the view I wanted was (Very Unintuitive / Very Intuitive)
Q6: Using condition, rotating the cubes in my mind was (Very Challenging / Very Easy)
Q7: Using condition, rotating the cubes in my mind was (Very Unintuitive / Very Intuitive)
Q8: Using condition to do the task, the cubes appeared (Not at all 3D / Very 3D)
Q9: Using condition to do the task, the cubes appeared (Not at all Real / Very Real)
Q10: Using condition, I felt I performed (Very Slowly / Very Fast)
Q11: Using condition, I felt I performed (Very Inaccurately / Very Accurately)

The experiment took one and a half hours on average, and 12 subjects (7 males, 5 females) participated. The principal dependent variables for the experiment were response time and accuracy, and the three test conditions were counterbalanced between subjects to account for ordering bias. We recorded the virtual camera positions from which subjects viewed the scenes, as in our previous experiment, to analyze how they manipulated viewpoints in the different test conditions.

5.2.4 Results

                Error Rates                 Response Times
Angles     D+M      D+P      P         D+M      D+P      P
45         16.67%   17.50%   20.42%    24.66s   26.36s   30.39s
90         22.78%   16.67%   18.33%    32.88s   34.23s   33.49s
135        18.33%   20.00%   18.33%    32.31s   37.41s   36.87s
180        25.42%   25.00%   17.08%    34.00s   39.10s   32.82s
Average    20.80%   19.79%   18.54%    30.96s   34.27s   33.39s

Table 5.2: Mean error rates and response times across display conditions in the cube comparison study. (D+M = Desktop-and-Mouse; D+P = Desktop-and-pCubee; P = pCubee-only)

Table 5.2 shows the descriptive statistics for the error rates and response times (see footnote 5). A two-way repeated measures ANOVA was carried out on the two variables across the twelve display and angular-difference combinations. No significant main effect of the type of display used was found for either mean response times (F(2,22) = 1.499, p = 0.245) or error rates (F(2,22) = 0.177, p = 0.839). A significant difference was found between angular differences in mean response times (F(3,33) = 7.510, p = 0.011). Pairwise t-tests with Bonferroni adjustments revealed that 45 degrees (M = 27.136s, SD = 2.409) was significantly faster than 180 degrees (M = 35.304s, SD = 2.316).
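The response-time data summarized above were first screened for outliers with an interquartile-range rule (see footnote 5). A minimal sketch of such a filter, assuming the conventional 1.5x Tukey fences (the thesis does not state the exact multiplier used):

```python
import numpy as np

def iqr_filter(times, k=1.5):
    """Keep response times within [Q1 - k*IQR, Q3 + k*IQR]; trials outside
    the fences are treated as outliers and dropped. k=1.5 is assumed."""
    times = np.asarray(times, dtype=float)
    q1, q3 = np.percentile(times, [25, 75])
    iqr = q3 - q1
    keep = (times >= q1 - k * iqr) & (times <= q3 + k * iqr)
    return times[keep]
```

Applied per subject and condition, a rule of this form would yield the kind of screening reported in the footnote (26 of 720 trials removed).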
No interaction effect was found between the display used and the angular differences. We also performed an analysis using linear regression on the two variables; the correlation coefficients r² are summarized in Table 5.3. Figures 5.4 and 5.5 illustrate the mean error rates and response times plotted for the different conditions.

Table 5.4 summarizes the 7-point Likert scale responses for the 11 semantic differential questions in the post-task questionnaire. Repeated measures ANOVA showed significant differences between the Desktop-and-Mouse and pCubee-only conditions on Q6, Q7 and Q8 (F(2,20) = 4.327, p = 2.745E-2; F(2,20) = 4.462, p = 2.498E-2; and F(2,20) = 9.633, p = 1.176E-3, respectively) and also between all conditions on Q9, in which we asked how realistic the cubes appeared in the visualization (F(2,20) = 13.093, p = 2.318E-4).

Condition             Error Rates   Response Times
Desktop-and-Mouse     0.49          0.69
Desktop-and-pCubee    0.79          0.89
pCubee-only           0.87          0.27

Table 5.3: Correlation coefficients (r²) in the cube comparison study.

Footnote 5: Using the IQR range as in the previous experiment, we removed 26 outliers in total out of 720 data points, spread across six subjects (Desktop-and-Mouse condition: 12 outliers; Desktop-and-pCubee condition: 4 outliers; pCubee-only condition: 10 outliers).

Figure 5.4: Plot of mean error rates (%) against angular difference (degrees) in the cube comparison experiment.

Figure 5.5: Plot of mean response times (s) against angular difference (degrees) in the cube comparison experiment.

5.2.5 Discussions

We observed interesting results and trends revealing the advantages and current limitations of pCubee in spatial reasoning tasks. Here we discuss
Questions   D+M       D+P      P
Q1          -0.167    0.333    0.833
Q2          -0.0833   0.750    1.250
Q3          -0.500    0.500    1.333
Q4           0.667    0.917    1.167
Q5           0.417    0.583    1.250
Q6          -0.417    0.417    0.917*
Q7          -0.250    0.417    0.750*
Q8           0.250    1.000    2.083*
Q9          -0.917    0.417    1.833*
Q10         -0.667    0.750    0.583
Q11         -0.333    0.833    0.583

Table 5.4: Questionnaire responses from the cube comparison experiment. Values in bold are the highest in their categories. Values with asterisks (*) indicate a significant difference from at least one other condition.

our main evaluation results in the context of error rates, response times, subject preferences and viewpoint usage.

Mean Error Rates and Response Times

Surprisingly, both mean error rates and response times were very similar in the three conditions, which was contrary to our belief that the coupled workspace in pCubee would better support the task. This was also contrary to findings in a previous study by Ware and Rose [59] on 3D spatial rotation, in which they noted the importance of collocation of the physical rotational input and the virtual object being rotated. Compared to traditional mental rotation results, both response times and error rates were noticeably higher in our experiment; this could be due to the nature of our stimulus, which required subjects to view multiple sides to perform the comparison instead of viewing from a fixed perspective.

By comparing these data with results from a pilot study we conducted with the same 3D cube stimulus, the characteristics of using pCubee in this task space can be shown more clearly. In the pilot study, we tested only the Desktop-and-Mouse and pCubee-only conditions, but we did not render virtual frame occlusions in the desktop display condition. Figures 5.6 and 5.7 show the mean error rates and response times of this pilot study.
Figure 5.6: Plot of mean error rates (%) against angular difference (degrees) in the cube comparison pilot study.

Figure 5.7: Plot of mean response times (s) against angular difference (degrees) in the cube comparison pilot study.

From the pilot study results, subjects performed significantly faster using the desktop display than with pCubee when no virtual frame was present. Comparing this with our main evaluation results, we concluded that seam occlusions remain a significant factor for pCubee even in spatial reasoning tasks such as our comparison task. As in the path-tracing pilot study, we attributed a portion of the time difference to the extra keyboard acquisition time when pCubee was used for manipulation, which was a limitation in the experimental design that we failed to address methodologically.

Examining the linear regression coefficients in Table 5.3, the mean response times in our main evaluation were in line with the linear relationship between judgment speed and angular difference previously reported for traditional mental rotation tasks (r² = 0.69 for the Desktop-and-Mouse condition and r² = 0.89 for the Desktop-and-pCubee condition), except for the pCubee-only condition (r² = 0.27). Large standard deviations were present in the error rate data with no noticeable trends; we suspect this is due to the difficulty of the task itself, but possibly also to the random nature of the generation of the 3D cube stimuli, which resulted in large between-subject variance (unlike Chapter 4, where order was randomized but not the stimuli).
Given the relatively small sample size, a larger participant pool would be needed to confirm trends between response times, error rates and angular differences in our stimulus.

An encouraging insight drawn from these results is that seam occlusions accounted for the majority of the time difference between our main evaluation and the pilot study. Most subjects disliked the cables and the size and weight of pCubee, which restrained their ability to manipulate the device. These are technical factors that could be improved with engineering effort for outward-facing geometric displays. We argue that with a smaller, lighter and potentially wireless prototype, the manipulation factor could be greatly improved, allowing pCubee to outperform the desktop display.

Subject Preferences and Feedback

From the questionnaire, pCubee received the highest average scores in 9 out of 11 questions on performing the cube comparison task. Most importantly, in the questions regarding the difficulty and intuitiveness of rotating the cubes mentally, the scores given to the pCubee-only condition were significantly higher than those for the Desktop-and-Mouse condition. These results revealed qualities of pCubee that made it the "preferred" and intuitive choice for performing our comparison task.

Subjects' feedback was also in agreement with these qualitative advantages pCubee provides: "I rotated it (pCubee) in my hands a lot, which was nice. I was less familiar with this (pCubee) than rotating on a monitor, which was challenging. I tried to visualize pCubee as a box containing the cubes suspended in clear gelatin, and to see into the glass box. This seemed to help." Another subject identified that "it (pCubee) was initially slower, having to move around a physical object, but I got used to it and I had more confidence in my answers."
Furthermore, subjects commented on the unintuitiveness of the mouse-based rotation: "it (mouse) took a lot of clicks, and I sometimes lost my frame of reference", and "it was difficult to get the view I wanted because it was less hands-on than actually rotating the cube."

However, from the experimental data, we could not judge whether there was a difference between the coupled pCubee-only and the decoupled Desktop-and-pCubee visualization and manipulation in our cube comparison tasks. The questionnaire responses reflected that subjects felt they were faster and more accurate, though not significantly so, in the Desktop-and-pCubee condition, and they also indicated their preference for the larger viewing area of the desktop monitor. These results may be due to the fact that objects appeared larger on the desktop display after the pixel pitch adjustment, and they contradict previously reported findings in a similar task space regarding the advantages of a unified workspace [59]. It would be worthwhile to further investigate the benefits of coupled and decoupled visualization and manipulation for outward-facing geometric displays.

Display   Front    Back     Left     Right    Top
D+M       26.17%   22.87%   10.46%   11.44%   28.90%
D+P       50.66%   7.50%    5.94%    10.83%   24.90%
P         45.56%   13.08%   6.22%    8.67%    26.34%

Table 5.5: Mean per-screen usage pattern from the cube comparison experiment. The bottom side is not shown since it is occluded in the visualization and was almost never used.

Display   First    Second   Third    Fourth   Fifth
D+M       51.89%   24.16%   13.61%   7.15%    3.10%
D+P       57.31%   25.23%   11.34%   4.79%    1.29%
P         58.16%   24.22%   11.39%   4.70%    1.52%

Table 5.6: Mean multi-screen usage pattern from the cube comparison experiment.
The sixth screen is not shown because the bottom screen was almost never used.

Viewpoint Analysis

Tables 5.5 and 5.6 summarize the virtual camera data using the same two metrics of per-screen usage and multi-screen usage patterns as described in the seam size experiment.

From the per-screen usage pattern, viewpoint usage was spread evenly across the different sides, especially in the Desktop-and-Mouse condition, in which there were few constraints on the mouse manipulation. For pCubee, the front and top screens were the most heavily used, which could be attributed to the experimental setup, which placed pCubee with the front and top screens most easily accessible.

More noteworthy trends can be drawn from the multi-screen usage pattern when compared with the seam size experiment. In our cube comparison tasks, subjects made use of at least three viewing regions, which accounted for about 90% of the virtual camera recordings in all three test conditions. To visualize subjects' interaction behaviors more explicitly, Figure 5.8 depicts how one subject manipulated their perspective in each of the three conditions in a single trial.

Figure 5.8: Viewpoint movement visualization in the cube comparison experiment ((a) Desktop-and-Mouse; (b) Desktop-and-pCubee; (c) pCubee-only). The black dots represent the virtual camera positions from which the subject viewed the scene around the pCubee frame throughout the trial.

As shown in these data, the virtual camera viewpoints surrounded the pCubee frame from all five sides; these interaction behaviors were drastically different from the restrained, seam-size-dependent behaviors observed in the path-tracing studies.
This evidence showed that our stimulus design was successful in persuading subjects to perform a larger range of view manipulation.

5.3 Applications in Spatial Reasoning Tasks

Results from our study have important implications that can aid the design and development of tangible, outward-facing geometric displays for challenging, real-world applications, such as 3D rotation and docking [26] (see Figure 5.9). Imagining rotation in 3D has been proven to be a difficult task [44], and precisely rotating and positioning 3D objects, such as in 3D Tetris and puzzle applications, remain open research problems.

Figure 5.9: A 3D docking task requiring the user to superimpose one chair on top of the other. Figure adapted from [26].

Building upon results from the cube comparison experiment, we explored the utility of pCubee in a 3D cubic puzzle task. Specifically, we were curious whether or not pCubee could support more complex rotation and problem solving with its novel and tangible interaction schemes. The 3D cubic puzzle, as shown in Figure 5.10, involved a "working space" where the puzzle pieces were scattered and a 3x3x3 "solution space" into which users had to fit the pieces. Using the physical stylus, users were asked to control a virtual stylus inside pCubee to complete the puzzle, which involved pointing, selection, manipulation and placement of the puzzle pieces in 3D space. Different mechanisms were implemented to assist users in performing the task, including visual guides for collisions and snapping in the "solution space". Refer to Appendix C for a detailed description of the interaction schemes that were implemented, as well as the experimental analysis.

We recruited 10 volunteer subjects to attempt the puzzle task, in which we compared user performance in solving a physical cubic puzzle on a desk and a virtual version of an identical puzzle inside pCubee (Figure 5.11).
We counterbalanced the order of the physical and virtual puzzles and measured the total completion time of each. In a post-task questionnaire, we asked the volunteer subjects about the benefits and weaknesses of doing the puzzle in each domain.

All subjects were able to complete the puzzles in both domains, and average completion times were measured to be 147.8s and 327.3s for the physical and virtual puzzles respectively.

Figure 5.10: pCubee showing stylus interaction with a 3D cubic puzzle.

Figure 5.11: Physical and virtual puzzles used in the cubic puzzle task ((a) Physical Puzzle; (b) Virtual Puzzle).

In the questionnaire, subjects preferred the interactions (selection, manipulation, placement and correction) available to them in the physical puzzle over the virtual puzzle, but perceived performance was equal for both domains (+1 scores on 7-point Likert scales). Despite their strong preferences for solving the puzzle physically, subjects were only about twice as slow solving the virtual puzzle, which we considered to be acceptable performance given the well-known challenges of 3D interaction. These results were encouraging, and they confirmed the practical application potential of pCubee. However, there remains a significant gap between physical manipulation and virtual manipulation with 3D interfaces.

5.4 Summary

In this chapter, we reported an experiment on the capability of pCubee to support 3D reasoning. In our study, we compared the performance of pCubee to a desktop display in a cube comparison task that was similar to mental rotation. We explored different interaction techniques, including using pCubee for coupled visualization and manipulation, using the desktop display for visualization and pCubee for decoupled manipulation, and also a conventional desktop-and-mouse setup.

Due to the limitations of the thick seams in the current pCubee hardware, we designed a stimulus that required subjects to view from multiple sides to solve the task.
The viewpoint data recorded during the experiment showed that subjects utilized multiple viewing regions, which was different from what was observed in the previous seam size experiment. While seam occlusion remained a factor that significantly affected user performance, we identified that subjects significantly preferred using pCubee over a desktop setup. Both time and accuracy using the preferred pCubee device were as good as using the larger, more familiar desktop display.

We further examined pCubee in more complex tasks, such as solving 3D puzzles that require selection, manipulation and placement. We compared user performance in solving an identical pair of puzzles: a physical puzzle on a desk and a virtual puzzle inside pCubee. While users were significantly faster in solving the physical puzzle, results from using pCubee were better than we had expected, which further confirmed the usability of pCubee for spatial reasoning tasks. These results motivate the ongoing evaluation of tangible outward-facing geometric displays to identify their associated benefits in similar domains, such as 3D visual search tasks.

Chapter 6

Conclusions

In this thesis, we evaluated pCubee, a tangible outward-facing geometric display, to address the lack of empirical work reported on this class of 3D display technology. Through identifying important issues for 3D display evaluations based on previous literature, we analyzed and investigated multiple aspects of pCubee, including its design, tracking calibration, interaction techniques, seam size occlusions and 3D task performance. In this concluding chapter, we revisit the main contributions of this work related to outward-facing geometric displays.
Further, we suggest a number of future directions in the context of continued development and evaluation of display technologies that are comparable to pCubee.

6.1 Contribution Summary

Evaluation of the Impact of Seam Size

Through an evaluation of different virtual pCubee frame occlusions in path-tracing visualization tasks, we identified a significant division in response times and interaction behaviors that was dependent on the width of the seams. Our results suggested a threshold on the physical size of seams beyond which they hinder task performance and discourage users from viewing across multiple screens. For the current size and configuration of pCubee, we discovered that the display could be effective as long as the seams were below a threshold that lies within a 3-13 mm window. More importantly, we demonstrated that path-tracing tasks failed to take advantage of multi-screen viewing and are unsuitable tasks for tangible outward-facing geometric displays.

Evaluation of Spatial Reasoning using pCubee

In an evaluation of a cube comparison task that was similar to mental rotation, we identified that subjects significantly preferred the pCubee display over a desktop setup for performing the task. Both response times and error rates using the pCubee system were shown to be similar to those of the bigger and more familiar desktop setup, despite the current limitations of pCubee. Our evaluation confirmed the usability of tangible outward-facing geometric displays for 3D spatial tasks and proved them to be a promising approach for improving users' 3D manipulation and reasoning abilities.

Novel 3D Interaction Schemes

We explored interaction schemes afforded by tangible outward-facing geometric displays that are different from those of existing static 3D systems and are more similar to how we interact with physical objects in the real world.
We proposed four novel interaction schemes, including static 3D content visualization, dynamic interaction, large scene navigation and bimanual stylus-based interaction, to demonstrate the application potential of outward-facing geometric displays.

6.2 Future Directions

Throughout the course of this research, a number of challenges related to outward-facing geometric displays were encountered that are promising directions for future investigation. Here we summarize these future directions with respect to display development and each of the areas of pCubee that we investigated.

Display Development

Many of the problems associated with the properties of flat panel displays in the current pCubee system can be overcome with better display technologies. Organic Light Emitting Diode (OLED) display panels can offer a significant improvement over LCD panels in both seam thinness and wide-viewing-angle acuity. Square panels would also create a true cubic display with equal viewing real estate on all sides, but square screens are not commercially available. Transmitting all images in one single graphics signal output would be a significant step towards reducing the amount of cabling involved in the current system, allowing the display to be less constrained and more easily manipulated. We foresee that future outward-facing geometric displays will be capable of providing better 3D visualization and interaction for the purposes of real-world applications and more in-depth evaluations.

Tracking Calibration

Outward-facing geometric displays present a calibration problem that requires corrections for both the display and the user. With the current magnetic tracking system, distortion arising from display movements is a challenge that existing calibration techniques fail to compensate for.
Developing a comprehensive tracking technique that addresses position and orientation errors of both the display and the user would be an important step towards a more immersive 3D experience. Other tracking technologies less prone to distortion, such as inertial and optical techniques, could be considered, but accuracy and latency constraints are high for outward-facing geometric displays like pCubee.

Interaction Techniques

Beyond visualization tasks, precision interaction with virtual content, such as selection and manipulation, remains a difficult problem for 3D display technologies regardless of their designs. The bimanual interaction scheme afforded by pCubee is a unique approach to manipulating 3D objects that has yet to be thoroughly investigated in this work. It is worthwhile to further explore 3D selection and manipulation techniques that can be supported by pCubee's bimanual, unified workspace.

Visual Discontinuity

While adequate seam occlusions were shown to provide correct occlusion cues and spatial reference, we propose to identify a more precise seam threshold for outward-facing geometric displays. From our evaluation, it remains unclear whether such a threshold is absolute or relative to the size of the display panels, though we suspect it could depend on a screen-to-seam ratio or on the amount of perspective change involved in the discontinuity. Given the severity of the effect seam size had in our task performance analysis, deriving a design guideline on the level of visual occlusion would be a significant contribution to future display development.

3D Task Performance

Through the evaluation of our cube comparison task, we discovered that pCubee was the preferred choice despite its performance being no better than the desktop display. Future evaluations with an improved prototype (i.e. untethered manipulation, thinner seams) would facilitate better comparison between the systems and confirm any performance benefits of using pCubee.
Also, migrating existing 2D spatial psychology experiments to 3D, such as change blindness or visual search tasks, is an avenue of future evaluation that few have addressed in the past.

Applications

Throughout this thesis, we demonstrated the capability of pCubee to support a number of interesting applications, including artistic painting and 3D Tetris or cubic puzzles. While pCubee is naturally a gaming and entertainment platform, it can also be utilized in practical and novel tasks, especially in the area of virtual presence. For virtual museums, physical objects can be brought into the display for closer examination and manipulation; for teleconferencing, 3D representations of the participants can be shown inside the display boundary for a "talking-head" interaction experience.

6.3 Concluding Remarks

We foresee outward-facing geometric displays becoming a new paradigm for visualizing, interacting and communicating with digital 3D content, making them an effective tool in practical applications including both education and entertainment. As thinner and higher quality display technologies like OLED panels reach the market, these displays will continue to improve in fidelity and create even more compelling 3D effects.

The evaluations of the various aspects of pCubee reported in this thesis are the first such analyses applied to the domain of outward-facing geometric displays. We believe this work to be an important first step towards realizing their full potential in wide-reaching real-world applications as described above. Our aim is that the outcomes of our evaluations will provide a reference and a systematic framework for analyses of similar display technologies in the future, to facilitate a deeper understanding of these displays.

Bibliography

[1] L. Arns, D. Cook, and C. Cruz-Neira. The benefits of statistical visualization in an immersive environment. Volume 0, page 88, Los Alamitos, CA, USA, 1999. IEEE Computer Society.

[2] K.W. Arthur, K.S. Booth, and C. Ware.
Evaluating 3D task performance for fish tank virtual worlds. ACM Trans. Inf. Syst., 11:239-265, July 1993.

[3] R. Balakrishnan, G. Fitzmaurice, and G. Kurtenbach. User interfaces for volumetric displays. Computer, 34(3):37-45, March 2001.

[4] R. Balakrishnan and G. Kurtenbach. Exploring bimanual camera control and object manipulation in 3D graphics interfaces. In Proceedings of the SIGCHI conference on Human factors in computing systems: the CHI is the limit, CHI '99, pages 56-62, New York, NY, USA, 1999. ACM.

[5] O. Bimber, B. Fröhlich, D. Schmalstieg, and L.M. Encarnação. The virtual showcase. IEEE Computer Graphics and Applications, 21:48-55, 2001.

[6] K.S. Booth, M.P. Bryden, W.B. Cowan, M.F. Morgan, and B.L. Plante. On the parameters of human visual performance: an investigation of the benefits of antialiasing. In Proceedings of the SIGCHI/GI conference on Human factors in computing systems and graphics interface, CHI '87, pages 13-19, New York, NY, USA, 1987. ACM.

[7] S. Bryson. Measurement and calibration of static distortion of position data from 3D trackers. In Proceedings of SPIE Conference on Stereoscopic Displays and Applications III, pages 244-255, 1992.

[8] L.A. Cooper. Mental rotation of random two-dimensional shapes. Cognitive Psychology, 7:20-43, 1975.

[9] M.C. Corballis, N.J. Zbrodoff, L.I. Shetzer, and O.B. Butler. Decisions about identity and orientation of rotated letters and digits. Memory & Cognition, 6(2):98-107, 1978.

[10] Nvidia Corporation. Nvidia PhysX engine. Technical report, Nvidia, 2010.

[11] C. Cruz-Neira, J.D. Sandin, and T.A. DeFanti. Surround-screen projection-based virtual reality: the design and implementation of the CAVE. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques, SIGGRAPH '93, pages 135-142, New York, NY, USA, 1993. ACM.

[12] C. Czernuszenko, D. Sandin, and T. DeFanti. Line of sight method for tracker calibration in projection-based VR systems.
In Proceedings of the 2nd International Immersive Projection Technology Workshop, 1998.

[13] M. Deering. High resolution virtual reality. In Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '92, pages 195–202, New York, NY, USA, 1992. ACM.

[14] C. Demiralp, C.D. Jackson, D.B. Karelitz, S. Zhang, and D.H. Laidlaw. CAVE and Fishtank virtual-reality displays: A qualitative and quantitative comparison. IEEE Transactions on Visualization and Computer Graphics, 12:323–330, 2006.

[15] J.P. Djajadiningrat, G. Smets, and C. Overbeeke. Cubby: A multi-screen movement parallax display for direct manual manipulation. Displays, 17:191–197, 1997.

[16] J.P. Djajadiningrat, P.J. Stappers, and C. Overbeeke. Cubby: A unified interaction space for precision manipulation. In Medicine Meets Virtual Reality 2001, pages 129–135. IOS Press, 2001.

[17] E. Downing, L. Hesselink, J. Ralston, and R. Macfarlane. A three-color, solid-state, three-dimensional display. Science, 273:1185–1189, 1996.

[18] S. Ellis, B. Adelstein, S. Baumeler, G. Jense, and R. Jacoby. Sensor spatial distortion, visual latency, and update rate effects on 3D tracking in virtual environments. In IEEE Virtual Reality Conference, page 218, 1999.

[19] G.E. Favalora. Volumetric 3D displays and application infrastructure. Computer, 38(8):37–44, August 2005.

[20] FMOD. FMOD toolkit. Technical report, FMOD, 2010.

[21] T. Grossman and R. Balakrishnan. The design and evaluation of selection techniques for 3D volumetric displays. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, UIST '06, pages 3–12, New York, NY, USA, 2006. ACM.

[22] T. Grossman and R. Balakrishnan. An evaluation of depth perception on volumetric displays. In Proceedings of the Working Conference on Advanced Visual Interfaces, AVI '06, pages 193–200, New York, NY, USA, 2006. ACM.

[23] T. Grossman, D. Wigdor, and R. Balakrishnan.
Multi-finger gestural interaction with 3D volumetric displays. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, UIST '04, pages 61–70, New York, NY, USA, 2004. ACM.

[24] K. Gruchalla. Immersive well-path editing: Investigating the added value of immersion. In IEEE Virtual Reality Conference, page 157, 2004.

[25] Y. Guiard. Asymmetric division of labor in human skilled bimanual action: the kinematic chain as a model. Journal of Motor Behavior, 19:486–517, 1987.

[26] M. Hachet, P. Guitton, and P. Reuter. The CAT for efficient 2D and 3D interaction as an alternative to mouse adaptations. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST '03, pages 225–112, New York, NY, USA, 2003. ACM.

[27] J. Hagedorn, S. Satterfield, J. Kelso, W. Austin, J. Terrill, and A. Peskin. Correction of location and orientation errors in electromagnetic motion tracking. Presence: Teleoperators and Virtual Environments, 16:352–366, August 2007.

[28] K. Hinckley, R. Pausch, J. Goble, and N. Kassell. Passive real-world interface props for neurosurgical visualization. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Celebrating Interdependence, CHI '94, pages 452–458, New York, NY, USA, 1994. ACM.

[29] K. Hinckley, J. Tullio, R. Pausch, D. Proffitt, and N. Kassell. Usability analysis of 3D rotation techniques. In Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, UIST '97, pages 1–10, New York, NY, USA, 1997. ACM.

[30] M. Inami. Media3: the virtual hologram. In ACM SIGGRAPH 97 Visual Proceedings: The Art and Interdisciplinary Programs of SIGGRAPH '97, SIGGRAPH '97, page 107, New York, NY, USA, 1997. ACM.

[31] VertexLCD Inc. LCD panel controller ADB1024-14LVK. Technical report, VertexLCD, 2010.

[32] VertexLCD Inc. LCD panel KM050V01-L01. Technical report, VertexLCD, 2010.

[33] K. Ito, H. Kikuchi, H. Sakurai, I. Kobayashi, H. Yasunaga, H. Mori, K. Tokuyama, H. Ishikawa, K.
Hayasaka, and H. Yanagiisawa. 360-degree autostereoscopic display. In ACM SIGGRAPH 2010 Emerging Technologies, SIGGRAPH '10, pages 1:1–1:1, New York, NY, USA, 2010. ACM.

[34] A. Jones, I. McDowall, H. Yamada, M. Bolas, and P. Debevec. Rendering for an interactive 360° light field display. In ACM SIGGRAPH 2007 Papers, SIGGRAPH '07, New York, NY, USA, 2007. ACM.

[35] V. Kindratenko. Calibration of electromagnetic tracking devices. Virtual Reality, 4:139–150, 1999. doi:10.1007/BF01408592.

[36] V. Kindratenko and W. Sherman. Neural network-based calibration of electromagnetic tracking systems. Virtual Reality, 9:70–78, 2005. doi:10.1007/s10055-005-0005-3.

[37] M.C. Linn and A.C. Petersen. Emergence and characterization of sex differences in spatial ability: A meta-analysis. Child Development, 56(6), 1985.

[38] M.A. Livingston and A. State. Magnetic tracker calibration for improved augmented reality registration. Presence: Teleoperators and Virtual Environments, 6:532–546, 1997.

[39] R. Lopez-Gulliver, S. Yoshida, S. Yano, and N. Inoue. gCubik: real-time integral image rendering for a cubic 3D display. In ACM SIGGRAPH 2009 Emerging Technologies, SIGGRAPH '09, pages 11:1–11:1, New York, NY, USA, 2009. ACM.

[40] J.D. Mackinlay and J. Heer. Wideband displays: mitigating multiple monitor seams. In CHI '04 Extended Abstracts on Human Factors in Computing Systems, CHI EA '04, pages 1521–1524, New York, NY, USA, 2004. ACM.

[41] E. Maksakov, K.S. Booth, and K. Hawkey. Whale tank virtual reality. In Proceedings of Graphics Interface 2010, GI '10, pages 185–192, Toronto, Ontario, Canada, 2010. Canadian Information Processing Society.

[42] M. McKenna. Interactive viewpoint control and three-dimensional operations. In Proceedings of the 1992 Symposium on Interactive 3D Graphics, I3D '92, pages 53–56, New York, NY, USA, 1992. ACM.

[43] OSG. OpenSceneGraph. Technical report, OSG, 2010.

[44] L.M. Parsons.
Inability to reason about an object's orientation using an axis and angle of rotation. Journal of Experimental Psychology: Human Perception and Performance, 21(6):1259–1277, 1995.

[45] T.D. Parsons, P. Larson, K. Kratz, M. Thiebaux, B. Bluestein, J.G. Buckwalter, and A.A. Rizzo. Sex differences in mental rotation and spatial rotation in a virtual environment. Neuropsychologia, 42, 2004.

[46] Polhemus. Polhemus Fastrak. Technical report, Polhemus, 2010.

[47] Prabhat, A. Forsberg, M. Katzourin, K. Wharton, and M. Slater. A comparative study of desktop, Fishtank, and CAVE systems for the exploration of volume rendered confocal data sets. IEEE Transactions on Visualization and Computer Graphics, 14:551–563, 2008.

[48] Prabhat, D.H. Laidlaw, T.F. Banchoff, and C.D. Jackson. Comparative evaluation of desktop and CAVE environments for learning hypercube rotations. Technical Report CS-05-09, 2005.

[49] R.N. Shepard and L.A. Cooper. Mental Images and Their Transformations. MIT Press, 1982.

[50] R.N. Shepard and J. Metzler. Mental rotation of three-dimensional objects. Science, 171(3972):701–703, 1971.

[51] R.L. Sollenberger and P. Milgram. A comparative study of rotational and stereoscopic computer graphics depth cues. In Human Factors Society 35th Annual Meeting, pages 1452–1456, 1991.

[52] I. Stavness, F. Vogt, and S. Fels. Cubee: thinking inside the box. In ACM SIGGRAPH 2006 Emerging Technologies, SIGGRAPH '06, New York, NY, USA, 2006. ACM.

[53] A. Sullivan. DepthCube solid-state 3D volumetric display. In Proceedings of SPIE 5291, pages 279–284, 2004.

[54] I.E. Sutherland. A head-mounted three dimensional display. In Proceedings of the December 9–11, 1968, Fall Joint Computer Conference, Part I, AFIPS '68 (Fall, Part I), pages 757–764, New York, NY, USA, 1968. ACM.

[55] M.J. Tarr and S. Pinker. Mental rotation and orientation dependence in shape recognition. Cognitive Psychology, 21(2):233–282, 1989.

[56] S. Tay, P.A. Blanche, R. Voorakaranam, A.V. Tunc, W. Lin, S. Rokutanda, T. Gu, D. Flores, P. Wang, G.
Li, P. St Hilaire, J. Thomas, R.A. Norwood, M. Yamamoto, and N. Peyghambarian. An updatable holographic three-dimensional display. Nature, 451(7179):694–698, 2008.

[57] D.L. Vickers. Sorcerer's apprentice: head-mounted display and wand. PhD thesis, 1972. AAI7310165.

[58] C. Ware and G. Franck. Evaluating stereo and motion cues for visualizing information nets in three dimensions. ACM Trans. Graph., 15:121–140, April 1996.

[59] C. Ware and J. Rose. Rotating virtual objects with real handles. ACM Trans. Comput.-Hum. Interact., 6:162–180, June 1999.

[60] C.D. Wickens, S. Todd, and K. Seidler. Three dimensional displays: Perception, implementation, and applications. Technical Report CSERIAC-SOAR-89-001, Wright-Patterson Air Force Base, Ohio, 1989.

[61] G. Zachmann. Distortion correction of magnetic fields for position tracking. In Computer Graphics International Conference, page 213, 1997.

Appendix A

CubeeModel API

The following routines constitute the CubeeModel API in the pCubee software:

A.1 Mutators

void setTexture(const string& texture, TextureCoord coord = CLAMP):
  Function: Apply a texture to the virtual model. The texture can be set to repeat or clamp to edge.
  Parameter texture: file name of the texture to load
  Parameter coord: OpenGL parameter set to repeat or clamp textures
  Return: void

void setSpecular(float red, float green, float blue, float shine):
  Function: Change the specular and shine values of the model's default material
  Parameters: red, green, blue, and shine values
  Return: void

void setDiffuse(float red, float green, float blue):
  Function: Change the diffuse value of the model's default material
  Parameters: red, green, blue values
  Return: void
void setAlpha(float alpha):
  Function: Change the alpha value of the model's default material
  Parameter: alpha value
  Return: void

osg::Material* getDefaultMaterial():
  Function: Get and modify the material that is applied to this model when it is not highlighted by the stylus
  Return: A pointer to the model's default material

osg::Material* getHighlightedMaterial():
  Function: Get and modify the material that is applied to this model when it is highlighted by the stylus
  Return: A pointer to the model's highlighted material

void setPosition(osg::Vec3 pos):
  Function: Set the position vector of the model relative to pCubee
  Parameter pos: relative position vector of the model
  Return: void

void setRotation(osg::Matrix rot):
  Function: Set the rotation matrix of the model relative to pCubee
  Parameter rot: relative rotation matrix of the model
  Return: void

void makeStatic(osg::Matrix XReference):
  Function: Fix the model to a reference matrix, which could be the pCubee display's matrix or the stylus' matrix
  Parameter XReference: reference matrix for the model to be fixed to
  Return: void

osg::Geode getGeode():
  Function: Get the OSG Geode object of the model
  Return: OSG's representation of the model

NxActor getActor():
  Function: Get the PhysX Actor object of the model
  Return: PhysX's representation of the model

A.2 Accessors

float getScale():
  Function: Get the scale value of the model
  Return: scale value of the model

osg::Vec3 getPosition():
  Function: Get the position vector relative to pCubee
  Return: relative position vector to pCubee

osg::Matrix getRotation():
  Function: Get the rotation matrix relative to pCubee
  Return: relative rotation matrix to pCubee

A.3 Selection Support

The following routines are included to support the NodeVisitor that handles stylus-based selection.

bool isSelectable():
  Function: Check whether the model is selectable by the stylus
  Return: boolean value for the query
void setSelectable(bool val):
  Function: Set whether the model is selectable by the stylus
  Parameter val: boolean value indicating whether the model is selectable
  Return: void

bool isSelected():
  Function: Check whether the model is selected by the stylus
  Return: boolean value for the query

void setSelected(bool val):
  Function: Set whether the model is selected by the stylus
  Parameter val: boolean value indicating whether the model is selected
  Return: void

bool isHighlighted():
  Function: Check whether the model is highlighted by the stylus
  Return: boolean value for the query

void setHighlighted(bool val):
  Function: Set whether the model is highlighted by the stylus
  Parameter val: boolean value indicating whether the model is highlighted
  Return: void

A.4 Sound Support

The following routines are included to support collision sound playback.

void setCollisionSFX(FSOUND_SAMPLE sfx, CubeeAudio cubeeAudio):
  Function: Set the sound track to be played for the model upon collision
  Parameter sfx: sound track to be played for the model upon collision
  Parameter cubeeAudio: the CubeeAudio object that encapsulates the FMOD API
  Return: void

void playCollisionSFX(const int vol):
  Function: Play the sound track that has been attached to the model
  Parameter vol: volume value for the sound to be played
  Return: void

Appendix B

Sample Hello World Scene

Code Snippet B.1 shows a sample "hello world" scene that initializes a soccer ball that bounces within pCubee.
Code Snippet B.1: Sample scene initialization code to create a soccer ball that bounces within pCubee.

    /* attaches a light to the scene */
    root->addChild(new Light(root.get(), &XCubeeWorld));

    /* adds the virtual pCubee frame to the scene */
    osg::ref_ptr<Cubee> cubee
        = new Cubee(cubeePhysics->getScene(), 1.0f, osg::Vec3(0,0,0),
                    osg::Matrix::identity(), &XCubeeWorld,
                    "../geometry/cubeeFrame_normals.obj");
    root->addChild(cubee.get());

    /* adds a dynamic sphere to the scene */
    osg::ref_ptr<Sphere> sphere
        = new Sphere(cubeePhysics->getScene(), osg::Vec3(0,0,0), 5, 5);
    sphere->setTexture("../textures/SoccerBall.bmp");
    root->addChild(sphere.get());

    /* sets a trigger to play a bouncing sound at collision */
    sphere->setCollisionSFX(cubeeAudio->load2dSample(
        "../audio/boink1.wav"), cubeeAudio);
    cubeePhysics->getScene()->setActorPairFlags(
        *cubee->getActor(), *sphere->getActor(),
        NX_NOTIFY_ON_START_TOUCH | NX_NOTIFY_FORCES);

Appendix C

A 3D Cubic Puzzle in pCubee

Figure C.1: pCubee showing stylus interaction with a 3D cubic puzzle.

Taking part in the IEEE 3DUI contest in 2011, we explored a decoupled bimanual interaction scheme using pCubee and the stylus, and conducted two informal studies to evaluate its performance in 3D cubic puzzle tasks. The following describes the interaction design, the experimental protocol and the result analysis for the cubic puzzle task.

The 3D cubic puzzle, as shown in Figure C.1, involves a "working space" where the puzzle pieces are scattered and a "solution space" into which subjects had to fit the pieces. Subjects could hold pCubee in their non-dominant hand and the stylus in their dominant hand to manipulate the puzzle pieces in a bimanual fashion, but in a non-unified workspace.
C.1 Interaction

The interaction in the 3D cubic puzzle task can be classified into four stages: direct selection and manipulation, large rotation, placement, and correction.

C.1.1 Direct Selection and Manipulation

We render a virtual stylus in the 3D scene within pCubee and employ a one-to-one physical-to-virtual stylus mapping that is offset by a user-modifiable distance. Users manipulate the physical stylus in a real-world workspace detached from the "working space" while visualizing the interaction inside pCubee. The orientation mapping is direct: the virtual stylus's orientation is "attached" to the physical stylus. This allows bimanual rotation control of the virtual stylus within pCubee: users can rotate the physical stylus and/or pCubee to adjust the orientation of the virtual stylus within the "working space". The position mapping is also direct, but we impose certain constraints to better support multi-screen visualization and interaction with pCubee and a detached stylus. First, we constrain the virtual stylus to remain within pCubee's boundary, which allows users to drag the physical stylus workspace to any desired location as needed. Second, we constrain the relative position of the virtual stylus to be fixed to pCubee, which prevents users from losing track of the virtual stylus due to unintended translations when rotating pCubee to view different sides.

The virtual stylus acts as a 3D cursor in the "working space" for selection. When the tip of the stylus intersects a puzzle piece, the piece is outlined with a green wireframe to indicate that it is ready to be selected. Users tap the stylus button once to select the piece and enter a direct manipulation mode. The selected puzzle piece is attached to the tip of the stylus and is directly manipulated using the mapping described above. To release a selected piece or to place it in the "solution space", users tap the stylus toggle button once.
C.1.2 Large Rotation

Performing large rotations using a one-to-one mapping with the physical stylus is challenging due to the attached cable and limited wrist rotation. We provide a drag-and-clutch mechanism that rotates a selected piece in the direction the stylus is being dragged while the stylus button is held down. A virtual arcball is rendered over the piece to indicate the large rotation mode.

Both the position and orientation of the piece remain fixed relative to pCubee when performing a large rotation. The clutching mechanism allows users to release the button while repositioning the stylus and pCubee as needed to rotate the piece about any desired axes. Users can either drop the piece by tapping the stylus button, or select the piece again by intersecting it and holding down the stylus button until it is outlined in green, then releasing the button.

C.1.3 Placement

We implement a snapping mechanism to assist users in placing pieces in the "solution space". First, we identify the closest axis-aligned orientation of the selected puzzle piece. Then, we find the closest empty slot in the "solution space" that the axis-aligned piece can fit within. If the distance between the closest empty slot and the center of the puzzle piece is within 1 unit (each puzzle piece is formed from 1-unit cubes), we render a white wireframe in the "solution space" to indicate the possible placement location. The piece is snapped onto that location if the stylus button is tapped while the visual guide is shown.

C.1.4 Correction

Pieces that have been placed in the "solution space" can be selected and removed; users can also rapidly shake pCubee to reset and drop all the placed pieces back onto the "working space".

C.2 Experiments

Figure C.2: Two puzzles evaluated in the cubic puzzle experiment. (a) Contest Puzzle; (b) Google Puzzle.

We conducted two informal user studies to elicit understanding of our stylus interaction design in performing the 3D cubic puzzle tasks.
Here we describe the two experiments and their results.

C.2.1 User Study 1: Standard Puzzle

10 novice subjects, who were not familiar with 3D interfaces, and 5 expert subjects, who were familiar with them, were recruited for a study that required them to solve a "standard" puzzle (see Figure C.2a). All subjects were introduced to the interaction schemes and were allowed a 1-minute practice session. Novice subjects performed only a single trial, while the expert subjects did three trials of the "standard" puzzle. In each trial, we measured the completion time and also the time spent on each interaction stage. Post-task questionnaires were used to solicit feedback about the difficulty and intuitiveness of the interactions, and whether or not features such as highlighting, rotation widgets and snapping were helpful.

Results

Figure C.3 shows the times spent on different interactions for trials that were successfully completed by our subjects. In total, 7 novices completed the puzzle; 2 experts failed to complete the initial trial, while another gave up after failing the first two trials.

Figure C.3: Time spent on different stages of the standard puzzle.

C.2.2 User Study 2: Google Puzzle

The same 10 novice subjects took part in a second study that compared the performance of a real, physical Google puzzle and a virtual Google puzzle in pCubee (see Figure C.2b). We counterbalanced the order of the physical and virtual puzzles and measured the total completion time of each; in the questionnaire, we also asked subjects about the benefits and weaknesses of doing the puzzle in each domain.

Results

Figure C.4 shows the completion times for trials that were successfully completed. All subjects were able to complete the trials, and average completion times were 147.8 s and 327.3 s for the physical and virtual puzzles.
In the questionnaire, subjects preferred the interactions (selection, manipulation, placement and correction) available to them in the physical puzzle over the virtual puzzle, but perceived performance was equal for both domains (+1 scores on 7-point scales).

Figure C.4: Completion times for the physical and virtual Google puzzles.

C.2.3 Overall Results

We identified the selection scheme we designed as a bottleneck: it took on average 41% (317.3 s) and 43% (140.6 s) of the total completion times for the standard and Google puzzles respectively. In the qualitative feedback, subjects indicated that the stylus cable limited their ability to freely manipulate the stylus; they also commented that the number of functions implemented on a single button was confusing. We believe that enhancing the stylus's capabilities or using a different 3D input device would give us much improved results. An interesting observation is that subjects used pCubee's physics capability extensively to bring pieces closer to their views.

Surprising to us was the seldom usage of the drag-and-clutch rotation mechanism, which we had found to be important during our initial design iterations. Subjects negatively rated its difficulty, its intuitiveness and the rotation widget in the questionnaire. We suspect that the large rotation mechanism was difficult to use because imagining rotation axes is a difficult task [44]. As a result, subjects instead resorted to direct manipulation to perform both translation and rotation. All other interaction stages and helping features, including highlighting and snapping, received neutral to positive scores.

