WHALE TANK VIRTUAL REALITY

by

EVGENY MAKSAKOV

B.Sc., Novosibirsk State University, 2000

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

September 2009

© Evgeny Maksakov, 2009

Abstract

Whale Tank Virtual Reality is a novel technique for collocated collaboration in virtual reality. It provides a simple solution for head-coupled virtual reality technology, allowing more than one user at a time to observe a 3D scene from a correct perspective. Whale Tank VR employs natural interaction using a large touch screen display. It provides each user with a personal viewport into the virtual scene that may be joined and shared with other users' viewports in certain circumstances of collaboration.

We conducted an experiment to study the influence of head coupling on users' awareness and recall of a collocated coworker's actions. The study employed a simulated collaborative situation under several levels of task difficulty. The results revealed no statistically significant difference in awareness-and-recall performance with or without the presence of head coupling. This suggests that in situations where head coupling is employed, there is no degradation in users' awareness of collocated activity.

There are a number of benefits to Whale Tank VR. Head coupling is advantageous because it allows a user to experience the sense of a third dimension and to observe difficult-to-see objects without requiring any navigation other than natural head movement. The multiple viewports available in our Whale Tank VR technique enable collocated collaboration by seamlessly adjusting the head-coupled perspectives in each viewport according to the proximity of the collaborators, ensuring a consistent display at all times.

Table of contents

Abstract
Table of contents
List of tables
List of figures
Acknowledgements
Chapter 1. Introduction
  1.1 VR and collocated collaboration issue with head coupling
  1.2 Head coupling and users' peripheral awareness
  1.3 Contribution
  1.4 Overview of the thesis
Chapter 2. Related work
  2.1 Comparative practical usefulness of virtual reality
  2.2 Collocated collaboration in virtual reality
  2.3 Benefits of large screen displays
Chapter 3. Whale Tank Virtual Reality
  3.1 The concept of Whale Tank VR
  3.2 Calculating the position of the user's eyes
  3.3 Calculating visual frustum
  3.4 Projecting the pointer ray into the scene using touch screen functionality
Chapter 4. Methods
  4.1 Participants
  4.2 Apparatus
    4.2.1 Large screen display
    4.2.2 Touch screen functionality
    4.2.3 Device for head-tracking
    4.2.4 Hardware and software
  4.3 Procedure
    4.3.1 Distraction task (shuffling game)
    4.3.2 Awareness-and-recall task
    4.3.3 Experimental design
    4.3.4 Data collection
Chapter 5. Data analysis and results
  5.1 Shuffling game scores analysis
  5.2 Recall scores analysis
  5.3 Awareness (verbal report) scores analysis
  5.4 Summary and conclusion
Chapter 6. Discussion and future work
Bibliography
Appendix A
  A.1 Demographic questionnaire
  A.2 Log file example
  A.3 Experimental protocol
  A.4 Experimental script
Appendix B: Statistical tables
Appendix C: Ethics certificate

List of tables

Table 3.1. Parameters for asymmetric frustum calculation (Equation 6).
Table 5.1. Summary of the results.
Table A.2.1. Log file format description.
Table A.3.1. Experimental protocol.
Table B.1a. One-sample Kolmogorov-Smirnov test for shuffling game scores (head-coupled condition).
Table B.1b. One-sample Kolmogorov-Smirnov test for shuffling game scores (non-head-coupled condition).
Table B.2. Descriptive statistics for shuffling game scores.
Table B.3. Mauchly's test of sphericity (shuffling game scores).
Table B.4. Tests of within-subjects effects (shuffling game scores).
Table B.5. Pairwise comparisons for shuffling game speed levels.
Table B.6. Pairwise comparisons for awareness-and-recall game complexity levels.
Table B.7. One-sample Kolmogorov-Smirnov test for recall scores.
Table B.8. Descriptive statistics for recall scores.
Table B.9. Mauchly's test of sphericity (recall scores).
Table B.10. Tests of within-subjects effects (recall scores).
Table B.11. Pairwise comparisons for recall scores between speed levels.
Table B.12. Pairwise comparisons for recall scores between complexity levels.

List of figures

Figure 3.1. Collaborative work in front of a large touch screen display.
Figure 3.2. Whale Tank VR with two separate dynamic viewports.
Figure 3.3. Whale Tank VR, two users share the viewport.
Figure 3.4. Redefining the axes from Polhemus settings to actual.
Figure 3.5. Head direction vectors.
Figure 3.6a. Symmetric viewing frustum (top view).
Figure 3.6b. Asymmetric viewing frustum (top view).
Figure 3.6c. Symmetric viewing frustum.
Figure 3.6d. Asymmetric viewing frustum.
Figure 3.7a. Calculating new frustum, Z axis component.
Figure 3.7b. Calculating new frustum, X and Y axes components.
Figure 3.8. Pointer ray.
Figure 4.1. Large touch screen display and sensors (attached to beams) for head tracking.
Figure 4.2. Organization of the tiled large screen (the dark tiles are the ones that were used).
Figure 4.3a. Touch screen setup.
Figure 4.3b. Touch screen hardware.
Figure 4.4a. Wireless markers.
Figure 4.4b. Wireless markers in harnesses.
Figure 4.5. Whale Tank VR installation scheme (aerial perspective).
Figure 4.6. Shuffling game looks.
Figure 4.7. Shuffling game in progress.
Figure 4.8. Awareness-and-recall cubes setup.
Figure 4.9. Selection of the cube during the awareness phase.
Figure 5.1a. Profile plot for average shuffling game scores in head-coupled mode.
Figure 5.1b. Profile plot for average shuffling game scores in non-head-coupled mode.
Figure 5.2a. Profile plot for average recall scores in the head-coupled condition.
Figure 5.2b. Profile plot for average recall game scores in the non-head-coupled condition.
Figure 5.3a. Profile plot for average awareness (verbal report) scores in the head-coupled condition.
Figure 5.3b. Profile plot for average awareness (verbal report) scores in the non-head-coupled condition.
Figure 5.4. Box plots showing confidence intervals for the talk out loud score (TOLScore) from different factors' perspectives: viewing condition, speed, and complexity (the numbers near outliers are their corresponding participant numbers).
Figure 6.1. Position of the scene projection on the screen in different modes.
Figure 6.2. New hardware for powering the large screen display.

Acknowledgements

In the course of the work underlying my thesis, I worked with some of the most unique and fascinating hardware and tools of the time. Uniqueness had its own drawbacks: some of this hardware needed serious tuning, and some of it was unable to work as we wanted it to. Nevertheless, the experience I obtained was great, and I very much enjoyed being involved in playing with these interesting devices. Whale Tank Virtual Reality ended up being an eye-catching technology. I felt it was extremely rewarding to demonstrate it to other people and receive a lot of positive feedback.

All this rewarding experience would not have been possible without the other people who were involved in it. I would like to thank my supervisor Kellogg Booth for providing an opportunity to work with unique hardware on this very interesting project, for his guidance, and for his large contribution to improving the grammar and writing style of this thesis, which has naturally been difficult for me because English is not my first language. I would like to thank my second reader Michiel van de Panne for contributing to the improvement of this thesis. I would also like to thank Kirstie Hawkey for providing valuable advice about the experimental design, helping with the organization of our experiment, and helping with proofreading and improving my thesis.
Thanks go to Russell McKenzie, who assisted me in data collection during the experimental sessions, participated in Whale Tank VR testing, and proofread several parts of this document's draft.

Thanks go to Garth Shoemaker, whose personal libraries for the Polhemus Liberty Latus device I used, which saved me many hours of programming. He also participated in piloting the experiment, helped with a video for my project by acting in it (screenshots of which are used in some of the figures in this thesis), contributed his thoughts on the Results chapter of this thesis, and answered many of my questions. Thanks go to Joel Lanir, who contributed his suggestions to the Methods chapter of this thesis. Thanks go to Jeff Hendy, Leah Findlater, and Jennifer Fernquist, who piloted the experiment.

This research was supported by funding from the Natural Sciences and Engineering Research Council under the Discovery grant program, the Strategic Project grant program through ARTIFACT (Advanced research, techniques, and informatics for future advantages in construction technology), and the Strategic Network grant program under NECTAR (the Network for effective collaboration technology through advanced research).

Evgeny Maksakov
University of British Columbia, September 2009

Chapter 1. Introduction

This thesis explores the consequences of two trends in computer graphics: the widespread availability of consumer-priced virtual reality (VR) and the increasing use of large display surfaces for collocated collaboration.

Virtual reality has gradually become accessible to the general population and is no longer an exotic technology, due to its decreasing cost. NVIDIA Corporation has already introduced reasonably priced stereoscopic shutter glasses that can be purchased in consumer computer stores, and prices of monitors with a high refresh rate (120 Hz) are comparable to monitors with a low refresh rate (75 Hz). At the same time, the head-tracking that is required for VR can be implemented with inexpensive equipment, such as a single Wii Remote and a pair of infra-red LEDs (http://www.youtube.com/watch?v=Jd3-eiid-Uw). VR is also indirectly popularized by traditional mass entertainment such as cinema: the number of new movies released in stereoscopic 3D increases every year, and they are shown in most large theaters.

Traditionally, large screen displays have been used for collocated collaboration by groups of people. These displays are also becoming increasingly common. It is no longer rare to have an electronic whiteboard in schools and company meeting rooms. It is quite probable that in the future, instead of interactive whiteboards, we will talk about entire interactive walls.

Large screen displays fit very well with head-coupled stereo displays (Arthur, 1993), also known as Fish Tank VR (Arthur et al., 1993), because they eliminate many limitations that were intrinsic to the original Fish Tank VR design (see Chapter 2). Unfortunately, the standard Fish Tank VR technique does not support more than one user. This limitation restricts the fundamental collaborative characteristic of large screen displays. Because of this, stereoscopic images on large screens for multiple users are currently limited in practice to 3D movies. This limitation is not unique to Fish Tank VR. Similar techniques based on head coupling, other than head-mounted displays (HMDs), also have to solve the same problem to achieve multi-user capability.
Available solutions are not perfect and leave an open opportunity for improvement. The research reported in this thesis addresses the problem of collocated collaboration in head-coupled virtual reality, similar to Fish Tank VR, but on a shared large screen, to take advantage of both the capabilities of large screens and touch-screen functionality when it is present. We call this Whale Tank VR to emphasize the use of a larger screen than in traditional Fish Tank VR.

1.1 VR and collocated collaboration issue with head coupling

One of the earliest forms of immersive virtual reality used a head-mounted display (HMD) (Sutherland et al., 1966). Because a user wears the display, the geometric relationship between the user's eyes and the display surface remains constant. This means that there is no need to explicitly "couple" the view perspective to account for changes in this relationship.

In the beginning of the 1990s, several new virtual reality techniques, such as Fish Tank VR (Arthur et al., 1993) and the CAVE (Cruz-Neira et al., 1992), were developed. These techniques do not require HMDs. Instead, they use head coupling combined with stereo shutter glasses to produce the effect of immersion on regular screens or projected surfaces. These displays were public, in contrast to HMDs, and consequently were less intrusive. However, this approach was not effective for multiple collocated users. The essence of the problem is in the head coupling, where only one person can be head-coupled with the image on the screen. There have been several attempts to address this by extending the technologies to multiple users (Agrawala et al., 1997; Arthur et al., 1998; Simon et al., 2005; Fröhlich et al., 2005). Three of these systems solve the issue only partially. One, by Fröhlich et al. (2005), gives a fairly complete solution, but is very sophisticated (see Section 2.2), which leaves opportunities for simpler systems to be developed in the future.

1.2 Head coupling and users' peripheral awareness

The focus of our work is head coupling. Humans estimate distances using eye-to-eye parallax, head movement, and perspective scale. Prior research has suggested that neither the CAVE nor large-scale Fish Tank VR provides advantages over a non-head-coupled mode in navigational or way-finding tasks (Swindells et al., 2004). However, no research has addressed the effect of head coupling on user awareness of other users' actions or on the ability to recall spatially distributed objects. Such studies were not conducted previously because collocated collaboration was almost non-existent with large-screen Fish Tank VR setups. As this is now possible with Whale Tank VR, described further in Chapter 3, we have conducted such a study, which we report in Chapters 4 and 5.

1.3 Contribution

This thesis introduces a novel technique, named Whale Tank VR, which is based on the Fish Tank VR technique. It combines many advantages of Fish Tank VR and the CAVE, and it affords collocated collaboration in a virtual reality setting. It naturally takes advantage of a large touch screen display. With Whale Tank VR, two or more users can collaborate using the same screen. The Whale Tank VR technology is explained in detail in Chapter 3.

We conducted a user study to address the question of whether head coupling is beneficial for awareness and recall of interactions with three-dimensional objects in the virtual scene. The study employed two simultaneous tasks to emulate a process of collaboration.
During the experiment, we measured participants' awareness of other users' actions, the ability of participants to recall these interactions, and their performance in a distraction task. Through a statistical analysis, we tested the effects of the head-coupled condition, and of two other factors related to levels of task difficulty, on participants' performance. We predicted that awareness and recall might be better in the head-coupled mode because it provides a better spatial representation, and thus an extra cue that helps to detect and remember objects in space. However, we realized that awareness or recall might instead be worse in head-coupled mode because the changing image on the screen could be distracting in the case of incomplete immersion. A possible negative impact could come from the fact that the image on the screen (a 2D projection) does not look the same when the virtual scene is observed from a different perspective in the head-coupled condition.

Results of the study revealed no performance difference between the head-coupled and non-head-coupled conditions. We think that head coupling is an advantage because it allows a user to experience the sense of a third dimension, which has already proven to be useful for certain tasks (Ware et al., 1993). Whale Tank VR provides the additional ability to observe otherwise difficult-to-see objects in a scene while staying close to the large screen display (see Chapter 6). This combination of advantages makes it a prospective multi-user VR technology. Whale Tank VR is a simple solution that enables collocated collaboration for head-coupled stereo displays. Because our study found no difference in awareness of other users' actions, or in the ability to recall them, in the presence or absence of head coupling, these other advantages might be made available without compromising the aforementioned abilities.

1.4 Overview of the thesis

The remainder of the thesis is organized as follows:

Chapter 2 reviews the related literature. It covers three aspects: the comparative practical usefulness of different types of virtual reality, collaboration in virtual reality, and the benefits of large screen displays.

Chapter 3 explains the idea of Whale Tank VR and how using a large touch screen can make a difference in user collaboration. We describe how Whale Tank VR makes Fish Tank VR collaborative and the features such a design enables.

Chapter 4 describes the technical details of the implementation of Whale Tank VR. It also describes the experimental procedures for the user study, including recruitment, experimental tasks, the factors and dependent variables used in the experiment, and data collection methods.

Chapter 5 presents the results of our user study. This includes a quantitative analysis of the awareness-and-recall task and of performance in a distraction task under different viewing conditions (head-coupled and non-head-coupled) and different levels of task difficulty.

Chapter 6 discusses the results of the user study and implications for further usage and improvement of Whale Tank VR. We argue that the benefits of Whale Tank VR make it a useful technology. We then speculate about the direction of future work based on Whale Tank VR.

Chapter 2. Related work

It helps to understand the contribution of Whale Tank VR in relation to other existing solutions for collocated collaboration. It is also important to understand the place of Whale Tank VR among other VR techniques.
This chapter is divided into three sections, each of which is directly related to Whale Tank VR and its application to collocated collaboration. The first section describes research that evaluated various VR techniques and compared them to each other in order to reveal their advantages and disadvantages as viewing techniques for a single user. The second section describes research that addressed the problem of collocated collaboration in VR systems. The third section describes research that provides additional support for the use of large screen displays, which is our solution to the collocated collaboration problem.

2.1 Comparative practical usefulness of virtual reality

Cruz-Neira et al. (1992) introduced the CAVE™, a single-user VR system based on the idea of spatially immersive displays (SIDs). It employs head tracking combined with stereo glasses to achieve virtual immersion. The user moves in a room comprising back-projected large wall screens and, optionally, front- or back-projected floor and ceiling screens; this creates a partially immersive virtual reality. The CAVE was a single-user technology because the image was generated only from the personal perspective of a single user's head-coupled viewpoint.

The technique of Whale Tank VR is based on a direct extension of head-coupled stereo display technology (Arthur, 1993), dubbed Fish Tank VR in the literature (Arthur et al., 1993), and likewise designed for a single user. The idea behind Fish Tank VR is very similar to the CAVE: as a user moves her head, the system redraws the 3D stereo scene on the screen depending on the user's head position in space. Unlike the CAVE, Fish Tank VR creates an illusion of a "real" third dimension behind only one screen (a monitor). To make the illusion complete, the system supports the use of stereo glasses, which creates an effect of visual parallax between the eyes during system use. As with the CAVE, correct simultaneous viewing by more than one collocated user is impossible in Fish Tank VR because the personalized viewing perspective is produced according to a single user's head position. Any other users who are present will not see the correct perspective for a true 3D image from their head position because it is not computed.

Ware et al. (1993) evaluated a number of aspects of their Fish Tank VR. They conducted three experiments to test the empirical usefulness and formal performance of participants who used Fish Tank VR with or without stereo glasses and with or without head coupling. Subjective opinions expressed by participants after one of the experiments correlated with objective measurements that indicated that head coupling had a greater usefulness than did stereoscopic vision. In general, the participants chose head coupling without stereo over all other conditions. Another experiment described in the same paper used a graph-tracing task to show that head coupling together with stereoscopic vision gives the best performance in terms of reduced error rate for the specific task in question, followed by head coupling alone, with stereoscopic vision alone being the least effective of the three. A final experiment aimed at testing how response time depends on frame rate and lag when participants were trying to be accurate. The experimental results showed an exponential dependence of response time on total lag (system lag plus lag induced by the frame rate). This showed that smooth temporal performance of a system is very important for VR.
Several other researchers have studied which type of VR offers the most benefit to users. Qi et al. (2006) found that working with head-mounted displays (HMDs) for prolonged periods of time can be very tiresome in comparison to Fish Tank VR and CAVE systems. They also found that the time for task completion was significantly better in the Fish Tank VR condition. Average error rates were found to be significantly lower for Fish Tank VR in several experimental tasks, including the identification of shape (identifying the number of differently shaped objects), density (identifying the densest sub-region of a volume), connectivity (identifying the number of connected clusters of objects), and longest chain plus spatial region recognition (identifying the region to which the longest connected chain belongs). However, no difference in error rate was found for a size task (identifying the number of object sizes).

Demiralp et al. (2003) compared the advantages and disadvantages of CAVE and Fish Tank VR technologies based on subjective user opinions. Their five study participants were neuroscience experts who had experience working with Diffusion Tensor Magnetic Resonance Images (DT-MRI). They used DT-MRI scanned models of a brain in CAVE and Fish Tank VR set-ups in order to evaluate the usefulness of the two VR techniques. They found that CAVEs may make some people feel claustrophobic, that standing all the time can be tiresome, and that users of a CAVE suffer from having partial immersion: participants pointed out that it would be inconvenient for them to interrupt working with the system for a short period of time (phone calls, visitors, etc.). Fish Tank VR was also found to have problems, which prevented it from being a perfect tool for representing a 3D environment even for a single user. Problems include small-scale models, a limited field of view (FOV), and a lack of suitable natural interaction and gestural expression. Among the benefits of the CAVE, Demiralp et al. mention large-scale models (which correlates with findings by Schulze et al. (2005)), a wide FOV, the possibility of walking around, and the ability to use natural gesture-based interactions. Among the benefits of Fish Tank VR, Demiralp et al. mention the ease of seeing an overview of the whole scene, the relative ease of pointing at objects, the ability to interrupt working with VR at any time, and that Fish Tank VR is better suited for collaboration between two people (sitting either together or remotely). The "better suitability" for collocated collaboration, in this case, did not mean that it was a good tool for it. Rather, participants' comments meant that the CAVE is worse for collocated collaboration when a non-head-coupled user is trying to understand the image, which is drawn for a head-coupled user. Indeed, in the Fish Tank VR situation, when users sit near each other, the head of the non-head-coupled user is better aligned with the head of the head-coupled one than in the CAVE, and the image does not change as considerably as in the CAVE during coupled head movements. Overall, the authors found a strong user preference for Fish Tank VR over the CAVE.

Mulder et al. (2000) similarly note that a limited field of view (FOV) is a problem for Fish Tank VR, because the FOV in Fish Tank VR depends on the monitor size and the distance from the user to the screen. They explain that this problem causes an incorrect representation of the image at the edges of the screen.
Their solution to this problem was the installation of additional blocking screens in front of the monitor, which they called a "cadre".

Swindells et al. (2004) compared performance between a CAVE and a large screen version of Fish Tank VR, with and without both stereoscopic vision and head coupling. They used a common desktop non-stereo, non-head-coupled display as a control case. Their study revealed no significant difference in either performance (time for completion) or success rate between any of the conditions for either navigation or way-finding tasks.

2.2 Collocated collaboration in virtual reality

Supporting collaborative work in virtual reality is desirable. Almost twenty years ago, Blanchard et al. (1990) first enabled several people to work together in a virtual environment. They created a VR system using head-mounted displays (HMDs). For a long time, immersive collaborative environments were limited to HMD-based systems or to remote collaboration in SIDs (such as the CAVE) or Fish Tank VR configurations. However, several attempts have been made to overcome the single-perspective head coupling limitation of SIDs and Fish Tank VR for collocated use.

Agrawala et al. (1997) proposed the use of three-state shutter glasses (left eye open, right eye open, and both closed) to separate the images between the eyes and users. Unfortunately, a major drawback of this technology is a very low image refresh rate (~30 Hz) and a very low brightness level (¼ of the original screen brightness) in two-user mode, which creates significant discomfort for users of their system.

Arthur et al. (1998) took a different approach and used two separate Fish Tank VR screens positioned at 90° to each other for collocated collaboration. The two users look at separate screens, but the visual virtual space is the same, and the users can see each other and communicate quite easily. The disadvantage of this approach is that users always have to see the model in the virtual space from a different direction than their collocated colleague.

Simon and Scholz (2005) eliminated head tracking by creating a multi-viewpoint image, which is a specially transformed image applied to a circular screen. If the user is located in an area that is roughly in the center of the room, the image appears similar to the head-coupled perspective. They also supported stereoscopic vision using shutter glasses and used a picking ray paradigm for pointing at and selecting objects. At the same time, Simon (2005) compared users' performance in an object selection task in a multi-viewpoint static environment with a similar SID that was based on head coupling. They found that the difference in mean performance was on average 0.04 seconds in favor of the multi-viewpoint system, with the total time per selection averaging about half a second. No correction was made in this analysis for conducting multiple t-tests; applying a Bonferroni correction would have resulted in non-significant findings. The disadvantage of a multi-viewpoint image SID is that it is passive: the user cannot actually eliminate the occlusion of any object in the scene despite the constant illusion of a correct perspective. Thus, for multiple users, such a system is limited to static 3D views, as real observation from another perspective is possible only if the image itself somehow provides a navigation process, as in movies or via a manual controller.
Fröhlich et al. (2005) combined personal shutter glasses, mechanical shutters for projectors, and circular polarization technology to create a multi-user SID. They described several ways to set up the system; the description that follows is one of the setups they used. An electric motor rotates an opaque disc with a transparent hole in it in front of projectors that employ polarization filters. This shutter disc, spinning at 3000 rpm, creates a frequency of up to 49.5 Hz per eye per user. The advantages of mechanical shutters are independence from the projectors' frame rate capabilities and an absence of "ghosting". Customized shutter glasses separate pictures for different users. Circular polarization helps to separate images for the left and right eyes. Their system can scale to accommodate up to four users with some quality degradation. This work apparently solves the problem of collocated collaboration in VR, but it has the disadvantage of being too sophisticated. It is expensive because multiple projectors covering the same area and/or special projectors with polarization are required. Furthermore, it is challenging to calibrate because of the presence of several shutters that need to be synchronized together. The mechanical shutters can also create loud sounds because of the electric motors, which may be undesirable.

2.3 Benefits of large screen displays

Large screen displays are inherently collaborative. If there is no need for collaboration or shared usage, it is normally hard to justify the expense of the hardware or the space it requires. This is the reason research on collaborative usage of large screens is essential. Some research on collaboration using large screen displays has been conducted. Hawkey et al. (2005) examined collocated collaboration using a large screen. They conducted an empirical study of the effect of the proximity of the participants to the display and to each other on interaction with the display and on collaboration. Participants were given a collaborative route-planning task using a subway map displayed on a SMART Board™ (a whiteboard-size touch screen display). The authors found that participants preferred to interact directly with the large screen and, at the same time, to stay near each other during the collaboration.

Beyond collaboration, large screens have several benefits. Studies have shown that large screen size itself improved user performance on both regular activity tasks (Czerwinski et al., 2003) and egocentric spatial tasks, such as first-person way-finding and orientation in a three-dimensional environment (Tan et al., 2006; Ni et al., 2006). A four times higher screen resolution (twice as high in each dimension, compared to a "low" 1280x720) also contributes to improved performance (Ni et al., 2006). Prior work that compared CAVE technology to Fish Tank VR (Demiralp et al., 2003) also mentioned that CAVE technology lost in user preference to Fish Tank VR partially because the latter had crisper images (better pixel density), despite the larger models favored in the former. Consequently, even though Fish Tank VR was initially designed for small displays, it could potentially benefit from being used with large display sizes.

In the context of VR, a study by Schulze et al. (2005) compared accuracy and speed in a 3D marking task using a 4-wall CAVE (without top and back walls), one CAVE wall, and Fish Tank VR. Accuracy significantly improved because of the larger scale of the models in the 4-wall CAVE.
The speed of 3D marking was fastest for the medium scale models, but the difference in accuracy between medium and large models was not found to be significant. Fish Tank VR was slower than CAVE VR for large and medium sized models, but on average it was as accurate as the CAVE. The authors tested a single-wall setup against the other VR conditions using only small models, i.e., the same size as in desktop Fish Tank VR, and its performance was identical to the CAVE with small models. The difference in performance between the small model single-wall setup and desktop Fish Tank VR was not found to be significant.

Arguably, the most natural application for large screen displays is working with maps. Ball et al. (2007) investigated the influence of screen size on performance in map tasks such as navigation, search, and pattern finding using zooming, panning, and physical body movements at a large screen. They found that users become increasingly efficient in their tasks as the size of the display increases, especially on maps with a higher level of detail. Navigation performance scales non-linearly despite the general improvement as the size of the screen increases. Physical navigation was found to be drastically more efficient (more than 1,000%) and was preferable to virtual navigation for larger display sizes.

Chapter 3. Whale Tank Virtual Reality

The concept of Whale Tank Virtual Reality is to take advantage of a large touch screen display to make Fish Tank VR technology (Arthur et al., 1993) collaborative. There are several advantages to this approach to collaborative VR systems. Among the different VR systems, Fish Tank VR has advantages over competitors such as spatially immersive displays (SIDs, i.e., the CAVE) and head-mounted displays (HMDs). Working with HMDs is tiresome (Qi et al., 2006) in comparison to CAVE VR and Fish Tank VR. In SIDs, and even more so in HMDs, there is an inability to interrupt presence in VR quickly and easily for a short period of time in order to do another task, e.g., to answer a phone call, which can be undesirable for some users (Demiralp et al., 2003). CAVE VR can be claustrophobic for some users (Demiralp et al., 2003). The original desktop Fish Tank VR suffered from the field of view (FOV) being restrained by the monitor size, from incorrect images near the edges (Mulder et al., 2000), from small models (Demiralp et al., 2003), and from a limited availability of natural interaction (Demiralp et al., 2003). Using a large touch screen display solves all of these problems. Additionally, large screens can add a collaborative quality to Fish Tank VR and resolve the problem of simultaneous use by more than one head-coupled user.

3.1 The concept of Whale Tank VR

There are two important details to note about interaction with a large wall-sized touch screen. First, when people work with a touch-sensitive wall display, they have to stay within roughly arm's length of it in order to interact. Second, human vision has a relatively limited FOV within which a perceived image stays in focus (see Fig. 3.1). As a result, we arrived at a solution to the main problem with collaborative usage of the same Fish Tank VR screen by introducing personal viewports. Whale Tank VR thus creates a personal viewport in front of each user, each with a correct perspective (Fig. 3.2).

Figure 3.1. Collaborative work in front of a large touch screen display.

The personal viewport appears only if the corresponding user is in close enough proximity to the screen to interact with it. As a user comes closer to the screen, the viewport for that user shrinks, and it expands again as the user backs off (there are predefined minimal and maximal sizes). The size of the viewport is important, but we did not investigate what size is optimal. Each viewport has a colored border and a personal picking ray of the same color, which is used for interaction with scene objects. The color serves as personal identification. Outside the personal viewports, Whale Tank VR shows the static scene (with the same content) from a neutral centered viewpoint.

Figure 3.2. Whale Tank VR with two separate dynamic viewports.

When a user touches the screen, a ray is projected, which starts from the touch point and ends deep in the scene. The projected ray is visible (see Fig. 3.2) to other users, and its personalized color can be used to identify the person to whom it belongs. For the user who touched the screen, the ray looks like a square-shaped pointer because it is seen from its base. This happens because of the perspective projection.

Figure 3.3. Whale Tank VR, two users share the viewport.

We want users to be able to change their positions in front of the screen easily and move in front of it freely. Head coupling allows the system to know the users' positions and their head orientations; thus, their viewports can follow them. The viewport position depends on the orientation of the user's head. The center of the viewport, as a general rule, appears at the intersection of the screen and the line along which the face is pointed. Constraints can be set to limit the azimuth and elevation angles to keep the viewports near their corresponding users. If users come close to each other, their viewports join into one shared viewport (Fig. 3.3). The shared view is a compromise, calculated at an average view point between the two users. A constraint can be set on the distance between users at which two overlapping viewports merge with each other to become one shared viewport. If one viewport tries to overlap another and the distance between users is larger than the merging threshold, then the system restricts its movement to prevent overlap, and the viewports instead remain separate, touching along a shared edge. In our Whale Tank VR, no viewport has precedence over another. Consequently, one viewport can push another out of the way in the horizontal direction until their azimuth angles are equal relative to the normal vector of the screen. Whale Tank VR can also be run in a split screen mode or in a mode with a set of statically located multiple viewports, which activate as users approach them.
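The viewport behavior just described can be summarized in a short sketch. The following C++ fragment is illustrative only, not the actual Whale Tank VR source: the structure names, the merging threshold, and the size bounds are placeholders of ours.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct User     { float x, z; };    // head position: x along the screen, z away from it
    struct Viewport { float centerX, halfWidth; bool shared; };

    const float MERGE_DISTANCE = 1.0f;  // hypothetical merging threshold (meters)
    const float MIN_HALF_WIDTH = 0.4f;  // predefined minimal viewport size
    const float MAX_HALF_WIDTH = 1.2f;  // predefined maximal viewport size

    // The viewport shrinks as the user approaches the screen and expands as
    // the user backs off, within the predefined bounds.
    float halfWidthFor(const User& u) {
        return std::min(MAX_HALF_WIDTH, std::max(MIN_HALF_WIDTH, 0.8f * u.z));
    }

    // Two users who are close enough share one viewport computed from an
    // averaged view point; otherwise each keeps a personal viewport, and
    // overlapping viewports are pushed apart so they only touch along an edge.
    std::vector<Viewport> layoutViewports(const User& a, const User& b) {
        if (std::fabs(a.x - b.x) < MERGE_DISTANCE) {
            Viewport shared = { 0.5f * (a.x + b.x),
                                std::max(halfWidthFor(a), halfWidthFor(b)), true };
            return { shared };
        }
        Viewport va = { a.x, halfWidthFor(a), false };
        Viewport vb = { b.x, halfWidthFor(b), false };
        float overlap = va.halfWidth + vb.halfWidth - std::fabs(a.x - b.x);
        if (overlap > 0.0f) {           // no viewport has precedence: push both
            float dir = (a.x < b.x) ? 1.0f : -1.0f;
            va.centerX -= dir * 0.5f * overlap;
            vb.centerX += dir * 0.5f * overlap;
        }
        return { va, vb };
    }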
3.2 Calculating the position of the user's eyes

The backbone of Whale Tank VR for creating the effect of immersion is head coupling. To make head coupling possible, we need to know the exact position of the user's eyes in space. We define the "user's eyes point" as a point that is located between the real eyes of the user. All calculations for head coupling are made using that particular point.

To calculate the position of the user's eyes point, we used the following procedure. The drivers of the Polhemus Liberty Latus provide the spatial coordinates in the six standard degrees of freedom, but the axis directions of the device did not correspond to the axes in the VR coordinates, which are used to calculate the scene. Therefore, we redefined the axes as follows. If x', y', z' are the axes that are used by the Polhemus, then x, y, z are the new renamed axes: x = y', y = z', z = x' (Fig. 3.4).

Figure 3.4. Redefining the axes from Polhemus settings to actual.

A simple rotation transformation (Equation 1) was applied to the axis direction vectors of the Polhemus Liberty Latus marker (angles represented by \theta_{az} for the azimuth, \theta_{el} for the elevation, and \theta_{rl} for the roll) to acquire the head direction vectors \vec{d}_x, \vec{d}_y, \vec{d}_z (Fig. 3.5):

    \vec{d}_x = (\sin\theta_{el},\ -\cos\theta_{az}\cos\theta_{el},\ \sin\theta_{az}\cos\theta_{el})
    \vec{d}_y = (\cos\theta_{el}\sin\theta_{rl},\ \cos\theta_{az}\sin\theta_{el}\sin\theta_{rl} - \sin\theta_{az}\cos\theta_{rl},\ \sin\theta_{az}\sin\theta_{el}\sin\theta_{rl} + \cos\theta_{az}\cos\theta_{rl})
    \vec{d}_z = (\cos\theta_{el}\cos\theta_{rl},\ \cos\theta_{az}\sin\theta_{el}\cos\theta_{rl} + \sin\theta_{az}\sin\theta_{rl},\ \sin\theta_{az}\sin\theta_{el}\cos\theta_{rl} - \cos\theta_{az}\sin\theta_{rl})    (1)

Figure 3.5. Head direction vectors.

Because the marker itself is worn on a head-band, the user's eyes point coordinates (e_x, e_y, e_z) are corrected by adding to the original marker position (m_x, m_y, m_z) two offsets along the head direction vectors, scaled by the approximate distance of the marker position on the head relative to the eyes (Equation 2). The constants used in the correction were the same for all participants and were not adjusted for personal differences: shift_y = 0.060 m and shift_z = 0.015 m.

    \vec{e} = \vec{m} - shift_y \cdot \vec{d}_y - shift_z \cdot \vec{d}_z    (2)
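Equations 1 and 2 translate directly into code. The sketch below is a minimal C++ transcription (the Vec3 type and the function names are ours, and the angles are assumed to be in radians); the raw marker pose itself comes from the Polhemus drivers.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Head direction vectors from the marker's azimuth, elevation, and roll (Equation 1).
    void headDirections(float az, float el, float rl, Vec3& dx, Vec3& dy, Vec3& dz) {
        dx = {  std::sin(el),
               -std::cos(az) * std::cos(el),
                std::sin(az) * std::cos(el) };
        dy = {  std::cos(el) * std::sin(rl),
                std::cos(az) * std::sin(el) * std::sin(rl) - std::sin(az) * std::cos(rl),
                std::sin(az) * std::sin(el) * std::sin(rl) + std::cos(az) * std::cos(rl) };
        dz = {  std::cos(el) * std::cos(rl),
                std::cos(az) * std::sin(el) * std::cos(rl) + std::sin(az) * std::sin(rl),
                std::sin(az) * std::sin(el) * std::cos(rl) - std::cos(az) * std::sin(rl) };
    }

    // Offset the marker position along the head axes to estimate the eyes point (Equation 2).
    Vec3 eyesPoint(const Vec3& m, const Vec3& dy, const Vec3& dz) {
        const float shiftY = 0.060f, shiftZ = 0.015f;   // meters, as given in the text
        return { m.x - shiftY * dy.x - shiftZ * dz.x,
                 m.y - shiftY * dy.y - shiftZ * dz.y,
                 m.z - shiftY * dy.z - shiftZ * dz.z };
    }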
3.3 Calculating visual frustum

The viewing frustum (Fig. 3.6) has the user's eyes point at its apex, and its four sides (side clipping planes) pass through the edges of the screen. The frustum also has a front clipping plane, which is parallel to the screen and located at a minimal distance from the eye. The distance to each side clipping plane is measured from the center point of the front clipping plane rectangle. To produce a correct image for the viewer relative to her position in head-coupled mode, we use a dynamic, asymmetric (Fig. 3.6b) viewing frustum. In contrast, for non-head-coupled views the frustum is usually symmetric along the axes, with the proportion between height and width equal to the aspect ratio of the screen (Fig. 3.6a).

The parameters of the viewing frustum can be defined using the distances from the central line of sight to the side clipping planes, as measured at the front clipping plane. Changes in these dynamic frustum parameters come from two contributors: the distance to the screen (along the Z axis) and the off-center shift (along the X and Y axes). We take a symmetric frustum as the initial one and consider the dynamic frustum as its modification.

Figure 3.6a. Symmetric viewing frustum (top view).
Figure 3.6b. Asymmetric viewing frustum (top view).
Figure 3.6c. Symmetric viewing frustum.
Figure 3.6d. Asymmetric viewing frustum.

Let us calculate the impact of the distance to the screen on the frustum parameters. Let h be the initial distance from the user's eyes point to the screen, and denote the new distance to the screen as h'. Let c be the initial distance to the side clipping plane and c' the new distance to the side clipping plane. Let a be the constant distance to the front clipping plane (near in OpenGL terms), and b the distance from the edge of the screen to the screen center. All values except c' are known (Fig. 3.7a).

Figure 3.7a. Calculating new frustum, Z axis component.
Figure 3.7b. Calculating new frustum, X and Y axes components.

If we move the eyes at the apex of the viewing pyramid along the Z axis (forward/backward), we need to divide the initial distance to the side clipping plane c by the new distance to the screen h' to obtain the new value c'. It is easy to see in Figure 3.7a that c' can be determined using the property of similar triangles (Equation 3). In order to avoid hardcoding the size of the screen, we calculate c' relative to the predefined initial values c and h (Equation 4). If we choose to set the default distance to the screen h to 1 (one) in our world coordinates, then we simply need to divide the default distance to the clipping plane c by the new distance to the screen h'.

    c = \frac{ab}{h}, \quad c' = \frac{ab}{h'}    (3)

    ab = ch \;\Rightarrow\; c' = \frac{ch}{h'}    (4)

The second contribution resulting in an asymmetric frustum, the side shift, can be found as follows (individually for the X and Y axes). Let s be a horizontal shift of the eyes along the X axis. We denote the contribution of the shift s to the change in the distance c to the side clipping plane as s' = c' - c. We draw an additional line that is parallel to the default line between the eye and the edge of the screen (Fig. 3.7b). Thus, we have a triangle equivalent to the eyes-center-edge triangle and a triangle whose base is equal to the eyes' side shift s. Consequently, s' is the base formed by the front clipping plane in the small triangle, similar to the triangle with the base s. Using the property of proportionality in similar triangles again, we get a formula for s' (Equation 5).

    \frac{s'}{a} = \frac{s}{h} \;\Leftrightarrow\; s' = \frac{sa}{h}    (5)

In the general case, we simply add the two contributing components together to find the correct frustum parameters. In OpenGL we also need to take the screen aspect ratio into account. Consequently, the formulas for the first four OpenGL frustum parameters are the following (Equation 6):

    newLeft = \frac{left \cdot aspectRatio - e_x \cdot near}{e_z}
    newRight = \frac{right \cdot aspectRatio - e_x \cdot near}{e_z}
    newTop = \frac{top - e_y \cdot near}{e_z}
    newBottom = \frac{bottom - e_y \cdot near}{e_z}    (6)

    Parameter   Value (in meters)
    left        -0.01
    right        0.01
    top          0.01
    bottom      -0.01
    near         0.01
    far          7.0

Table 3.1. Parameters for asymmetric frustum calculation (Equation 6).

In the formulas above (Equation 6), left, right, top, and bottom are constants representing the initial distances to the side clipping planes, near is a constant representing the distance to the front clipping plane, and the point with coordinates (e_x, e_y, e_z) is the position of the user's eyes point. The OpenGL frustum function parameters near and far never change. The constant values of left, right, top, bottom, near, and far, which we used for the default eyes' location at (0, 0, 1), are given in Table 3.1. The choice of parameters ensures that the front clipping plane (near) cannot cross the plane of the screen and that the rear clipping plane (far) does not cull any objects from the scene.
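In OpenGL these parameters feed directly into glFrustum. The following is a minimal sketch of how Equation 6 and the constants of Table 3.1 can be applied each frame; the function name is ours, and near and far are renamed nearPlane and farPlane to avoid clashes with common platform macros.

    #include <GL/gl.h>

    // Set an asymmetric head-coupled frustum for the eyes point (ex, ey, ez),
    // using the constants from Table 3.1 (Equation 6).
    void applyHeadCoupledFrustum(float ex, float ey, float ez, float aspectRatio) {
        const float left = -0.01f, right = 0.01f, top = 0.01f, bottom = -0.01f;
        const float nearPlane = 0.01f, farPlane = 7.0f;

        float newLeft   = (left  * aspectRatio - ex * nearPlane) / ez;
        float newRight  = (right * aspectRatio - ex * nearPlane) / ez;
        float newTop    = (top    - ey * nearPlane) / ez;
        float newBottom = (bottom - ey * nearPlane) / ez;

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(newLeft, newRight, newBottom, newTop, nearPlane, farPlane);
        glMatrixMode(GL_MODELVIEW);
        // The scene is then viewed from the eyes point, e.g. by translating the
        // modelview matrix by (-ex, -ey, -ez) before drawing.
    }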
3.4 Projecting the pointer ray into the scene using touch screen functionality

To interact with objects that users see behind the screen, and to ensure that other users can be aware of what is happening, we use the idea of a ray projected from the eye of the user towards the place on the screen where the user's finger touches. The position of the user's eyes point (e_x, e_y, e_z) is already known from the marker that is worn on the head. The touch location of the finger (t_x, t_y, t_z) is calculated by converting the 2D screen coordinates into virtual space coordinates. In the Whale Tank VR system, all points on the screen have a z coordinate equal to zero by definition of the coordinate system. The t_x and t_y coordinates are determined by their proportional position on the screen, taking into account the aspect ratio (Equation 7).

    t_x = 2 \left( \frac{mouseX}{RESOLUTION_X} - 0.5 \right) \cdot aspectRatio
    t_y = 2 \left( \frac{RESOLUTION_Y - mouseY}{RESOLUTION_Y} - 0.5 \right)    (7)

Given (e_x, e_y, e_z) and (t_x, t_y, t_z), we have the directional vector of the pointer ray. Using the directional vector, we can draw a ray that extends from the fingertip until it intersects an object and ends there. The ray can be represented either as a thick line or, as in our case, as a long and thin asymmetric pyramid that has its base under the finger, with the vector from the middle point of the base to the apex of the pyramid representing the direction of the ray (Fig. 3.8). The pointer ray's base position on the screen can also be used to pick objects on the screen using the regular OpenGL picking mode.

Figure 3.8. Pointer ray.
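A minimal sketch of this conversion and the resulting ray follows (C++; Vec3 as in the sketch in Section 3.2, with illustrative function names of ours).

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Convert a touch location in pixels to virtual-space coordinates on the
    // screen plane z = 0 (Equation 7).
    Vec3 touchPoint(int mouseX, int mouseY, int resX, int resY, float aspectRatio) {
        float tx = 2.0f * ((float)mouseX / resX - 0.5f) * aspectRatio;
        float ty = 2.0f * ((float)(resY - mouseY) / resY - 0.5f);
        return { tx, ty, 0.0f };
    }

    // The pointer ray starts at the touch point and extends into the scene
    // along the direction from the eyes point through the touch point.
    Vec3 rayDirection(const Vec3& eyes, const Vec3& touch) {
        Vec3 d = { touch.x - eyes.x, touch.y - eyes.y, touch.z - eyes.z };
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        return { d.x / len, d.y / len, d.z / len };
    }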
Chapter 4. Methods

To examine whether awareness during collaboration in Whale Tank VR is affected by head coupling, we conducted a formal user study to investigate the effect of head coupling on performance in a simulated computer-supported collaborative work (CSCW) environment. We envisioned several collaborative scenarios that might occur when people use a large touch screen together. The following core scenario motivates our study: two people work in front of the same large touch screen, each manipulating or changing properties of some objects in the scene. For instance, two architects could be changing details of a building model they are working on. While each of them is engaged in their individual subtasks, which are part of their common task, they would like to be aware of their partner's activities.

4.1 Participants

We had 36 participants (23 males and 13 females). The age of participants varied from 18 to 39 years (M = 26.5, SD = 5.85). Participants needed to be right-handed because of the specific locations of the cubes in virtual space. Left-handed people would be at a disadvantage because the given task implies turning the participant's body to the left, which is less convenient when a person is touching the screen with their left hand rather than their right hand. Colorblind people were also excluded from participating because cubes might be distinguishable, in part, by their colors. Ishihara's colorblindness tests (http://www.colblindor.com/2007/02/15/ishihara-plates-color-blindness-test-in-a-leaflet/) were conducted for each participant to determine their eligibility.

An a priori power analysis with the G*Power software (http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/) suggested that we needed 36 participants (parameters: 1 - β ≥ 0.8 and α level set to 0.05) if we wanted to find statistical significance of differences with a medium effect size, according to Cohen's guidelines (Cohen, 1988). We note here that the estimated number of participants required to look for significance with small effect sizes was in the range of several hundred. Such a large number of participants was not deemed realistic for our experiment. Therefore, we chose to look for a medium effect with 36 participants.

We recruited a convenience sample of 38 participants: 24 males and 14 females. Two participants were later excluded from the study and substituted by additionally recruited participants.
One of the two was excluded because she could not properly follow instructions, and the other because of extremely poor performance, which was well below the next lowest value. Recruitment of participants was done through the Reservax participant recruitment web-site (http://www.reservax.com/hciatubc/) and by sending an advertisement to the Computer Science students' social mailing list. Some participants heard about the study through word of mouth from previous participants. The participants were mostly graduate and undergraduate students, with two postdoctoral fellows also serving as subjects. All 36 participants accepted for the final statistical analysis were from the University of British Columbia. The backgrounds of the participants were: 7 computer science undergraduate students, 4 electrical engineering graduate students, 3 computer science graduate students, 3 science undergraduate students, 2 computer science research assistants, 2 physics and astronomy graduate students, 1 biology undergraduate student, 1 cognitive systems undergraduate student, 1 arts undergraduate student, 1 engineering physics undergraduate student, 1 psychology undergraduate student, 1 pharmacy undergraduate student, 1 mining engineering graduate student, 1 librarian, 1 IT technician, 1 medical genetics postdoctoral fellow, and 1 physics postdoctoral fellow. Three participants did not fill in the "occupation" field in the questionnaire, for unknown reasons.

4.2 Apparatus

The experiment was conducted in the Interactive Design Lab in the Institute for Computing, Information, and Cognitive Systems (ICICS) of the University of British Columbia, where the large screen display and the system that we use for Whale Tank VR are located. The room with the display is 9.6 meters long and 6 meters wide, and has one window to the left of the large screen, which was covered to prevent natural light reflections from interfering with user performance. An ambient light source at the back of the room was used instead of the standard room lighting to increase the contrast of the screen and to avoid light source reflections on the screen surface.

4.2.1 Large screen display

Figure 4.1. Large touch screen display and sensors (attached to beams) for head tracking.

The large screen used for the Whale Tank VR setup and the experiment measures 5.31 meters wide by 2.97 meters tall (approximately 16' x 10') and is a back-projected display with a ground glass surface (Fig. 4.1). The surface of the screen is covered with a special coating that makes it smooth and minimizes finger friction during dragging actions. The large space in the projector room, the long distance of the projectors from the surface, and the good air conditioning ensure that the surface remains cool to the touch. It is a known problem with solutions such as the SMART Table™, available in our lab, that the surface can become hot after several minutes of being on, resulting in the users' fingers sweating, which makes for an unpleasant interaction experience. The coolness and smoothness of the surface made the touch interaction effortless and comfortable, which was important for the experiment because the participants had to interact with it for up to an hour. No participants complained that the touch screen caused any discomfort or difficulty.

Figure 4.2. Organization of the tiled large screen (the dark tiles are the ones that were used).
The back-projected screen employs 12 projectors, each with a native resolution of 800x600 at 120 Hz, which can scale down input signals of up to 1024x768 at 120 Hz. They were organized in 3 rows of 4 projectors each, but due to hardware limitations we used only the bottom two rows of these projectors (Fig. 4.2). The screen is driven by one personal computer (PC) with one video card (see Section 4.2.4). The video card sends two independent 1280x1024, 75 Hz signals, one through each of its video outputs, to the Media Wall units, which resample them into eight 1024x768, 75 Hz signals for the eight projectors in the two bottom rows to form a single image.

4.2.2 Touch screen functionality

The screen has a vision-based touch sensing ability provided by SMART Technologies Inc. A set of infrared (IR) emitters complemented with a set of IR cameras is located right above the screen. The cameras are set to watch along the surface of the screen (Fig. 4.3).

Figure 4.3a. Touch screen setup.

Figure 4.3b. Touch screen hardware.

Touch screen functionality works as follows. 1) The cameras collect IR light reflected from the user's finger or any other physical object that touches the screen. 2) The images from the cameras go to digital signal processors (DSPs) (Fig. 4.3), which determine the location of the touch action by triangulating its position. 3) That location is converted into the position of the mouse pointer on the screen, and when the screen is touched, the drivers produce a "left mouse button pressed" event. 4) The moment a person removes all fingers from the screen surface, a "left mouse button released" event is produced by the drivers. 5) Touching and moving the finger on the screen is converted to mouse dragging events. For the research described in this thesis, the system supported only single-touch input.

4.2.3 Device for head tracking

Figure 4.4a. Wireless markers.

Figure 4.4b. Wireless markers in harnesses.

The Polhemus Liberty Latus device employs sensors to determine the position of markers (Fig. 4.4) attached to objects of interest, in our case the user's head. The sensors are connected to the device by wires and register the magnetic signals produced by the wireless markers to determine each marker's identity, position, and orientation in space. The sensors were positioned in front of the screen above the users' heads (Fig. 4.1). One sensor has to be chosen as the primary, and the coordinates of the markers and of the other sensors are reported relative to its position. Sensors must be no further than 1.4 m from each other for good stable area coverage, because the stable range of one sensor is 0.7 m. In our installation the sensors were arranged in a straight line parallel to the screen, at a distance of 1.22 m from each other and 1 m from the screen. The Polhemus device produces six-degree-of-freedom coordinates (x, y, z, azimuth, elevation, and roll) in its own coordinate system for each activated marker and sends these to a PC. The coordinates need to be transformed for further use according to the primary sensor's location in the room.

The markers were worn on the head with the help of a special harness (Fig. 4.4b), which we made from a typical construction hardhat and some Velcro. This particular Polhemus model has an advantage over other Polhemus systems because the markers are wireless (Fig. 4.4a) and can be worn by users with far less intrusiveness and inconvenience than wire-based solutions.
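As an illustration of the transformation step mentioned above, the following minimal sketch maps a marker position reported in the primary sensor's coordinate frame into a room frame. The pose values and names are placeholders for the example, not the calibration of our installation.

    import numpy as np

    # Illustrative pose of the primary sensor in the room frame; real values
    # would come from measuring the installation, not from this sketch.
    SENSOR_POSITION_ROOM = np.array([0.0, 2.5, 1.0])  # meters
    SENSOR_ROTATION_ROOM = np.eye(3)                  # sensor axes aligned with room axes

    def marker_to_room(marker_pos_sensor):
        """Map an (x, y, z) marker position from the primary sensor's frame
        to the room frame used by the rendering code."""
        p = np.asarray(marker_pos_sensor, dtype=float)
        return SENSOR_ROTATION_ROOM @ p + SENSOR_POSITION_ROOM

The azimuth, elevation, and roll angles reported by the device can be handled the same way by composing the marker's rotation with the sensor's rotation; for head-coupled rendering only the transformed position of the head is strictly needed.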
4.2.4 Hardware and software

For the experiment we used a modified (see Section 4.3) version of Whale Tank VR (see Chapter 3) for the participants' tasks. The hardware was the same as in the regular version: a main PC (Pentium 4 3.8 GHz with Hyper-Threading, 2 GB RAM, one NVIDIA GeForce 9800GTX+ video card), a Polhemus Liberty Latus device (http://www.polhemus.com/?page=Support_Latus) for wireless head tracking, custom-built SMART touch-surface hardware, two Media Wall units for video signal splitting to produce a tiled display (Fig. 4.2), and 8 InFocus DepthQ (DQ3120) projectors (Fig. 4.5).

Figure 4.5. Whale Tank VR installation scheme (aerial perspective).

We used a Sony HDR-FX1000 camera to capture video of the experimental sessions for further analysis of participants' performance. The camera was set approximately 4 meters away from the screen (Fig. 4.5) to ensure that the entire screen, the participant, and (most of the time) the preprogrammed computer actions were clearly visible.

4.3 Procedure

Participants were asked to perform two simultaneous tasks to emulate the collaborative scenario described above. One task was a distraction task (the "shuffling game") and the other was an awareness-and-recall task. Our touch screen lacked multi-touch functionality at that time, so we used software to simulate the actions of one of the collaborators. This approach had the benefit of allowing us to control confounding variables such as the timing of selections, the duration of highlighting, and possible human errors, which could vary from trial to trial if the collaborative task were performed by a person. The design was therefore able to provide the same experience for each participant. Four graduate students from our lab piloted the experimental setup.

One experimental session took approximately 65 minutes, including a two-minute break before switching to the new viewing condition, and was conducted according to the protocol described in Appendix A.3. All participants followed the same script throughout the procedure (see Appendix A.4).

4.3.1 Distraction task (shuffling game)

First, we define two terms that we need in order to describe the participants' tasks. For the shuffling game, a description of which follows shortly, we chose the term "distraction task", and for the awareness-and-recall task described in Section 4.3.2 we chose the term "peripheral task". Because our main objective was to study users' awareness of other users' actions and their ability to recall those actions, performance in the peripheral task was our main target, and the shuffling game served only as a distraction. From the participants' perspective, however, the shuffling game was the primary task, and trying to remain aware of the actions of another user while doing it was the secondary task. Thus, the shuffling game was primary for the participants but only a distraction in our experiment, while the awareness-and-recall peripheral task was secondary for the participants but the task of main interest in the experiment.

The shuffling game was located on the right side of the screen, near the screen surface in virtual space. In this task, five pairs of cubes, each pair with a unique texture and color, are organized into two rows of five distinct cubes each (Fig. 4.6). The top row represents a pattern and the bottom row is an interactive task row.
During the game, a random cube changes its position (i.e., slot) in the pattern row; the other pattern cubes shift accordingly to accommodate it. The goal of the participant was to keep the cubes in the task row in the same order as in the pattern row by moving them (one at a time) to the correct positions. Cubes in the task row could be moved by touching them on the screen and dragging and dropping them into another slot. When a task cube was dragged over another task cube, the other cube immediately slid to the nearest empty slot.

Figure 4.6. Shuffling game layout.

The score for the shuffling game was computed as the percentage of time during which the two rows matched perfectly. Score updates were suspended during the shuffling animation, because the participants were unable to match the two rows in that time. The counter decremented during the time between when the pattern cube was positioned and when the participant released their cube in the correct location. The score counter was displayed to participants to the right of the shuffling cubes as a percentage of the maximum possible score during the trial (Fig. 4.7). Above the cubes, a message was displayed indicating whether the current state of the cubes was correct (Fig. 4.7). The shuffling occurred every few seconds, depending on the game speed difficulty: every 6 seconds (slow pace), every 3 seconds (medium pace), or every 1.5 seconds (fast pace). The total length of each shuffling game round was 30 seconds.

Figure 4.7. Shuffling game in progress.

Participants started a game round by pressing the start button located above the pattern cubes (Fig. 4.6). There was a two-second delay between pressing the start button and the actual start of the shuffling game to make sure the participant was ready. When the game was over, it disappeared completely from the screen and a "next" button appeared, which had to be pressed when the participant wanted to go to the next round.
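To make the scoring rule concrete, the following is a minimal sketch of how the matched-time percentage could be accumulated over a round. The names and structure are illustrative assumptions, not the actual implementation.

    from dataclasses import dataclass

    # Minimal sketch of the shuffling-game score: the percentage of
    # (non-animation) time during which the task row matched the pattern row.

    @dataclass
    class RoundState:
        pattern_row: list
        task_row: list
        animation_running: bool = False  # score updates suspended while shuffling
        cube_held: bool = False          # rows count as matched only after release
        matched_time: float = 0.0
        scored_time: float = 0.0

    def update_score(state: RoundState, dt: float) -> None:
        """Accumulate matched and total time for one update step of dt seconds."""
        if state.animation_running:
            return
        state.scored_time += dt
        if state.task_row == state.pattern_row and not state.cube_held:
            state.matched_time += dt

    def final_score(state: RoundState) -> float:
        """Score shown to the participant, as a percentage of the maximum possible."""
        return 100.0 * state.matched_time / state.scored_time if state.scored_time else 100.0

The cube_held flag reflects the rule explained to participants: holding a cube above the correct slot does not count as a match until the cube is released.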
4.3.2 Awareness-and-recall task

Figure 4.8. Awareness-and-recall cubes setup.

The scene visible to participants for the awareness-and-recall task contained 25 cubes, positioned left of center on the screen, to the left of the shuffling game, and inside the virtual space behind the screen (Fig. 4.8). There were five cubes for each of five colors: red, green, blue, yellow, and purple. The colors were chosen to be distinct enough from each other to avoid confusion. The cubes were also numbered (from 1 to 5), with the number written on every face of a cube. No combination of color and number was repeated. The cube locations were the same for all participants and for all trials.

All the cubes were positioned so as to be visible when the participant's point of view was located directly in front of the shuffling game cubes. The cube positions were chosen by initial random positioning and then manually adjusted to create certain special cases we wanted to include. The "Green 5" cube was manually placed with its front face in the plane of the screen at the center of the scene. Some cubes were intentionally partially occluded, but not to the point where it would be impossible to recognize them. Cubes were positioned so that some of them were located close to the screen while others were deep behind the screen. Some cubes were made to stand alone, while others were crowded together; some cubes were positioned high in the scene, some near the bottom, and some in the middle.

Figure 4.9. Selection of a cube during the awareness phase.

The peripheral task consisted of two parts: awareness and recall. The awareness part took place simultaneously with the shuffling game. During that phase, the program selected several cubes in the scene by pointing at them with a white ray and highlighting them, as shown in Fig. 4.9. The participants' goal was to say out loud which cube was selected, e.g., "blue two" or "two blue", and to keep the sequence of such selections in mind until the end of the shuffling game. During the recall part of the task, the participants tried to reproduce the sequence of cube selections by touching each cube in turn.

This two-part activity of awareness and recall was decided upon during pilot testing. Recall on its own did not capture initial awareness of cubes being selected. At the same time, both awareness of another user's actions at the moment they happen and the ability to recall those actions later are components of the envisioned collaborative task. Therefore, we decided to include both awareness reporting and recall in the experiment.

Cubes were highlighted and pointed to by the ray (Fig. 4.9) according to a script. The ray originates at a stationary location on the surface of the screen, to the left of the awareness-and-recall cubes, and ends on the selected cube. Selections occurred at predetermined irregular intervals so that participants could not predict their timing. There was at least a one-second gap between selections, and each selection lasted for 2 seconds. The number of cubes selected during each trial was 1, 3, or 5. Pilot testing suggested that the number of mistakes under these conditions differed enough to represent separate levels of difficulty. These levels of difficulty were assigned to trials in such a way that no particular pattern could be determined from earlier trials.

During the recall phase, a participant could ask to reset their selection for any reason. For example, such requests happened when a participant accidentally touched the wrong cube while moving a hand across the touch screen closer than the touch threshold distance to the surface, when a participant made a selection and then realized that she had forgotten to select one cube at the very beginning, or when a participant realized that she had selected the cubes in the wrong order. If such a request was made, the "reset" keyboard shortcut was pressed and a special record appeared in the log to indicate the participant's desire to reselect the sequence.

4.3.3 Experimental design

We used a repeated measures 2 (head coupling) x 3 (shuffling game speed) x 3 (awareness-and-recall task complexity) within-subjects experimental design. Each combination of factors was repeated twice for a total of 36 trials. Such a design controls for individual differences between participants and reduces the necessary number of participants while increasing the power of the test.
The experiment involved the following main factors that could affect participant performance:

− Viewing condition, with two levels:
  • Head-coupled mode
  • Non-head-coupled mode, with a viewing perspective approximately the same as in the head-coupled mode

− Shuffling game speed, with three levels:
  • Shuffling action every 6 seconds
  • Shuffling action every 3 seconds
  • Shuffling action every 1.5 seconds

− Awareness-and-recall task complexity, represented by the length of the selection sequence that has to be noticed and memorized, with three levels:
  • 1 cube selected
  • 3 cubes selected
  • 5 cubes selected

The dependent variables in our experiment were the following measurements:

− Score in the shuffling game (percentage of time when the two rows of cubes matched)
− Score in the recall part of the awareness-and-recall task (percentage of mistakes)
− Score in the verbal report (percentage of mistakes)

To prevent a learning effect from influencing the final result, we counterbalanced the viewing conditions. Half of the participants did the head-coupled condition first, before the two-minute break, and the other half did the non-head-coupled condition first.

For each viewing condition, there were nine unique combinations of awareness-and-recall game complexity and shuffling game speed. Each combination was repeated twice, for a total of 18 trials per viewing condition. The shuffling game's speed increased after every six trials, starting with the lowest speed for each viewing condition. As mentioned in Section 4.3.2, the complexity of the awareness-and-recall task was chosen randomly within the condition block formed by the two other factors: viewing condition and shuffling speed. Difficulty levels for the awareness-and-recall game were assigned so that all possible complexities were represented for each viewing condition at each particular speed. For example, the block of six trials formed by the head-coupled mode and the medium shuffling game speed contains two easy, two medium, and two hard complexity levels in random order.

4.3.4 Data collection

Data was collected by several methods simultaneously. We used software logging, video recording, and manual records (handwritten notes and the results of the verbal report recorded in a spreadsheet) with the help of an assistant. There were two log files. The main one recorded high-level data such as user action events, automated selections, and notifications of general events; the second recorded low-level data about the head movement of our participants in space (x, y, and z in Polhemus coordinates before transformation) and user-initiated touch screen events (event type plus x and y coordinates). An example of the main log file with its legend is in Appendix A.

The second log file contained raw data and served as a "plan B". Using the raw data from the second log file, it was possible to recreate the complete sequence of events during any recorded experimental session. Later, if we wanted, we could replay all participants' actions exactly as they happened during the experiment and reproduce the first log file too. Fortunately, we did not need to resort to this. The need to recover data via the second log could be determined by comparing the video record of a session with the data provided by the corresponding main log file.
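As an illustration of how a main-log record can be consumed during analysis, the following minimal sketch parses one record according to the field layout given in Appendix A (Table A.2.1). The helper names are ours; the actual analysis used other tools.

    # Minimal sketch: parse one main-log record following the layout in
    # Appendix A. Helper names are ours, not part of the original software.

    SPEEDS = {"s": "slow", "m": "medium", "f": "fast"}
    COMPLEXITIES = {"e": "easy", "m": "medium", "h": "hard"}

    def parse_main_log_line(line: str) -> dict:
        """Split a record into user ID, condition, timestamp, and the event tail."""
        user_id, condition, timestamp, *event = line.split()
        return {
            "user_id": int(user_id),
            "viewing": "head-coupled" if condition.startswith("hc")
                       else "non-head-coupled",
            "speed": SPEEDS[condition[-2]],             # e.g., "nhcsh" -> slow
            "complexity": COMPLEXITIES[condition[-1]],  # e.g., "nhcsh" -> hard
            "timestamp": timestamp,
            "event": " ".join(event),  # e.g., "tcp 2 03241 23140 954"
        }

    # A record taken from the example in Appendix A.2:
    record = parse_main_log_line("0 nhcsh 2:15:49.190 tcp 2 03241 23140 954")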
Chapter 5. Data analysis and results

To investigate the influence of head coupling on user performance in the collaborative environment, and its interaction with various task difficulties, we ran the user study described in Chapter 4. We expected that there would be no effect of head coupling on the distraction task (the shuffling game), because it is almost in the plane of the screen and thus essentially two-dimensional. It was introduced mainly to keep participants busy with a foreground task while they monitored a 3D background task.

What we wanted to know was whether head coupling would affect recall scores for the background task. This was not obvious without conducting an experiment. On the one hand, head coupling could be distracting and reduce performance, because head movement in the foreground task causes a change in the scene for the background task. This could be especially true for Whale Tank VR because it has a lag (caused by low-pass filtering of the coordinates during head tracking) despite efforts to make the implementation as fast as possible. On the other hand, head coupling could be beneficial because it better reflects the actual locations of the objects, and thus would provide additional cues useful in the recall phase of the background task.

Pilot testing suggested that participants would not have any difficulty with the awareness phase of the background task. The study was conducted to determine whether scores in the recall phase of the background awareness-and-recall task were due solely to participants' memory, rather than to an inability to monitor the locations of activity in the background task.

All statistical methods employed in this chapter follow their description and usage in the book Reading statistics and research by Huck (2008), unless explicitly mentioned otherwise.

5.1 Shuffling game scores analysis

As discussed in Section 4.3.1, the performance score computed in the shuffling game was the percentage of time during which the two rows of cubes matched. A one-way ANOVA performed on the data showed a strong effect of order (F = 41.8, p < 0.001, partial η² = 0.54 (large)). Only one participant had better performance in the first condition of the session. In general, in the second condition, which commenced after a two-minute break, performance was always at least as good as in the first condition (7 of 36 participants had a difference in performance of less than 1% between their first and second conditions). Because the trials were counterbalanced for viewing condition and randomized (see Section 4.3.3 for details), any significant results should not be attributed to the order effect.

Before conducting a parametric statistical test, we checked the normality assumption using a Kolmogorov-Smirnov test and by analyzing the skewness and kurtosis. For the full table of the normality results, see Table 1 in Appendix B. The results indicated that most of the data (16 out of 18 cases) was within an acceptable threshold. The only conditions that violated the normality assumption were the slow-speed head-coupled conditions with complexity 1 (Z = 1.45, p = 0.03) and with complexity 5 (Z = 2.04, p < 0.001) in the awareness-and-recall game. The mathematical reasons for the lack of normality can be determined by checking the skewness and kurtosis values in Table 2 in Appendix B.
The score distribution for the head-coupled, slow-speed, high-complexity case is very leptokurtic and is probably closer to a logistic distribution than to a normal distribution. A logistic distribution appears when the value of some variable determines the probability of a system being in one of two states. A classic example is determining the probability of being deceased ("dead" versus "not dead") given a person's age. In our case, the logistic distribution may appear because participants had two different strategies: they tended to concentrate either on the shuffling game or on the awareness-and-recall task. When the shuffling speed was slow, such separation in the scores was more apparent because reaction time depended heavily on the choice of strategy rather than on the cognitive load. The same trend can be observed in the non-head-coupled mode, but to a lesser extent.

The data in the slow-speed cases is noticeably skewed to the left (see Table 2 in Appendix B). This happens because the mean score is very high in this condition. Participants were constrained by their physical movement, not their cognitive load, so their shuffling scores were bounded by their physical reaction time. In such very easy scenarios, skew will always be present.

Despite some violation of the normality assumption, F-tests are usually robust when there are many (N > 30) cases per cell. In our case N = 72. We therefore decided to use a parametric ANOVA test on the data. Mauchly's test of sphericity revealed that the data violated the sphericity assumption for the speed (W = 0.9, p = 0.03) and complexity (W = 0.86, p = 0.004) main effects, for the two-way interaction of viewing condition with complexity (W = 0.91, p = 0.03), for the two-way interaction of speed with complexity (W = 0.71, p = 0.005), and for the three-way interaction of all these factors (W = 0.68, p = 0.002). (See Table 3 in Appendix B for the complete sphericity test outcomes.) The Greenhouse-Geisser ε statistics for each of the violated cases showed that the violation was in an acceptable range (ε > 0.75). Therefore, we used the Greenhouse-Geisser correction for the F-test.

A three-way repeated measures ANOVA with Greenhouse-Geisser correction showed no statistical significance (F(3.4, 241.45) = 0.27, p = 0.87) for the interaction between the viewing condition, speed, and complexity factors for user performance in the shuffling game (see full results in Table 4 in Appendix B). The effect size was also very small (partial η² = 0.004), which supports the absence of a difference between the population means for the supposed interaction between all three factors. Figures 5.1a and 5.1b illustrate the similarity between the means in the two viewing conditions for the two difficulty factors.

Figure 5.1a. Profile plot for average shuffling game scores in head-coupled mode.

Figure 5.1b. Profile plot for average shuffling game scores in non-head-coupled mode.

As expected, there was no main effect of viewing condition (F(1, 71) = 0.59, p = 0.45). There was similarly no significant interaction between viewing condition and speed (F(1, 71) = 0.59, p = 0.39) or between viewing condition and complexity (F(1, 71) = 0.59, p = 0.51). As can be seen in Figures 5.1a and 5.1b, the different levels of speed and complexity have very different performance score means. Because the shuffling game was located very close to the plane of the screen and was a two-dimensional task, its look-and-feel was almost independent of viewing condition.
We therefore discuss only effects involving the other factors. Two main effects were found statistically significant: shuffling speed (F(1.82, 129.43) = 732.23, p < 0.001) and awareness-and-recall complexity (F(1.75, 124.01) = 100.39, p < 0.001). Furthermore, the effect sizes were very large (partial η² = 0.91 and partial η² = 0.59, respectively). We conclude that an increase in the difficulty level of either of these two factors resulted in a large decrease in the shuffling game scores.

The interaction between speed and complexity was also statistically significant (F(3.51, 249.46) = 16.87, p < 0.001), again with a large effect size (partial η² = 0.19). As can be seen in Figures 5.1a and 5.1b, the magnitude of the drop in mean performance values depends on the speed of the shuffling game. When complexity increases, the drop in performance is larger if the speed is faster. For example, it is clear that a change from easy to medium complexity when the shuffling game speed is slow does not significantly affect the mean performance score, but the same change when the speed is fast produces a drop in performance of almost 8%. The difference in the shuffling game mean scores between every level of both difficulty factors (without interaction) was found to be statistically significant (for more details see Tables 5 and 6 in Appendix B). As expected, the mean values decreased as the difficulty levels increased. This demonstrates that the choice of difficulty levels for the shuffling game achieved the desired goal: we identified an effect of complexity.

5.2 Recall scores analysis

The recall score was the percentage of cubes correctly selected by the participant compared to those previously highlighted by the computer during a trial (see Section 4.3.2 for details). Interestingly, there was a strong order effect for this measurement too. A one-way ANOVA showed that performance in the second viewing condition was significantly better than in the first (F = 14.09, p = 0.001), with a large effect size (partial η² = 0.29). Five participants were better in the head-coupled condition despite having it as the first condition, two were better without head coupling even though they had it first, and three participants showed equal performance in both viewing conditions. Such results may be a consequence of the different strategies participants chose for performing the task (see Chapter 6 for a detailed explanation).

Overall, the means for both viewing conditions were exactly the same, which was quite surprising, given that performance for some individuals between the head-coupled and non-head-coupled conditions differed by up to 12%. There was no main effect of viewing condition because the means were equal. We therefore progressed to examining all remaining factors and interactions, but first conducted a test of the normality assumption (see Table 7 in Appendix B for details). In most cases, it was severely violated. Only in three of the six cases with hard complexity could normality be assumed. This was not surprising, given the discrete nature of the data and the few possible outcomes: there were 1 to 5 possible mistakes per trial, so the percentages in each trial could take only a small set of predefined discrete values. At the same time, the probability of making no mistakes was substantial enough to create a skew in the data.
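The kind of normality screening used in Sections 5.1 and 5.2 can be sketched as follows, assuming one condition's 72 per-trial scores are available as an array. This is an illustration in SciPy rather than the SPSS package used for the actual analysis; note that SciPy reports the Kolmogorov-Smirnov D statistic, whereas SPSS reports Z ≈ D·√N, so the numbers are on different scales.

    import numpy as np
    from scipy import stats

    def normality_summary(scores):
        """Screen one condition's scores for normality, as in Sections 5.1-5.2."""
        x = np.asarray(scores, dtype=float)
        # One-sample Kolmogorov-Smirnov test against a normal distribution with
        # the sample's own mean and standard deviation.
        d, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
        return {
            "KS D": d,
            "KS Z (approx.)": d * np.sqrt(len(x)),  # SPSS-style Z
            "p": p,                                 # p < 0.05 flags a violation
            "skewness": stats.skew(x),
            "kurtosis": stats.kurtosis(x),          # excess kurtosis; > 0 is leptokurtic
        }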
Given the violation of assumptions, Friedman's aligned rank test (ART) for multiple factors (Higgins, 2003), an extension of the well-known Friedman nonparametric test, would have been the best option in such a scenario, but it is not part of the SPSS PASW Statistics 17.0 software package that was used for the analysis. Nevertheless, as mentioned earlier, the parametric ANOVA is fairly robust to violations of normality when each cell contains many data cases (N > 30). We note that, despite this robustness, violations decrease the statistical power of the test.

Mauchly's test of sphericity revealed that this assumption was violated for the complexity main factor (W = 0.92, p = 0.048) and for the interaction between all three factors (W = 0.72, p = 0.008). The Greenhouse-Geisser ε values for the violated cases (ε = 0.92 and ε = 0.87, respectively) were in the acceptable range (ε > 0.75) for applying the Greenhouse-Geisser correction. Full details are in Table 9 of Appendix B.

A three-way within-subjects ANOVA was conducted for the recall scores (the complete results are in Table 10 of Appendix B). The trend of the results was similar to that for the shuffling game scores. The only significant effects were the main effects of speed (F(2, 142) = 29.09, p < 0.001) and complexity (F(1.85, 131.12) = 307.59, p < 0.001), and their interaction (F(4, 284) = 8.86, p < 0.001). The effect sizes for the main effects of speed and complexity were large: partial η² = 0.29 (speed) and 0.81 (complexity). For the speed-by-complexity interaction there was a medium effect size: partial η² = 0.11. No other main effects or interactions were statistically significant, and their effect sizes were either small or negligible.

The difference between the means of the recall scores at the slow and medium speed levels was not statistically significant. All other differences between the means at different speed levels were statistically significant. The differences in mean performance between all complexity levels were statistically significant. The detailed analysis is in Tables 11 and 12 of Appendix B.

Figure 5.2a. Profile plot for average recall scores in the head-coupled condition.

Figure 5.2b. Profile plot for average recall scores in the non-head-coupled condition.

The profile plots in Figure 5.2 show the similarity between the two viewing conditions (the plots look similar to each other), which explains why we did not find a significant interaction effect involving viewing condition. It is clear that there is an interaction between the speed and complexity factors, because the speed lines intersect each other.

5.3 Awareness (verbal report) scores analysis
To understand what caused participants to make mistakes during the recall phase, participants were asked to say out loud the cube selections they noticed while they were playing the shuffling game. The score in this task was the percentage of correct answers given by participants.

Figure 5.3a. Profile plot for average awareness (verbal report) scores in the head-coupled condition.

Figure 5.3b. Profile plot for average awareness (verbal report) scores in the non-head-coupled condition.

Regardless of the viewing condition or the difficulty of the tasks, awareness scores were very good. Four people made only one mistake in the whole session, and one participant made no mistakes at all. For the slow and medium speeds, the mean scores were very close to perfect (see Figure 5.3), which created a large skew in the distribution of performance scores towards the high-performance side. At the same time, a few severe outliers pulled the mean values away from 1.00. If one inspects the box plots in Figure 5.4, one can clearly see that all but one of the 95% confidence intervals have collapsed into a single point at mean performance 1.00. In other words, participants were 100% correct on average if outliers are excluded. Performing statistical tests under such conditions would not be valid.

Figure 5.4. Box plots showing confidence intervals for the talk-out-loud score (TOLScore) from the perspective of the different factors: viewing condition, speed, and complexity (the numbers near outliers are their corresponding participant numbers).

5.4 Summary and conclusion

We analyzed three types of measures: the shuffling game score, the recall score, and the awareness (verbal report) score. Both the shuffling game and recall mean scores for the two viewing conditions were almost identical, and there were no significant differences for any interaction of the viewing conditions with the difficulty (speed or complexity) of the tasks. The choice of task difficulty levels proved to be appropriate because in all cases the speed and complexity main effects, and also their interactions, were found to have a statistically significant impact on the performance scores. Awareness scores in general were close to perfect in all conditions except the hardest one (fast speed combined with hard complexity), indicating that there were no awareness issues in either viewing condition: both head-coupled and non-head-coupled modes worked equally well for the given task.

Factor: Viewing Condition (VC)
− Shuffling Game Score. Prediction: not significant. Actual result: no effect, means equal.
− Recall Score. Prediction: uncertain, there were points for both benefits and flaws. Actual result: no effect, means equal.
− Awareness Score. Prediction: no effect. Actual result: no effect.

Factor: Speed
− Shuffling Game Score and Recall Score. Prediction: performance significantly drops on each speed level. Actual result: performance significantly drops on each speed level.
− Awareness Score. Prediction: no effect. Actual result: no effect.

Factor: Complexity
− Shuffling Game Score and Recall Score. Prediction: performance significantly drops on each complexity level. Actual result: performance significantly drops on each complexity level.
− Awareness Score. Prediction: no effect. Actual result: no effect.

Factor: Speed x Complexity
− Shuffling Game Score. Prediction: performance significantly drops as the combined difficulty increases. Actual result: performance is significantly different for each combination of difficulty levels.
− Recall Score. Prediction: performance significantly drops as the combined difficulty increases. Actual result: performance significantly drops as the combined difficulty increases.
− Awareness Score. Prediction: no effect. Actual result: drop from perfect performance in the "fast-and-hard" condition (only).

Factor: VC x Speed
− Shuffling Game Score. Prediction: not significant. Actual result: not significant.
− Recall Score. Prediction: uncertain, there were points for both benefits and flaws. Actual result: not significant.
− Awareness Score. Prediction: no effect. Actual result: no effect.

Factor: VC x Complexity
− Shuffling Game Score. Prediction: not significant. Actual result: not significant.
− Recall Score. Prediction: uncertain, there were points for both benefits and flaws. Actual result: not significant.
− Awareness Score. Prediction: no effect. Actual result: no effect.

Factor: VC x Speed x Complexity
− Shuffling Game Score. Prediction: not significant. Actual result: not significant.
− Recall Score. Prediction: uncertain, there were points for both benefits and flaws. Actual result: not significant.
− Awareness Score. Prediction: no effect. Actual result: no effect.

Table 5.1. Summary of the results.
Interestingly, the difference between the mean scores for the slow and medium speeds was statistically significant only for the shuffling game scores, not for the recall scores. We can conclude that the three-second delay between shuffles at the medium speed gives participants enough time to concentrate on memorizing the cube selections. In Table 5.1 we list a brief summary of the results, together with the assumptions that were made before the analysis.

Chapter 6. Discussion and future work

The experiment showed that there was no statistical difference between the head-coupled and non-head-coupled viewing conditions in either the primary distraction shuffling game or the secondary peripheral awareness-and-recall task. Interactions of viewing condition with the other factors also proved not to be significant. The effect sizes were either small or negligible for those conditions, which suggests that the null result was unlikely to be due to low statistical power.

As might be expected, performance in both the awareness-and-recall task and the shuffling game decreased with increased complexity of the awareness-and-recall task and with increased speed of shuffling in the distraction task. This was significant for the main effects of speed and complexity and for their interaction; all three had large effect sizes.

This may not seem very exciting, but there is a positive message in these results. In the experiment, the view in the non-head-coupled mode was controlled to be similar to the view in the head-coupled mode. We think of Whale Tank VR as a multi-user technology, and thus we ought to compare its usage to the analogous method of scene representation on a shared multi-user large screen without head coupling. When the graphical system has no idea about a user's location and all parts of the screen are equally used, it is fair to assume that such a system would place a static viewpoint some distance in front of the center of the screen. This case is exactly like a movie theater showing a 3D movie: the correct image is generated only for people in the middle of the theater, while people on the sides experience an equal amount of distortion. Similarly, if we wanted a static solution for multiple users instead of the individual views of Whale Tank VR, a fixed viewpoint would be located in front of the center of the screen. Thus, it made sense to put the fixed viewpoint approximately at the participants' location to avoid confounding factors such as a different viewing angle or perspective in the experiment. In practice, off-center fixed viewpoints for multiple users rarely occur.

Figure 6.1. Position of the scene projection on the screen in different modes.

The impact of this distinction for a user can be illustrated as follows. A non-head-coupled (or static) perspective is fixed to a certain location. Often, the most important objects in a scene are located around the middle of the scene. In the previous paragraph, we argued that when there are multiple users the fixed viewpoint is very likely to be in front of the middle of the screen. Multiple users cannot all conveniently stay in the same location if their tasks are independent of each other. Consequently, some of the users have to stay somewhere on the side, and this creates a problem. As we mentioned in Chapter 3, the focused field of view of humans is quite limited and, in the case of a static perspective and certain user positions in front of the screen, several objects in the scene can become harder to see or even fall out of visible range.
In head-coupled mode, objects in the scene are more visible and remain in range most of the time, unless occluded by other objects (see Figure 6.1). During preparation for the experiment, pilot participants pointed out that the frontal static view was at a disadvantage because its scene was located much further away from the participant than in the head-coupled mode. The users therefore had to turn their heads through a greater angle, and it became harder to notice selections. On several occasions pilot participants were seen leaning back to observe the scene better, which sometimes caused them to lose track of changes in the shuffling game. In the head-coupled mode, on the other hand, head turns were either very small or did not happen at all.

During the experiment all the cubes were of a uniform size, which allowed participants to use 3D cues even in the non-head-coupled mode. Of course, if the cubes had different sizes it would be much harder to determine whether the projection of a cube on the screen represents a nearby-but-small cube or a large cube that is far away. In this hypothetical situation, head-coupled mode might actually provide the user with cues considerably more helpful than the cues available in the static mode.

The disadvantage of head-coupled mode comes from the increased difficulty of selection. During the study, participants made noticeably more mistakes trying to select cubes in head-coupled mode than in the static mode. The selection problems stem from the dynamic changes in the scene projection: to select an object in the head-coupled condition, a user has to restrict head movement for a moment just before and while touching the screen. Piloting showed that people were able to master this selection technique quite quickly, but the experimental participants saw Whale Tank VR for the first time and did not have such an opportunity to learn before completing trials.

Head coupling compensates for more difficult selections with the ability to use natural body movement to glance around an object or to position oneself in a better location for observation. For example, for the experiment the "blue 4" cube was intentionally placed in a location where it was partially occluded by another cube, and it was difficult (but not impossible) to say whether it was "blue 1" or "blue 4" without looking carefully. Observation during the trials revealed that in the head-coupled conditions participants instinctively used the advantage of head coupling and shifted the position of their head to clarify the number on the cube. Moreover, some participants whose first condition was head-coupled tried to repeat the procedure in the non-head-coupled mode without success, commenting that they could not see the cube clearly.

Another observation was made about the selection of partially occluded objects. Partial occlusion makes it hard to select an object on the screen with a finger without touching other objects, due to a lack of precision (the finger is much larger than one pixel). But in the head-coupled condition users can in many cases change their position to see the object better, and thus select it with fewer problems. When some participants had difficulty with selections in head-coupled mode, experimenters hinted to them that they could actually move to the left side of the screen and try selecting the object from there. In all cases, after one such attempt the users started to use this advantage in their further trials.
Such momentary adjustments to head and body positioning would be more efficient than reaching for a control to rotate the whole scene, consistent with the findings by Ball et al. (2007) that natural navigation improves performance in certain tasks. This is especially important for work with large touch screens, where standard mouse and keyboard controls may not be easily accessible.

The absence of a significant difference in the recall scores might be due to the verbal report procedure. We observed that the participants used several techniques for memorizing, but the vastly dominant technique was verbal memorization of the color and number sequences. Other reported techniques included associations with familiar objects, audio memory, optimized/coded memorization, and two-dimensional locations on the screen. At least four participants expressed the opinion at the end of the session that pronouncing the numbers and colors forced them to memorize the exact words and limited their use of spatial cues such as the relative positions of the cubes in space. Only two participants reported that they used three-dimensional spatial cues for memorization, and both of them were in the top five in the recall task. One of these participants had the head-coupled condition first and was 3% better in the non-head-coupled mode. The other had the non-head-coupled condition first and was 13% better in the head-coupled condition. The average learning effect was 4% across all participants. Because only two participants reported such memorization techniques, we cannot draw any conclusions about "what if" situations with many people using such a technique, but it is worth further investigation.

The participants made very few or no mistakes in the verbal report procedure during the experiment, and this was already clear during the piloting. Nevertheless, the procedure was not excluded from the study, because some pilot participants indicated that pronouncing the cube attributes aloud helped them to remember the cubes better, which was not very surprising, since it engaged an additional type of memory. Besides acting as a memory aid, the verbal report procedure provided us with information about whether the participants were ignoring the awareness-and-recall task in favor of the shuffling game task.

Even though head coupling itself did not provide any boost in performance, Whale Tank VR is a simple potential solution for collaborative virtual reality. Good awareness in the head-coupled condition, combined with the ability to see more objects in the scene without extra effort or body navigation, is encouraging enough for possible use of such VR systems outside of the lab. After all, the ability to see the 3D structure may be an important feature by itself. Of course, the combination of polarization with shutter glasses and head tracking (Fröhlich et al., 2005) is better as a VR tool for users who use it directly, but it has its own disadvantages for use on the large screen (see Chapter 2). Such an approach would not allow people other than the direct users to see what is happening on the screen, because other observers would see two very different images on top of each other. Using Whale Tank VR, it is possible to observe an exact picture from the correct perspective and still have an overview of the rest of the scene that is not part of the personal viewport, provided the screen surface is not occluded by the other viewports.
We suspect that the experimental awareness task might not have been hard enough to reveal any difference in user performance (see Chapters 4 and 5). In the future, it would be interesting to test users' awareness at increased difficulty levels. With new hardware from Cyviz Technologies that has recently become available in our lab (Fig. 6.2), we are able to drive the large screen display in stereo. This opens an opportunity to add stereoscopic vision as one of the factors in future experiments.

Figure 6.2. New hardware for powering the large screen display.

Another experiment that would be interesting to conduct in the future is to introduce an actual collaborative task involving two people, instead of the simulated actions used in our experiment. It would be interesting to compare collocated collaboration on the large screen, collocated collaboration on separate screens, and remote collaboration via a networked virtual reality setup.

The viewport in Whale Tank VR also raises interesting questions: what size of viewport is sufficient, and what positional restrictions on the viewport work best? Another modification to Whale Tank VR could also be tested: instead of normal viewports, we could apply fisheye lenses to them to create wider-angle visibility with a trade-off in precision of representation. This might be useful for certain tasks. A study of different types of gestures for interacting with objects inside Whale Tank VR, using an extension to our vision-based touch system, is also a possible direction for future work, but at the moment there are hardware limitations that have to be overcome. Meanwhile, other gesture-related interface problems, such as menu invocation, text typing, and switching viewing modes, can be explored. Furthermore, it would be interesting to study the scalability of Whale Tank VR and how it will handle additional users. This study can be done after investigating the optimal size of the viewport.

Bibliography

Agrawala, M., Beers, A., Fröhlich, B., Hanrahan, P., McDowall, I., Bolas, M. (1997). The two-user responsive workbench: support for collaboration through individual views of a shared space. In proceedings of SIGGRAPH '97, ACM Press, p. 327-332.

Arthur, K.W. (1993). 3D task performance using head-coupled stereo displays. M.Sc. Thesis. University of British Columbia.

Arthur, K.W., Booth, K.S. and Ware, C. (1993). Evaluating 3D task performance for fish tank virtual worlds. ACM Transactions on Information Systems, Vol. 11, Issue 3, p. 239-265.

Arthur, K.W., Preston, T., Taylor, R.M., Brooks, F.P., Whitton, M.C., Wright, W.V. (1998). Designing and building the PIT: a head-tracked stereo workspace for two users. Technical Report TR98-015, University of North Carolina at Chapel Hill.

Ball, R., North, C., and Bowman, D. A. (2007). Move to improve: promoting physical navigation to increase user performance with large displays. In proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '07; p. 191-200.

Blanchard, C., Burgess, S., Harvill, Y., Lanier, J., Lasko, A., Oberman, M., Teitel, M. (1990). Reality built for two: A virtual reality tool. In proceedings of the 1990 Symposium on Interactive 3D Graphics, p. 35-36.

Bowman, D., Datey, A., Ryu, Y., Farooq, U., Vasnaik, O. (2002). Empirical comparison of human behavior and performance with different display devices for virtual environments. In proceedings of the Human Factors and Ergonomics Society Annual Meeting; p. 2134-2138.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. 2nd edition.

Cruz-Neira, C., Sandin, D.J., DeFanti, T.A., Kenyon, R.V., Hart, J.C. (1992). The CAVE: Audio visual experience automatic virtual environment. Communications of the ACM, Vol. 35, No. 6, p. 65-72.

Czerwinski, M., Smith, G., Regan, T., Meyers, B., Robertson, G., & Starkweather, G. (2003). Toward characterizing the productivity benefits of very large displays. In proceedings of the INTERACT '03 Conference on Human-Computer Interaction, p. 9-16.

Demiralp, C., Laidlaw, D.H., Jackson, C., Keefe, D., Zhang, S. (2003). Subjective usefulness of CAVE and Fish Tank VR display systems for a scientific visualization application. In proceedings of the 14th IEEE Visualization 2003 (VIS '03); p. 86.

Fröhlich, B., Hochstrate, J., Hoffmann, J., Klüger, K., Blach, R., Bues, M., Stefani, O. (2005). Implementing multi-viewer stereo displays. WSCG 2005; p. 139-146.

Hawkey, K., Kellar, M., Reilly, D., Whalen, T., Inkpen, K. M. (2005). The proximity factor: impact of distance on co-located collaboration. In proceedings of the 2005 International ACM SIGGROUP Conference on Supporting Group Work, GROUP '05; p. 31-40.

Higgins, J.J. (2003). An Introduction to Modern Nonparametric Statistics. Duxbury Press.

Huck, S.W. (2008). Reading statistics and research. Pearson Education Inc., 5th edition.

Mulder, J. D., van Liere, R. (2000). Enhancing Fish Tank VR. In proceedings of the IEEE Virtual Reality 2000 Conference; p. 91-98.

Ni, T., Bowman, D. A., and Chen, J. (2006). Increased display size and resolution improve task performance in information-rich virtual environments. In proceedings of Graphics Interface 2006, ACM International Conference Proceeding Series, Vol. 137; p. 139-146.

Qi, W., Taylor II, R.M., Healey, C.G., Martens, J.-B. (2006). A comparison of immersive HMD, Fish Tank VR and Fish Tank with haptics displays for volume visualization. In proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization, ACM International Conference Proceeding Series, Vol. 153, p. 51-58.

Schulze, J., Forsberg, A., Kleppe, A., Zeleznik, R., Laidlaw, D. H. (2005). Characterizing the effect of level of immersion on a 3D marking task. In proceedings of HCI International, p. 447-452.

Simon, A., Scholz, S. (2005). Multi-viewpoint images for multi-user interaction. In proceedings of the IEEE 2005 Conference on Virtual Reality; p. 107-113.

Simon, A. (2005). First-person experience and usability of co-located interaction in a projection-based virtual environment. In proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST '05); p. 23-30.

Swindells, C., Po, B. A., Hajshirmohammadi, I., Corrie, B., Dill, J. C., Fisher, B. D., Booth, K. S. (2004). Comparing CAVE, wall, and desktop displays for navigation and wayfinding in complex 3D models. In proceedings of Computer Graphics International, p. 420-427.

Tan, D.S., Gergle, D., Scupelli, P., Pausch, R. (2006). Physically large displays improve performance on spatial tasks. ACM Transactions on Computer-Human Interaction, Vol. 13, No. 1, p. 71-99.

Ware, C., Arthur, K.W., Booth, K.S. (1993). Fish Tank Virtual Reality. In proceedings of the INTERCHI '93 Conference on Human Factors in Computing Systems, ACM, New York, p. 37-42.

Appendix A

A.1 Demographic questionnaire

Age:

Gender: male / female

Occupation (e.g., CS undergraduate student):

Do you play arcades, FPS (shooters) or any other games that require quick reaction?
every day / several times a week / few times a month / very rarely or never

Have you had any experience with touch screen interaction before?

never / few times / more than 10 times / used it daily (e.g., iPhone)

What is your primary hand?

right / left / ambidextrous

A.2 Log file example

0 nhcsh 2:15:48.236 The test has been started
0 nhcsh 2:15:48.236 Selection sequence started: true, 11 events will happen
0 nhcsh 2:15:49.190 tcp 2 03241 23140 954
0 nhcsh 2:15:49.252 tcm 1 03241 13240 1016
0 nhcsh 2:15:49.393 tcm 0 03241 03241 1157
0 nhcsh 2:15:49.596 pcd 03241 03241 1360
0 nhcsh 2:15:49.596 tcd 0 03241 03241 1360
0 nhcsh 2:15:49.596 Correct sequence achieved
0 nhcsh 2:15:52.736 gr1s u //has been selected by the program
0 nhcsh 2:15:53.252 pcp 03241 03241 5000
0 nhcsh 2:15:53.252 Sequence incorrect again
0 nhcsh 2:15:54.283 pcd 13240 03241 6047
0 nhcsh 2:15:54.658 tcp 0 13240 03241 6047
0 nhcsh 2:15:54.799 gr1r r //has been released by the program
0 nhcsh 2:15:54.877 tcm 1 13240 13240 6047
0 nhcsh 2:15:54.940 tcd 1 13240 13240 6047
0 nhcsh 2:15:54.940 Correct sequence achieved
0 nhcsh 2:15:57.533 rd5s u //has been selected by the program
0 nhcsh 2:15:58.268 pcp 13240 13240 9375
0 nhcsh 2:15:58.268 Sequence incorrect again

The log-file legend is explained in the following table:

Word 1. Natural number (0..23): user ID.
Word 2. First 2 or 3 letters (nhc or hc): non-head-coupled or head-coupled mode. Next letter (s, m or f): slow, medium or fast speed. Next letter (e, m or h): easy, medium or hard memory task.
Word 3. Time stamp (time): time of experiment.
Word 4.1. First 2 letters (tc or pc): action associated with a task cube or a pattern cube. Next letter (p, d or m): picking, dropping or moving (moving applies only to task cubes) action.
Word 4.2. First 2 letters (rd, gr, bl, yl or pp): action associated with a central scene cube of a particular color (red, green, blue, yellow or purple). Next digit (1..5): number on the cube. Next letter (s or r): the colored and numbered cube is selected or released.
Word 4.3. Text: self-explanatory event record.
Word 5.1.1*. One digit (0..4): slot # from which a task cube has been taken (after tc-).
Words 5.1.2 & 6.1.1. 5-digit code (0..4 for each digit): shows in which slot each pattern cube is currently located.
Word 5.2. One letter (u or r): user- or software-powered action.
Words 6.1.2 & 7.1.1. 5-digit code (0..4 for each digit): shows in which slot each task cube is currently located.
Words 7.1.2 & 8.1.1. Many-digit number (natural number): current shuffling game score.

* 5.1.1 means that it relates to record 4.1; there is branching in the record types.

Table A.2.1. Log file format description.

A.3 Experimental protocol

Experimental sessions normally took between 60 and 70 minutes. The protocol was as follows:

Introduction and colorblindness test (3 min): colorblindness test followed by a quick description of the lab facilities and what the experiment is about.
Consent form (3 min): consent form signing.
Instructions (10 min): participants are given detailed instructions.
Two practice rounds before condition A (2 to 6 min): practice rounds for participants to get familiarized with viewing condition A (the order of A and B differed between participants). Participants perform this practice until they complete it correctly before proceeding further.
Condition A (~18 min): 18 trials for viewing condition A.
Break with a background questionnaire (2 min): participants fill in their background experience (select check boxes).
One practice round before viewing condition B (1 to 3 min): practice round for participants to get familiarized with condition B. Participants performed this practice until they completed it correctly before proceeding further.
Condition B (~18 min): 18 trials for viewing condition B.
Feedback (5 min): participants share their experience with the researcher and answer a few questions (see Appendix I).
Total: ~65 min.

Table A.3.1. Experimental protocol.

A.4 Experimental script

<Demo trial is on the screen, head-coupled mode is on>
− Good day. Thanks for coming to our experiment.
− First, I need to make sure that you are not colorblind. Please come here and I'll test you for colorblindness.
<Online colorblindness test>
− Before we start, you also need to read and sign this consent form. If you have any questions, just ask.
<Consent form signing>
− Thank you.
− Now I'll do the explanation; feel free to interrupt and ask questions as they come up.
− Here, as you can see, we have a very large screen display. It supports touch screen functionality, with only one touch at a time.
<Asking the participant to touch several cubes on the screen>
− I'll ask you to wear this "crown". Please adjust it to the size of your head so it sits comfortably and won't fall off.
<Size adjustment>
− OK, now I'll put this marker in it.
<Marker is placed in the "crown">
− This marker's position is tracked. Its position is captured by the sensors you can see above. You can see that the picture changes as you move around, according to your position.
<Asking the participant to move around a bit>
− The range of the sensors is slightly less than 1 m or 3 ft. If you step too far back, the image will become progressively jittery and the sensors can lose alignment. Please try to avoid that during the experiment. Let me know if there is something wrong with the alignment; you can easily notice if this happens: the image changes will look very strange. If that happens, I'll restart the program.
− Come here please.
<Asking the participant to come close to the shuffling game>
− Now I'll explain what you are going to do here.
− There is a multitasking game you are going to play. It consists of two parts that you are going to play simultaneously. On the right part of the screen you can see the first part of the game, which we call the Shuffling Game. <pointing at it>
− It consists of two rows of cubes. The top row is a pattern row. The bottom row is your row, where you can move cubes around.
<Demonstrating shuffling by the user, asking the participant to try>
− You need to match it with the pattern row by moving your cubes.
− During the game, the cubes in the top row will change places from time to time. You need to move the cubes in the bottom row as fast as you can to match them again.
− On the right side you will see your score as a percentage. Every round you'll start with 100, and if the rows don't match the percentage will go down. You'll lose one percent for every 0.3 seconds of mismatch. The shuffling game goes on for 30 seconds. One important note: the rows are considered a match only when you release the cube; if you are holding it above the correct slot but haven't released it yet, it is still a mismatch, so make sure to release it quickly. <demonstrating> The speed of the game will increase as the rounds progress. Everything clear?
− Then look at the cubes in the middle.
− This is the second part of our game, which we call a memory game. During the Shuffling Game, some of these cubes will be selected… <waiting for the program to select a cube> like this.
− While you are playing the Shuffling Game, as soon as you notice a selection you must say out loud what you noticed, e.g. "blue three" or "three blue", whichever you prefer. You also need to remember which cubes were selected and in what order. At the end of the Shuffling Game, when you see the "Next" button, you'll need to come over and select the cubes in the same order in which you saw them highlighted. Is that clear?
− There will be from 1 to 5 selections per round. You won't know up front how many highlights are going to happen, so you need to pay attention.
− There is no limit on how much time you may spend on your selections, but be reasonable; otherwise you'll spend much longer playing than planned. Usually 30 seconds is enough to select the cubes if you remember them.
− Your score in the memory game will be based on the number of errors you make. Errors will always be counted in your favor. For example, if you select two correct cubes in the wrong order, that counts as a single wrong-order mistake rather than as two incorrect cubes. In general, if there are several ways to count your errors, the way that gives you the fewest errors will be used. Is that clear?
− Now I'll explain how you can get a $10 bonus. We are giving an extra $10 to the top 50% of performers. How do we determine that? The lowest-scoring 25% of participants in the Shuffling Game are eliminated. Of the remaining participants, the extra $10 goes to the two-thirds with the highest scores in the memory game.
− The idea is that to get the bonus you have to play both games as hard as possible; you can't concentrate on one part and forget about the other. You have to be reasonably good at both, so I encourage you to play both games as well as you can. Any questions?
− There will be 2 sets of short games of various difficulties. At the beginning of each set there will be practice rounds: two mandatory rounds before the first set and one before the second, to make sure you are familiar with the new condition. Additional practice rounds will happen if you make a mistake; you'll keep practicing until you get everything right. Practice rounds don't count against your score. Between the two sets there will be a short break.
− Do you have any questions?
<Answering, if any>
− Let's start then...
− Please let me know if any problems appear.
− Now we will practice just the Shuffling Game. For now, forget about the memorizing and talking-out-loud part. Just press the "Start" button and start practicing.
<30 s pure Shuffling Game practice with the demo set going on>
− Press the "Next" button now.
<First viewing condition, first practice round, video camera turned on>
− OK, now you will have a first real practice round. Don't forget that you need to say which selections you see, and before you press the "Next" button next time, you first need to select the sequence of cubes that was selected by the program. Now just press the "Start" button when you are ready and start practicing. But when you make your selections, wait for my instructions.
<1st practice round>
− One more detail: if during the selection you choose an incorrect cube for some reason, let me know and I'll reset the selection so you can reselect all the cubes.
<if practice round was not successful>
− It was incorrect; let's try again.
<repeat 1st practice round until it is successful>
<if practice round was successful>
− That was correct; now there will be another practice round. You can press the "Next" button now.
− Now there will be another practice round. Press "Start" as soon as you are ready.
<2nd practice round>
<if 2nd practice round was not successful>
− It was incorrect; let's try again.
<repeat 2nd practice round until it is successful>
<if 2nd practice round was successful>
− That was correct; now press the "Next" button. From now on the rounds will be real and the score will count. You no longer have to wait for me to allow you to continue. Just go at your own pace and, as soon as you feel ready, press the "Start" and "Next" buttons yourself.
<18 real trials for viewing condition A>
− Stop, please. You have finished the first set, and now you'll have a small break, during which I'll ask you to fill in this questionnaire.
<Questionnaire filling>
− Thank you. Now you can have a small break.
<Break>
− Before you start the second set there will be another practice round. The instructions are the same; the only difference will be the viewing condition. Press the "Start" button as soon as you are ready.
<3rd practice round>
<if 3rd practice round was not successful>
− It was incorrect; let's try again.
<repeat 3rd practice round until it is successful>
<if 3rd practice round was successful>
− That was correct; now press the "Next" button. From now on all remaining rounds will be real and the score will count. Proceed at your own pace until the end without my instructions.
<18 real trials for viewing condition B>
− That's it. Now I'll ask you a few questions, if you don't mind.

Questions asked at the end:
− Tell me, what did you do to be most efficient in this game? What did you concentrate on the most/least?
− Explain your method of memorizing cubes. Did you try different strategies? What benefited you the most? What turned out to be a bad idea?
− Did you change your way of playing the game in the process? Under what circumstances did the change of strategy happen?
− If you were to do it again, now that you have experience, how would you play? (This question was asked only if the participant used more than one strategy during the session.)
<Writing down answers>
− Thanks for your answers. Here is your payment. Please sign this form confirming that you received your payment.
<Signing the payment form. Giving $10>
− Thank you for coming. Bye.
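For reference, the two scoring rules described in the script can be made precise. The sketch below is our illustrative reading, not the actual experiment software: the shuffling score starts at 100 and loses one point per 0.3 s of mismatch, and the recall error count is modeled as a minimal edit distance in which a wrong cube, a missed or extra selection, and an adjacent wrong-order swap each cost one error, so the interpretation most favorable to the participant is always chosen.

def shuffling_score(mismatch_seconds):
    """Shuffling Game score per the script: start at 100, lose one
    percent for every 0.3 s during which the rows do not match."""
    return max(0, 100 - int(mismatch_seconds / 0.3))

def recall_errors(highlighted, selected):
    """Minimal error count between the program's highlight sequence and
    the participant's selections.  Substitutions, insertions, deletions,
    and adjacent swaps each cost one error, so errors are counted 'in
    the participant's favor' as the script describes.  This is one
    plausible formalization (a Damerau-Levenshtein distance); the thesis
    does not spell out the exact algorithm."""
    n, m = len(highlighted), len(selected)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if highlighted[i - 1] == selected[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # missed selection
                          d[i][j - 1] + 1,          # extra selection
                          d[i - 1][j - 1] + cost)   # wrong cube
            if (i > 1 and j > 1
                    and highlighted[i - 1] == selected[j - 2]
                    and highlighted[i - 2] == selected[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # wrong order
    return d[n][m]

# Two correct cubes selected in the wrong order count as one error, not two:
assert recall_errors(["blue 3", "red 5"], ["red 5", "blue 3"]) == 1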
Appendix B: Statistical tables

Speed | Complexity | N | Mean | Std. Dev. | Absolute | Positive | Negative | Kolmogorov-Smirnov Z | Asymp. Sig. (2-tailed)
Slow | Easy(1) | 72 | 0.8514 | 0.09329 | 0.146 | 0.102 | -0.146 | 1.239 | 0.093
Slow | Med.(3) | 72 | 0.8574 | 0.07815 | 0.115 | 0.090 | -0.115 | 0.978 | 0.294
Slow | Hard(5) | 72 | 0.8253 | 0.11992 | 0.151 | 0.128 | -0.151 | 1.282 | 0.075
Medium | Easy(1) | 72 | 0.7539 | 0.12266 | 0.122 | 0.074 | -0.122 | 1.033 | 0.237
Medium | Med.(3) | 72 | 0.7079 | 0.15382 | 0.156 | 0.077 | -0.156 | 1.326 | 0.059
Medium | Hard(5) | 72 | 0.6586 | 0.16923 | 0.124 | 0.075 | -0.124 | 1.055 | 0.215
Fast | Easy(1) | 72 | 0.6018 | 0.15669 | 0.088 | 0.074 | -0.088 | 0.750 | 0.627
Fast | Med.(3) | 72 | 0.5324 | 0.15305 | 0.106 | 0.106 | -0.055 | 0.900 | 0.392
Fast | Hard(5) | 72 | 0.4837 | 0.15110 | 0.080 | 0.080 | -0.071 | 0.677 | 0.750

Normal parameters (a,b) are the Mean and Std. Dev.; Absolute/Positive/Negative are the most extreme differences.
a. Test distribution is Normal.
b. Calculated from data.

Table B.1a. One-sample Kolmogorov-Smirnov test for shuffling game scores (head-coupled condition).

Speed | Complexity | N | Mean | Std. Dev. | Absolute | Positive | Negative | Kolmogorov-Smirnov Z | Asymp. Sig. (2-tailed)
Slow | Easy(1)* | 72 | 0.8599 | 0.09114 | 0.171 | 0.134 | -0.171 | 1.449 | 0.030*
Slow | Med.(3) | 72 | 0.8653 | 0.06930 | 0.135 | 0.097 | -0.135 | 1.149 | 0.143
Slow | Hard(5)* | 72 | 0.8414 | 0.13044 | 0.240 | 0.189 | -0.240 | 2.040 | < 0.001*
Medium | Easy(1) | 72 | 0.7583 | 0.11883 | 0.131 | 0.087 | -0.131 | 1.108 | 0.171
Medium | Med.(3) | 72 | 0.7192 | 0.13323 | 0.107 | 0.060 | -0.107 | 0.907 | 0.383
Medium | Hard(5) | 72 | 0.6729 | 0.17194 | 0.131 | 0.073 | -0.131 | 1.115 | 0.166
Fast | Easy(1) | 72 | 0.5872 | 0.15024 | 0.106 | 0.096 | -0.106 | 0.903 | 0.389
Fast | Med.(3) | 72 | 0.5399 | 0.15317 | 0.121 | 0.061 | -0.121 | 1.026 | 0.243
Fast | Hard(5) | 72 | 0.4800 | 0.15249 | 0.078 | 0.078 | -0.045 | 0.662 | 0.774

a. Test distribution is Normal.
b. Calculated from data.
* The data is not distributed normally.

Table B.1b. One-sample Kolmogorov-Smirnov test for shuffling game scores (non-head-coupled condition).
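The following is a minimal sketch of how a normality check in the style of Tables B.1a and B.1b can be reproduced: a one-sample Kolmogorov-Smirnov test against a normal distribution whose mean and standard deviation are estimated from the sample. It uses SciPy rather than the statistics package that produced the thesis tables, and the variable names are illustrative.

import numpy as np
from scipy import stats

def ks_normality(scores, alpha=0.05):
    """One-sample K-S test against Normal(mean, sd) estimated from the data.
    Note: estimating the parameters from the same sample makes the test
    conservative (the Lilliefors issue), matching the SPSS-style procedure
    rather than a corrected one."""
    scores = np.asarray(scores, dtype=float)
    mean, sd = scores.mean(), scores.std(ddof=1)
    stat, p = stats.kstest(scores, "norm", args=(mean, sd))
    return {"n": scores.size, "mean": mean, "sd": sd,
            "D": stat, "p": p, "normal": p > alpha}

# Synthetic data standing in for one cell of the design (n = 72):
rng = np.random.default_rng(0)
fake_cell = rng.normal(loc=0.85, scale=0.09, size=72)  # hypothetical scores
print(ks_normality(fake_cell))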
Viewing condition | Speed | Complexity | N | Min | Max | Mean | Std. Dev. | Skewness (Statistic) | Skewness (Std. Error) | Kurtosis (Statistic) | Kurtosis (Std. Error)
HC | Slow | Easy(1) | 72 | 0.42 | 0.97 | 0.85 | 0.09 | -1.95 | 0.28 | 6.32 | 0.56
HC | Slow | Med.(3) | 72 | 0.54 | 0.97 | 0.86 | 0.08 | -1.66 | 0.28 | 4.65 | 0.56
HC | Slow | Hard(5) | 72 | 0.23 | 0.97 | 0.83 | 0.12 | -2.24 | 0.28 | 7.86 | 0.56
HC | Medium | Easy(1) | 72 | 0.41 | 0.94 | 0.75 | 0.12 | -0.78 | 0.28 | -0.05 | 0.56
HC | Medium | Med.(3) | 72 | 0.26 | 0.95 | 0.71 | 0.15 | -1.05 | 0.28 | 0.93 | 0.56
HC | Medium | Hard(5) | 72 | 0.26 | 0.92 | 0.66 | 0.17 | -0.7 | 0.28 | -0.37 | 0.56
HC | Fast | Easy(1) | 72 | 0.31 | 0.94 | 0.6 | 0.16 | -0.01 | 0.28 | -0.91 | 0.56
HC | Fast | Med.(3) | 72 | 0.2 | 0.91 | 0.53 | 0.15 | 0.17 | 0.28 | -0.33 | 0.56
HC | Fast | Hard(5) | 72 | 0.18 | 0.89 | 0.48 | 0.15 | 0.23 | 0.28 | -0.5 | 0.56
NHC | Slow | Easy(1) | 72 | 0.45 | 0.97 | 0.86 | 0.09 | -2.11 | 0.28 | 5.8 | 0.56
NHC | Slow | Med.(3) | 72 | 0.52 | 0.97 | 0.87 | 0.07 | -2.1 | 0.28 | 7.93 | 0.56
NHC | Slow | Hard(5) | 72 | 0.19 | 0.96 | 0.84 | 0.13 | -3.26 | 0.28 | 12.63 | 0.56
NHC | Medium | Easy(1) | 72 | 0.36 | 0.95 | 0.76 | 0.12 | -1.11 | 0.28 | 1.41 | 0.56
NHC | Medium | Med.(3) | 72 | 0.29 | 0.96 | 0.72 | 0.13 | -0.9 | 0.28 | 1.34 | 0.56
NHC | Medium | Hard(5) | 72 | 0.2 | 0.94 | 0.67 | 0.17 | -0.84 | 0.28 | 0.11 | 0.56
NHC | Fast | Easy(1) | 72 | 0.24 | 0.9 | 0.59 | 0.15 | -0.18 | 0.28 | -0.5 | 0.56
NHC | Fast | Med.(3) | 72 | 0.22 | 0.81 | 0.54 | 0.15 | -0.21 | 0.28 | -0.92 | 0.56
NHC | Fast | Hard(5) | 72 | 0.11 | 0.83 | 0.48 | 0.15 | 0.06 | 0.28 | -0.26 | 0.56
Valid N (listwise): 72

Table B.2. Descriptive statistics for shuffling game scores.

Within Subjects Effect | Mauchly's W | df | Significance | Greenhouse-Geisser's ε
Viewing Condition | 1.0 | - | - | 1.0
Speed (1) | 0.9 | 2 | 0.03 (1) | 0.91
Complexity (1) | 0.86 | 2 | 0.004 (1) | 0.87
ViewCond * Speed | 0.99 | 2 | 0.73 | 0.99
ViewCond * Complexity (1) | 0.91 | 2 | 0.03 (1) | 0.91
Speed * Complexity (1) | 0.71 | 9 | 0.005 (1) | 0.88
ViewCond * Speed * Complexity (1) | 0.68 | 9 | 0.002 (1) | 0.85

(1) Sphericity assumption is violated.

Table B.3. Mauchly's test of sphericity (shuffling game scores).

Source | Correction | df | Mean Square | F | Sig. | Partial η² | Observed Power (a)
Viewing Condition | Sphericity Assumed | 1 | 0.011 | 0.59 | 0.447 | 0.008 | 0.117
Error (Viewing Condition) | Sphericity Assumed | 71 | 0.018 | | | |
Speed | Greenhouse-Geisser | 1.823 | 11.630 | 732.23 | < 0.001 | 0.912 | 1.000
Error (Speed) | Greenhouse-Geisser | 129.43 | 0.016 | | | |
Complexity | Greenhouse-Geisser | 1.75 | 0.703 | 100.39 | < 0.001 | 0.586 | 1.000
Error (Complexity) | Greenhouse-Geisser | 124.01 | 0.007 | | | |
ViewCond*Speed | Sphericity Assumed | 2 | 0.007 | 0.94 | 0.392 | 0.013 | 0.211
Error (ViewCond*Speed) | Sphericity Assumed | 142 | 0.008 | | | |
ViewCond*Complexity | Greenhouse-Geisser | 1.826 | 0.004 | 0.64 | 0.513 | 0.009 | 0.151
Error (ViewCond*Complexity) | Greenhouse-Geisser | 129.68 | 0.005 | | | |
Speed*Complexity | Greenhouse-Geisser | 3.514 | 0.096 | 16.87 | < 0.001 | 0.192 | 1.000
Error (Speed*Complexity) | Greenhouse-Geisser | 249.46 | 0.006 | | | |
ViewCond*Speed*Complexity | Greenhouse-Geisser | 3.4 | 0.002 | 0.27 | 0.870 | 0.004 | 0.105
Error (ViewCond*Speed*Complexity) | Greenhouse-Geisser | 241.45 | 0.006 | | | |

(a) Computed using α = 0.05.

Table B.4. Tests of within-subjects effects (shuffling game scores).

(I) Speed | (J) Speed | Mean difference (I - J) | Std. Error | Sig. (a) | 95% CI Lower Bound (a) | 95% CI Upper Bound (a)
Slow | Medium | 0.138 | 0.008 | < 0.001 | 0.119 | 0.157
Slow | Fast | 0.313 | 0.009 | < 0.001 | 0.29 | 0.336
Medium | Slow | -0.138 | 0.008 | < 0.001 | -0.157 | -0.119
Medium | Fast | 0.174 | 0.007 | < 0.001 | 0.156 | 0.192
Fast | Slow | -0.313 | 0.009 | < 0.001 | -0.336 | -0.29
Fast | Medium | -0.174 | 0.008 | < 0.001 | -0.192 | -0.156

(a) Bonferroni adjustment applied (α level set to 0.0167).

Table B.5. Pairwise comparisons for shuffling game speed levels.

(I) Complexity | (J) Complexity | Mean difference (I - J) | Std. Error | Sig. (a) | 95% CI Lower Bound (a) | 95% CI Upper Bound (a)
Easy | Medium | 0.032 | 0.005 | < 0.001 | 0.02 | 0.043
Easy | Hard | 0.075 | 0.006 | < 0.001 | 0.06 | 0.09
Medium | Easy | -0.032 | 0.005 | < 0.001 | -0.043 | -0.02
Medium | Hard | 0.043 | 0.005 | < 0.001 | 0.031 | 0.055
Hard | Easy | -0.075 | 0.006 | < 0.001 | -0.09 | -0.06
Hard | Medium | -0.043 | 0.005 | < 0.001 | -0.055 | -0.031

(a) Bonferroni adjustment applied (α level set to 0.0167).

Table B.6. Pairwise comparisons for awareness-and-recall game complexity levels.

Viewing condition | Speed | Complexity | N | Mean | Std. Dev. | Absolute | Positive | Negative | Kolmogorov-Smirnov Z | Asymp. Sig. (2-tailed)
HC | Slow | Easy(1) | 72 | 0.97 | 0.17 | 0.539 | 0.433 | -0.539 | 4.572 | < 0.001*
HC | Slow | Med.(3) | 72 | 0.92 | 0.21 | 0.482 | 0.351 | -0.482 | 4.094 | < 0.001*
HC | Slow | Hard(5) | 72 | 0.58 | 0.28 | 0.147 | 0.147 | -0.135 | 1.247 | 0.089
HC | Medium | Easy(1) | 72 | 1.0 | 0 | (c) | | | |
HC | Medium | Med.(3) | 72 | 0.88 | 0.23 | 0.453 | 0.297 | -0.453 | 3.843 | < 0.001*
HC | Medium | Hard(5) | 72 | 0.58 | 0.25 | 0.216 | 0.216 | -0.170 | 1.831 | 0.002*
HC | Fast | Easy(1) | 72 | 0.99 | 0.12 | 0.533 | 0.453 | -0.533 | 4.523 | < 0.001*
HC | Fast | Med.(3) | 72 | 0.75 | 0.3 | 0.327 | 0.201 | -0.327 | 2.773 | < 0.001*
HC | Fast | Hard(5) | 72 | 0.45 | 0.24 | 0.161 | 0.151 | -0.161 | 1.369 | 0.047*
NHC | Slow | Easy(1) | 72 | 0.94 | 0.23 | 0.540 | 0.405 | -0.540 | 4.579 | < 0.001*
NHC | Slow | Med.(3) | 72 | 0.94 | 0.15 | 0.496 | 0.338 | -0.496 | 4.207 | < 0.001*
NHC | Slow | Hard(5) | 72 | 0.6 | 0.23 | 0.185 | 0.148 | -0.185 | 1.570 | 0.014*
NHC | Medium | Easy(1) | 72 | 1.0 | 0 | (c) | | | |
NHC | Medium | Med.(3) | 72 | 0.86 | 0.25 | 0.434 | 0.288 | -0.434 | 3.683 | < 0.001*
NHC | Medium | Hard(5) | 72 | 0.56 | 0.29 | 0.147 | 0.119 | -0.147 | 1.249 | 0.088
NHC | Fast | Easy(1) | 72 | 0.93 | 0.26 | 0.537 | 0.393 | -0.537 | 4.561 | < 0.001*
NHC | Fast | Med.(3) | 72 | 0.74 | 0.27 | 0.263 | 0.170 | -0.263 | 2.232 | < 0.001*
NHC | Fast | Hard(5) | 72 | 0.5 | 0.25 | 0.153 | 0.153 | -0.153 | 1.302 | 0.067

a. Test distribution is Normal.
b. Calculated from data.
c. The distribution has no variance for this variable, thus the one-sample Kolmogorov-Smirnov test cannot be performed.
* The data is not distributed normally.

Table B.7. One-sample Kolmogorov-Smirnov test for recall scores.
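A minimal sketch of how Bonferroni-adjusted pairwise comparisons such as those in Tables B.5 and B.6 (and B.11 and B.12 below) can be computed with paired t-tests follows. This is illustrative rather than the original analysis code; the dictionary layout of per-participant means is our assumption.

from itertools import combinations
import numpy as np
from scipy import stats

def pairwise_bonferroni(levels, alpha=0.05):
    """Paired t-tests over all pairs of factor levels with a Bonferroni-
    adjusted alpha.  `levels` maps each level name to the per-participant
    mean scores, aligned by participant."""
    pairs = list(combinations(levels, 2))
    adjusted_alpha = alpha / len(pairs)   # e.g. 0.05 / 3 = 0.0167 for 3 levels
    results = []
    for a, b in pairs:
        x, y = np.asarray(levels[a]), np.asarray(levels[b])
        t, p = stats.ttest_rel(x, y)
        results.append({"pair": (a, b),
                        "mean_diff": float((x - y).mean()),
                        "t": float(t), "p": float(p),
                        "significant": p < adjusted_alpha})
    return adjusted_alpha, results

# Hypothetical per-participant means for the three speed levels (n = 72):
rng = np.random.default_rng(1)
speeds = {"slow": rng.normal(0.84, 0.08, 72),
          "medium": rng.normal(0.70, 0.10, 72),
          "fast": rng.normal(0.53, 0.12, 72)}
print(pairwise_bonferroni(speeds))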
Viewing condition | Speed | Complexity | N | Min | Max | Mean | Std. Dev. | Skewness (Statistic) | Skewness (Std. Error) | Kurtosis (Statistic) | Kurtosis (Std. Error)
HC | Slow | Easy(1) | 72 | 0 | 1.00 | 0.97 | 0.17 | -5.870 | 0.28 | 33.38 | 0.56
HC | Slow | Med.(3) | 72 | 0 | 1.00 | 0.92 | 0.21 | -3.149 | 0.28 | 10.62 | 0.56
HC | Slow | Hard(5) | 72 | 0 | 1.00 | 0.58 | 0.28 | -0.121 | 0.28 | -0.72 | 0.56
HC | Medium | Easy(1) | 72 | 1.00 | 1.00 | 1.0 | 0 | - | - | - | -
HC | Medium | Med.(3) | 72 | 0.33 | 1.00 | 0.89 | 0.23 | -1.642 | 0.28 | 1.25 | 0.56
HC | Medium | Hard(5) | 72 | 0.20 | 1.00 | 0.58 | 0.25 | 0.172 | 0.28 | -1.11 | 0.56
HC | Fast | Easy(1) | 72 | 0 | 1.00 | 0.99 | 0.12 | -8.485 | 0.28 | 72.0 | 0.56
HC | Fast | Med.(3) | 72 | 0 | 1.00 | 0.75 | 0.3 | -0.722 | 0.28 | -0.85 | 0.56
HC | Fast | Hard(5) | 72 | 0 | 1.00 | 0.45 | 0.24 | 0.078 | 0.28 | -0.021 | 0.56
NHC | Slow | Easy(1) | 72 | 0 | 1.00 | 0.94 | 0.23 | -3.964 | 0.28 | 14.1 | 0.56
NHC | Slow | Med.(3) | 72 | 0.33 | 1.00 | 0.94 | 0.15 | -2.403 | 0.28 | 5.33 | 0.56
NHC | Slow | Hard(5) | 72 | 0 | 1.00 | 0.61 | 0.23 | -0.270 | 0.28 | -0.39 | 0.56
NHC | Medium | Easy(1) | 72 | 1.00 | 1.00 | 1.0 | 0 | - | - | - | -
NHC | Medium | Med.(3) | 72 | 0 | 1.00 | 0.86 | 0.25 | -1.655 | 0.28 | 1.705 | 0.56
NHC | Medium | Hard(5) | 72 | 0 | 1.00 | 0.56 | 0.29 | -0.084 | 0.28 | -0.93 | 0.56
NHC | Fast | Easy(1) | 72 | 0 | 1.00 | 0.93 | 0.26 | -3.460 | 0.28 | 10.26 | 0.56
NHC | Fast | Med.(3) | 72 | 0 | 1.00 | 0.74 | 0.27 | -0.721 | 0.28 | -0.25 | 0.56
NHC | Fast | Hard(5) | 72 | 0 | 1.00 | 0.5 | 0.25 | 0 | 0.28 | -0.82 | 0.56
Valid N (listwise): 72

Table B.8. Descriptive statistics for recall scores.

Within Subjects Effect | Mauchly's W | df | Significance | Greenhouse-Geisser's ε
ViewingCondition | 1.00 | - | - | 1.00
Speed | 0.975 | 2 | 0.409 | 0.98
Complexity (1) | 0.917 | 2 | 0.048 (1) | 0.92
ViewingCondition*Speed | 0.981 | 2 | 0.514 | 0.98
ViewingCondition*Complexity | 0.998 | 2 | 0.928 | 0.99
Speed*Complexity | 0.835 | 9 | 0.184 | 0.92
ViewingCondition*Speed*Complexity (1) | 0.724 | 9 | 0.008 (1) | 0.87

(1) Sphericity assumption is violated.

Table B.9. Mauchly's test of sphericity (recall scores).

Source | Correction | df | Mean Square | F | Sig. | Partial η² | Observed Power (a)
ViewingCondition | Sphericity Assumed | 1 | 0.01 | 0.13 | 0.719 | 0.002 | 0.065
Error (ViewingCondition) | Sphericity Assumed | 71 | 0.04 | | | |
Speed (1) | Sphericity Assumed | 2 | 1.33 | 29.09 | < 0.001 | 0.291 | 1.000
Error (Speed) | Sphericity Assumed | 142 | 0.05 | | | |
Complexity (1) | Greenhouse-Geisser | 1.85 | 22.4 | 307.59 | < 0.001 | 0.812 | 1.000
Error (Complexity) | Greenhouse-Geisser | 131.12 | 0.07 | | | |
ViewingCondition*Speed | Sphericity Assumed | 2 | 0.01 | 0.12 | 0.886 | 0.002 | 0.068
Error (ViewCond*Speed) | Sphericity Assumed | 142 | 0.05 | | | |
ViewingCondition*Complexity | Sphericity Assumed | 2 | 0.06 | 1.31 | 0.273 | 0.018 | 0.280
Error (ViewCond*Complexity) | Sphericity Assumed | 142 | 0.05 | | | |
Speed*Complexity (1) | Sphericity Assumed | 4 | 0.32 | 8.86 | < 0.001 | 0.111 | 0.999
Error (Speed*Complexity) | Sphericity Assumed | 284 | 0.04 | | | |
ViewCond*Speed*Complexity | Greenhouse-Geisser | 3.47 | 0.04 | 0.71 | 0.57 | 0.01 | 0.212
Error (ViewCond*Speed*Complexity) | Greenhouse-Geisser | 246.51 | 0.06 | | | |

(a) Computed using α = 0.05.
(1) The difference between means is statistically significant at the α = 0.05 level.

Table B.10. Tests of within-subjects effects (recall scores).
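The omnibus tests in Tables B.4 and B.10 are repeated-measures ANOVAs with Greenhouse-Geisser correction applied where sphericity is violated. Below is a minimal one-factor sketch using the pingouin package; this is our choice of library, not the software used for the thesis analyses, and the column names and synthetic data are assumptions for illustration only.

import numpy as np
import pandas as pd
import pingouin as pg

# Build a long-format data frame: one row per participant x speed level.
# The numbers are synthetic stand-ins, not the thesis data.
rng = np.random.default_rng(2)
records = []
for pid in range(72):
    base = rng.normal(0, 0.05)          # per-participant offset
    for speed, mu in [("slow", 0.94), ("medium", 0.81), ("fast", 0.72)]:
        records.append({"pid": pid, "speed": speed,
                        "recall": float(np.clip(mu + base + rng.normal(0, 0.1), 0, 1))})
df = pd.DataFrame(records)

# Mauchly's test of sphericity (cf. Tables B.3 and B.9):
print(pg.sphericity(df, dv="recall", within="speed", subject="pid"))

# Repeated-measures ANOVA; correction=True reports Greenhouse-Geisser-
# corrected p-values alongside the uncorrected ones (cf. Tables B.4/B.10):
aov = pg.rm_anova(data=df, dv="recall", within="speed", subject="pid",
                  correction=True, detailed=True)
print(aov)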
(I) Speed | (J) Speed | Mean Difference (I-J) | Std. Error | Sig. (a) | 95% CI Lower Bound (a) | 95% CI Upper Bound (a)
slow | medium | 0.013 | 0.013 | 1.000 | -0.020 | 0.046
slow | fast | 0.102 | 0.015 | < 0.001 | 0.064 | 0.140
medium | slow | -0.013 | 0.013 | 1.000 | -0.046 | 0.020
medium | fast | 0.089 | 0.015 | < 0.001 | 0.053 | 0.125
fast | slow | -0.102 | 0.015 | < 0.001 | -0.140 | -0.064
fast | medium | -0.089 | 0.015 | < 0.001 | -0.125 | -0.053

(a) Bonferroni adjustment applied (α level set to 0.0167).

Table B.11. Pairwise comparisons for recall scores between speed levels.

(I) Complexity | (J) Complexity | Mean Difference (I-J) | Std. Error | Sig. (a) | 95% CI Lower Bound (a) | 95% CI Upper Bound (a)
easy (1) | medium (3) | 0.126 | 0.015 | < 0.001 | 0.088 | 0.164
easy (1) | hard (5) | 0.426 | 0.02 | < 0.001 | 0.377 | 0.475
medium (3) | easy (1) | -0.126 | 0.015 | < 0.001 | -0.164 | -0.088
medium (3) | hard (5) | 0.300 | 0.017 | < 0.001 | 0.257 | 0.343
hard (5) | easy (1) | -0.426 | 0.02 | < 0.001 | -0.475 | -0.377
hard (5) | medium (3) | -0.300 | 0.017 | < 0.001 | -0.343 | -0.257

(a) Bonferroni adjustment applied (α level set to 0.0167).

Table B.12. Pairwise comparisons for recall scores between complexity levels.

Appendix C: Ethics certificate

An exact copy of the certificate is on the following pages.

The University of British Columbia
Office of Research Services
Behavioural Research Ethics Board
Suite 102, 6190 Agronomy Road, Vancouver, B.C. V6T 1Z3

CERTIFICATE OF APPROVAL - MINIMAL RISK AMENDMENT

PRINCIPAL INVESTIGATOR: Kellogg S. Booth
DEPARTMENT: UBC/Science/Computer Science
UBC BREB NUMBER: H03-80151

INSTITUTION(S) WHERE RESEARCH WILL BE CARRIED OUT:
Institution: UBC
Site: Vancouver (excludes UBC Hospital)
Other locations where the research will be conducted: N/A

CO-INVESTIGATOR(S):
Joanna McGrenere, Regan Mandryk, Wei You, Martin Matthias Finke, Sheelagh Carpendale, Lyn Bartram, Mark Hancock, Mani Golparr Fard, Madhav Nepal, Colin Swindells, Petra Neumann, J. Karen Parker, Joel Lanir, Mike Blackstock, Barry A. Po, Rodger J. Lea, Melanie Tory, Rachel A. Pottinger, Sheryl AS Staub-French, Tamara Munzner, Evgeny Maksakov, Kirstie Ann Hawkey, Anthony Tang, Garth Shoemaker, Lu Yu, Sidney S. Fels

SPONSORING AGENCIES:
Canada Foundation for Innovation
Natural Sciences and Engineering Research Council of Canada (NSERC) - "Collaborative Visualization and Interaction in Ubiquitous Computing Environments"

PROJECT TITLE: ARTIFACT: Advanced Research, Techniques, and Informatics for Future Advantages in Construction Technology

Expiry Date - Approval of an amendment does not change the expiry date on the current UBC BREB approval of this study. An application for renewal is required on or before: June 6, 2009

AMENDMENT APPROVAL DATE: September 26, 2008

AMENDMENT(S):
Document Name | Version | Date
Consent Forms: Phase 3 (touch screen) consent form | 1.0 | September 15, 2008
Advertisements: Phase 3 (touch screen) email recruiting ad | 1.0 | September 15, 2008

The amendment(s) and the document(s) listed above have been reviewed and the procedures were found to be acceptable on ethical grounds for research involving human subjects.

Approval is issued on behalf of the Behavioural Research Ethics Board and signed electronically by one of the following:
Dr. M. Judith Lynam, Chair
Dr. Ken Craig, Chair
Dr. Jim Rupert, Associate Chair
Dr. Laurie Ford, Associate Chair
Dr. Daniel Salhani, Associate Chair
Dr. Anita Ho, Associate Chair
