Exploring the Design Space for Concurrent Use of Personal and Large Displays for In-Home Collaboration

by

Nicole Arksey

B.Sc., University of British Columbia, 2004

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Master of Science
in
The Faculty of Graduate Studies
(Computer Science)

The University of British Columbia
August 2007
© Nicole Arksey 2007

Abstract

Recent technology improvements have led to two trends: larger display screens in the home and more personal computing devices (with displays) being used in the home. We believe these two trends will converge. We want to understand the implications and possibilities of using a large shared display in combination with a small personal display for a variety of in-home applications.

This thesis addresses three questions. First, can multiple users work on loosely coupled tasks on a single shared large display? Second, if users are able to work in parallel on a single display, what is the impact of adding a personal display to the large shared display for collaborative tasks? Finally, for those applications that utilize a small personal display and a large display, how difficult is it for users to switch their attention between the displays? We completed a pilot study, a main study and a follow-up study to answer these questions. Subsequently we utilized the results to design and develop the Family Blog, a collaborative application using mobile phones and a large shared display.

The results from our pilot study show that users are able to share a large display for loosely coupled tasks and suggest that personally relevant objects should be placed together relative to a user's seated position. Our main study demonstrates that users are able to use both a personal display and a large display for varying levels of coupling in different tasks, but that each task should utilize a single display for the majority of the task activity, and that viewing and selecting media and completing collaborative tasks should be done on the shared large display rather than on the personal display. The results from our follow-up study indicate that while using both a personal and a large display, users are able to switch their attention between the two displays without difficulty. Based on these findings, we built an application, the Family Blog, that allows users to create photos, video, text, and audio files on a mobile phone, and then upload them to create a video blog of the shared photos on the large display. The Family Blog utilizes and validates our results and design guidelines from the studies.
Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Introduction
1.1 Motivation
1.2 The Research Process
1.3 Research Contributions of Thesis
1.4 Overview of Thesis
Related Work
2.1 Large Shared Displays for Collocated Collaboration
2.1.1 Electronic Whiteboard Displays
2.1.2 Ambient Awareness Displays
2.1.3 Electronic Meeting Rooms
2.2 Interacting with Large Display for Collaboration
2.2.1 Mice and Keyboards
2.2.2 Laser Pointers
2.2.3 Personal Devices
2.3 Using a Combination of Large and Small Displays to Support Collocated Collaboration
2.3.1 Transferring content
2.3.2 Private and public areas
2.3.3 Workflow transition
2.4 Summary
Sharing a large display pilot study
3.1 Method
3.2 Participants
3.3 Apparatus and Materials
3.4 Procedure
3.5 Measures
3.6 Results
3.6.1 Territoriality
3.6.2 Distraction
3.6.3 Control Affordances
3.7 Discussion
3.7.1 Territoriality
3.7.2 Distraction
3.7.3 Control Affordances
3.7.4 Using the Large shared display for loosely coupled tasks
3.8 Design Guidelines
3.8.1 Territoriality
3.8.2 Distraction
3.8.3 Control
3.9 Conclusions and Future Work
Large and Small Display User Study
4.1 Method
4.1.2 Creating Screen Shot Task
4.1.3 Selecting Screen Shots Task
4.1.4 Building the Outline Task
4.2 Participants
4.3 Apparatus and Materials
4.3.1 Design
4.3.2 Functionality
4.4 Procedure
4.5 Measures
4.7 Results
4.7.1 Display Preference
4.7.2 Effectiveness
4.3.3 Switching Attention
4.3.4 Control Policy
4.3.5 Design
4.4 Discussion
4.5 Design Guidelines
4.5.1 Personal & Large Displays
4.5.2 Control Policies
4.6 Conclusions and Future Work
Difficulty of Switching User Study
5.1 Method
5.1.1 Tasks
5.1.2 Blocks
5.1.3 Description of Image Set
5.2 Participants
5.3 Apparatus & Materials
5.4 Procedure
5.5 Measures
5.6 Hypotheses
5.7 Results
5.7.1 Recognition Task
5.7.2 Recall Task
5.7.3 Difference between Performance and Perceived Workload
5.7.4 Summary of Results
5.8 Discussion
5.9 Conclusions
Family Blog
6.1 Functionality
6.1.1 Creating media
6.1.2 Sharing media
6.1.3 Creating the video blog presentation
6.2 Design
6.2.1 Managing Screen Real-Estate on a Large Shared display
6.2.2 Shared Control
6.2.3 Distribution of tasks on the personal and shared large displays
6.3 Implementation Details
6.4 Discussion
6.5 Conclusions
Conclusions and Future Work
Bibliography
Appendix A: Consent Form for Studies
Appendix B: Ethics Certificate
Appendix C: Materials for the Pilot Study
C.1 Trip Planning Prototype
C.2 Travel Interests
C.3 Munich and the Swiss Alps Quiz for Pilot Study
C.4 London Quiz for Pilot Study
C.5 Pilot Study Questionnaire
Appendix D: Materials for the Main Study
D.1 Large and Small Display Prototype
D.2 Main Study Questionnaire
Appendix E: Study 3 Material
E.1 Questionnaire for Recall task
E.2 Questionnaire for Recognition task

List of Tables

Table 3.1. Overview table showing the four groups who participated in the study and the general regions subjects placed their window applications on the large shared display relative to where they were sitting in both the full screen mode and the three-quarter screen mode for the movie player.
Table 3.2. Summary of disruption questions showing mean scores and standard deviation for each question about disruption from questionnaire data (N=12).
Table 3.3. Summary of control questions showing mean scores and standard deviation for each question about control from questionnaire data (N=12).
Table 4.1. Overview of 3 different control policies (A, B and C) used in the Building Outline Task with brief descriptions.
Table 4.2. Overview of our task requirements and which task meets each requirement.
Table 4.4. Table showing the time to complete the Selecting the Screen Shot Task as well as the standard deviations on the Personal Display.
Table 4.5. Table showing the time to complete the Selecting the Screen Shot Task as well as the standard deviations on the Large Display.
Table 4.6. Table showing the total number of interactions in the Selecting the Screen Shot Task for each group for the personal display and the large display.
Table 4.7. Table showing each group's control policy and the total number of screen shots captured per group.
Table 4.8. Summary of questions about switching attention between the personal display and the large shared display in different tasks, showing mean scores and standard deviation for each of these questions from questionnaire data.
Table 5.1. A table of each subject's difference in scores for the performance and perceived workload for the Recognition task. A negative score indicates the subject's score was higher for the personal display rather than the large display.
Table 5.2. A table of each subject's difference in scores for the performance and perceived workload for the Recall task. A negative score indicates the subject's score was higher for the personal display rather than the large display.

List of Figures

Figure 1.1. Diagram showing the research process of this thesis.
Figure 2.1. The Notification Collage on a large shared display in a public setting showing 2 users looking at a map.
Figure 2.2. A first visualization of i-Land showing users surrounding multiple large displays in the back of the room, a tabletop display in the middle of the room and several smaller displays throughout the room.
Figure 2.3. The iRoom in use, showing 3 large displays on the wall and a group of users sitting around the table, 2 with a laptop.
Figure 2.4. The first version of PebblesDraw showing 6 users connected to the drawing application, each with their own colored cursor represented at the bottom of the display.
Figure 2.5. The Sweep technique can be used to control a cursor on a large display much like with an optical mouse. A user is moving the cell phone to move the cursor on the large display.
Figure 2.6. C-Blink in use showing a user moving their cell phone to control the red cursor on the large display.
Figure 2.7. The Remote Commander full-screen view on a PDA showing a screen shot of the personal computer on the display of the PDA.
Figure 3.1. The trip planning prototype in three-quarter screen mode showing an instance of the Map Application in the bottom right opened by the user with the blue cursor and the Map Application in the top left opened by the user with the green cursor. These window applications are overlaid on the bottom right corner and the top left corner of the Movie Player Application.
Figure 3.2. The trip planning prototype in three-quarter screen mode showing that once the Map Icon is clicked by a user the background of the Map Icon turns the same color (green) as the user's cursor (green).
Figure 3.3. The trip planning prototype in three-quarter screen mode showing the different colored borders around the Map and Tag application windows opened by two different users. The Map Application and the Tag Application on the left hand side of the display both have blue borders, and the Map Application and the Tag Application on the right hand side of the display both have green borders. The green Tag Application and the blue Map Application are both overlaid over part of the movie player.
Figure 3.4. A diagram of the large display, highlighting the four corner regions used by most users. These regions include the top-right, top-left, bottom-right, and bottom-left regions.
Figure 3.5. Bar chart of users' responses (N=12) in the questionnaire for the statement "There was enough room on the screen to perform my tasks". Most users strongly disagreed or disagreed (1 and 2) with this statement.
Figure 3.6. Bar chart of users' responses (N=12) in the questionnaire for the question "In my home, I would like to be able to do more than one task on my TV." Most users strongly agreed or agreed (5 and 4) with this statement.
Figure 3.7. Bar chart displaying results from the questionnaire for preference between the two movie screen modes, three-quarter screen mode or full-screen mode. Most users preferred the three-quarter screen mode.
Figure 3.8. Bar chart of users' responses (N=12) in the questionnaire for the statement "When others were performing their tasks, it was disturbing to me." The scores for this statement were distributed across most scores.
Figure 4.1. Nokia N80 mobile phone used for the prototype in our study. The personal display on this phone has 352 x 416 pixels and can display up to 262,144 colors.
Figure 4.2. The large shared display in the Creating Screen Shot Task showing the video playing using the full screen of the display. Due to copyright reasons, the image has been removed.
Figure 4.3. Nokia N80 mobile phone showing a screen shot captured from the video playing on the large shared display. Due to copyright reasons, the images have been removed.
Figure 4.4. Nokia N80 mobile phone display showing the interface for selecting screen shots. There are three thumbnails of the screen shots displayed underneath the currently viewed screen shot, which is displayed in the center of the screen. Arrows on the bottom left and bottom right of the display indicate that there are more screen shots to view. Due to copyright reasons, the images have been removed.
Figure 4.5. The large shared display showing three different personal areas for three different subjects. Each area has a different colored cursor, red in the left area, blue in the middle area and green in the right area. Due to copyright reasons, the images have been removed.
Figure 4.6. A personal area on the large shared display for the subject with the red cursor. There are four thumbnails of the screen shots displayed underneath the currently viewed screen shot, which is displayed in the center of the screen. Arrows on the bottom left and bottom right of the display indicate that there are more screen shots to view. Due to copyright reasons, the images have been removed.
Figure 4.7. The large shared display in the Building Outline Task using control policy A, 1 shared cursor, and control policy B, 1 shared cursor plus personal display. The shared cursor is purple. There are sixteen thumbnails of the screen shots displayed underneath the currently viewed screen shot, which is displayed in the center of the screen. Arrows on the bottom left and bottom right of the display indicate that there are more screen shots to view. Due to copyright reasons, the images have been removed.
Figure 4.8. Nokia N80 mobile phone display showing the personal display in the Building Outline Task using control policy B, 1 shared cursor plus personal display. There are three thumbnails of the screen shots displayed underneath the currently viewed screen shot, which is displayed in the center of the screen. The blue cursor indicates the currently viewed screen shot. Due to copyright reasons, the images have been removed.
Figure 4.9. The large shared display in the Building Outline Task using control policy C, 3 individual cursors, showing that each area has a different colored cursor, red in the left area, blue in the middle area and green in the right area. Arrows on the bottom left and bottom right of the personal areas show that there are more screen shots to view. Due to copyright reasons, the images have been removed.
Figure 4.10. Bar chart showing the number of subjects who prefer the Large Shared Display versus the Small Display in the Selecting Screen Shot Task. Most subjects preferred the large display.
Figure 4.11. Bar chart with error bars showing the time to complete the Selecting the Screen Shot Task using the Personal Display and the Large Display.
Figure 4.12. Bar chart with error bars showing the number of interactions taken to complete the Selecting the Screen Shot Task on the Personal Display and the Large Display.
Figure 4.13. Bar chart of subjects' responses (N=18) by different conditions in the Building the Outline Task in the questionnaire for the statement "It was difficult to switch attention between the movie and the personal display." Most subjects felt neutral or agreed (3 and 4) with this statement.
Figure 4.14. Bar chart of subjects' responses (N=18) by different conditions in the Building the Outline Task in the questionnaire for the statement "It was easy to collaborate with others to build the outline of the video." Most subjects felt neutral or agreed (3 and 4) with this statement.
Figure 4.15. Bar chart of subjects' responses (N=18) by different conditions in the Building the Outline Task in the questionnaire for the statement "I understood which screen shots I was able to control on the large display". All subjects agreed or strongly agreed (4 and 5) with this statement.
Figure 5.1. Sequence of 3 stimulus images (balloons, accordion, basket of apples) displayed on the large display without the priming sequence displayed on the large display. Each of these images is displayed for 1 second. The next image appears directly 1 second after the previous image. Due to copyright reasons, the images have been removed.
Figure 5.2. The large shared display showing 1 stimulus image (the basket of apples) with a priming sequence of 3 images (turkey, airplane, basket of apples) at the bottom center of the display. Due to copyright reasons, the images have been removed.
Figure 5.3. Nokia N80 mobile phone display showing a priming sequence of 3 images (turkey, airplane, basket of apples) at the bottom center of the display. Due to copyright reasons, the images have been removed.
Figure 5.4. The large shared display showing 1 stimulus image (the basket of apples) without a priming sequence of 3 images at the bottom center of the display. In this condition the priming sequence is displayed on the mobile phone as in Figure 5.3. Due to copyright reasons, the image has been removed.
Figure 5.5. Nokia N80 mobile phone showing the nine images displayed on the mobile phone display with numbers underneath each image which correspond to the phone's keypad. Due to copyright reasons, the images have been removed.
Figure 5.6. Four example images from the Microsoft Clip Art and Media webpage (balloons, accordion, basket of apples, bag) that were used in the study. Due to copyright reasons, the images have been removed.
Figure 5.7. Bar chart showing the individual total performance scores for the Recognition Task for both the large display and the personal display.
Figure 5.8. Bar chart showing the individual total workload scores for the Recognition Task for both the large display and the personal display.
Figure 5.9. Bar chart showing the individual total performance scores for the Recall Task for both the large display and the personal display.
Figure 5.10. Bar chart showing the individual total workload scores for the Recall Task for both the large display and the personal display.
Figure 5.11. A bar chart with error bars showing the average performance score for the Recognition Task and the Recall Task by large display and personal display. The difference between the performances of both displays is not significant for both tasks.
Figure 5.12. A bar chart with error bars showing the average perceived workload score for the Recognition Task and the Recall Task by large display and personal display. The difference between the performances of both displays is not significant for both tasks.
Figure 6.1. Nokia N80 phone showing the main menu when users first open up the Family Blog Application. Users can utilize the navigation keypad.
Figure 6.2. Large display showing the top Blog Creation Panel, the middle Media Area panel and the bottom Family Awareness panel. Currently 2 users are connected to the Family Blog. 'Mom's' area is on the left, 'Dad's' area is on the right and the family home media area is in the middle.
Figure 6.3. Blog creation panel showing 5 videos and images on the top line in the boxes. There is a grey box representing 1 audio file, which will play over the first 4 images and videos.
Figure 6.4. The large shared display showing 1 user connected to the Family Blog. The family media collection is on the left side of the Media Area panel and another user's (Mom) media collection is on the right hand side of the Media Area panel.
Figure 6.5. The large shared display showing 2 users connected to the Family Blog. The family media collection is in the middle of the Media Area panel, one user's (Mom) media collection is on the left hand side of the Media Area panel and the other user's (Dad) media collection is on the right hand side of the Media Area panel.
Figure 6.6. The large shared display showing 3 users connected to the Family Blog. The family media collection is in the top right of the Media Area panel, one user's (Mom) media collection is on the top left hand side of the Media Area panel, a second user's (Dad) media collection is on the bottom left hand side of the Media Area panel and a third user's (Girl) media collection is on the bottom right hand side of the Media Area panel.
Figure 6.7. A personal area showing the filmstrip view of the media content. The currently selected content is enlarged above the other thumbnails.
Figure 6.8. Nokia N80 mobile phone showing the screen on the phone when controlling the large display. The color of the user's cursor (red) is displayed on the top left. The area which the user is controlling (personal media) is stated at the top of the display. Additional instructions are displayed on the bottom of the display.
Figure 6.9. The Family Awareness Panel of the Family Blog on the large shared display showing 4 users, in which 2 users are connected (Mom and Dad) to the Family Blog and 2 users are not connected (Girl, Boy) to the Family Blog.
Figure 6.10. An example of two personal areas of the Family Blog from the large shared display.
Figure 6.11. The bottom view of three personal areas showing two different colored cursors. The red cursor is on the left and the blue cursor is on the right.
Figure 6.12. Diagram showing the Family Blog architecture. The database and client applications both send data to the Server.
Figure C.1.1. The trip planning prototype showing the Movie Player Application in full screen mode on the large shared display. The two application icons are located in the bottom centre of the screen and are overlaid on top of the movie. The Map Application icon is on the left and the Tag Application icon is on the right.
Figure C.1.2. The trip planning prototype showing the Movie Player Application in three-quarter screen mode on the large shared display. The two application icons are located in the bottom centre of the screen and are not overlaid on top of the movie. The Map Application icon is on the left and the Tag Application icon is on the right.
Figure C.1.3. Trip planning prototype in three-quarter screen mode showing an instance of the Map Application in the bottom right hand side of the screen. The Map Application window overlays the bottom right of the movie player.
Figure C.1.4. An example of the Map Application enlarged from the previous figure showing a map of the City of London with labels for locations of interest and a marker for one of them (Aster House Hotel).
Figure C.1.5. Trip planning prototype in three-quarter screen mode showing an instance of the Tag Application on the right hand side of the screen. The Tag Application window overlays the bottom right of the movie player.
Figure C.1.6. An example of the Tag Application enlarged from the previous figure showing a selection of tags for the City of London. The 'Paintings' tag has been selected by the user and has changed from black to red and becomes bold to indicate it has been selected.
Figure C.1.7. Trip planning prototype in three-quarter screen mode showing the Tag Application open and highlighting the resizing icon in the bottom right side of the Tag Application.
Figure C.1.8. Trip planning prototype in three-quarter screen mode showing the Tag Application with panning arrows alongside the bottom and right hand side of the Tag Application window.
Figure D.1.1. Nokia N80 mobile phone display showing a screen shot created from the video playing on the large display. The 'Capture' button is on the bottom left of the display. Due to copyright reasons, the image has been removed.
Figure D.1.2. Nokia N80 mobile phone display showing the 'Next Phase' button on the bottom right of the display. Due to copyright reasons, the image has been removed.
Figure D.1.3. Large shared display during the Selecting Screen Shot Task showing three personal areas with each individual subject's photos from the Creating Screen Shot Task. Above the three personal areas is the shared screen shot panel where screen shots are placed after users have chosen to share them. Due to copyright reasons, the images have been removed.
Figure D.1.4. The large shared display in the Selecting Screen Shot Task showing three individual users' areas on the left, middle and right. The red user's personal area is highlighted. Due to copyright reasons, the images have been removed.
Figure D.1.5. Nokia N80 mobile phone showing the display for the Selecting Screen Shot Task. The navigation keypad on the phone is used to move the cursor left or right on the mobile phone and the large shared display. Due to copyright reasons, the images have been removed.
Figure D.1.6. Nokia N80 mobile phone display for the Selecting Screen Shot Task showing that the left thumbnail screen shot has been selected by this user (blue) and is highlighted in yellow. Due to copyright reasons, the images have been removed.
Figure D.1.7. A personal area on the large shared display for the Selecting Screen Shot Task showing that the left thumbnail screen shot has been selected by this user (blue) and is highlighted in yellow. Due to copyright reasons, the images have been removed.
Figure D.1.8. Nokia N80 mobile phone in the Selecting Screen Shot Task showing the 'Share Selected' button on the bottom left and the 'Next Phase' button on the bottom right. Due to copyright reasons, the images have been removed.
Figure D.1.9. The large shared display in the Selecting Screen Shot Task showing the list of shared screen shots on the top of the display. Due to copyright reasons, the images have been removed.
Figure D.1.10. The large shared display in the Outline Building Task showing the bottom area where the shared list of screen shots is displayed in the single cursor condition. The top panel is the outline created using screen shots from the shared list below. Due to copyright reasons, the images have been removed.
Figure D.1.11. The top panel of the large shared display from Figure D.1.10 showing the outline once screen shots have been added to the outline. Underneath each screen shot is a number corresponding to its current place in the outline. Due to copyright reasons, the images have been removed.
Figure D.1.12. The bottom panel of the large shared display in the single cursor condition from Figure D.1.10 showing 3 selected screen shots which will appear in the outline. The selected screen shots have a yellow border around them. Due to copyright reasons, the images have been removed.
Figure D.1.13. The bottom panel of the large shared display in the single cursor condition from Figure D.1.10 showing the single purple cursor on the left hand side of the thumbnail. Due to copyright reasons, the images have been removed.
Figure D.1.14. Nokia N80 mobile phone in the Building Outline Task with the control policy of 1 cursor plus personal display, showing the display when controlling the large shared display. To control the phone, the 'Cell Phone' button on the bottom left is selected. Due to copyright reasons, the images have been removed.
Figure D.1.15. Nokia N80 mobile phone in the Building Outline Task with the control policy of 1 cursor plus personal display, showing the display when controlling the personal display. To control the large shared display, the 'TV' button on the bottom left is selected. Due to copyright reasons, the images have been removed.
Figure D.1.16. The large shared display in the Building Outline Task in the 3 individual cursor condition showing three personal areas with three corresponding colored cursors: red on the left, blue in the middle and green on the right. Due to copyright reasons, the images have been removed.

Acknowledgements

First, I would like to thank both of my supervisors, Dr. Kellogg Booth and Dr. Rodger Lea. I learned a great deal from both of them and appreciate all of their encouragement and support over the last two years. Thanks to Dr. Sidney Fels for agreeing to be my second reader and offering great ideas throughout this project.

I would like to thank the team at Panasonic R&D: Jean-Claude Junqua, who led the research project, Deanna Wilkes-Gibbs for her help with the design of the studies, and many others including Peter Veprek, David Kryze, Shigenori Maeda, Toshiya Naka and Masanori Nakanishi.

I am very grateful for all the support from other students. Without them, I could not have made it through this process. Thanks to Dave Temes, who listened to all my complaining over the last two years, to Tony Tang, who offered endless advice and support, and to my colleagues in the MAGIC lab, including Mike Blackstock, Matthias Finke, Nels Anderson and Amy Wei, who made coming into the lab fun and worthwhile.

Thank you to my mother, Linda McClure, an inspiration for her endless pursuit of knowledge, and to my father, Dennis Arksey, who luckily passed on his infallible work habits to me. Also to my stepparents, Donald McClure and Darlene Arksey, who over the years have become my family and never once failed to offer their support. All of my parents' love and support has enabled me to pursue this great opportunity and I thank you all.

Finally, I want to thank my wonderful husband, Damien, for all of his support and encouragement over this seemingly never-ending pursuit of higher education. This endeavor would have been much less exciting and fulfilling without you to share it with.

The research reported in this thesis was financially supported by Panasonic R&D Company of America, the Natural Sciences and Engineering Research Council of Canada through the NECTAR strategic network and the Discovery Grant program, and The University of British Columbia through the Department of Computer Science and the Media and Graphics Interdisciplinary Centre.

Nicole Arksey, University of British Columbia

Chapter 1
Introduction

With the recent advances in liquid crystal displays, plasma displays and projected televisions, consumers are able to purchase high-resolution, larger screens for their homes at reasonable prices. Currently available on the market are consumer-priced, high-definition, flat screen, 60-inch displays. The future size of displays may only be restricted by the sizes of consumers' living rooms.
These large displays are not only being used to watch movies and television shows; they are being used in combination with a home's personal computer. Products are already appearing on the market to support this use. Apple has created a device, Apple TV (Apple, 2007), enabling people to synchronize their digital media to their television displays to look at family photos and movies in high definition, listen to music, and surf the Internet to watch online videos.

Concurrent with this trend of larger and better quality displays in the home, personal devices, such as smart phones, personal gaming consoles and handheld computers, have also advanced in their capabilities, display quality and use in the home. For example, mobile phones have experienced an increase in computational power and many have the ability to include advanced features beyond basic phone functionality. People use smart phones to create photos and videos, listen to music and even use the Internet from any location they choose. Not only are these personal devices useful when people are on the go, these devices are capable of transferring data to and from personal computers, so content created on the phone can also be shared and distributed to others. Researchers are now investigating using personal devices to adaptively control different home appliances such as the washing machine and printer (Nichols et al., 2006). Eventually all appliances in your home might be controlled from a single interface that resides on one of these personal devices.

These two trends, larger high-quality displays and more powerful personal devices with a display, both being used in the home, have the potential to converge for a variety of in-home applications. For example, personal displays and large displays could be used together to create and share multimedia content. A personal device, such as a mobile phone, can already be used to create photos, videos and audio clips. Often people wish to share these multimedia artifacts with family members and friends. By enabling the ability to upload this media to a large shared display, people will be able to edit, share and discuss these items collaboratively.

Another example where a personal device with a display and a large shared display are currently being used together is in the video game sector. The Nintendo Wii, a game console that can be used on a large home display, and the Nintendo DS, a handheld personal game console with a display, can be used in conjunction with each other. The personal game system can be used to carry content to various locations and to control the shared display on which a Nintendo Wii game is being played with other people.

These converging trends lead to many questions about how to design applications that utilize both the display on a personal device and a large shared display to enhance users' experience in the home. Of particular interest to us are the workflow transitions in which users switch between shared collaborative activities and personal activities. In this thesis, we examine how people can utilize a large shared display in the home and what the impact will be of adding a small personal display, such as a mobile phone, both when users are collaborating and when they are working individually.

The research reported in this thesis was conducted in partnership with Panasonic, a consumer electronics company that is interested in understanding how personal devices and large displays may in the future be used in the home.
1.1 Motivation

As personal devices with a display are gaining more technological power and larger high-quality displays are becoming more affordable, researchers are looking at ways these two can be used together in collocated collaborative settings. There are as yet no clear guidelines regarding what part of a task should be presented on the display of a personal device or on the large shared display. Previous approaches have looked at using the display to protect private content (Greenberg, 1999; Myers, 2001) or to transition between personal and collaborative tasks (Myers, 2001; Rekimoto, 1997; Tani et al., 1994). Using the personal display for presenting private content and the large shared display for public information is a rigid and inflexible policy. It does not make room for content that is not necessarily public or private, and it is not clear which display should be used for this neutral content. A task-oriented approach is to use the personal display for tasks that are loosely coupled among members of the group and use the shared display for more tightly coupled tasks. One potential problem with any strategy that uses both personal and shared displays is that group members may lose valuable awareness of what other members of their group are doing (Gutwin & Greenberg, 1998). We need to ensure that any solution we adopt does not suffer from this problem.

Our research explores these two approaches and focuses on whether users prefer to work on a personal area on a large shared display or on the display of their personal device for loosely coupled tasks, and how the use of these two displays affects collaboration for personal and group tasks. Specifically, our research examines the following four questions:

1. Are multiple users able to work in parallel on personal areas on a single shared large display? (Chapter 3)
2. If users are able to work in parallel on a single large shared display and a small personal device with a display is then introduced, can we design applications that use both displays for varying levels of coupling between users engaged in collocated collaborative tasks? (Chapter 4)
3. For applications that utilize both a small personal display and a large display, how difficult is it for users to switch their attention between the displays? (Chapter 5)
4. Can we use our results to build an application with multiple small personal devices and a shared large display? (Chapter 6)

1.2 The Research Process

In order to answer our research questions, we completed an informal pilot study, a main study and a focused follow-up study (see Figure 1.1).

Figure 1.1. Diagram showing the research process of this thesis.

We utilized our results and experience from our studies to design and develop an application. The goal of the pilot study was to understand how multiple users share a single large display for parallel tasks. Our observations revealed that multiple people are able to work on personal areas on the large shared display. Based on these observations, we developed a set of design guidelines for using personal and shared space on a large display for multiple users. We utilized these guidelines for the design of our application for our main study.
Once we better understood how multiple users could work on personal areas on the same display, we then moved on to the main study to investigate the impact that an additional small personal display would have for collaborative and loosely coupled tasks. The goal of the main study was to understand the difference between working on a personal area on the large shared display and working on a small personal display for individual parts of a collaborative task, and what impact this had on collaboration within a group. From these results, we developed a second set of design guidelines covering what parts of a task should be distributed over each of the displays, which displays users prefer to work on, how effective it is for users to move between these displays, and how to manage shared control with multiple displays.

During our main study, we found that switching focus between different displays was distracting to some users. We had not anticipated this, so in a follow-up study we sought to gain a better understanding of how difficult switching focus between a large display and a small personal display might be for users. The results from the follow-up study indicate that the cost of switching for some simple tasks is low, and that using a small personal display with a large display is a viable option for some applications in the home.

Informed by the experience of our studies, we built an application, called the Family Blog, which utilizes our results and the design guidelines from our user studies. The Family Blog allows users to create photos, video, text, and audio files on a mobile phone and upload these to a large shared display. The mobile phone is used to interact with the large shared display as groups of users collaborate to build a presentation timeline from individual and shared content.

1.3 Research Contributions of Thesis

The goal of this research was to gain a better understanding of how a personal device with a small screen can be used in combination with large shared displays for applications within the home. Our research focused on collaborative tasks and the impact of adding a small personal display. This research makes the following contributions:

1. We began to explore the design space of using small personal displays in combination with large shared displays for in-home collaborative tasks (Chapter 4 and Chapter 5).
2. We developed a task suitable for studying users' preferences and performance when completing collaborative tasks on a small personal display and a large shared display (Chapter 4).
3. We demonstrated the viability of our design guidelines through the creation of an application that uses multiple small personal displays and a large shared display for creating and sharing media content (Chapter 6).

1.4 Overview of Thesis

Relevant previous work from the literature is discussed in Chapter 2. Chapter 3 describes our informal pilot study investigating how groups are able to work on loosely coupled tasks on a shared large display and presents a set of design guidelines for utilizing shared and personal space on a shared display. Chapter 4 describes our main study investigating the difference between working on a personal area on the large shared display and working on a small personal display for individual parts of a collaborative task, and discusses the impact this has on collaboration within a group. Chapter 5 presents our follow-up focused study investigating how difficult switching focus between a large display and a small personal display is for users.
Chapter 6 describes our Family Blog application, discusses our experience using it, and explains how the guidelines we developed influenced its design. Chapter 7 considers directions for future work and summarizes the conclusions of the thesis.

Chapter 2
Related Work

In this chapter, we discuss work related to our research, including how large shared displays are used for collocated collaboration, different methods for interacting with these large shared displays, and how large and small displays have been used in combination to support collaboration. First, we review different methods that have been reported in the literature for using a large shared display for collocated collaboration, including whiteboard displays, ambient awareness displays and electronic meeting rooms. In our work, a large display facilitates collaboration; we examine how multiple users can share these displays for varying levels of coupling in collaborative tasks in our pilot and main studies. Next, we review the literature on different methods for interacting with shared large displays, such as using multiple mice and keyboards, laser pointers and personal displays. In our main study and our Family Blog application, multiple users need to interact simultaneously with the shared large display, so we utilized the navigation keypad on a mobile phone. Finally, we review a number of previous systems that use both large and small displays for collocated collaboration to see how these systems relate to our work using a small personal display and a large shared display for collaborative tasks in the home.

2.1 Large Shared Displays for Collocated Collaboration

With recent advances in technology, large displays are better quality and less expensive than ever before (Rogers & Lindley, 2004). These large displays are being used in businesses, classrooms and the home to enhance collocated collaboration within a group (Rogers & Lindley, 2004). Compared to regular desktop monitors, large shared displays have the ability to increase collaboration by increasing the amount of information displayed without further interactions being required from users (Ball & North, 2005), thus increasing the ability for more users to view content and have personal work areas (Russell & Sue, 2003, p. 3). This also increases the mutual awareness of group activity (Huang & Mynatt, 2003). There are a variety of different sizes and types of large displays, including plasma displays, LCD displays, multiple tiled monitors, front- and back-projected displays, and SmartBoards (Rogers & Lindley, 2004). Researchers have looked at ways of creating wall-size displays by using arrays of projectors that increase the potential resolution and size of screens. For example, Li et al. (2000) created an 18-foot rear-projected screen with 8 projectors and a resolution of 4,096 x 1,536 pixels.

Our work looks at varying levels of task coupling during collocated collaborative tasks in the home. This first section of our review of related work considers three different categories of displays that focus on increasing collocated collaboration within a group: displays that are used for knowledge work and collaboration, displays used to increase ambient awareness of a group, and displays used as components in electronic meeting rooms.

2.1.1 Electronic Whiteboard Displays

Electronic whiteboards have been used to enhance collaboration within a collocated group by supporting brainstorming, clarifying and recording of new ideas (Pedersen et al., 1993).
Two problems for traditional whiteboards are (1) that information is transient with no permanent record, and (2) that users cannot share multimedia artifacts with one another. Electronic whiteboards can offer solutions to both of these problems. LiveBoard, one of the first electronic whiteboards, was a "directly interactive, stylus-based, large-area display" (Elrod et al., 1992). Tivoli, an extension to LiveBoard, strived to provide users with a simple, easy-to-use electronic whiteboard with the power of a computer (Pedersen et al., 1993). Multiple users could write on the display, 'wipe' information off the display, and move information around on the display.

Flatland is another example of an electronic whiteboard, focused on enhancing informal group work over an extended period of time (Mynatt et al., 1999). In particular, researchers looked at space management on the shared display, managing multiple applications, and creating a history of interactions with the display. One approach taken for managing screen real estate in Flatland was to segment the display dynamically as new information was added to the display. A border around a segment denoted the grouping, and users could add more information to that segment. Another approach to managing real estate is to define personal areas in front of each user (Tang, 1991), and to use different, well-defined areas for personal, group and storage territories (Scott, 2003). When investigating whether two users sharing a single-display groupware application avoid interference, it was shown that users tend to partition their space on a shared display based on the task and seating position (Tse, 2004). Some of these techniques for managing screen real estate are further explored in the pilot study described in Chapter 3, and the results of that study are used in the design of the applications described in Chapters 4 and 6.
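To make the territory-based approaches above concrete, the short sketch below divides a shared display into a common group area plus one personal area per seated user, ordered by seating position. It is only our illustration of the general idea, under assumed proportions and names; it is not taken from any of the systems cited above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    name: str
    x: int        # left edge, pixels
    y: int        # top edge, pixels
    width: int
    height: int

def partition_display(display_w: int, display_h: int, seat_positions: List[float]) -> List[Region]:
    """Divide a shared display into a group territory across the top and one
    personal territory per user, ordered by seating position so that each
    personal area sits roughly in front of its owner."""
    group_height = display_h // 3                 # assumed ratio: top third is shared
    personal_height = display_h - group_height
    slot_width = display_w // len(seat_positions)
    regions = [Region("group", 0, 0, display_w, group_height)]
    for i, _seat in enumerate(sorted(seat_positions)):
        regions.append(Region(f"personal-{i}", i * slot_width, group_height,
                              slot_width, personal_height))
    return regions

# Example: a 1920x1080 display shared by three users seated left, centre and right.
for region in partition_display(1920, 1080, seat_positions=[0.2, 0.5, 0.8]):
    print(region)
```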
2.1.2 Ambient Awareness Displays

Often when collaborating, tasks require both individual and group activity, and there is a need for users to move fluidly between these two modes (Dourish & Bellotti, 1992; Gaver, 1991). Being aware of what others are doing may help in managing the collaborative process (Dourish & Bellotti, 1992). Ambient awareness displays are a technique for increasing awareness within a group by allowing users to post information that is available to the rest of the group.

The Notification Collage is an electronic bulletin board where users can post multimedia content to help increase group awareness and collaboration (Greenberg, 2001). Content is posted through the user's desktop computer and can be viewed on the large shared bulletin board or on the user's own computer (see Figure 2.1). The shared display is placed in a common area where users can choose to look at the information at their leisure to help keep track of what other group members are doing.

Figure 2.1. The Notification Collage on a large shared display in a public setting showing 2 users looking at a map.

The Plasma Board is a community electronic bulletin board used to share content posted by coworkers or sampled from an intranet. The goal of the Plasma Board was to increase social interactions through awareness (Churchill, 2003). The Community Wall is another electronic bulletin board designed to improve the diffusion of information in a workplace; users can post and look at information from co-workers, supporting asynchronous collaboration (Snowden, 2002). Unlike the Plasma Board, users can interact with the Community Wall through direct touching of the screen, email, paper and a Palm Pilot interface. When interacting with the Community Wall using the Palm Pilot, information on the board is duplicated on the Palm Pilot screen, where users can create new content that is posted on the Community Wall. The Palm Pilot has an online and an offline mode.

One benefit of these ambient awareness displays is that users gain information about the rest of the group asynchronously; it does not require that all members of the group be co-present. Although this extra information helps a group's collaborative efforts, there is a balance between giving enough information to the group to maintain a useful level of awareness while still keeping sensitive information private (Greenberg, 2001). In our study in Chapter 4 we try to understand how awareness of users working on a personal area of a shared display may help the group coordinate its activity. In Chapter 6 we describe how users can explicitly share their private information with the group when this is appropriate.

2.1.3 Electronic Meeting Rooms

The third category of large displays relevant to our work is electronic multi-screen meeting rooms: rooms with multiple large displays set up as a meeting space. Challenges for these rooms include establishing interaction techniques for multiple connected displays and providing the ability to transfer information across the multiple displays. i-Land is an electronic room where computer displays are added to walls, chairs and tables in order to support different forms of collaboration, including presentations, brainstorming sessions, and 'information foyers' (see Figure 2.2) (Streitz et al., 1999).

Figure 2.2. A first visualization of i-Land showing users surrounding multiple large displays in the back of the room, a tabletop display in the middle of the room and several smaller displays throughout the room.

iRoom is a ubiquitous meeting room environment with large wall displays and integration of wireless appliances, including a variety of handheld devices. The room consists of three touch-sensitive displays, a 6-foot diagonal display, and a 3-foot x 4-foot tabletop display, configured much like a standard conference room (see Figure 2.3). PointRight is a technique developed as part of the iRoom for interacting with a mouse across multiple displays (Johanson et al., 2002). MIT's Intelligent Room is a multi-modal meeting room that utilizes meta-data, such as speech and sensors, to display adaptive content on different displays in the room (Peters & Shobe, 2003). The focus of this project seems to have been more on system development and less on the design of the user interface for the system.

Figure 2.3. The iRoom in use, showing 3 large displays on the wall and a group of users sitting around the table, 2 with a laptop.

These rooms are set up to increase collocated collaboration within a team, but the research focus of these rooms has been on the creation of the systems that run these multiple displays and transfer content across them. In our own work described in Chapter 4, we study how we can move content between a small personal display and a large shared display, and in Chapter 6, we implement these techniques as part of our Family Blog application.

Electronic whiteboards, ambient displays and electronic meeting rooms are all examples of how large shared displays can provide support for collocated collaboration.
Research on electronic whiteboards has focused on scenarios where users are working synchronously, standing close to the display and interacting with it by touching it with a pen or with their hands. Throughout a typical scenario, one person is usually in control while others are watching the person in control. It is not clear how these displays can support collaboration when people are not working on a task closely together but are moving back and forth between personal and collaborative tasks.

Unlike electronic whiteboards, ambient awareness displays focus on improving asynchronous collaboration within a group. Research on ambient awareness displays looks at the balance between private information and awareness to increase asynchronous collaboration within a group, but it often does not look at this balance when users are working synchronously. In our main study, we investigate whether or not some degree of awareness of what others are doing on a large shared display is preferred over working on individual personal displays for synchronous collaboration. Electronic meeting rooms also focus on groups working synchronously, but research in this area is more focused on creating new systems than on investigating the usefulness and usability of existing systems. Little work has been done to assess how well users are able to follow data movement across multiple displays, or what mental model users have about where this information is stored and how users are able to access it.

Most of the literature about using large displays for collocated collaboration focuses on a work environment. This thesis focuses on large displays in a home environment. At work, users have specific goals and methods to use to meet their goals, but when people are using large displays in their homes their focus is more on the quality of their experience than on getting specific tasks accomplished. In our studies, the motivations for the tasks we examine arise from scenarios in which a large display is being used to enhance users' experience while trying to complete a task. We focus on eliciting users' preferences in order to move beyond traditional performance-oriented evaluation.

2.2 Interacting with Large Display for Collaboration

Although technology for large displays has improved recently, there are still many outstanding questions about methods for interacting with these displays. Because multiple users working together with only one mouse on a shared display creates some difficulty for collaboration (Stewart et al., 1999), there are still questions about how multiple users should interact on a single shared large display in a collocated collaborative setting. When a group shares a single large display to collaborate with only a single interaction device, users are not able to work on subtasks or loosely coupled tasks when another member of the group is in control of the only input device. One approach that does not require special interaction devices is to use natural language alongside pointing to interact with a large display (Bolt, 1980); users can issue commands such as "make that smaller" while pointing to a blue triangle on the display. Another approach is to add multiple interaction devices so that users can work on a personal area on a shared display without relying on a single user to pass control of a device.
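The following minimal sketch illustrates the general event-routing idea behind this multi-device approach: each registered input device drives its own colored cursor on the one shared display. The class and event names are our own assumptions for illustration; this is not the API of any of the toolkits discussed in the next subsection.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Cursor:
    color: str
    x: int = 0
    y: int = 0

class SharedDisplay:
    """Keeps one cursor per registered input device, each in its owner's color."""

    def __init__(self, width: int, height: int) -> None:
        self.width, self.height = width, height
        self.cursors: Dict[str, Cursor] = {}

    def register_device(self, device_id: str, color: str) -> None:
        self.cursors[device_id] = Cursor(color=color)

    def on_motion(self, device_id: str, dx: int, dy: int) -> None:
        # Route the event to the cursor owned by the device that produced it,
        # clamping the position to the display bounds.
        cursor = self.cursors[device_id]
        cursor.x = max(0, min(self.width - 1, cursor.x + dx))
        cursor.y = max(0, min(self.height - 1, cursor.y + dy))

# Example: two devices (mice, phone keypads, ...) each driving their own cursor.
display = SharedDisplay(1920, 1080)
display.register_device("device-blue", "blue")
display.register_device("device-green", "green")
display.on_motion("device-blue", 40, 10)
display.on_motion("device-green", -15, 25)
print({d: (c.x, c.y, c.color) for d, c in display.cursors.items()})
```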
2.2.1 Mice and Keyboards

Single Display Groupware (SDG) refers to a set of applications or systems that allow multiple users to have simultaneous control over a shared display (Stewart et al., 1999). The type of interaction device used for control varies, but one approach is to utilize multiple mice and multiple keyboards. An example of an SDG application is the Multi-Device, Multi-User, Multi-Editor (MMM), a set of editors supporting simultaneous input from multiple mice (Bier & Freeman, 1991). The design of MMM focuses on user registration, methods of sharing screen real estate amongst multiple users, minimizing interference between users, and giving individualized feedback to each of the multiple users. A unique color identifies each individual user's cursor, and the user's objects are grouped together and surrounded by a border having the same color as the cursor. The study presented in Chapter 3 examines whether color is an effective method for representing an individual's objects on a large shared display. The SDG Toolkit enables developers to quickly develop SDG applications, including editing applications, drawing applications and games (Tse & Greenberg, 2004). The toolkit supports up to four simultaneous mouse and keyboard input devices. We made use of the SDG Toolkit when developing the prototype for the study presented in Chapter 3.

2.2.2 Laser Pointers

Although mice offer better motor control, in a collocated collaborative task on a large shared display multiple laser pointers are thought to encourage collaboration between subjects (Vogt et al., 2004). In LumiPoint, multiple users can use their own laser pointer as a pen or as a pointing device on a tiled, back-projected high-resolution display. Cameras capture the location of each laser beam, and strokes or other gestures are recognized and sent to an application as events (Vogt et al., 2003). While laser pointers are easy to use and encourage collaboration, one difficulty with these interaction devices is that users may have a difficult time holding them steady for an extended period of time, so the laser beam can become very jittery. This makes selecting small objects on the display difficult and may become frustrating for users. Myers et al. found that when standing 10 feet away from an 8-foot-wide screen, there is a 'wiggle' of about plus or minus 4 pixels. This can be improved by filtering techniques, but requires high-resolution cameras. A comparison of the laser pointer and 3 other interaction devices shows that the laser pointer performs the worst in the time required to move across the display (Myers et al., 2002).

2.2.3 Personal Devices

Besides mice and laser pointers, another approach to enable multiple users' simultaneous control over a shared display is to employ personal devices with screens, such as PDAs (Personal Digital Assistants) and mobile phones. Pebbles is a project investigating different methods for using a personal device and a personal computer together (Myers, 2001). Remote Commander is a technique where multiple users can interact with a shared display using the screen and Graffiti text input functionality of a small PDA. The PDA screen is used to move the cursor around the shared display and to select objects, and the Graffiti capability of the PDA is used for text input on the shared display. The Remote Commander was used for a PowerPoint application where a PowerPoint presentation can be controlled from the PDA and the presenter can add annotations onto the slide that also appear on the shared display. Future work includes extending this for multiple simultaneous users.
Future work includes extending this for multiple simultaneous users. PebblesDraw is a drawing application utilizing Remote Commander in which several users can work together (see Figure 2.4). Each user has a cursor that enables simultaneous control of a simple drawing application (Myers et al., 1998).
Figure 2.4. The first version of PebblesDraw showing 6 users connected to the drawing application, each with their own colored cursor represented at the bottom of the display.
A more recent technique for interacting with a large display is to make use of the camera available on a mobile phone to enable gestures (Ballagas, 2005). Using image processing techniques, images from the camera are used to determine which way the phone is moving. The movement data from the phone is sent to the display so that different gestures can be inferred. One gesture, called Sweep, uses optical flow image processing to move a cursor on the large display (see Figure 2.5). A second technique, Sweep & Point, allows users to select content on the large display and move it to the phone display.
Figure 2.5. The Sweep technique can be used to control a cursor on a large display much like an optical mouse. A user is moving the cell phone to move the cursor on the large display.
C-Blink is another technique for interacting with a large display using a mobile phone (Miyaoku, 2004). The LCD color screen on a mobile phone is used as a light signal marker, enabling low-cost interaction with a large public display. The screen emits different signals; a camera above the large display recognizes that the phone's screen is blinking and moves the cursor relative to the position of the phone. Users are able to move a cursor, select objects on the screen, download content to the phone, and upload a picture to the large display (see Figure 2.6).
Figure 2.6. C-Blink in use showing a user moving their cell phone to control the red cursor on the large display.
Dynamo is a public display focused on sharing and exchanging information across surfaces in a fluid and spontaneous manner. A PDA can be used to control the cursor on a shared display and to display media from other users. Individual users have space on the shared display where they control their own media or choose to share control with other users (Izadi et al., 2003).
In this section, we have summarized different solutions for enabling multiple users to interact with large shared displays, including the use of multiple mice and keyboards, laser pointers, and personal devices with their own small displays. While using mice to interact with a large display may seem intuitive, one problem is that the mouse cursor becomes difficult to visually track on a large display (Robertson, 2005). In addition, using mice for applications in the home is awkward because a traditional living room often has no hard surface on which a mouse can be placed, so specialized furniture would have to be created to enable the use of mice in a living room. Laser pointers, on the other hand, do not require a hard surface, but holding a laser for an extended amount of time can be tiring for a user, and the laser beam can become jittery, making it difficult to select small targets.
In contrast, using small personal displays is compelling because, as these devices become more ubiquitous, users can use devices they already own to interact with a large home display. Two of the techniques discussed, C-Blink and Sweep, use a personal device to move the cursor on a large display, but, similar to using a mouse, it may be difficult for users to visually track a cursor on the display. A third approach for using a personal device to control a large shared display uses the screen on the personal device to augment the large shared display: users select objects on their personal display, which in turn selects the same objects on the large display. This approach requires users to switch visual focus between the small personal display and the large shared display, and it is unclear how difficult that switching is. In our third study, we examine whether users do in fact have difficulty switching their visual attention between a small personal display and a large shared display.
2.3 Using a Combination of Large and Small Displays to Support Collocated Collaboration
Personal devices can not only be used for interacting with shared displays; the displays on personal devices can also be useful for supporting collocated collaborative tasks. Three different areas in which we see the personal device being useful for collaboration are (1) transferring content, (2) partitioning workspaces into private and public areas, and (3) improving the workflow of group and individual tasks.
2.3.1 Transferring content
One benefit of personal devices is that content can be stored on these devices and moved to different locations using networking. This benefit can only be fully realized once we understand useful interaction methods for transferring content to and from these devices. One of the first examples of moving content from a personal device to a large shared display was Rekimoto's multiple-device approach to an electronic whiteboard application (Rekimoto, 1998). Pick-and-Drop is a technique he developed to move content between a personal device and a shared large display: a user selects content on a PDA, selects an area on the shared display, and the content then moves to the large display. Information can be transferred to the PDA from the shared display using a similar technique. In the Pebbles project, the Remote Commander application can not only be used to interact with the large shared display using the screen on the PDA; users can also download a picture of the shared screen to the PDA and zoom in for more detailed information (Myers et al., 2001). The image on the PDA is updated if the content is updated on the personal computer (see Figure 2.7).
Figure 2.7. The Remote Commander full-screen view on a PDA showing a screen shot of the personal computer on the display of the PDA.
Another approach that has explored transferring content between a shared display and a personal display is the Interactive Television (ITV) system, which moves content from a regular TV to a personal device. Users can select different types of content on the TV to be downloaded onto a personal device, such as a cell phone, which allows the content to become portable. The types of content explored in the ITV project included clips of a TV show or movie, and prizes from a game, including coupons for future use (Ma et al., 2004). ARIS is a window manager that allows users to move content between different devices with an iconic map of a multi-display environment.
Content is moved through a network connection and can be transferred to and from a personal device and different shared displays located around the room (Biehl & Bailey, 2004). An example of content transfer in the entertainment industry is the Nintendo Wii, which takes advantage of the portable Nintendo DS display. One set of games that uses both displays is the Pokémon series on the Wii and the DS: users who play Pokémon on the DS can connect to the Wii and watch their characters play and battle other users on the large display.
2.3.2 Private and public areas
Another benefit of using a personal device with a display alongside a larger shared display is that it naturally partitions the workspace into private and public areas. When collaborating on a shared large display, users are concerned with protecting their territory and minimizing embarrassment (Palen & Dourish, 2003). Because other viewers are more likely to notice sensitive text on a large-screen public display (Tan & Czerwinski, 2003), using a personal display allows users to keep sensitive information on their private display only. Personal devices may therefore be a useful method for enhancing the privacy of individuals when collaborating with others, because the screens on a personal device are quite small and it is very difficult for other viewers to see the screen content. SharedNotes is an application that uses a PDA and a shared display for the creation and sharing of notes during a meeting. The PDA is used to create personal notes and to control the shared display. Once a user creates a note, it can be explicitly shared with others by selecting the note on the PDA. Public notes are displayed on the large shared display and on a portion of the PDA display. Although the SharedNotes system has a 'rigid' notion of private and public (once an object is public it cannot be made private again), a future design recommendation is to allow a more flexible notion of private and public (Greenberg et al., 1999). Another part of the Pebbles project is the SlideShow Commander, which allows the presenter to make notes on the PDA about the current slides being presented to an audience. Future suggested work includes extending this functionality to allow audience members to create their own private notes using their PDAs (Myers, 2001). Another approach to meeting the privacy needs of a group is to allow users more control over the information that is shared on a large shared display during a presentation. A role-based shared view allows users to specify which information is displayed to viewers having differing roles within a group, while still offering the audience some awareness of what the presenter is doing (Lior et al., 2005).
2.3.3 Workflow transition
Often when collaborating, tasks require both individual and group activity, and there is a need for users to move fluidly between these (Dourish & Bellotti, 1992; Gaver, 1991). When using a small personal display and a large shared display, users have to transition between their work on these two displays and decide where information should be displayed. How to achieve this is an open question. One possibility is to utilize the benefits of having two screens to distribute work and content, such as using the large shared display for displaying an overview of information and the small personal display for displaying context information.
Another possibility is to use the personal device to separate the workflow of a group between tightly coupled tasks, where all members of the group are doing the same task, and loosely coupled tasks, where each member of the group is working on some subtask with the overall goal of contributing to the group's task.
An example of a system that transitions between a small personal display and a large shared display to improve an application's abilities is a PDA-ITV application that explores how a real-estate information system can use a PDA and a TV together to explore different houses on the market (Robertson, 1996). The PDA is used to interact with the TV through infrared technology. During parts of the task that only require low-resolution images, such as a map or text, the PDA display is used. The TV display is used for higher-quality images, such as images of houses or different neighborhoods. Users were also able to use the system as an information-browsing application operating stand-alone on the PDA. While this system does not focus specifically on collaborative work, it does look at how a small personal display and a large display can be used to enhance the workflow of a task. Three important challenges for designing dual-device applications emerged: (1) designing a dual-display system is difficult because the two displays differ in their output characteristics, (2) different parts of a task may be accomplished with different devices, and (3) input and output are confounded on a handheld display. The authors give five design guidelines to help address these challenges: (1) distribute information across appropriate displays, (2) combine devices so that the ensemble provides more than each individual device, (3) base the display used on the information content, (4) choose a device based on the particular task, and (5) remember that device coordination is critical.
Using a PDA in combination with a whiteboard addresses some of the challenges of using a whiteboard for text entry in a collocated collaborative task. It also reduces interaction difficulties, such as problems with reach and the 'bottleneck' that occurs when multiple users utilize a single display (Rekimoto, 1997). Building on the Pick-and-Drop technique, Rekimoto added more functionality in his multi-device approach, combining a whiteboard and a PDA to help users move through varying levels of coupling in a task. The functionality provided includes text entry on the PDA, the ability to search for personal data on the PDA, and shared control of the whiteboard for multiple users.
Courtyard is a multi-display system that integrates small personal screens with a large shared display in a business setting (Tani et al., 1994). Users work both collaboratively and individually to solve problems using the large amount of data available to them. The large shared display offers users an overview of the vast amounts of data, whereas the small personal display (a computer monitor) is used by individual users to drill down into more detailed information. Courtyard allows users to move control from their personal display to the large shared display, and it supports the transition of more detailed data from the large display to the personal display for individual tasks. Similar to Courtyard, Command Post of the Future is another component of the Pebbles project (Myers, 2001). In Command Post of the Future, the PDA is used to view more details of the information presented on the large shared display.
The detailed information is displayed on the PDA not only to minimize distraction to others who are using the large shared display; the PDA also acts as an access control mediator for the information: not all users may have permission to view some data, so only the users who do have permission see it on their personal devices. Both of these approaches use the personal display to enable parallel work in a collaborative setting.
Three areas in which small personal displays and large shared displays are being used in combination are transferring content, partitioning workspaces into private and public areas, and improving the workflow of group and individual tasks. One of the benefits of using a small personal device is that content can be brought in from many different locations, but research in this area has focused on new techniques for transferring content rather than on the usability issues that arise for users and groups when disparate content is being manipulated on multiple displays. Little has been done to gain a better understanding of users' mental models or of how to provide feedback about content that has been transferred from one display to another. Another approach that uses two displays is separating private and public information: a small personal display is used for private information or tasks, and a large shared display is used for public information or tasks. A policy that is too rigid or that does not allow for neutral information may not work; specifically, there may be content that users do not mind other people seeing, but that was never intended for the group. Rather than dedicating each display to holding information that is either private or public, an alternative approach is to split the large shared display and the small displays by workflow tasks, using the small personal display for detailed information and the large shared display for an overview, or to distribute tasks based on particular qualities of the displays. While these are useful guidelines, there is no clear understanding of the difference between completing tasks on the large shared display versus on the small personal displays, or of where users actually prefer to work. We investigated these two issues for a mixture of personal and collaborative tasks in our second study. Rather than simply assuming that users prefer to do some tasks on their personal display or on the large display, we studied this issue in an empirical setting and based the design of our Family Blog on our results.
2.4 Summary
There are several different categories of displays used for collocated collaborative work, including whiteboards, ambient awareness displays, and electronic meeting rooms. While none of the previous research focused on shared large displays in the home, we use some of its results to understand how to partition screen space, how to balance the needs for awareness and for private information, and how to transfer information across displays. In addition to how large displays are used for collocated collaboration, we discussed how different interaction techniques can give multiple users simultaneous control of these displays. This allows users to work on personal areas of a shared display and move between personal and group tasks. These techniques include using multiple mice and keyboards, laser pointers, and personal devices.
In Chapters 4 and 6, we use the keypad and navigation keys on a mobile phone for moving an individual's cursor within collaborative applications on the large shared display, and we utilize the phone's display to provide feedback to individual users. Previous methods for interacting with a large display using a personal device have used the personal device's display to select and move objects on the large shared display, but no research has examined how difficult it is for users to move between a personal display and a large display. We look at this issue in our follow-up study, discussed in Chapter 5.
There are three ways a personal device with a display can be used with a large shared display for collocated collaborative work: transferring information from the personal device to a shared display, maintaining public and private workspaces, and supporting workflow transition. One of the benefits of personal displays is their mobility, and therefore the mobility of their content, but because the screens are so small the content is difficult to share, so we need methods for transferring data across displays. The Family Blog application discussed in Chapter 6 uploads multimedia content to a large shared display to enable discussions with other members of a family. Another benefit of small personal displays combined with large shared displays is that they allow users to partition a workspace into public and private information. SharedNotes and Remote Commander are two examples of how personal displays can be used to keep private information private. A challenge with these systems is what to do with information that is neither strictly private nor public. It is not clear how to distribute this information, so other design options, such as following a user's preference, must be explored. When using systems with both a small personal display and a large shared display, it is important to understand how to transition different parts of the workflow across the various displays. Transitioning can include moving between tightly and loosely coupled collaborative tasks or utilizing the separate displays for different types of information. A second display can encourage users to work in parallel, and it allows users to independently investigate different information without distracting others in the group. Different displays are best suited to displaying different types of information. Previous work has not investigated which display users prefer, or how to understand the differences between working on a small personal display and a large shared display. Rather than making assumptions about which display a user should work on, we investigated these issues in the main study discussed in Chapter 4 to discover where users prefer to work and how this affects collaboration. This informed the design of the Family Blog application, discussed in Chapter 6.
Chapter 3
Sharing a large display pilot study
Before exploring how people in the home can utilize a large shared display, and the value of adding a small personal display when users are working either collaboratively or individually, we began our research by first understanding how multiple users can share a single large display for loosely coupled tasks. This pilot study was limited to informal observations. It was designed to give us an indication of how people are able to work together on a shared display when they have personal information on that display.
It has been shown that groups of users in physical workspaces, such as tabletop displays, define personal areas in front of them (Tang, 1991) and use different defined areas for personal, group and storage territories (Scott, 2003). Other investigations of whether two users sharing a single-display groupware application avoid interference found that users partition their space on a shared display based on the task and on seating position (Tse, 2004). Our pilot study builds on these findings to better understand whether users place their objects in territories based on their seating position when more than two users complete personal tasks with multiple application windows, rather than collaborative tasks with a single application.
The goal of the pilot study was to discover how people work on loosely coupled tasks on a shared large display. We planned to use the results from this study to inform the design of an application to be used in our main study. Specifically, the findings from the pilot study would guide us on where personal and group areas should be placed on a shared display, the appropriate sizes for the personal areas on the shared display, and what control affordances might be used to indicate individual and group control of different objects on a shared display. In order to meet the goal of our study we generated a set of questions in three categories: territoriality, distraction and control affordances. The specific questions in each of the three categories are listed below.
• Territoriality
1. Where do users place individual objects on the shared display?
2. Do users maintain strict boundaries or do they inter-mix their objects with those of other users?
3. How do users react to the notion of shared screen real estate?
4. What size do users prefer for their personal objects and for other objects?
• Distraction
5. How distracting is it to users to have objects visually overlaid on top of other objects?
6. How distracting is it to users to share screen real estate?
• Control
7. How do users determine which objects they can and cannot control?
8. Is a control policy that restricts users to controlling only their own objects useful?
In selecting a task for our study, we wanted to find an ecologically valid task that included some aspect of collaborative and individual work. We felt that a group travel-planning scenario met these requirements. Our model was a group of people planning a trip together. Often the people involved in a trip meet to discuss what everybody wants to do on the trip and to plan their itinerary. Different types of media, including maps, videos, and guidebooks, may be used to gather information about the destination. Throughout the planning process, people may look for information and report back to the group with any new information. When people are looking for information to share with the group, they typically put their content in a specific area. If people are working around a table, they may put their own objects, such as a book or notepad, close to them and bring these objects to a shared area in the middle of the group when it is time to discuss the particulars. Each member of the group creates a territory for their personal objects, as well as a shared territory for the group's shared content. When members of a group are working together but one member looks up information about a particular interest, this may disrupt the group's working flow.
The distracting member may take special measures to try to minimize the distraction to other members of the group. In this scenario, different people bring in many different objects to share with the group, so there is a need to indicate who owns which objects. For example, if one person brought in a guidebook and another person brought in a map for everyone to look at, members of the group should know who owns what, so that when they want to look at something they know whom to ask or where to look for it. The group might be able to tell who owns an object because it is sitting in front of a particular person, but there might be additional indications of ownership, such as a name on the front of a book.
We have an understanding of where people may place paper maps, notepads and guidebooks around them, how distractions might disrupt the flow of the group's planning session, and how members of the group can indicate who owns which objects when planning a trip together. It is not clear what happens when the group instead uses a large shared display for their trip-planning session. To motivate the design of this pilot study, we developed the following travel-planning scenario, which has specific elements of what a group of friends might do when they are planning a trip.
A group of friends wants to plan a trip to London. The group gathers around to watch a travel guide movie about London, including some of London's attractions. Once settled in, the group of friends decides that each person should make note of attractions they are interested in visiting during their trip to London. Before the movie begins, the group decides that marking the attractions on a map and adding descriptions to the attractions would be useful for their discussions after the movie. The group watches the movie and each friend notes attractions they are interested in. After the movie is complete, each member of the group shares the information they found during the movie. From this information, the group decides on a few attractions to visit while they are in London.
This scenario enables us to investigate the questions that we posed earlier about territoriality, distraction and control. In the scenario, there are tasks where users are working individually in parallel while still watching a travel movie together. The parallel tasks include users finding attractions on a map and adding descriptive tags when attractions the user is interested in are mentioned during the travel movie. This is similar to the trip-planning session without the large shared display. The movie player is a shared area on the large shared display. Although we are not looking at how users share objects on a single shared display in this study, we can see whether users appropriate different territories on the large shared display when the shared area changes size. When people are planning a trip and one person is marking something on a map while the others are trying to watch a movie, this may distract the other members of the group; it may or may not be so distracting that other members of the group can no longer follow the movie. Because users are watching the travel movie while other users are completing their tasks, we can see how distracted users are by others' objects around or on the movie player. The different objects on the display, such as the map and the tag windows, each belong to different users, just as people in the trip-planning scenario above may bring in their own maps and guidebooks.
Observing whether users interact only with the objects they are allowed to control on the large shared display allows us to answer questions about control. To realize this scenario, we implemented a system with three applications that would run in our test environment: a Movie Player Application, a Map Application, and a Tag Application. The Movie Player Application plays a video using either the full screen of the large shared display or three-quarters of the screen. The video can be started, stopped, or paused using a keyboard. The Map Application displays a map with labeled attractions, and markers can be added to the map to denote an attraction. The Tag Application displays several tags that can be selected to describe an attraction. During the study, subjects watched two 25-minute travel movies. Each subject was given motivational interests to listen for during the travel movies. Each time an attraction related to their interest was mentioned in the movie, subjects opened both the Map Application and the Tag Application, added a marker to the map, and selected tags that described the attraction in the travel movie. When finished, subjects closed the Map Application and the Tag Application. Subjects were told that the information from the Map Application and the Tag Application could be used after the travel movies were complete to review attractions in the travel movies.
3.1 Method
During the pilot study, groups used the two annotation applications to annotate different parts of two travel movies. One travel movie was about Munich and the other was about the City of London. Groups watched one of the travel movies with the Movie Player Application in full-screen mode and the other with the Movie Player Application in three-quarter screen mode. We had all groups use both playing modes in order to observe whether users preferred that their application windows be placed around the movie (three-quarter mode) or on top of the movie (full-screen mode). We also wanted to observe whether users placed or sized their application windows differently in the two modes, that is, whether the size of the shared area affects the size or placement of a user's application windows. We counterbalanced both the movie and the Movie Player Application playing mode across all groups.
3.2 Participants
Twelve subjects (10 female) between 20 and 30 years of age participated in the study. The subjects were recruited using the University of British Columbia online reservation system and were compensated $20 for their time. The study took approximately one and a half hours. All subjects were students at the University of British Columbia; eleven were undergraduate students and one was a graduate student. Groups of three subjects participated in the study at a time. In one group, all subjects knew each other beforehand and signed up for the experiment together. In all the other groups, the subjects had not met prior to their participation in the experiment. Groups of three were chosen so that we could observe subjects' behavior with more than two subjects while still ensuring there was enough screen real estate for subjects to open multiple applications simultaneously and that the display would not become so crowded that subjects could not complete their tasks.
3.3 Apparatus and Materials
The prototype consists of a Movie Player application, a Map application, and a Tag application. The system ran on a desktop computer with a Pentium 3.4 GHz processor, 2 GB of RAM and 120 GB of storage. We utilized a 66-inch SMART Board with a resolution of 1024 x 768 pixels as our display for this study. While we did not need the interactive capabilities of the SMART Board, it was chosen as our display because it looks similar to a television and its size was large enough to enable three users to work simultaneously on our prototype. The SMART Board was placed against a wall and three chairs were placed in a semi-circle six feet in front of the display. The chairs had small tables connected to them, similar to school desks; the tables gave users a hard, flat surface on which to control their mice. In order to make the environment more similar to a home, a small coffee table was placed in front of the chairs. Screen capture software was used to record the actions of the prototype, and a video camera was used to capture the interactions of the subjects with each other. To create the system we used the C# programming language with the SDG Toolkit, a framework for quickly designing and developing prototype applications that support multiple input devices (Tse, 2002). The SDG Toolkit allowed us to quickly develop the three applications with up to four active mice and keyboards and four corresponding cursors on the display screen.
The Map Application and the Tag Application were designed to be invoked (opened) by a user when needed, and closed by the user when no longer required (see Figure 3.1). The Movie Player Application was launched by the system and was always present. Each user could open one instance of the Map Application and one instance of the Tag Application at a time. Users could only interact with applications that they had opened; if a user tried to move, resize or interact with another user's applications, nothing would happen.
Figure 3.1. The trip planning prototype in three-quarter screen mode showing an instance of the Map Application in the bottom right opened by the user with the blue cursor and the Map Application in the top left opened by the user with the green cursor. These application windows are overlaid on the bottom right corner and the top left corner of the Movie Player Application.
To open an application, users select one of the two application icons at the bottom centre of the screen. Once selected, the icon's border turns the same color as the user's cursor (see Figure 3.2). After the icon is selected, the user clicks anywhere on the screen to open the application window at that location. From the time a user selects an application icon until that user clicks on the screen to open the application window, no other user is able to click on that icon to open an instance of that application. After the user clicks on the screen, the icon's background turns back to black and any user can then open an instance of the application.
Figure 3.2. The trip planning prototype in three-quarter screen mode showing that once the Map Icon is clicked by a user, the background of the Map Icon turns the same color (green) as the user's cursor (green).
Once an application is open, users can move the application to another part of the screen, or resize it.
We decided not to give users the ability to move or resize the video player because we were interested in how users interact with their own application windows, not shared application windows. By default, an application's size when opened is small (140 x 100 pixels) and almost unusable at this size. Users can resize an application by dragging its bottom-right corner. We forced users to choose where to put the application window each time they opened a new instance of the application in order to see what their preferred location was. If we had instead placed the application automatically and told users to move it to their preference, there could have been instances where users assumed the automatic location was the proper location or forgot to move the window once the application was open. We also forced users to choose the size of the application windows each time they opened a new instance of the application by making the default size of the application window very small. In order to complete their tasks, users had to resize their windows and explicitly choose their size. We used this resizing and placement data to understand users' preferences for the size and location of their own applications on a shared large display. If any two applications overlap on the display, the last application opened appears on top of the other. Surrounding each application was a colored border; the color of the border depended on the owner of the application and matched the color of the owner's cursor (see Figure 3.3).
Figure 3.3. The trip planning prototype in three-quarter screen mode showing the different colored borders around the Map and Tag application windows opened by two different users. The Map Application and the Tag Application on the left-hand side of the display both have blue borders, and the Map Application and the Tag Application on the right-hand side of the display both have green borders. The green Tag Application and the blue Map Application are both overlaid over part of the movie player.
To answer our questions regarding how subjects share the screen real estate of a large shared display when working individually, it was imperative that all subjects be able to interact with the prototype simultaneously. Enabling multiple user interaction meant that subjects could interact with their application windows whenever they preferred, without having to wait for a control device from another subject. Our prototype therefore supported multiple mice and cursors. Although each subject had their own cursor, subjects could only move, resize, close and interact with application windows they had opened. This control policy was clear and simple, and it met our main objective for the study, which was to understand where and how individual subjects utilize shared screen real estate for their own applications. A more detailed description of the trip planning system is provided in the Appendix.
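To make this control policy concrete, the following is a minimal sketch of one way such per-user ownership could be expressed in C#. It is an illustration only, not the prototype's actual source code and not the SDG Toolkit's API; all class and member names here are hypothetical. Each window records the identity and cursor color of the user who opened it, and input from any other user's mouse is simply ignored.

using System.Drawing;

// Illustrative sketch only: hypothetical names, not the SDG Toolkit API
// or the prototype's actual source code.
public class UserCursor
{
    public int UserId;      // one cursor per mouse attached to the system
    public Color Color;     // unique color drawn for this user's cursor
}

public class ApplicationWindow
{
    public int OwnerId;         // the user who opened this window
    public Color BorderColor;   // matches the owner's cursor color
    public Rectangle Bounds;    // position and size on the shared display

    public ApplicationWindow(UserCursor opener, Point location)
    {
        OwnerId = opener.UserId;
        BorderColor = opener.Color;
        // Deliberately small default size (140 x 100) so users must resize.
        Bounds = new Rectangle(location, new Size(140, 100));
    }

    // Only the owner may move, resize, close, or otherwise interact with
    // the window; input from any other user's cursor has no effect.
    public bool HandleInput(UserCursor source)
    {
        if (source.UserId != OwnerId)
            return false;   // silently ignored, as in the prototype
        // ... move / resize / close / annotate logic would go here ...
        return true;
    }
}

Ignoring non-owner input silently, rather than signalling an error, keeps feedback unobtrusive on the shared display and matches the behaviour described above.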
3.4 Procedure
Groups of three subjects participated in the study for a single session that lasted approximately 90 minutes. The overall procedure was as follows.
1. A questionnaire was administered to all subjects in a group to gather demographic information.
2. A brief training task was performed.
3. For each of two conditions, subjects watched a television program, performed their individual tasks during the program and completed a quiz afterward to test retention of information presented in the television program.
4. A second questionnaire was completed to collect information regarding preferences for the size and location of application windows, level of distraction and clarity of control affordances.
During the body of the study, a group of subjects watched a recorded television program about going on a vacation in a particular city. This program was displayed on the large shared display. The movies shown to the subjects were television shows about touring different cities in Europe, each running about 25 minutes. One movie was about traveling to London (Steves, 2001) and the other was about traveling to Munich and the Swiss Alps (Steves, 2003). The shows were hosted by the same travel guide, Rick Steves, and shared a similar format. Each subject was given different motivational interests about the city being presented in the television program and was asked to listen for anything that was mentioned about their interest. If something in the movie was mentioned regarding their interest, subjects had to open an instance of both the Map Application and the Tag Application to mark the appropriate area on a map and select appropriate tags from a list of predetermined tags. Each subject's motivational interests were mentioned four or five times in the movies (see the Appendix for the list of interests for the different movies). Each subject's motivational interests were given to them on an index card. For example, if a subject was given the motivational interest of 'hotels', every time a hotel was mentioned in the movie they would open the Map Application, find the hotel on the map and mark it. They would also have to open the Tag Application and select tags describing the hotel's location, cost, area of town, and so on. The tags chosen were associated with the movie using a timestamp corresponding to when the tagging application was opened and closed. If subjects were unable to find the location on the map, they were told not to mark anything on the map but instead to select just descriptive tags in the Tag Application. Subjects were told to complete the tasks as best they could while still following the content of the television program in order to answer questions from a quiz given at the end of the program. Although we told subjects to do their best on their task, we were not interested in their performance; we were interested in where subjects placed their application windows, what size they made them, and how their behavior changed based on what others were doing.
Before beginning the main task of the study, each group participated in a training task. This was analogous to the main task described above, except that the TV program only played for 6 minutes rather than the full 25 minutes. The program selected was about Berlin and was hosted by Rick Steves (Steves, 2003). Each subject's motivational interest was mentioned once in the training video. If a subject missed the cue in the program, the experimenter would indicate that it was now time to execute the mapping and tagging tasks. If a subject was having difficulty finishing the tasks, the experimenter would help to ensure they understood what they were supposed to be doing in the main task.
One of the issues we wanted to explore was whether subjects preferred objects overlaid on the video, or whether they preferred that there be room around the video to place their objects.
In our application, the television program could play on the full screen, in which case all of the application windows and icons had to be overlaid on the video, or it could play in the three-quarter mode, which took up about 75% of the screen, so that application windows could either be placed around the video or overlaid on it. The three-quarter mode allowed users to work on the bottom and sides of the screen without having to overlay their windows on top of the movie. Subjects watched one program in one of the two movie screen conditions and watched the other television show in the other condition. Conditions were counter-balanced for order and television show, even though we did not plan to perform statistical analysis on the preference data collected.
3.5 Measures
We measured the position and size of each of the application windows by recording the screen during the study and reviewing the screen capture videos. Additionally, log files were maintained in order to determine whether subjects attempted to control other subjects' windows. The log files captured a timestamp, the subject's ID, mouse clicks, and the object a subject was selecting. A questionnaire was used to gather information about our dependent variable, preference for movie condition. The questionnaire was also used to gather information about a subject's strategy for placing and sizing objects, automatic layout policies and clarity of control affordances. Subjects were first asked to indicate on a 5-point Likert scale how disturbing or distracting sharing the display was, how clear it was which objects they could control, and which applications belonged to them. Subjects were then asked to answer questions regarding why they chose to make applications the size they did, and why they chose the placements they did for their applications. The questionnaire is included in the Appendix.
3.6 Results
The results obtained in our study are presented in relation to our main categories of interest: territoriality, distraction and control affordances.
3.6.1 Territoriality
The results for this section were gathered by reviewing the screen capture video from each of the user study sessions and data from the questionnaire. Our findings are reported in the context of four questions: where users place their application windows on a shared display, whether users maintain boundaries for their application windows, how users react to sharing a single large shared display, and what size users prefer for their application windows. We coded the placement of individual objects by splitting the screen into 6 areas, 3 columns and 2 rows. We used 3 columns to be consistent with the seating configuration and 2 rows in order to break up the top and bottom of the screen.
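As a concrete illustration of this coding step, the sketch below classifies a window's centre point into one of the six regions and shows the kind of fields captured in the log files described in Section 3.5. This is hypothetical code for illustration only; the actual coding was done by hand from the screen-capture video, and the type and field names here are not taken from our logging software.

using System;
using System.Drawing;

// Hypothetical illustration of the 3 x 2 region coding; the real coding was
// performed manually from the screen-capture video.
public enum ScreenRegion
{
    TopLeft, TopMiddle, TopRight,
    BottomLeft, BottomMiddle, BottomRight
}

public static class PlacementCoding
{
    const int ScreenWidth = 1024;   // display resolution used in the study
    const int ScreenHeight = 768;

    // Map the centre of a window to one of 3 columns and 2 rows.
    public static ScreenRegion Classify(Rectangle windowBounds)
    {
        int centreX = windowBounds.X + windowBounds.Width / 2;
        int centreY = windowBounds.Y + windowBounds.Height / 2;
        int column = Math.Min(centreX / (ScreenWidth / 3), 2);   // 0, 1 or 2
        int row = Math.Min(centreY / (ScreenHeight / 2), 1);     // 0 = top, 1 = bottom
        return (ScreenRegion)(row * 3 + column);
    }
}

// One record per logged event: timestamp, subject ID, the action taken,
// and the object the subject selected, as described in Section 3.5.
public struct LogEntry
{
    public DateTime Timestamp;
    public int SubjectId;
    public string Action;        // e.g. "open", "move", "resize", "close"
    public string TargetObject;  // the application window or icon selected
}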
1. Where do users place individual objects on the shared display?
We saw that subjects utilized the four corner regions of the screen for most of their application windows, as shown in Figure 3.4.
Figure 3.4. A diagram of the large display, highlighting the four corner regions used by most users. These regions include the top-right, top-left, bottom-right, and bottom-left regions.
There was a strong trend for subjects to place their application windows relative to where they were sitting within the group. Subjects who sat on the right tended to put their application windows on the right-hand side of the large shared display, and subjects on the left tended to put their application windows on the left-hand side of the large shared display. Subjects in the middle used both sides of the large shared display, depending on what screen real estate was available when they opened their application windows. If two subjects were utilizing the same side of the screen, one subject used the top and the other subject used the bottom portion of the screen. The subjects within a group never discussed where their applications should be placed. Table 3.1 shows where subjects generally placed their application windows on the large shared display for both the full screen mode and the three-quarter screen mode of the Movie Player Application. There were 4 instances in both the full screen mode and the three-quarter screen mode where the middle subject placed their application windows in the same region as a subject sitting on the left side or the right side. These did not cause conflicts because these subjects' interests were not mentioned during the same part of the travel movie.
Group | Full Screen Mode (Sitting Left, Sitting Middle, Sitting Right) | Three-Quarter Screen Mode (Sitting Left, Sitting Middle, Sitting Right)
1 | Bottom Left, Bottom Right, Top Right | Bottom Left, Bottom Right, Top Right
2 | Bottom Left, Bottom Left, Bottom Right | Bottom Left, Bottom Left & Top Right, Bottom Right
3 | Bottom Left, Top Left & Bottom Left, Bottom Right | Bottom Left, Bottom Left, Bottom Right
4 | Top Left, Bottom Right, Bottom Right | Top Left, Bottom Right, Bottom Right
Table 3.1. Overview table showing the four groups who participated in the study and the general regions where subjects placed their application windows on the large shared display, relative to where they were sitting, in both the full screen mode and the three-quarter screen mode for the movie player.
Ten of 12 subjects utilized the same region on the large display in both the full screen mode and the three-quarter screen mode. Two subjects sitting in the middle changed their regions after a subject on the left side or the right side used this region. All subjects sitting on the left side utilized the left regions for their application windows, and all subjects sitting on the right side utilized the right regions for their application windows.
2. Do users maintain strict boundaries or do they mix their objects?
From the coding of the screen capture video described above, we observed that subjects utilized the same region of the screen for all of their applications. If more than one application window was open, subjects grouped these together in the same region.
3. How do groups react to the notion of shared screen real estate?
In group 2, the subject sitting on the left side and the subject sitting in the middle both utilized the bottom-left side of the screen in the first condition (full screen mode). This did not produce any conflicts because their interests were not mentioned at the same time during the travel movie. During the second condition (three-quarter mode), a conflict was noted. The middle subject had his application open in the bottom left region and the left subject needed to open her applications to complete her tasks. The left subject opened her applications in the middle, but as soon as the middle subject had completed his tasks and closed the application, the left subject moved her applications back to the left side where she was sitting. After this, the middle user moved his application to the top-right side rather than trying to share the space with the other user. There was no discussion amongst the subjects about where to place these application windows.
From the questionnaire, 9 of 12 subjects stated they moved their application windows to a particular area of the screen so they would not disturb or distract the other users. Eleven of 12 subjects felt they should use a particular part of the screen: four subjects stated this was because of where they were sitting and four stated it was because they did not want to disturb others. The one subject who did not feel he should use a particular part of the screen stated he felt this way because his spot depended on where others placed their applications. This subject sat in the middle.
4. What size do users prefer for their personal objects and other objects?
The size that subjects chose for their different applications varied a great deal between the different groups. While the actual size of the applications varied, 7 of 12 users stated that they chose the size of the application to be as big as possible while not disturbing others or covering up other areas of the screen. In 3 of the groups, application windows never overlapped another subject's windows for more than a brief moment. In the one group where all subjects knew each other before the study, the sizes of the application windows were larger than in the other groups. At one point, each subject had a single application open and each application took up one-quarter of the screen, meaning that all but one-quarter of the movie was covered. If a subject's application window overlapped another subject's application window, the last application window opened would be on top of the other. Only one subject in this group stated they chose the size of the application window to minimize disruption to others.
On a Likert scale of 1 to 5 (1 = strongly disagree and 5 = strongly agree), subjects on average disagreed with the statement, "There was enough room on the screen to perform my tasks" (M = 2.2, SD = 1.26) (see Figure 3.5 for the distribution of responses).
Figure 3.5. Bar chart of users' responses (N=12) in the questionnaire for the statement "There was enough room on the screen to perform my tasks". Most users strongly disagreed or disagreed (1 and 2) with this statement.
Although they did not agree that there was enough screen real estate for their tasks, subjects did agree with the statement, "In my home, I would like to be able to do more than one task on my TV" (M = 4.0, SD = 1.12) (see Figure 3.6 for the distribution of responses).
Figure 3.6. Bar chart of users' responses (N=12) in the questionnaire for the question "In my home, I would like to be able to do more than one task on my TV." Most users strongly agreed or agreed (5 and 4) with this statement.
Eight subjects commented on the questionnaire that they preferred this option because it did not block the movie or this option did not disturb others. 9 8 7 M D 6 *-£ i 3 1 r 2 ., 1 • Q I | t I Hires-Quarter Mode Full-Screen Mode Preferred Movie Player Screen Mode Figure 3. 7. Bar Chart displaying results from questionnaire for preference for the two movie screen mode, three-quarter screen mode or full-screen mode. Most users preferred the three-quarter screen mode. 57 2. How distracting is it to users to share screen real estate? Four questions are used to answer how distracting sharing a large display is with other users. The mean score and standard deviation for these four questions are summarized in Table 3.2. Questions about Distraction from Questionnaire Mean Score Standard Deviation When others were performing their tasks, it was disturbing to me. 3.3 .98 When performing tasks, I felt I was disturbing others. 3.3 1.37 It was difficult to follow the movie when others were performing their tasks. 3.3 1.07 It was difficult to follow the movie when I was performing my tasks. 4.1 1.31 Table 3.2. Summary of disruption questions showing mean scores and standard deviation for each question about disruption from Questionnaire data (N—12) On a Likert scale of 1 to 5 (1= strongly disagree and 5 = strong agree) subjects on average felt neutral about the statement, "When others were performing their tasks, it was disturbing to me" (M= 3.3, SD=.98), but we observed all possible answers more or less equally (see Figure 3.15 for distribution of responses). 58 7 -6 -tn z I | 1 1 1 o -I—I ' 1—I 1 1—I 1 1—I 1 1—I 1 1 2 3 4 5 Sco res o n L ike it Seal e (1=stronqly disaqree, 5=stronqfY aqree) Figure 3.8. Bar Chart of user's responses (N=12) in the questionnaire for the statement "When others were performing their tasks, it was disturbing to me. " The scores for this statement were distributed across most scores. Subjects felt neutral about the statement, "When performing tasks, I felt I was disturbing others" (M= 3.3, SD= 1.37) and "It was difficult to follow the movie when others were performing their tasks" (M= 3.3, SD= 1.07). In contrast, subjects agreed with the statement, "It was difficult to follow the movie when / was performing my tasks" (M=4.1, SD= 1.31). 3.6.3 Control Affordances The results from this section are derived from data taken from the questionnaires and log files. Our findings are reported in the context of 2 questions about control: how do users know what they can control, and is it useful i f users can only control their own objects. 1. How do users know what objects they can and cannot control? On a Likert scale of 1 to 5 (1= strongly disagree and 5 = strong agree) subjects on average agreed that "It was clear what applications belonged to them" (M= 4.4, SD= .65), "It was clear what applications I could control" (M= 4.2, SD= .71), "It was clear what applications belonged to other users" (M= 4.3, SD= .65), and "It was clear what applications I could not control" (M= 4.3, SD= .62). The mean score and standard deviation are for the above questions are summarized in Table 3.3. Questions about Control from Questionnaire Mean Score Standard Deviation It was clear what applications belonged to me 4.4 .65 It was clear what applications I could control 4.2 .71 It was clear what applications belonged to other users 4.3 .65 It was clear what applications I could not control 4.3 .62 Table 3.3. 
Table 3.3. Summary of control questions showing the mean score and standard deviation for each question about control from the questionnaire data (N=12).
2. Is a control policy where users can only control their own objects useful?
The log files revealed that all subjects interacted only with application windows that they owned and did not try to interact with other subjects' application windows.
3.7 Discussion
The results discussed above are used to answer each of the research questions presented at the beginning of the chapter. We interpret these results to decide whether multiple users can use a shared display for loosely coupled tasks. The territoriality results indicate that users group their objects together on the large shared display relative to their seating position, particularly users sitting on the left or right side. More work is needed, for different scenarios, to determine the preferred placement for users sitting in the middle. The distraction results indicate that users prefer their objects not to be overlaid over other users' objects or shared spaces; generally, other users' objects on the large shared display do not distract users. The control results reveal that color is a useful way to indicate ownership of objects on a large shared display. More details are discussed below.
3.7.1 Territoriality
1. Where do users place individual objects on the shared display?
We observed a strong trend for subjects to place their application windows in a position relative to where they sat, especially for subjects sitting on the left-hand side and subjects sitting on the right-hand side. Subjects sitting in the middle used both the left side and the right side of the large display; this seemed to depend on where the subjects sitting on the left side and the right side placed their application windows on the large shared display. In 2 groups, the middle subject utilized the same region as another subject, but this did not result in any conflicts because the middle subject's and the other subject's interests were not mentioned at the same time in the travel movie. Most subjects placed their objects where they did because they felt they should use a particular area of the display, but no discussion between the subjects ever took place. Two subjects in 2 separate groups, both sitting in the middle, each utilized two different regions of the large shared display over the course of the two travel movies. This seemed to be the result of the right or left subject taking the region on the large shared display that the middle subject had used previously. In both these cases, the middle subject adopted a different region of the large shared display for their application windows. This indicates that subjects can adapt to other areas of the large shared display.
The design of the trip-planning prototype, including the placement of the application icons and the placement of the movie player, could have been one of the reasons subjects sitting in the middle used varying regions of the display. If subjects had placed their application windows in the bottom middle of the display, they would have covered the icons required for other subjects to open a new instance of an application. This meant there was no clear region on the display where the middle subject could have placed their application windows directly in front of where they were seated without covering the movie player.
The placement of the movie player in the three-quarter mode could have been another reason for subjects on the left and right to utilize the left and right sides of the large shared display. Subjects tried to choose their region on the screen to minimize disruption to others and to avoid blocking the movie, so subjects had a good reason to use the area around the movie player. Although we saw the same regions being used in the full-screen mode (these modes were counter-balanced), there was still a tendency to place application windows along the perimeter of the movie. Typically, most of the action in a movie occurs in the middle of the screen and not at the perimeter. If a subject placed their application window along the perimeter in the full-screen mode, they were still minimizing the disruption to the movie. In a controlled study, the size and location of both the movie player and the application icons could be varied to investigate the relationship between where users place their application windows and where they are sitting when the shared areas are located in different places. Even if the design of the trip-planning prototype did have an impact on where subjects placed their application windows, subjects still felt they should use a particular part of the screen and in almost all cases used this region throughout the experiment. These results suggest three important points about where users want to place their objects on a shared display.

1. There is a direct relation between where people are sitting and where they place their objects on the screen, especially for people sitting on the left and right sides of the display.
2. People feel strongly that a particular part of the screen belongs to them.
3. People choose to place their objects in a certain place on the display to minimize the disturbance of others in the group.

2. Do users maintain strict boundaries or do they mix their objects? We observed that subjects in the middle would appropriate either side of the screen for their applications. In one instance, a middle user changed the area of the screen he was using to leave the left-hand side of the large shared display for the subject on the left. This might indicate that subjects sitting on either side have a stronger connection to their space on the screen, whereas the middle subject might be more flexible or unsure of the proper space to move their applications to. This might be particular to the design of our system, as mentioned above. It seems that once users have appropriated an area of the screen, they prefer to always use that area and not to mix their areas with those of other users. Interestingly, while subjects seemed to have a strong sense of using a particular area of the screen, there was no discussion amongst any of the subjects regarding this during the sessions.

3. How do groups react to the notion of shared screen real estate? While subjects did not find there was enough room on the shared screen for all of the tasks, they said that they would still like to perform more than one task on their television sets at home. This suggests that, given a large enough display, subjects would be willing to use their displays in the home to do multiple things.

4. What size do users prefer their personal objects and other objects to be?
The fact that the group who knew each other acted differently from the other groups might indicate that people who know each other are more comfortable using more area and potentially distracting others. This could have an impact when designing applications for the home because many users will have close relationships. There appears to be a tension between subjects wanting to make their applications large enough to use while minimizing the disruption to others. In our scenario, subjects were forced to make this choice, but having the size of an object chosen automatically might ease some of this tension.

3.7.2 Distraction

1. How distracting is it to users to have objects overlaid over other objects? Most users preferred that objects not be overlaid on top of the video because it was less distracting. This is also reflected in the places subjects chose to put their Map Applications and Tag Applications. As noted before, these applications were usually placed in the corners of the screen, and in the three-quarter video screen mode there were very few times that subjects expanded their windows to cover parts of the video.

2. How distracting is it to users to share screen real estate? Subjects chose the size and placement of their applications so as not to distract others. These choices could be the reason subjects were not disturbed by others in the group. Although subjects were not disrupted by others' applications, they were disrupted by their own multi-tasking, suggesting that it is only disruptive when switching is required, and not when others are using parts of the screen to complete their personal tasks.

3.7.3 Control Affordances

1. How do users know what objects they can and cannot control? Colored cursors and matching borders and backgrounds were the main control affordances. It is clear from subjects' responses that these were a good indication of ownership.

2. Is a control policy where users can only control their own objects useful? On average, subjects agreed they knew what they could and could not control, and which objects belonged to themselves and to others. In addition, the log files demonstrate that subjects did not try to control other subjects' application windows. Because no subjects were confused about what they could control, the control policy can be considered simple and useable.

3.7.4 Using the Large Shared Display for Loosely Coupled Tasks

We believe that our results validate that subjects can share a large display for loosely coupled tasks. It was evident that subjects could adapt their behaviours to utilize screen real estate without conversing with each other. This indicates that sharing the screen was a natural task for subjects; it resulted in few conflicts. A second observation is that subjects were not disturbed by others working on the shared display, indicating that subjects can work on a different task than others without becoming confused. With simple affordances and control policies, subjects are able to clearly understand which objects belong to them and which objects they can control. These results lead us to believe that subjects are able and willing to use a large shared display for loosely coupled tasks.

3.8 Design Guidelines

The goal of this study was to observe how multiple users share screen real estate when working independently on a single display. Using the results discussed above, we created a set of design guidelines for our main categories of interest: territoriality, distraction and control affordances.
The following is a list of design guidelines that we utilize in our prototypes for multiple users sharing screen real estate on a single display.

3.8.1 Territoriality
1. Application windows should be placed, as much as possible, relative to a user's spatial location.
2. When placing users' objects, ensure all of a user's objects are grouped together and that there is sufficient space between different users' objects.
3. The use of icons reduces the amount of usable screen space and should be eliminated if possible. Pop-up menus are another option; these appear only when a user is trying to open an application and can appear in any space on the display the individual user chooses.
4. Windows should only be as large as needed to complete a task successfully. To determine what this size should be, log files could be used to determine the average size of the application windows.

3.8.2 Distraction
1. If a group is sharing a large display, the display should be large enough to allow users to complete their tasks without having to overlay their applications over shared objects on the display, such as a movie player.

3.8.3 Control
1. Color can be used to clearly indicate ownership of objects in the following ways:
a. Distinct borders surrounding applications
b. Icons and cursors that maintain the same color distinctions as the borders and application objects
2. Limiting users to control of their own objects is an effective policy when content does not need to be shared.

From these guidelines we gain an understanding of where personal and group areas should be placed on a shared display, the impact of the size of personal areas on the shared display, and what control affordances should be used to indicate individual and group ownership of different objects on a shared display.

3.9 Conclusions and Future Work

We completed a pilot study to understand how groups of three users share the screen real estate of a single shared large display for parallel tasks. This work builds on previous work showing that, with more than two users, users still create personal territories related to their seating position, and that this remains true when users have more than one application window open at a time (Tse, 2004). In particular, we wanted to find out more about how multiple users share territory on the large shared display, how distracting it is to have other users working in parallel on the large shared display, and what control affordances can be used to indicate ownership of different applications on the large shared display. Our observations indicate that users are able to share a large display for parallel tasks. All users grouped their own applications together in the same general region of the large shared display. Users on the left and right sides always placed their applications relative to where they were sitting. While users try to place their objects in a way that minimizes distraction to others, users generally felt neutral about whether other users' objects were disturbing them. Color was a clear and useable way to indicate ownership and control of a user's applications. Further investigation is needed to understand how the design of the large shared display can affect the regions in which users place their applications.
While this is an interesting topic, our goal in this pilot study was to gain an understanding of how users utilize a large shared display for loosely coupled collaborative tasks. In addition, we created a set of design guidelines for how multiple users can share screen real estate when working independently on a single display. The guidelines from our study are used to design the prototype for our main study (discussed in the next chapter), which looks at the differences between working on a small personal display and working in a personal area on a large shared display, and at how multiple users work both collaboratively and individually in such an environment.

Chapter 4
Large and Small Display User Study

The goal of this thesis was to investigate the impact of a small personal display combined with a large shared display for different levels of coupling during collocated collaborative tasks. In our pilot study (discussed in the previous chapter), we observed that multiple people are able to work on loosely coupled tasks on a large shared display. While we did observe that users are able to work in personal areas on a shared display, there might be times when a user would prefer to work on a small personal display. In this chapter, we describe our main study, in which we want to understand what happens when we add a personal display for not only loosely coupled tasks but also tightly coupled collaborative tasks. This might include times when users are dealing with private information, when users are embarrassed about working in front of others, or when users want to minimize the distraction to others (Dourish & Belotti, 1992). There are challenges when working on a small personal display, including the small size of the display, the loss of awareness of the rest of the group, and the need for users to split attention across the small personal display and the large shared display. Given both the benefits and challenges of working on a small personal display, it is important to understand the impact of working on a small personal display in combination with the large shared display.

To meet the goal of this study, four design questions must be answered about the impact of adding a small personal display alongside a large shared display: (1) which parts of a task should be distributed over each of the displays, (2) which display users prefer to work on for certain tasks, (3) how effective it is for users to move between these displays, and (4) how to manage shared control with multiple displays and interaction devices. The results from this study will be used in the design of future applications that use a small personal display in combination with a large shared display for applications in the home.

In the scenario used in this study, people create content using their mobile phones, select content to share with others, and collaboratively create a new presentation with the shared content. This scenario was chosen because it uses a small personal display (a mobile phone) alongside a large shared display and can answer the four design questions listed above. We wanted a scenario that is ecologically valid and represents how people will use small personal displays, such as mobile phones, and large shared displays in the home.
If people are able to create content, such as photos, on their mobile phones, there will be a need for people to move this content from their phones to a large shared display in order to share it with family and friends. People will want to share each other's content and create a collection of the family members' different experiences. In our study, users create content by downloading screen shots onto their mobile phones from a movie playing on the large display. Users then select their favorite screen shots to share with the group. Finally, all of the users' screen shots are combined and the group works together to build an outline of the movie they watched.

In order to realize our scenario, we created a system that uses multiple mobile phones and a large shared display. The system uses up to 3 mobile phone client applications that connect to and communicate with a server application driving the large shared display. The client application was developed using the programming language J2ME and was deployed on Nokia N80 mobile phones (see Figure 4.1).

Figure 4.1. Nokia N80 mobile phone used for the prototype in our study. The personal display on this phone has 352 x 416 pixels and can display up to 262,144 colors.

The server application drives the large shared display and listens for commands from the mobile phone clients. The server application was developed using the Java programming language and ran on a desktop computer. A high-resolution (1380 x 768 pixels) projector was used to display the server application. Below, the methodology of our study is discussed, including a detailed description of the tasks, procedure, participants, measures and design. The results and discussion follow, and the chapter finishes with our conclusions and future work.

This experiment used a mixed 2 x 3 (display in the Selecting Screen Shot Task x control policy in the Building the Outline Task) design. The display in the Selecting Screen Shot Task, large shared display or small personal display, was a within factor, and the outline control policy (1 cursor, 1 cursor plus personal display, individual cursors) was a between factor.

4.1 Method

A within design was used for selection in order to have higher power for the selection task and to have comparative results for subjects using both the large and small displays. The between design was used for control policy to eliminate carry-over and order effects of control policy. In the study, we used a mobile phone display as our personal display, and subjects sat in front of a large rear-projected display. Subjects created content by grabbing screen shots from a video playing on the large display. Afterward, subjects selected their favorite screen shots to share with the group and finally used these shared screen shots to collaboratively build a representative outline of the video they watched. In order to answer the questions for this study discussed above, our tasks needed to meet four requirements: (1) a task with aspects of collaborative and individual work, (2) a task where subjects could compare working on both a personal area on a large display and a small personal display, (3) a task which required subjects to switch attention between the large and small displays, and (4) a collaborative task where subjects had to share control of the large display. The Creating Screen Shot task and the Selecting Screen Shot task are completed individually.
The Building the Outline task is completed collaboratively, with subjects sharing content selected from the previous tasks. Our tasks thus fulfill our first task requirement, a task with aspects of collaborative and individual work. Below, we briefly discuss each of the tasks and the reasons these tasks were chosen.

4.1.2 Creating Screen Shot Task

In the Creating Screen Shots task, a video plays on the full screen of the large display (see Figure 4.2).

Figure 4.2. The large shared display in the Creating Screen Shot Task showing the video playing using the full screen of the display. Due to copyright reasons, the image has been removed. The original source is 'Meet the Robinsons' (Borden, 2007).

To capture a screen shot of the current scene in the video, subjects press a button on the mobile phone. The screen shot appears on the mobile phone display about one second after the subject presses the button (see Figure 4.3).

Figure 4.3. Nokia N80 mobile phone showing a screen shot captured from the video playing on the large shared display. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007).

We chose the Creating Screen Shot task as a method to create content for two reasons. One reason is that this task requires subjects to constantly switch their attention between the large shared display and the small personal display and therefore fulfills our third task requirement, a task that requires subjects to switch attention between the large and small displays. The second reason for creating content by capturing screen shots is that subjects can create a large number of screen shots in a short amount of time. These screen shots are meaningful content to themselves and to the other subjects in the group. Instead of having subjects travel to an external location and take photos using the mobile phone, the Creating Screen Shots Task enables subjects to create photos in the study room quickly.

4.1.3 Selecting Screen Shots Task

The Selecting Screen Shots Task occurred following the completion of the video. Subjects review the screen shots they created in the Creating Screen Shot Task and select the best screen shots to share with the group. This task was completed either on the mobile phone (see Figure 4.4) or in a personal area on the shared large display (see Figure 4.5 and Figure 4.6).

Figure 4.4. Nokia N80 mobile phone display showing the interface for selecting screen shots. There are three thumbnails of the screen shots displayed underneath the currently viewed screen shot, which is displayed in the center of the screen. Arrows on the bottom left and bottom right of the display indicate that there are more screen shots to view. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007).

Figure 4.5. The large shared display showing three different personal areas for three different subjects. Each area has a different colored cursor: red in the left area, blue in the middle area and green in the right area. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007).

Figure 4.6. A personal area on the large shared display for the subject with the red cursor. There are four thumbnails of the screen shots displayed underneath the currently viewed screen shot, which is displayed in the center of the screen.
Arrows on the bottom left and bottom right of the display indicate that there are more screen shots to view. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007).

By having subjects complete this task on both displays, we were able to have subjects compare working on the large shared display and on the small personal display, fulfilling our second task requirement, a task where subjects could compare working on both a personal area on a large display and a small personal display. One of the goals when designing the interface for both the large shared display and the personal display was to keep the interfaces as similar as possible so that the comparison of these two displays would be valid. We kept the interactions and interfaces similar, except that 3 thumbnails were displayed on the personal display and 4 thumbnails were displayed in the personal area on the large shared display. This was due to the size of the personal display; we found the thumbnails became too small if we included 4.

4.1.4 Building the Outline Task

Once subjects select their personal screen shots to share with the group, they collaboratively build an outline of the movie trailer they viewed using up to ten shared screen shots. At this point, subjects can only select their previously shared screen shots from the Selecting Screen Shot Task to build the outline and cannot go back to select more screen shots from their personal collection. In the Building the Outline Task, we investigate three different control policies for controlling the large display. In control policy A, all subjects control one shared cursor on the large display for viewing and selecting screen shots to be added to the outline (see Figure 4.7).

Figure 4.7. The large shared display in the Building Outline Task using control policy A, 1 shared cursor, and control policy B, 1 shared cursor plus personal display. The shared cursor is purple. There are sixteen thumbnails of the screen shots displayed underneath the currently viewed screen shot, which is displayed in the center of the screen. Arrows on the bottom left and bottom right of the display indicate that there are more screen shots to view. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007).

In control policy B, all subjects control one shared cursor on the large display as well as the display on their mobile phone for viewing and selecting screen shots to be added to the outline (see Figure 4.7 and Figure 4.8).

Figure 4.8. Nokia N80 mobile phone display showing the personal display in the Building Outline Task using control policy B, 1 shared cursor plus personal display. There are three thumbnails of the screen shots displayed underneath the currently viewed screen shot, which is displayed in the center of the screen. The blue cursor indicates the currently viewed screen shot. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007).

In control policy C, each subject controls their own cursor within a personal area on the large display for viewing and selecting screen shots to be added to the outline (see Figure 4.9).

Figure 4.9.
The large shared display in the Building Outline Task using control policy C, 3 individual cursors. Each area has a different colored cursor: red in the left area, blue in the middle area and green in the right area. Arrows on the bottom left and bottom right of the personal areas show that there are more screen shots to view. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007).

We utilized the Building the Outline Task to investigate how adding a small display affects the way subjects collaborate, meeting our fourth task requirement, a collaborative task where subjects had to share control of the large display. Table 4.1 is an overview of each of the control policies in the Building the Outline Task.

Control Policy / Description
A: 1 shared cursor
B: 1 shared cursor plus personal display
C: 3 individual cursors
Table 4.1. Overview of the 3 different control policies (A, B and C) used in the Building Outline Task with brief descriptions.

We created a scenario with different tasks that meet our requirements. Table 4.2 reviews our requirements and the tasks that meet each of them.

Requirement / Task
1. A task with aspects of collaborative and individual work: the Creating Screen Shot Task and Selecting Screen Shot Task are completed individually; the Building the Outline Task is completed collaboratively.
2. A task where users could compare working on both a personal area on a large display and a small personal display: Selecting Screen Shot Task.
3. A task which required users to switch attention between the large and small display: Creating Screen Shot Task.
4. A collaborative task where users had to share control of the large display: Building the Outline Task.
Table 4.2. Overview of our task requirements and the task that meets each requirement.

While our tasks met all of the requirements described above, we needed to develop a story to motivate the group of subjects. Subjects were told to imagine they were reviewers of upcoming movie trailers and had to give a review of two different movie trailers they were about to watch. Together, the subjects needed to use everybody's screen shots to create an outline that best represented different aspects of the movie trailer. To build the outline, subjects were told they should choose screen shots that showed important characters, interesting special effects and different parts of the plot. Movie trailers were chosen as the video because there is a coherent story that subjects could build an outline around, and there is a lot happening in a movie trailer that subjects have to follow while trying to create screen shots at the same time. The movie trailers used were 'Arthur and the Invisible' (Aubougue et al., 2006) and 'Meet the Robinsons' (Borden, 2007). Both are trailers for family movies, about two and a half minutes long, in high-definition quality.

4.2 Participants

Eighteen subjects (4 female) between 20 and 30 years of age participated in the study. The subjects were recruited from the University of British Columbia online reservation system and were compensated $10 for their time. The study took approximately one hour. Groups of three subjects participated during a single session. We asked one subject to sign up and bring two of their friends to guarantee that subjects knew each other ahead of time. This would ensure subjects felt more comfortable and might be more willing to discuss and collaborate with their friends rather than strangers.
Groups of three subjects were used to understand how more than two subjects interact and collaborate while still minimizing the clutter and the number of personal areas on the large shared display. Groups of three also allow us to validate our design guidelines from the pilot study, since groups of three were used in that study as well.

4.3 Apparatus and Materials

The mobile phone client application communicates with the server through a wireless 802.11 connection using standard TCP/IP. The client application was developed using the programming language J2ME and was deployed on Nokia N80 mobile phones. The screen of the Nokia N80 has a resolution of 352 x 416 pixels and can display up to 262,144 colors (see Figure 4.1). The server application was developed using the Java programming language and ran on a desktop computer. A high-resolution (1380 x 768 pixels) projector was used to display the server application. The study was conducted in a Group Usability Lab at the University of British Columbia. Subjects all sat together on a large sofa about 2.5 metres in front of a rear-projected screen. The image on the screen measured 2.0 metres on the diagonal. In front of the couch there was a small coffee table; subjects could place their phones or other objects on the coffee table if they were not being used.

4.3.1 Design

In order to design our prototype, we created a low-fidelity prototype demonstrating several design choices for applications which use a small personal display in combination with a large shared display. Because there is little understanding of how to design applications which use a large shared display in combination with a small personal display, we wanted to gather user feedback on these design choices. Ten users walked through the prototypes in a one-hour-long session. Users walked through a simulated task where they watched a program on the television and were told they were interested in grabbing screen shots from this program. A PowerPoint presentation was used on the large shared display to present different design choices for the large shared display. A photo application on the mobile phone was used to present different design choices for the mobile phone. Users sat on a couch with a person walking them through the scenario. In front of the user was a 66-inch SMART Board. Users held the Nokia N80 mobile phone in their hands. The goal of this session was to gain insight into how to design our application for this study, which uses a small personal display in combination with a large shared display. We were interested in how to lay out photos captured from a television program on the large shared display and the small personal display, and in whether users preferred to select a location and size for their photos or to have an automatic layout and sizing policy. We found that most users preferred the filmstrip view of photos displayed on both the large shared display and the small personal display (see Figures 4.6 and 4.8). Most users agreed that it would be helpful if both displays had similar interfaces so it would be easier to move between them. All users agreed that when a photo is grabbed from the television program it does not have to appear on the large display if it is displayed on the mobile phone. Most users wanted some manual control over the placement of objects on the screen, but liked the idea of automatic sizing of objects on the large shared display and the small personal display.
While in our prototype we did not allow users to place their objects on the displays, this might be a useful guideline to explore at a later date. We used the information and ideas collected from these 10 sessions to design our prototype for this study.

4.3.2 Functionality

In our prototype, multiple users watch a video, grab screen shots from the video onto their mobile phones, select a subset of these screen shots to share, either using the personal display on their mobile phone or the large display, and use their favorite shared photos to create an outline of the video they just watched. Each of the mobile phones has a client application responsible for sending commands to the large display and displaying screen shots on the phone's display. The large display is controlled by a server application that listens for commands from the client applications. The only way users can interact with the server is through the mobile phone. We discuss the functionality of the prototype in further detail in Appendix D.

4.4 Procedure

The overall procedure is as follows.
1. A questionnaire was used to gather demographic information.
2. A brief training task was performed.
3. Subjects watched a movie trailer and captured screen shots onto their phones.
4. A questionnaire was completed to collect information regarding preferences for display.
5. Subjects then selected screen shots to share with the group using either the large or the small display.
6. A questionnaire was completed to collect information regarding preferences and level of distraction.
7. Subjects completed the outline task.
8. A questionnaire was completed to collect information about ease of the control policy and levels of frustration.
Steps (3) to (8) were repeated for the other condition of the Selecting Screen Shot Task.

To ensure subjects understood how to interact with the display and mobile phone, subjects participated in a brief training session. In the training session, subjects watched a one-minute trailer, 'Dinosaurs: Giants of Patagonia' (Samson, 2007), and created screen shots from this trailer. Subjects then selected screen shots to share with the group and built an outline using five of their favorite screen shots. Before each of the phases, subjects were given instructions on how to complete their tasks and could ask as many questions as they wanted.

4.5 Measures

Our main dependent variable was the selection time of personal screen shots in the Selecting Screen Shot Task. We also measured preference for display in the Selecting Screen Shot Task and preference for control policy in the Building the Outline Task. Log files captured all interactions subjects made, including when subjects captured a photo, all selecting interactions and all interactions made when building the outline. We captured the time of each interaction and a copy of all screen shots subjects created during the capture phase. As well as the log files, a screen shot of the final outline created by each group for each movie was collected and saved. After each task, we asked subjects to fill out a questionnaire with 5-point Likert scale questions and free-form questions to get subjects' responses about how difficult it was to switch between the two displays, their level of distraction, how difficult the tasks were and, for the outline building task, how difficult it was to collaborate with others. For each condition in the Selecting Screen Shot Task and the Building the Outline Task, there was a separate questionnaire (see Appendix D for the questionnaires).
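To make the relationship between the phone clients, the server and the log files more concrete, the sketch below illustrates one way a server of this kind could accept TCP connections from the three phones, dispatch their commands and write timestamped log entries of each interaction. It is a minimal illustration under stated assumptions, not the thesis prototype itself: the port number and the command names (CAPTURE, SELECT, SHARE) are invented for the example, and the actual prototype's behaviour is described in Appendix D.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/*
 * Sketch only: a server driving the large display that accepts up to three
 * phone clients over TCP and logs their commands with timestamps.
 * The port and command names are illustrative assumptions.
 */
public class DisplayServer {

    private static final int PORT = 5000;  // assumed port for the phone clients
    private static final ExecutorService POOL = Executors.newFixedThreadPool(3);

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(PORT)) {
            while (true) {
                Socket phone = server.accept();          // one connection per phone
                POOL.execute(() -> handlePhone(phone));  // each phone on its own thread
            }
        }
    }

    private static void handlePhone(Socket phone) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(phone.getInputStream()))) {
            String command;
            while ((command = in.readLine()) != null) {
                log(phone, command);                     // timestamped entry for the log files
                if (command.startsWith("CAPTURE")) {
                    // grab the current video frame and send it back to this phone
                } else if (command.startsWith("SELECT")) {
                    // mark the named screen shot as selected in this user's personal area
                } else if (command.startsWith("SHARE")) {
                    // publish this user's selected screen shots to the shared list
                }
            }
        } catch (IOException e) {
            // phone disconnected; nothing further to do in this sketch
        }
    }

    private static void log(Socket phone, String command) {
        System.out.printf("%d\t%s\t%s%n",
                System.currentTimeMillis(), phone.getInetAddress(), command);
    }
}
```

Handling each phone on its own thread keeps one subject's slow connection from blocking the commands of the other two, and writing one timestamped line per command is sufficient to recover the selection times and interaction counts reported in the results that follow.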
4.7 Results

To evaluate our user study, we looked at the data from the questionnaires and post-interview answers, and examined the log files created during the study. All data reported from the questionnaires are based on a five-point Likert scale (1.00 is strongly disagree and 5.00 is strongly agree). We report our findings in the context of subjects' preference of display, the effectiveness of the large shared display and the small personal display, subjects' ability to switch attention and the usefulness of our design. We found that subjects prefer working on the large display but find the personal display useful for feedback. The large display is more effective than the small personal display for selecting media. Some subjects have difficulty switching between a small personal display and a large shared display. Our design proved useable. More details are discussed below.

4.7.1 Display Preference

Questionnaire data and qualitative analysis are used to determine subjects' preference between the large shared display and the personal display for each of the tasks. Our findings are reported from subjects' preferences in the Selecting Screen Shots task, observations from the Outline Task, and 3 questions from the questionnaire regarding subjects' preferences about where screen shots should be displayed in the Creating Screen Shot Task.

Subjects showed a preference for using the large shared display over the personal display in the Selecting Screen Shot Task and the Building the Outline Task. Thirteen of 18 subjects stated a preference for completing the Selecting Screen Shot Task on the large shared display rather than the small personal display (see Figure 4.10). Most subjects' reason for this was that the screen on the mobile phone was too small and it was easier to view the pictures on the shared display. Subjects who preferred the small personal display mentioned privacy of information as a concern with the large shared display.

Figure 4.10. Bar chart showing the number of subjects who preferred the Large Shared Display versus the Small Display in the Selecting Screen Shot Task. Most subjects preferred the large display.

In the Outline Building Task, in the condition where the control policy enabled subjects to use their personal displays to control and view the list of shared screen shots, no subjects took advantage of this. When asked in the post-interview, subjects said it was easier to work together while all looking at the large shared display and that the pictures were clearer on the large display. Subjects in the other two conditions (1 cursor and 3 individual cursors) disagreed with the statement, "It would be helpful if shared photos were also displayed on the cell phone" (M = 2.58, SD = 1.00). While most subjects preferred the large display over the personal display when completing the Selecting Screen Shot Task and the Building the Outline Task, for the Creating Screen Shot Task subjects agreed with the statement, "It was helpful to display the screen shots" (M = 4.17, SD = .76) and felt neutral about the statement, "It would be helpful if the screen shots were displayed on the large shared display" (M = 3.61, SD = .99). Table 4.3 summarizes our results about subjects' preferences for the large display and the personal display for the Creating Screen Shot Task.
Questions about Users' Preference from Questionnaire (Mean Score, Standard Deviation):
It would be helpful if shared photos were also displayed on the cell phone (Building the Outline Task). (M = 2.58, SD = 1.00)
It was helpful to display the screen shots on the personal display (Creating Screen Shot Task). (M = 4.17, SD = .76)
It would be helpful if the screen shots were displayed on the large shared display (Creating Screen Shot Task). (M = 3.61, SD = .99)
Table 4.3. Summary of questions about subjects' preference of displays showing the mean score and standard deviation for each of these questions from the questionnaire data for the Building the Outline Task and the Creating Screen Shot Task.

4.7.2 Effectiveness

To evaluate whether using the small personal display is more effective than using the large shared display, we used our log files to look at the time it took to complete the Selecting Screen Shot Task. This task was completed individually, and all subjects used both the large shared display and the small personal display to complete the task. A mixed between-within 2 x 3 x 6 (display x control policy x group) ANOVA was used to investigate the difference between selection of screen shots on the large display and on the small display. The within factor is display and the between factor is control policy. There was a significant interaction between group and display, F(3, 12) = 9.799, p < .01, effect size = .710, for the time to complete the selection task on the large shared display (M = 143 seconds, SD = 46.82) and the personal display (M = 115 seconds, SD = 48.08) (see Figure 4.11).

Figure 4.11. Bar chart with error bars showing the time to complete the Selecting the Screen Shot Task using the Personal Display and the Large Display.

It is not surprising that group and display had an interaction effect, because the group behavior should have a strong influence on individual behavior. For example, if one member of the group completes the task and puts down their mobile phone, other members of the group might feel that they should try to finish. That said, four out of the six groups performed faster on the large display than on the small personal display (see Tables 4.4 and 4.5 for the selection times and standard deviations for the personal display and the large display).

Personal phone selections (seconds), Standard Deviation:
Group 1: 194, 35.23
Group 2: 117, 29.70
Group 3: 82, 9.61
Group 4: 111, 52.21
Group 5: 144, 36.31
Group 6: 208, 0.00
Table 4.4. Time to complete the Selecting the Screen Shot Task, with standard deviations, on the Personal Display.

Large Display selections (seconds), Standard Deviation:
Group 1: 141, 32.37
Group 2: 160, 6.35
Group 3: 91, 11.54
Group 4: 83, 21.70
Group 5: 69, 18.35
Group 6: 249, 13.27
Table 4.5. Time to complete the Selecting the Screen Shot Task, with standard deviations, on the Large Display.

As well as completing the Selecting Screen Shot Task faster on the large display than on the personal display, subjects also completed significantly more 'interactions' on the large display (M = 119 interactions, SD = 58.92) than on the personal display (M = 93 interactions, SD = 20.62), F(3, 12) = 1.99, p < .05 (see Table 4.6 for the total number of interactions for the personal and large displays).
Figure 4.12 shows the number of interactions completed during the selection task for the personal display and the large display.

Total interactions on personal display, Total interactions on large display:
Group 1: 624, 315
Group 2: 275, 288
Group 3: 215, 174
Group 4: 218, 248
Group 5: 540, 345
Group 6: 290, 317
Table 4.6. Total number of interactions in the Selecting the Screen Shot Task for each group on the personal display and the large display.

Figure 4.12. Bar chart with error bars showing the number of interactions taken to complete the Selecting the Screen Shot Task on the Personal Display and the Large Display.

An interaction is defined as any time a subject navigates between screen shots, selects a screen shot to share, deselects a screen shot from sharing or shares their list of screen shots. The interaction mechanism was equivalent on both the large shared display and the small personal display. For example, to move to the next screen shot on the right in the list of screen shots, subjects would hit the right button on the mobile phone's navigation keypad, whether selecting on the personal display or on the shared display. This is considered one interaction.

4.3.3 Switching Attention

The results in this section come from data collected from our questionnaires. During the Creating Screen Shot Task, we were interested in whether subjects found it difficult to switch their attention between the large shared display and the small personal display. Our findings are reported in the context of 4 questions from the questionnaire regarding how difficult it is for a subject to switch attention between their personal display and the large shared display in the Creating Screen Shot Task and the Selecting Screen Shot Task, and how useful or distracting it is for subjects to have other subjects' content on the large shared display. Quantitative data from the log files is also used to investigate the difficulty of switching.

The 2 groups who used the shared control condition with one cursor in the Building the Outline Task agreed (M = 4.33, SD = .82) that it was difficult to switch attention, whereas the 4 groups in the other conditions did not agree with this statement, F(3, 12) = 11.83, p < .01. On average, the 2 groups who used the shared control with the personal display (shared cursor plus personal display) scored neutral (M = 3.00, SD = .63), and the groups in which each subject had a personal cursor (3 individual cursors) disagreed with this statement (M = 2.5, SD = .54) (see Figure 4.13 for each condition's distribution of scores).

Figure 4.13. Bar chart of subjects' responses (N = 18) by condition in the Building the Outline Task for the questionnaire statement "It was difficult to switch attention between the movie and the personal display." Most subjects felt neutral or agreed (3 and 4) with this statement.

It does not appear that the difficulty in switching attention is caused by subjects creating more screen shots, since the subjects in condition A did not collect significantly more screen shots than the other groups, F(3, 12) = 1.95, p = .159 (see Table 4.7 for each group's total number of screen shots created).

Control Policy, Total Screen Shots Created:
Group 1: A, 135
Group 2: B, 99
Control Policy Total Screen Shots Created Group 1 A 135 Group B 99 97 2 Group 3 C 48 Group 4 A 102 Group 5 B 183 Group 6 C 114 Table 4.7. Total showing each groups control policy and the total number of screen shots captured per group In the Selecting Screen Shot Task, subjects disagreed with the statement "Once I shared my pictures it was difficult to switch my attention" from their personal display to the large display when they selected the screen shots to share with the group on their personal displays (M= 2.61, SD= .91). As well as finding out i f it was difficult for some subjects to switch attention between their small personal display and the large shared display, we wanted to investigate i f subjects found it distracting when others were sharing the large display for selecting photos. On average, subjects disagreed with the statement, "It was distracting to me when others were selecting screen shots." (M=l .89, SD= .76). Subjects disagreed that "It was helpful to see what others were doing" (M= 2.56, SD= 1.04 ). Table 4.8 summarizes the mean score and standard deviation for questions about switching attention. 98 Questions about Switching Attention from Questionnaire Mean Score Standard Deviation It was difficult to switch attention between the movie and the personal display 3.28 1.01 Once I shared my pictures it was difficult to switch my attention 2.61 .91 It was distracting to me when others were selecting screen shots. 2.56 1.04 It was helpful to see what others are doing. 1.89 .76 Table 4.8. Summary of questions about switching attention between the personal display and the large shared display in different tasks showing mean scores and standard deviation for each of these questions Questionnaire data 4.3.4 Control Policy The Building the Outline Task gave us the opportunity to see how different shared control policies effect the collaboration of the groups. The results from this section come from observations data collected from one question about the ease of collaboration from the questionnaire. Our findings are reported from qualitative data and a question from the questionnaire regarding how easy it was to collaborate. In condition A (1 shared cursor), subjects shared control of one common cursor. We saw one subject mostly taking control of adding the pictures to the outline and navigating through the list of shared screen shots. Other subjects in the group would ask the controller to move forward or backward or select a photo for the outline by pointing to the large shared display. In one group, two of the subjects put their mobile phones on the coffee table in front of them and no longer used them to control the display. Another subject became very frustrated when trying to get the person in control to look at a particular photo that he stood up and walked to the screen to point out the photo he wanted to select. 99 Subjects in this condition agreed the significantly less (M= 3.00, SD= .89) with the statement, "It was easy to collaborate with others to build the outline of the video" than the other two conditions combined (M= 4.17, SD= .57), F (2, 15) =5.33, p < .05. Figure 4.14 shows the distribution of scores for each condition in the Building the Outline Task. w 4 5 3 i 2 z 1 0 j • 1 Cursor • 1 Cursor + Personal Display • 3 individual Cursors 2 3 4 Score on Likert Scale (1=strongly disagree, 5= strongly agree) Figure 4.14. 
Bar chart of subjects' responses (N = 18) by condition in the Building the Outline Task for the questionnaire statement "It was easy to collaborate with others to build the outline of the video." Most subjects felt neutral or agreed (3 and 4) with this statement.

In condition B (1 shared cursor plus personal display), even though subjects could have used their personal displays, both groups in this condition chose not to, so their actions became similar to those of the groups in condition A. In condition C (3 individual cursors), subjects could each control a separate section of the shared display. In this condition, subjects would use their own space to find screen shots and show them to the other subjects. Once a screen shot was found, subjects would point to it to show it to other members of their group. Unlike in the previous conditions, each member of these groups used their mobile phone to control the display throughout the building of the outline.

4.3.5 Design

We wanted to validate the observations from our pilot study about how to design applications where multiple subjects share a single large display. Our findings are reported in the context of 2 questions from the questionnaire. We used personal areas and color to denote what subjects could control on the large shared display. Subjects agreed with the statements "I understood which screen shots I was able to control on the large display" (M = 4.28, SD = .46) and "It was clear which of my screen shots I selected for sharing" (M = 4.11, SD = .96) in the Selecting Screen Shot Task (see Figures 4.15 and 4.16 for the distribution of answers for each of these questions).

Figure 4.15. Bar chart of subjects' responses (N = 18) in the questionnaire for the statement "I understood which screen shots I was able to control on the large display." All subjects agreed or strongly agreed (4 and 5) with this statement.

Figure 4.16. Bar chart of subjects' responses (N = 18) in the questionnaire for the statement "It was clear which of my screen shots I selected for sharing." Most subjects agreed or strongly agreed (4 and 5) with this statement.

4.4 Discussion

The results are used to answer our 4 research questions about small personal displays and a large shared display. The 4 questions are (1) which parts of a task should be distributed over the large shared display and the small personal display, (2) whether users prefer the large shared display or the small personal display, (3) how difficult it is for users to switch between the large shared display and the small personal display, and (4) what is the best way to manage shared control of the large shared display. Our results indicate that the large shared display should be used for selecting media and for collaborative tasks, due to users' preference and efficiency. The small personal display is useful for displaying feedback to users. Switching attention between the large shared display and the small personal display is difficult for some users, but more research is needed to find out how difficult it is. When collaborating, each user should have a cursor on the large shared display. We found the design of the application useable.
1) Which parts of a task should be distributed over each of the displays? When moving content from the large shared display to the small personal display, most subjects found it helpful that the content was presented on the personal display. This indicates that while some subjects may have a difficult time moving focus from one display to the other, the personal display can still be useful for feedback about what content is being moved. In this instance, using both displays in conjunction was successful. Our results indicate that using the large shared display for an individual task such as looking for and selecting images is more effective than using the small personal display. The size of the images and the space on the screen seem to benefit subjects. In addition, we found that having others work in different personal areas on the shared display is not distracting. For collaborative tasks, there does not seem to be a compelling reason to use the small personal display. Although subjects did not take advantage of the opportunity to subtask on the small display, they had good reason: they thought using the small display made it too hard to collaborate and follow along with others. When looking at a small personal display, you lose awareness of what others are doing, and it may be that subjects find it difficult to figure out what step the group is at when they bring their attention back to the large display.

2) Which display do subjects prefer to work on? We found that in the Selecting Screen Shot Task and the Building the Outline Task, subjects preferred the large shared display to the small personal display, but subjects found it acceptable for the small personal display to show the screen shots in the Creating Screen Shot Task. Subjects liked the large display because of the better resolution and size of the screen shots. Some subjects stated it was difficult to see the screen shots on the mobile phone.

3) How difficult is it for subjects to move between these two displays? In each of our tasks, users were required to switch attention from the mobile phone to interact with the large display. In the Creating Screen Shots Task, some subjects looked at their personal display to see if they had captured the screen shot they wanted. As stated above, some subjects did find it difficult to move their attention between a small personal display and a large display. The two groups who used control policy A had significantly different results from the other 4 groups. We cannot find any reason why the control policy would have an effect on attention in the Creating Screen Shot Task. We believe this difference is coincidental, but it leads us to believe that switching attention is more difficult and distracting for some users than for others, regardless of how many screen shots they captured. In the Selecting Screen Shot Task, when subjects selected screen shots on their personal displays, they did not find it difficult to switch between the small personal display and the large shared display. This might have been because the task only required switching to the large shared display once all the selections were completed; the task required the subject to switch their attention only once. We investigate the cost of switching between a small personal display and a large shared display in our third study, discussed in Chapter 5.

4) How should shared control be managed with multiple displays?
When considering different shared control policies, we found that the policy where each subject had their own cursor and control enabled subjects to collaborate, whereas the policy where subjects had to share one cursor was the most frustrating to the group. However, subjects who had one cursor plus the personal display did not seem as frustrated with the single-cursor control; possibly just having the option to use their mobile phone was enough to limit this group's frustration.

While it was not a driving factor for this study, we also wanted to validate the design guidelines from our pilot study. In the Selecting Screen Shot Task, each subject's screen shots were displayed in a different section of the screen. These areas were grouped separately, with a border indicating the different areas. Each subject was positioned in front of the area of the display that corresponded to his or her seating position. Different colored cursors indicated who owned which space. This design proved useable since users were not confused about what they could and could not control.

4.5 Design Guidelines

From our results, we created a set of design guidelines that, in conjunction with the design guidelines from our previous study, will be used in our Family Blog application (discussed in Chapter 6), where multiple users use a small personal display with a large shared display.

4.5.1 Personal & Large Displays

When designing large screen displays which utilize a personal display:
• Ensure that different tasks can be accomplished on one display or the other. This will minimize the cognitive load needed for users to switch attention between the different displays. It is acceptable for different phases of the application to be designed to utilize the personal or the large display, as long as each phase is completed before the user must switch to the other display.
• When browsing media, the large display should be utilized in order to enable users to view the content more clearly and interact more efficiently with the display.
• When collaborating, the large display should be utilized in order to facilitate discussion and co-operation.
• When capturing content from the large display to the small display, it is acceptable for the content to appear on the personal display. Consider users' need for privacy and allow the choice of some content being displayed only on the personal display.

4.5.2 Control Policies

• When designing applications on a large screen display where collaboration is required, where possible each user should possess individual control of the shared display. When collaborating, it is not necessary to utilize the personal display.

4.6 Conclusions and Future Work

In this user study, our goal was to gain a better understanding of the difference between working on a small personal display and working on a large shared display. We found that in the Selecting Screen Shot Task, users not only prefer the large shared display to the small personal display, but also perform more effectively on the large shared display. For collaborative tasks, users prefer to use only the large shared display, but when moving content from the large shared display to the small personal display, it seems acceptable to have the content displayed on the small personal display. We also discovered that giving each user control of the shared space led to collaboration that is more meaningful.
Building on our work from our pilot study, it seems that using separate areas defined by borders and color helped users identify what they were and were not able to control. While we found that moving content from the large shared display to the small personal display is acceptable, this does lead to more questions about the difficulty of moving focus between the large and personal displays. There were two groups who found that switching was difficult in the task where users were creating screen shots, and our next study will try to further understand the cost of this switching to better inform the design of multi-display environments in the home. As well as looking at the cost of switching, our results will be used to design our Family Blog application.

Chapter 5
Difficulty of Switching User Study

There is potential for designing collaborative applications that utilize a small personal display and a large shared display for applications in the home; however, our main study (discussed in the previous chapter) indicates that some people might find it difficult to move their attention between the two displays. In this chapter, we describe our follow-up study where we want to better understand an individual's cost of switching between a small personal display and a large display for a simple recognition and memory task. If we are able to determine the cost of switching between a small personal display and a large display, we can make better design choices about where information should reside. For example, if this cost is high, then important and relevant information should always be maintained on the display on which the primary task is occurring. On the other hand, if the cost is low, then this information can be displayed on either display and its placement should be based on other design guidelines. The challenge when designing applications which utilize both a large shared display and a small personal display is to understand the most suitable place where information should be shown and to allow for the "fluid transfer" of this information while maintaining the task and application flow (Myers, 2001). One approach in the past has been to utilize the small display for private information until users explicitly share this information (Greenberg, 1999), but this does not take into account times when information is not clearly private or public. There are other times users may want to use the large display over the small display, such as if the small personal display is too small to complete a task, or to maintain awareness of other users in the room (Gutwin, 1998). Understanding how difficult it may be for users to switch attention between the personal and shared display can lead us to better design choices. Many factors can contribute to the cost of switching between a small personal display and a large shared display. These factors include the moving and refocusing of the eye, scene perception, navigational costs, the distance and transformation of objects, and the impact on working memory. In our study, we focused on two tasks that look at (1) recognition, which involves moving and refocusing the eye and perceiving the distance and transformation of objects, and (2) working memory. The two tasks were chosen because they allowed us to look at how difficult it is to recognize an object when constantly switching focus between the two displays, and to investigate if the cognitive load of switching impacts memory.
The tasks have ecological validity and can influence real-world designs of applications that utilize a small personal display and a large display. In the study, subjects sat in front of a large display holding a mobile phone that was used as their personal display. In the first task, a sequence of stimulus images was displayed on the large display for one second each. A priming sequence of images was displayed either on the small personal display or on the large display underneath the stimulus images. Users were asked to identify when the stimulus images matched the priming sequence. In the second task, users had to recall the priming sequence of 3 images out of 6 other images using the personal display. Below, the methodology of our study is discussed, including a detailed description of the tasks, procedure, subjects, measures and design. The results are then presented and discussed, and the chapter finishes with our conclusions and future work.

5.1 Method

The experiment used a within-subject (priming display) design. A within-subject design was used in order to obtain comparative performance and workload results when the priming sequence was displayed on the large display or on the small personal display. The displays used for the priming sequence were counter-balanced, as were our two image sets.

5.1.1 Tasks

When deciding on our tasks for this study, one of our goals was to ensure we maintained ecological validity, because it was important that these results could inform further design guidelines for applications using a large and a small personal display. In the Recognition task, images were displayed across the two separate displays and forced users to move their attention between the two displays. This is similar to a scenario where a user's primary task is occurring on a large shared display but extra useful information is displayed on the small personal display. Users have to move their focus between a small personal display and a large shared display. In the Recall task, the small personal display was used to test a user's working memory. This task imitates a scenario where users might transfer information from the large display to the small display and need to recall the specific items they were looking at.

Task 1: Recognition

During the Recognition task, stimulus images were displayed on the large display sequentially for 1 second (see Figure 5.1).

Figure 5.1. Sequence of 3 stimulus images (balloons, accordion, basket of apples) displayed on the large display without the priming sequence displayed on the large display. Each of these images is displayed for 1 second. The next image appears directly 1 second after the previous image. Due to copyright reasons, the images have been removed. The original source is Microsoft's Clip Art and Media Homepage (Microsoft, 2007).

Sequences of three priming images were displayed either on the large display or on the small personal display. When the priming images were displayed on the large display (see Figure 5.2), they were displayed underneath the main image. If the sequence of three priming images was displayed on the mobile phone display (see Figure 5.3), only the stimulus images were displayed on the large display (see Figure 5.4).

Figure 5.2. The large shared display showing 1 stimulus image (the basket of apples) with a priming sequence of 3 images (turkey, airplane, basket of apples) at the bottom center of the display. Due to copyright reasons, the images have been removed. The original source is Microsoft's Clip Art and Media Homepage (Microsoft, 2007).

Figure 5.3. Nokia N80 mobile phone display showing the priming sequence of 3 images (turkey, airplane, basket of apples) at the bottom center of the display. Due to copyright reasons, the images have been removed. The original source is Microsoft's Clip Art and Media Homepage (Microsoft, 2007).

Figure 5.4. The large shared display showing 1 stimulus image (the basket of apples) without a priming sequence of 3 images at the bottom center of the display. In this condition the priming sequence is displayed on the mobile phone as in Figure 5.3. Due to copyright reasons, the image has been removed. The original source is Microsoft's Clip Art and Media Homepage (Microsoft, 2007).

Once a subject identified the sequence of priming images, the subject told the experimenter, who then stopped the stimulus images from displaying, and the large display turned to solid white. If a subject missed the target sequence, two more stimulus images were presented, to allow time for subjects to tell the experimenter they had found the priming sequence without having the trial finish. To ensure target images were a consistent size in a subject's visual field for both displays, the visual angle was controlled for. The distance from the phone to the subject's eyes was estimated and used to determine the size of the target images on the large display.

Task 2: Recall

The Recall task required subjects to choose the target sequence, given in the Recognition task, using the keypad on the mobile phone. Nine images, including the 3 priming images, were displayed on the small personal display with a number corresponding to a number on the mobile phone keypad (see Figure 5.5).

Figure 5.5. Nokia N80 mobile phone showing the nine images displayed on the mobile phone display with numbers underneath each image which correspond to the phone's keypad. Due to copyright reasons, the images have been removed. The original source is Microsoft's Clip Art and Media Homepage (Microsoft, 2007).

Subjects selected, in order, the sequence of priming images using the keypad. Once the priming images were selected, subjects selected the middle navigation key and told the experimenter they were done.

5.1.2 Blocks

Subjects each performed 2 blocks of 20 trials each. One trial consisted of the Recognition task followed by the Recall task. Once subjects had completed one trial, the experimenter was informed and the next trial began, until all trials in that block were completed. In the first block, the priming target sequence images were displayed on either the small personal display or the large display, and this was reversed during the second block. Performing 20 trials for each block enabled us to measure a user's performance over a number of trials while trying to ensure subjects did not get tired or bored. Our pilot study confirmed 20 trials was a reasonable number of trials per block.

5.1.3 Description of Image Set

We created two image sets (A and B) which were counter-balanced across the priming sequence display conditions. The images used in both image sets were downloaded from Microsoft's Clip Art and Media Homepage (Microsoft, 2007) (see Figure 5.6 for example images).

Figure 5.6.
Four example images from the Microsoft Clip Art and Media webpage (balloons, accordion, basket of apples, bag) that were used in the study. Due to copyright reasons, the images have been removed. The original source is Microsoft's Clip Art and Media Homepage (Microsoft, 2007).

No images were repeated, since we did not want subjects to remember previously viewed images. All of the images were simple, concrete objects. Each image set consisted of 20 sets of target images for 20 different trials. To encourage false alarms, 6 of the 20 trials used a subset of the target images before the target sequence was displayed. The position where the priming target sequence would appear in a single trial was a randomly selected number between 1 and 10. The total images required for each trial included 3 target images, 2 images to be displayed after the target sequence, and a number of images displayed before the first target image. The number of images displayed before the first target depended on when the first image of the priming target sequence was displayed.

5.2 Participants

Twelve subjects (2 female) between 20 and 40 years of age participated in the study. The subjects were recruited from the University of British Columbia Computer Science graduate student mailing list. Each subject was compensated $10 for their time. Each subject participated separately in the study.

5.3 Apparatus & Materials

The large display application was written in C# using the Microsoft .NET framework and run on a 1.5 GHz, 504 MB RAM IBM ThinkPad laptop computer. The application was displayed on a 66-inch SMART Board with a screen resolution of 1024 x 768 pixels. In addition to the large display application, there was a mobile phone client application. This was developed using the Python programming language and was deployed on a Nokia N80 mobile phone. The screen resolution of the Nokia N80 mobile phone is 352 x 416 pixels, and it can display up to 262,144 colors. Subjects sat about 2.5 metres in front of the large display and were told to hold the Nokia N80 mobile phone in their hands at a comfortable distance. The experimenter sat beside the large display to operate the laptop.

5.4 Procedure

The overall procedure was as follows.
1. A questionnaire was used to gather demographic information.
2. A training task for the first display target was performed.
3. Subjects performed a block of 20 trials.
4. Two questionnaires were given to assess the user's perceived workload for the Recognition Task and the Recall Task.
5. A training task for the second display target was performed.
6. Subjects performed a block of 20 trials for this condition.
7. Another two questionnaires were given to assess the user's perceived workload for the Recognition Task and the Recall Task.

During the training task, subjects completed one trial with the priming target sequence displayed on the display (the large display or the small personal display) that the trial would use to display the priming target sequence. If subjects were confused, another trial was completed on the same display. The NASA TLX (Hart & Staveland, 1988) was used to determine a subject's perceived workload for both the Recognition Task and the Recall Task for both displays. We wanted to confirm that a subject's performance was similar to how difficult they found the tasks on the two displays.

5.5 Measures

The main dependent variable for the Recognition Task and the Recall Task was performance.
For the Recognition Task, performance was based on the correct identification of the priming target sequence. For the Recall Task, performance was based on the correct recall of the priming target sequence in the correct order. For the Recognition Task, false positives were logged if the subject reported the target priming sequence was present when it was not. Log files were used to measure performance. Perceived workload measures for the Recognition Task and the Recall Task in each block were gathered to determine the perceived difficulty of each of the tasks in the different conditions. The workload factors were determined using the NASA TLX questionnaire and included mental demand, physical demand, temporal demand, performance, effort and frustration. Instead of having subjects weight these factors themselves, a consistent weighting was given to all the factors.

5.6 Hypotheses

H1: For the Recognition task, detection of the priming target sequence will be superior when targets are displayed on the large display rather than the small personal display.

When the target sequence is displayed on the large display, subjects are not required to switch between the large display and the small personal display. Consequently, the chance of missing the target sequence is lower than when the target sequences are displayed on the small personal display, because more cognitive resources are required for divided attention tasks. A detrimental effect has been found when information is offset by depth (Tan, 2003), similar to a subject holding the phone and watching the large shared display.

H2: For the Recall task, recall of the target sequence will be superior when targets are displayed on the mobile display.

If the original target priming sequence of images and the recall images are displayed on the same display, the cognitive effort of completing the task on a different display and at a different depth will be minimized. Performance will be better if the same display is used for both stimulus and response. Our previous study showed that some users found it difficult to switch attention between a large shared display and a small personal display, while other users did not report this difficulty. We believe that on average subjects will perform worse when switching is required, because we feel that subjects who did not report difficulty in the previous study may still have a detriment in performance even if they did not report one. The NASA TLX questionnaires will be able to demonstrate if there is a difference between performance and the perceived difficulty of each of the tasks.

5.7 Results

Each subject was given a score for all blocks in the Recognition Task and the Recall Task. In the Recognition Task, subjects were given one point for the correct identification of the target sequence. The maximum score for each block is 20. In the Recall Task, subjects were given one point for the correct identification of an image in the correct order it appeared in the priming target sequence. The maximum score for each block is 60. We did not use a mixed-design ANOVA with presentation order of image sets and presentation order of displays as our between factors, because our hypotheses did not include these factors.
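To make the scoring scheme and the paired comparison used below concrete, the following sketch shows how a block score and a paired-samples t statistic could be computed from logged trial outcomes. This is a minimal illustration in Java under stated assumptions, not the analysis tooling actually used in the study; the class and method names (SwitchingStudyScoring, scoreRecognitionBlock, pairedT) are hypothetical, and the resulting t statistic would still need to be compared against the critical value for t(11).

```java
// Hypothetical illustration of the block scoring and the paired-samples t statistic.
// Names are assumptions for this sketch, not taken from the study's own code.
public class SwitchingStudyScoring {

    // Recognition: one point per trial in which the priming sequence was correctly identified
    // (maximum 20 for a 20-trial block).
    static int scoreRecognitionBlock(boolean[] correctIdentifications) {
        int score = 0;
        for (boolean correct : correctIdentifications) {
            if (correct) score++;
        }
        return score;
    }

    // Recall: one point per image recalled in the correct position, so each trial
    // contributes up to 3 points and a 20-trial block has a maximum of 60.
    static int scoreRecallBlock(int[][] recalled, int[][] target) {
        int score = 0;
        for (int t = 0; t < target.length; t++) {
            for (int i = 0; i < target[t].length; i++) {
                if (recalled[t][i] == target[t][i]) score++;
            }
        }
        return score;
    }

    // Paired-samples t statistic over the per-subject differences
    // (large-display score minus personal-display score).
    static double pairedT(double[] largeDisplay, double[] personalDisplay) {
        int n = largeDisplay.length;
        double[] diff = new double[n];
        double mean = 0;
        for (int i = 0; i < n; i++) {
            diff[i] = largeDisplay[i] - personalDisplay[i];
            mean += diff[i];
        }
        mean /= n;
        double ss = 0;
        for (double d : diff) ss += (d - mean) * (d - mean);
        double sd = Math.sqrt(ss / (n - 1));
        return mean / (sd / Math.sqrt(n)); // compare against the t(n-1) critical value
    }
}
```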
5.7.1 Recognition Task

Performance

A paired-samples t-test was conducted to evaluate whether performance was better if the priming target sequence was displayed on the large display (M=18.17, SD=1.90) or the small personal display (M=17.92, SD=2.23) for the Recognition Task. There was not a significant difference, t(11)=.353, p=.731, between the two displays. Below are the individual scores for the Recognition Task for each subject; see Figure 5.7 for each subject's scores for the large display and small personal display.

Figure 5.7. Bar chart showing the individual total performance scores for the Recognition Task for both the large display and the personal display.

Workload

A paired-samples t-test was conducted to evaluate whether the perceived workload was rated higher if the priming target sequence was displayed on the large display (M=43.83, SD=22.38) or the small personal display (M=44.25, SD=21.07) for the Recognition task. There was not a significant difference, t(11)=-.063, p=.951, between the two displays. See Figure 5.8 for each subject's scores for the large display and small personal display.

Figure 5.8. Bar chart showing the individual total workload scores for the Recognition Task for both the large display and the personal display.

5.7.2 Recall Task

Performance

A paired-samples t-test was conducted to evaluate whether performance was better if the priming target sequence was displayed on the large display (M=49.92, SD=9.11) or the small personal display (M=50.92, SD=11.88) for the Recall task. There was not a significant difference, t(11)=-.37, p=.718, between the two displays. See Figure 5.9 for each subject's scores for the large display and small personal display.

Figure 5.9. Bar chart showing the individual total performance scores for the Recall Task for both the large display and the personal display.

Workload

A paired-samples t-test was conducted to evaluate whether the perceived workload was rated higher if the priming target sequence was displayed on the large display (M=48.75, SD=22.59) or the small personal display (M=48.50, SD=19.45). There was not a significant difference, t(11)=.05, p=.962, between the two displays. See Figure 5.10 for each subject's scores for the large display and small personal display.

Figure 5.10. Bar chart showing the individual total workload scores for the Recall Task for both the large display and the personal display.

5.7.3 Difference between Performance and Perceived Workload

In order to see if there was a difference between a subject's performance and a subject's perceived workload, a t-test was completed for both the Recognition task and the Recall task. We compared the difference in a subject's performance score and the difference in a subject's perceived workload score. A negative score indicates the subject's performance or perceived workload was worse for the large display. A paired-samples t-test was conducted using the difference in performance on the two displays (large display - small personal display) (M=.01, SD=.12) and the difference in workload on the two displays (large display - small personal display) for the Recognition Task (M=.00, SD=.22). There was not a significant difference, t(11)=.22, p=.413. Table 5.1 shows the performance difference and workload difference for the Recognition task for all subjects.

Recognition Task
Subject   Performance   Workload   Difference
1         -10.0%        -17%       7.0%
2         -10.0%        -4%        -6.0%
3         15.0%         18%        -3.0%
4         -10.0%        52%        -62.0%
5         0.0%          -14%       14.0%
6         -5.0%         -31%       26.0%
7         10.0%         -23%       33.0%
8         0.0%          -11%       11.0%
9         0.0%          25%        -25.0%
10        30.0%         0%         30.0%
11        -10.0%        0%         -10.0%
12        5.0%          0%         5.0%

Table 5.1. A table of each subject's difference in scores for the performance and perceived workload for the Recognition task. A negative score indicates the subject's score was higher for the personal display rather than the large display.

A paired-samples t-test was conducted using the difference in performance on the two displays (large display - small personal display) (M=.01, SD=.12) and the difference in workload on the two displays (large display - small personal display) for the Recall Task (M=.00, SD=.22). There was not a significant difference, t(11)=-.28, p=.391. Table 5.2 shows the performance difference and workload difference for the Recall task for all subjects.

Recall Task
Subject   Performance   Workload   Difference
1         -1.7%         11.0%      -12.7%
2         -31.7%        1.0%       -32.7%
3         -11.7%        -7.0%      -4.7%
4         3.3%          38.0%      -34.7%
5         13.3%         23.0%      -9.7%
6         -20.0%        -19.0%     -1.0%
7         8.3%          -17.0%     25.3%
8         -6.7%         -9.0%      2.3%
9         -5.0%         13.0%      -18.0%
10        5.0%          -18.0%     23.0%
11        -1.7%         -1.0%      -0.7%
12        28.3%         -12.0%     40.3%

Table 5.2. A table of each subject's difference in scores for the performance and perceived workload for the Recall task. A negative score indicates the subject's score was higher for the personal display rather than the large display.

5.7.4 Summary of Results

For both the Recognition Task and the Recall Task, there was no significant difference in performance scores when the priming sequence was displayed on the large display or when the priming sequence was displayed on the small personal display (see Figure 5.11). The same is true for the workload (see Figure 5.12).

Figure 5.11. A bar chart with error bars showing the average performance score for the Recognition Task and the Recall Task by large display and personal display. The difference between the performances on both displays is not significant for both tasks.

Figure 5.12. A bar chart with error bars showing the average perceived workload score for the Recognition Task and the Recall Task by large display and personal display. The difference between the workloads on both displays is not significant for both tasks.

We summarize our results according to our hypotheses:

H1 not supported: For the Recognition task, there is no difference in performance.
H2 not supported: For the Recall Task, there is no difference in performance.

5.8 Discussion

In both the Recognition task and the Recall task, no difference in performance or workload was found for either priming display. When the priming target sequences were displayed on the small personal display, users had to switch their attention between the large display and the small display. We discuss three reasons that may have caused the lack of difference in performance: (1) subjects adapted the placement of the personal display to minimize the need to switch, (2) subjects found the task too simple, and (3) there is no difference in performance when switching is required.
One reason for the lack of difference in the performance between switching and not switching could be that subjects were able to adapt the placement of the mobile phone so that it was beside the large display and directly in their field of vision. While subjects would still have to switch the focus of their eyes, they would not need to move their head or eyes down to look at the mobile phone display. We did observe 3 subjects holding the mobile phone in this position. While we thought some subjects might have done this, we decided to let subjects hold their mobile phones in a comfortable place of their choosing because we wanted to simulate a situation where people would use their mobile phones with the large display. People would be able to hold their mobile phones whichever way was most comfortable and useful to them when using applications which use a small personal display in combination with a large shared display. A follow-up study where the mobile phone is mounted and could not be moved, would answer this question for us. Another reason for the lack of difference in the performance between switching and not switching is the simplicity of the task. We wanted to start looking at the cost of 131 switching beginning with a simple task to see i f during these simple scenarios subjects had difficulty switching. In the Recognition Task, 7 of the possible 24 blocks, subjects scored 100%. In the Recall Task, 3 of the possible 24 blocks, subjects scored 100%. For both tasks combined, subjects scored 100%, twenty percent of the time. If there is a difference in switching, these tasks may have not required enough cognitive load to get at this difference. During the pilot studies, we did see a difference in performance of switching and not switching, but this difference did not carry out as much through the study. A further study varying the difficulty of these tasks will indicate how difficult a task must be before we see a difference in performance when switching is required. A third reason for the lack of difference in the performance between switching and not switching may suggest that in this scenario users are able to move their attention between a small personal display and a large display without losing their ability to follow what is happening on the large display. This may indicate that the cognition required to switch displays is low and does not impact the ability for users to complete other cognitive tasks at the same time as switching is required. 5.9 Conclusions The goal of this study was to understand the cost of switching between a small personal display and a large display in a simple task. The tasks were based on how these two displays might be used in combination in applications in the home. While our hypotheses are not supported, the information gathered during this study is still valuable and can be used for design of future applications. It appears that in these simple scenarios some users do not find it difficult to switch their attention between a small personal display and a large display, but some users do. We can design applications that 132 require users to move their attention across these displays, but want to ensure to minimize the need for constant switching for the users who do find it difficult to switch their attention. For example, the personal display might be able to be utilized for menus or icons rather than the large shared display. When a user wants to use a menu or an icon, they can select these off the small personal display. 
This would allow the large shared display to be less cluttered and save screen real estate for other objects more suitable for the large shared display. More studies are needed to better understand when it is difficult for users to switch between a small personal display and large display and when it is not difficult for users to switch between a personal and large display. 133 Chapter 6 Family Blog To demonstrate the results of our user studies and the resulting guidelines for the design of using small personal displays in combination with large shared displays from our Large and Small Display study (discussed in Chapter 4), we designed and built the Family Blog application. The goal of the Family Blog application was to demonstrate our experiences and findings in order to send a working prototype to our industrial partner. Our industrial partner was interested in different interaction techniques for using a large shared display. Using the Family Blog and other interaction techniques they developed, our industrial partner planned to run an evaluation of these different techniques. In the Family Blog application, multiple users create videos, photos, audio clips and text entries on their mobile phone, upload these and share them on a shared display in order to create a new video collaboratively with others using the shared media. The mobile phone runs a client application and connects to the server, a personal computer driving the large display, through a wireless internet connection. The mobile phone is also used for all interactions with the shared display. To inform the design of the Family Blog, we developed an innovative and realistic scenario of how personal devices, such as mobile phones, and large shared displays may be used together in the home. In this scenario, we take advantage of the increasing multimedia capabilities of mobile phones, utilize the large display to share and move content from a personal device to a shared display, and create new content collaboratively. Although we did not implement all details in this scenario, we used the 1 3 4 general theme and ideas to decide which functionality should be included in our development of the Family Blog. The scenario is given below. The Family Blog application is a proposal for a media creation application that combines media creation using mobile personal devices with in home viewing and sharing on a large shared display. Family members carry their personal devices with them at all times. Outside the home it operates as a phone as well as a video and still camera. Users can record important moments during the day as audio, video or photo - or a combination of all three. Editing can be carried out by sitting in front of the home large screen and reviewing blog entries for the day. Any entries can be changed, added to or morphed. Users can make changes directly on the screen, or via a combination of personal communicator and screen. Once a family member returns home, the home based Family Blog running on the home display senses the user has returned and queries their personal communicator for updated entries. Users can chose to simply upload their entries to the master life blog at home, or edit and upload. Users can share their media with the rest of the family and collaboratively build a blog entry. Once done, this presentation can be sent to other family members, friends or downloaded to a person's personal device to be taken with them. 
In the following section, the design of the Family Blog application will be discussed, along with how we incorporated our results from our user studies. Next, the implementation details of the Family Blog are given, and finally we conclude and discuss future work.

6.1 Functionality

The main functionality of the Family Blog application includes creating media on the mobile phone, sharing that media on a shared display, and creating a video presentation collaboratively on the large shared display using different users' media items. We utilized a Nokia N80 smart phone for the personal display. There are two main modes in the phone application: (1) creating and capturing media, and (2) connecting to the server to share media and control the large display. The main menu allows users to control the large shared display, browse and upload media, and create new video, text, audio clips or photos. The navigation keys can be used to move the cursor over different icons to select the functionality the user wishes to perform (see Figure 6.1). Once the cursor is on the desired functionality, the user can select the middle selection key button on the mobile phone to begin.

Figure 6.1. Nokia N80 phone showing the main menu when users first open up the Family Blog application. Users can utilize the navigation keypad to move the cursor and select functionality.
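To give a sense of how such a main menu could be structured on the phone, the sketch below uses the standard MIDP user-interface classes (javax.microedition.lcdui) that a J2ME client like the Family Blog's would typically rely on. It is only a sketch under those assumptions: the menu labels and the helper methods (startMediaCapture, openBrowser, connectToLargeDisplay) are illustrative and are not taken from the actual Family Blog source.

```java
import javax.microedition.lcdui.Choice;
import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.CommandListener;
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Displayable;
import javax.microedition.lcdui.List;
import javax.microedition.midlet.MIDlet;

/** Hypothetical sketch of a Family Blog main menu on a MIDP phone. */
public class FamilyBlogMenu extends MIDlet implements CommandListener {

    private final List menu = new List("Family Blog", Choice.IMPLICIT);

    protected void startApp() {
        // The two modes described above: creating/capturing media, and
        // connecting to the server to share media and control the large display.
        menu.append("New Photo", null);
        menu.append("New Video", null);
        menu.append("New Audio", null);
        menu.append("New Text", null);
        menu.append("Browse Media", null);
        menu.append("Control Large Display", null);
        menu.setCommandListener(this);
        Display.getDisplay(this).setCurrent(menu);
    }

    public void commandAction(Command c, Displayable d) {
        // The middle selection key raises List.SELECT_COMMAND on an implicit list.
        if (c == List.SELECT_COMMAND) {
            int choice = menu.getSelectedIndex();
            if (choice <= 3) {
                startMediaCapture(choice);   // illustrative helper
            } else if (choice == 4) {
                openBrowser();               // illustrative helper
            } else {
                connectToLargeDisplay();     // illustrative helper
            }
        }
    }

    private void startMediaCapture(int type) { /* open camera, recorder or text form */ }
    private void openBrowser() { /* show the phone-side media browser */ }
    private void connectToLargeDisplay() { /* enter large-display control mode */ }

    protected void pauseApp() { }
    protected void destroyApp(boolean unconditional) { }
}
```

An implicit list keeps the interaction to the navigation keys and the middle selection key, which matches the interaction style described above.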
6.1.3 Creating the video blog presentation To create a blog presentation, users select the element they would like to add from either their personal media collection or the home media collection on the large shared display. On the large shared display, users can view their own and others media uploaded from the phones, view and use media from a home media collection from the family's hard drive on the server, collaboratively add media to create a video presentation using the media and view the video presentation on a media player. The large shared display is broken into three areas (see Figure 6.2). A t the top of the display is the blog creation panel, where media is added from user's media collection. The middle of the display is the personal and shared media areas and the bottom of the display is the family awareness panel. 138 1 creation V panel Media areas f panel ~y Family awareness panel Figure 6.2. Large display showing the top Blog Creation Panel, the middle Media area panel and the bottom Family Awareness panel. Currently 2 users are connected to the Family log. 'Mom's' area is on the left, 'Dad's' area is on the right and the family home media area is in the middle. When users are in the large display controlling mode on the mobile phone application, the navigation keys on the phone can be used to look at their media collection on the large display. The middle button of the navigation keypad is used to select the current item to be added to the blog. Once selected the user is prompted on the mobile phone to enter the number slot they wish the element to be added to. If a photo or video is selected, a smaller version of the media element is added to the slot. If it is text which is added, then an icon is used to represent the text. Audio files are added underneath the visual media element so that during the final presentation, the audio will be played alongside video, pictures or text (see Figure 6.3). 139 Figure 6.3. Blog creation pane showing 5 video and images on the top line in the boxes. There is a grey box representing 1 audio file, which will play over the first 4 images and video. Users can also remove elements by selecting the element again from the personal or family media collection. Once users have added all of the media elements they wish to, they can choose to create a video presentation. A SMIL multimedia file is created and played in a Real-Player on the large display. SMIL is an X M L based file which can play images, video, display text and with audio for specified times and layouts. SMIL files can be played on Real-Player media player or Quicktime. We chose to create the video blog presentation using SMIL because it was flexible enough for us to specify how items could be displayed but did not require much processing time to create a media file. In the blog creation panel, users can add video, images and text to the numbered square boxes in the visual area at the top of the panel. Using their phones, when a user adds audio to the blog they can also choose to 'drag' the audio clip over multiple visual items. This allows family members to create an audio description of pictures or to add music to their video blog presentation. Eight media slots are displayed at a time due to screen real estate, but users can add up to twenty visual items to the blog. Twenty visual items were chosen for simplicity. If a user chooses a spot that is already taken, then the new item object overwrites the previous media item. 
140 The Media areas panel displays each user's media collection currently connected to the server, as well as the family media collection area (see Figure 6.4). Figure 6.4. The large shared display showing 1 user connected to the Family Blog. The family media collection is on the left side of the Media Area panel, another user's, Mom, media collection of the right hand side of the Media Area panel. The family media collection contains media that is saved in a specific directory on the server. The family media collection is included on the display to allow for shared media that may be higher quality images and videos than what is available on the phones. The family can than use this media as well to create their presentation. Every time the Family Blog server is started, it searches the family media directory for images and movies and displays them in this area. If no users are connected to the display only the family's media centre is displayed. Once a user connects to the display, a personal area for that user is added. In order to maximize the amount of screen real-estate available for each user connected to 141 the display, any other personal areas and the family media are resized (see Figure 6.4,6.5 & 6.6). 9 if n vs a m i ids l l um4 s a • mm •r • B ss9 CS IM. -! I - T H " I B I H H E H B B H P ' ^ ^ S B B B B H B B B Figure 6.5. The large shared display showing 2 users connected to the Family Blog. The family media collection is in the middle side of the Media Area panel, another user's, Mom, media collection of the left hand side of the Media Area panel and the third user's, Dad, media collection is on the right hand side of the Media Area panel. 142 a ft Hi n J* j «r> • t a £ 2 ^ B s a * m i. n L J H ffii y i •i i-rira ip El a ' T i l Figure 6.6. The large shared display showing 3 users connected to the Family Blog. The family media collection is in the top right side of the Media Area panel, another user's, Mom, media collection of the top left hand side of the Media Area panel, the third user's, Dad, media collection is on the bottom left hand side of the Media Area panel and the third user's, Girl, media collection is on the bottom left hand side of the Media Area panel The media collections area uses a 'f i lmstrip' design metaphor to display media in each individual area. In the 'f i lmstrip' design, the currently selected media object is displayed in a large size above thumbnails of the next few media items. If a video is the current media object then the first frame from the video is presented. Once a user moves over a video or audio element, the video begins to play. Users can use the menu on the phone to stop or pause the movie from playing. If the current media item is text, the text is displayed and i f the current item is an audio clip, an audio player is displayed. A n icon on the left indicates the type of media item currently selected. The tags for the currently selected item are displayed above the thumbnails (see Figure 6.7). 143 Figure 6.7. A personal area showing the filmstrip view of the media content. The currently selected content is enlarged above the other thumbnails. In each personal area, a uniquely coloured cursor, indicates the currently selected media element. If the phone is in large display mode, the navigation keypad on the phone can be used to move the cursor left or right within the personal area. The user is also able to move control from the personal area to the home media area by selecting the option on the phone. 
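The dynamic resizing of the media areas described above, where the panel is redivided whenever a phone connects or disconnects, can be captured by a small layout routine. The sketch below is an assumption about how that could be computed in Java; it simplifies the Family Blog's actual rules by splitting the panel into equal vertical strips for the family collection plus each connected user, whereas the real layout switches to a grid arrangement once three or four users are connected.

```java
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;

/** Sketch of dividing the media areas panel among the family area and connected users. */
public class MediaAreaLayout {

    /**
     * Returns one region for the family media collection plus one region per
     * connected user. Equal-width strips are a simplification of the real policy.
     */
    static List<Rectangle> layout(Rectangle mediaPanel, int connectedUsers) {
        int regions = connectedUsers + 1;           // +1 for the family media area
        int regionWidth = mediaPanel.width / regions;
        List<Rectangle> areas = new ArrayList<Rectangle>();
        for (int i = 0; i < regions; i++) {
            areas.add(new Rectangle(mediaPanel.x + i * regionWidth,
                                    mediaPanel.y,
                                    regionWidth,
                                    mediaPanel.height));
        }
        return areas;                               // repaint the personal areas with these bounds
    }
}
```

Recomputing these bounds on every connect and disconnect is what keeps each personal area as large as possible for viewing media.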
The phone indicates which area the user is controlling on the mobile phone screen. Also, the colour of the cursor is displayed (see Figure 6.8). 144 Figure 6.8. Nokia N80 mobile phone showing the screen on the phone when controlling large display. The color of the user's cursor (red) is displayed on the top left. The area which the user is controlling (personal media) is stated at the top of the display. Additional instructions are display on the bottom of the display. The family awareness panel contains an icon for each different member of the family. Once a member is connected to the display, their icon becomes coloured to indicate they are connected (see Figure 6.9). In the future we envision this panel to be used to get more information about where family members are including their location in and out of the house. Figure 6.9.The Family Awareness Panel of the Family Blog on the large share display showing 4 users in which 2 users are connected (Mom and Dad) to the Family Blog and 2 users not connected (Girl, Boy) to the Family Blog. 6.2. Design The interface design of the Family Blog application is based on our results from our studies discussed in Chapters 3, 4 and 5. The results from the pilot study (Chapter 3) demonstrate how multiple users are able to share personal areas on a large shared display, 145 including how to display personal areas, and how to use color to distinguish different user's objects on the display. The main study (Chapter 4) demonstrates several important guidelines for using a small personal display and a large shared display for loosely and tightly coupled collaborative tasks. For example, we found the large display should be used for selecting media and collaborative tasks. Finally, the results from the follow-up study (Chapter 5) indicate some users do not find it more difficult to switch attention between a large and small display so the display on a personal device and personal areas on the shared display can be used in conjunction. 6.2.1 Managing Screen Real-Estate on a Large Shared display . Defined personal areas on a screen reduce distraction. (Chapters 3 & 4) Borders, light white backgrounds and spacing are used to define each user's personal areas. The shared media area also has a different background to separate this area from user's personal areas (see Figure 6.10). 146 Figure 6.10. An example of two personal areas ofthe Family Blog from the large shared display. When laying out user's objects on the screen, ensure all of a user's objects are grouped together and there is sufficient space in between different user's objects. (Chapters 3 & 4) A l l of a user's media objects are grouped together in their own personal self-contained area clearly delineated by borders (see Figure 6.10). Size of windows should be automatically sized to be as large as needed to complete a task successfully. (Chapter 3) The layout of the personal areas uses a dynamic sizing policy depending on the number of users connected to the display in order to make each area 147 as large as possible to view their media. The middle area of the large display is maximized in order to give each personal maximum size. If one user is connected this area is divided into two areas (one for the user connected and one for the family media area), i f two users are connected the area is divided into three vertical areas and so forth for up to four users. Application windows should be placed relative to a user's spatial location. 
(Chapters 3 & 4 ) The placement of the personal areas on the shared display is based on where users in a living room might sit. For example when two users are connected to the display the personal areas are placed beside each other mirroring the position of the two users on the couch The use of icons reduces the amount of usable screen space for subjects and should be eliminated i f possible. (Chapter 3) The only icons used in the design of Family Blog are the in the Family Awareness panel which is used to increase the awareness of family member's location rather than represent functionality. 6.2.2 Shared Control Color is an effective way to indicate clear ownership (Chapters 3 & 4). Different color cursors are used to indicate individual control within a particular area. The color of an individual's cursor is indicated on the 148 mobile display for each user when controlling the large shared display (see Figure 6.11). n >| I ssj id r r > 3 i H "I' Figure 6.11. The bottom view of three personal areas showing two different color cursors. The red cursor is on the left and the blue cursor is on the right. . Limiting users to control of their own objects is an effective policy when content does not need to be shared. (Chapter 4) Each user only has control of objects in their personal area and not other's personal areas. A l l users can move their cursor into the shared media area, but only one user can use this area at a time. • When designing applications on a large screen display where collaboration is required, where possible each user should posses individual control of the shared display. (Chapter 4) Each user controls their individual cursor using the mobile phone. A l l users are able to add, remove and publish media items into the video blog presentation. 6.2.3 Distribution of tasks on the personal and shared large displays . Ensure that different tasks are accomplished on one display or the other. This will minimize the cognitive load needed for users to switch attention between the different displays. (Chapter 4 & 5 ) 149 Different phases of the application are completed on either the personal display or the large, shared display. Creation of media content is completed on the personal display whereas the creating the video blog presentation is completed on the large display. It is acceptable for different phases of the application to utilize the personal or large display as long as each of the phases is completed before the user must switch to the different display. (Chapter 4). See previous point. When browsing media, the large display should be utilized in order to enable users to view the content more clearly and interact more efficiently with the display. (Chapter 4). Once media is shared, users can browse their own and shared media on the large display. Users only have to browse media on their personal device to upload the content or i f they are offline. When collaborating, the large display should be utilized in order to facilitate discussion and co-operation. (Chapter 4) During the creation of the video presentation, the large display is utilized for all adding, removing media, and playing the video presentation. The personal device is only used to control the display. Consider the issues of user's need for privacy and allow the choice of some content to appear only on the personal display. 
(Chapter 4) 150 Users only share the media they choose to from their personal displays so i f there are private items the user does not which to share, they do not have to share these with the family or group. • When collaborating it is not necessary to utilize the personal display. (Chapter 4) Once the user begins to control the shared display, the personal display is only used for extraneous instructions to the user, meant to help beginner users. 6.3 Implementation Details The Family Blog implementation includes a database, server application and client application and can run up to four concurrent users (see Figure 6.12 ). Server Figure 6.12 Diagram showing Family Blog architecture. The database and client applications both send data to the Server. Each mobile phone has the client application running on them and the server application that is connected to the large screen display listens for all commands from the phones. The client application used the programming language J2ME and deployed on Nokia N80 mobile phones. The screen resolution on the Nokia N80 mobile phones is 352 by 416 which can display up to 262,144 colors. Java was used to develop the server was developed in Java. We use a 66 inch SMART Board as the large shared display with 151 the resolution of 1024 by 768 pixels (SmartTechnologies), but any display connected to the server would work. The server manages the communication between the client and the database. Once the application starts, the server reads the current media from the database and listens for commands from the client application. The server is also responsible for painting the large shared display interface and keeps track of users connected, personal areas, media players and the storyboard. In addition, the server creates the blog video presentation, starts the RealPlayer, and plays this presentation once selected from the client application. The client application creates and browses media and interacts with the shared display. The database manages the lists of users registered with the system and the users' previous media uploaded. We maintain a list of previously uploaded media because it is timely to upload media content and users can continuously build their media collections. As well as a list of user's media, the file location and tag information is included in the database. 6.4 Discussion Throughout the development of this application, we encountered many challenges with working with mobile phone technology. We chose to use the J2ME programming language because it runs on different mobile platforms and contains many multimedia libraries required for the Family Blog. One challenge encountered was the camera viewfinder functionality in J2ME is slow so users must move the phone slowly. Saving the image or video is to the phone's memory is also slow. Besides the speed, the quality of the video was very low which was even more apparent when viewed on the large 152 display. This can cause some usability issues for users who are used to the faster speed of the online Nokia Camera application and investigation about different solutions need should happen before further user studies. Another challenge with J2ME is saving any data in the phone's memory requires the application must be signed from an outside agency. While we were able to sign the application, we found this to be a time consuming and frustrating process. Another challenge was the wireless network. 
Transferring media across the network took a few seconds and hung the mobile phone display; therefore, the users did not receive any feedback messages regarding the delay. As well as speed, the dependability of the wireless network made connecting to the server difficult. We found the best solution to this problem was to create an ad-hoc network on the personal computer the server was running and use this connection for the server and the client application. 6.5 Conclusions We created an application where users can create, share media and collaboratively create a multimedia video presentation. This Family Blog application uses mobile phones to create media and a large shared display for sharing and collaborating with family members. The application uses the guidelines developed from our studies and initial reactions from users are very positive. Users' feedback regarding the Family Blog demonstrates that it is easy to use personal areas, move control from personal areas and the shared family area and creating the video blog presentation together is fun. While this is informal feedback, it does help validate our user guidelines from our user studies. 153 Our next steps with the Family Blog are to make improvements, such as improving video quality and speed of camera on the phone, to enhance the user experience. While we did not complete an evaluation of the Family Blog to validate our findings from our previous study, the goal of this prototype was to send this to our previous study where they will investigate different interaction devices for large displays in the home. Subsequent to our industrial partner user studies, we would like to complete a study to begin to validate our findings from our previous studies. Using the Family Blog to understand how users prefer to use a personal display and large display in the home would lead to many further insights about designing applications using both of these displays. 154 Chapter 7 Conclusions and Future Work The goal of this thesis was to explore how people in the home can utilize a large shared display and the impact of adding a small, personal display, such as a mobile phone, when users are working collaborating and working individually. Our work builds on previous work by focusing on collaborative activities in the home, and investigating which displays users' prefer and which displays are more efficient during varying levels of coupling in collaborative tasks. We began this research by completing a pilot study to see how multiple users work in parallel on a single shared large display (discussed in Chapter 3). From our observations, we were able to create guidelines for designing applications where multiple people work in parallel on a large shared display. These guidelines include grouping all of a user's objects together on the large shared display relative to his seating position, and using color to indicate ownership of objects on the large shared display. We then wanted to understand if some parallel tasks could take place on a small personal display rather than on the large shared display and what the impact this might have on collaboration. Our main study (discussed in Chapter 4) investigated the distribution of tasks with varying levels of coupling on a small personal display and a large shared display. From our results, we created a set of design guidelines for using small personal displays in combination with a large shared display. 
We found that users prefer the large display for collaborating and viewing media, users thought the personal display was useful for providing feedback, multiple cursors help collaboration, and 155 switching between a small personal display and a large shared display is difficult for some users. After our main study, we wanted to understand how difficult it is for users to switch between a personal and large display, so we completed a follow-up study. Our follow-up study indicated some users do not find it difficult to switch attention between a small personal display and large display while others do. More research on this issue is needed. Finally, we designed and implemented an application, the Family Blog, which uses multiple small personal displays and a large shared display to demonstrate our results and experiences from the studies. A l l of these were shared with our industrial sponsor for use in their future prototyping. Through our studies and our experience of building the Family Blog application, this thesis makes three main research contributions. 1. The exploration of the design space of using small personal displays in combination with a large shared display for in-home collaborative tasks 2. The development of a task suitable for studying user's preferences and performance when completing collaborative tasks using a small personal displays in combination with a large shared display (Chapter 4). 3. The demonstration of the viability of design guidelines through the creation of an application which uses multiple small personal displays in combination with a large shared display for creating and sharing media content. Future work includes further investigating the cost of switching between a small personal display and a large shared display, validating our design of the Family Blog 156 application, and applying some of our experiences and results for not only collocated tasks in the home and also for distributed collaborative tasks in the multiple home. In our follow-up study, we looked at working memory and recognition to understand the cost of switching between a small personal display and a large display. Further work is required to understand the cost of switching focus between a small personal display and a large shared display in different scenarios that have different cognitive loads. Investigating more complex tasks, requiring a higher cognitive load a the user, will indicate when switching between a small personal display and a large display is too costly. Besides the cost of switching, looking at how to direct a user's attention to the relevant display is an important design guideline to understand. For example, i f a user's focus is on the large shared display but there is important information for the user on the personal display, what affordances can direct the user's attention to the personal display? This information would be valuable not only for applications in the home, but also in work and school environments. One of the next steps in our research is to validate our design of the Family Blog. We need to confirm that the guidelines used to design the Family Blog are valid when combined in a single application. Further, we would like to see i f these guidelines are still relevant i f there is more functionality and more complexity added to the Family Blog. Further functionality could include archiving and searching capability for different media content. 
It is not clear if this functionality would be better suited to the small personal display or the large shared display. Another area to explore is extending the Family Blog to allow for distributed collaboration between users in two separate locations. For example, if a family goes on a 157 vacation with their Grandma, who lives in a different city, the family may want to collaborate with Grandma to create a presentation of the trip that reflects everybody's experience. In long-distance collaboration, what should the design guidelines be for the shared and personal displays, and how does this affect collaboration? While we are just beginning to explore the use of small personal devices and large displays in the home, our results indicate that a small personal display can be useful in collaborative tasks and that users are able to use a small personal display with a large shared display without difficulty. There are many opportunities in the home to use these two displays in combination to help users. Further research can help explore these different opportunities. 158 Bibliography Apple. (2007). http://www.apple.com/appletv/ Ball, R., & North, C. (2005). Effects of tiled high-resolution display on basic visualization and navigation tasks. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1196-1199. Ballagas, R., Rohs, M., Sheridan, J. G., & Borchers, J. (2005). Sweep and point & shoot: Phonecam-based interactions for large public displays. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1200-1203. Berry, L., Bartram, L., & Booth, K. S. (2005). Role-based control of shared application views. Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, 23-32. Biehl, J. T., & Bailey, B. P. (2004). ARIS: An interface for application relocation in an interactive space. Proceedings of the 2004 Conference on Graphics Interface, 107-116. Bier, E. A., & Freeman, S. (1991). MMM: A user interface architecture for shared editors on a single screen. 79-86. Bolt, R. A. (1980). "Put-that-there": Voice and gesture at the graphics interface. Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press, New York, NY, 262-270. Churchill, E., Nelson, L., Denoue, L., & Girgensohn, A. (2003). The plasma poster network: Posting multimedia content in public places. Proceedings of Interact. Dourish, P., & Bellotti, V. (1992). Awareness and coordination in shared workspaces. Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work, 107-114. Elrod, S., Pier, K., Tang, J., Welch, B., Bruce, R., Gold, R., et al. (1992). Liveboard: A large interactive display supporting group meetings, presentations, and remote collaboration. Conference on Human Factors in Computing Systems. Fox, A., Johanson, B., Hanrahan, P., & Winograd, T. (2000). Integrating information appliances into an interactive workspace. IEEE Computer Graphics and Applications, 20(3), 54-65. Gaver, W. (1991). Sound support for collaboration. Proceedings of the Second European Conference on Computer-Supported Cooperative Work (ECSCW '91), 293-308. 159 Google. (2007). http://maps.google.com/ Greenberg, S., Boyle, M., & Laberge, J. (1999). PDAs and shared public displays: Making personal information public, and public information personal. Personal Technologies, 5(1), 54-64. Greenberg, S., & Rounding, M. (2001). The notification collage: Posting information to public and personal displays.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 514-521. Gutwin, C., & Greenberg, S. (1998). Design for individuals, design for groups: Tradeoffs between power and workspace awareness. Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work, 207-216. Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. Hancock and N. Meshkati, editors, Advances in Psychology: Human Mental Workload, Elsevier Science, 139-183. Hinckley, K., Ramos, G., Guimbretiere, F., Baudisch, P., & Smith, M. (2004). Stitching: Pen gestures that span multiple displays. Proceedings of the Working Conference on Advanced Visual Interfaces, 23-31. Huang, E. M., & Mynatt, E. D. (2003). Semi-public displays for small, co-located groups. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 49-56. Izadi, S., Brignull, H., Rodden, T., Rogers, Y., & Underwood, M. (2003). Dynamo: A public interactive surface supporting the cooperative sharing and exchange of media. Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology, 159-168. Li, K., Chen, H., Chen, Y., Clark, D. W., Cook, P., Damianakis, S., et al. (2000). Building and using a scalable display wall system. IEEE Computer Graphics and Applications, 20(4), 29-37. Ma, M., Wilkes-Gibbs, D., & Kaplan, A. (2004). IDTV broadcast applications for a handheld device. Proceedings of the 2004 IEEE International Conference on Communications, Vol. 1. Miyaoku, K., Higashino, S., & Tonomura, Y. (2004). C-blink: A hue-difference-based light signal marker for large screen interaction via any mobile terminal. Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, 147-156. 160 Myers, B. A. (2001). Using handhelds and PCs together. Communications of the ACM, 44(11), 34-41. Myers, B. A., Bhatnagar, R., Nichols, J., Peck, C. H., Kong, D., Miller, R., and Long, A. C. (2002). Interacting at a distance: Measuring the performance of laser pointers and other devices. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM Press, New York, NY, 33-40. Myers, B. A., Stiel, H., & Gargiulo, R. (1998). Collaboration using multiple PDAs connected to a PC. Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work, 285-294. Mynatt, E. D., Igarashi, T., Edwards, W. K., & LaMarca, A. (1999). Flatland: New dimensions in office whiteboards. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 346-353. Palen, L., & Dourish, P. (2003). Unpacking "privacy" for a networked world. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 129-136. Pedersen, E. R., McCall, K., Moran, T. P., & Halasz, F. G. (1993). Tivoli: An electronic whiteboard for informal workgroup meetings. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 391-398. Peters, S., & Shrobe, H. E. (2003). Using semantic networks for knowledge representation in an intelligent environment. Proceedings of the First IEEE International Conference on Pervasive Computing and Communications, 323-329. Rekimoto, J. (1997). Pick-and-drop: A direct manipulation technique for multiple computer environments. Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, 31-39. Rekimoto, J. (1998). A multiple device approach for supporting whiteboard-based interactions.
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 344-351. Robertson, S., Wharton, C., Ashworth, C., & Franzke, M. (1996). Dual device user interface design: PDAs and interactive television. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 79-86. 161 Rogers, Y., & Lindley, S. (2004). Collaborating around vertical and horizontal large interactive displays: Which way is best? Interacting with Computers, 16(6), 1133-1152. Sandhu, R. S., Coyne, E. J., Feinstein, H. L., & Youman, C. E. (1996). Role-based access control models. Computer, 29(2), 38-47. Scott, S. D. (2003). Territory-based interaction techniques for tabletop collaboration. UIST 2003 Conference Companion, 17-20. Smart Technologies Inc. (2003). DViT Digital Vision Touch Technology: White Paper. www.smarttech.com/dvit/DViT_white_paper.pdf. Snowdon, D., & Grasso, A. (2002). Diffusing information in organizational settings: Learning from experience. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 331-338. Stanford, V., Garofolo, J., Galibert, O., Michel, M., & Laprun, C. (2003). The NIST smart space and meeting room projects: Signals, acquisition, annotation, and metrics. Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 4. Stewart, J., Bederson, B. B., & Druin, A. (1999). Single display groupware: A model for co-present collaboration. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Steves, Rick (Producer). (2001). London: Royal and Rambunctious. [Television series episode]. Rick Steves' Europe. Steves, Rick (Producer). (2003). Berlin: Resilient, Reunited and Reborn. [Television series episode]. Rick Steves' Europe. Steves, Rick (Producer). (2003). Munich and the Foothills to the Alps. [Television series episode]. Rick Steves' Europe. Streitz, N. A., Geißler, J., Holmer, T., Müller-Tomfelde, C., Reischl, W., Rexroth, P., et al. (1999). i-LAND: An interactive landscape for creativity and innovation. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 120-127. Tan, D. S., & Czerwinski, M. (2003). Information voyeurism: Social impact of physically large displays on information privacy. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 748-749. 162 Tang, J. C. (1991). Findings from observational studies of collaborative work. International Journal of Man-Machine Studies, 34(2), 143-160. Tani, M., Horita, M., Yamaashi, K., Tanikoshi, K., & Futakawa, M. (1994). Courtyard: Integrating shared overview on a large screen and per-user detail on individual screens. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 44-50. Tse, E., & Greenberg, S. (2004). Rapidly prototyping single display groupware through the SDGToolkit. Proceedings of the Fifth Conference on Australasian User Interface, Vol. 28, 101-110. Tse, E., Histon, J., Scott, S. D., & Greenberg, S. (2004). Avoiding interference: How people use spatial separation and partitioning in SDG workspaces. Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work, 252-261. Vogt, F., Wong, J., Po, B. A., Argue, R., Fels, S. S., & Booth, K. S.
(2004). Exploring collaboration with group pointer interaction. Proceedings of the 2004 Computer Graphics International, 636-639. 165 Appendix B: Ethics Certificate UBC The University of British Columbia Office of Research Services Behavioural Research Ethics Board Suite 102, 6190 Agronomy Road, Vancouver, B.C. V6T 1Z3 CERTIFICATE OF APPROVAL - MINIMAL RISK RENEWAL PRINCIPAL INVESTIGATOR: Kellogg S. Booth DEPARTMENT: UBC/Science/Computer Science UBC BREB NUMBER: H03-80151 INSTITUTION(S) WHERE RESEARCH WILL BE CARRIED OUT: Institution: UBC Site: Point Grey Site Other locations where the research will be conducted: N/A CO-INVESTIGATOR(S): Joanna McGrenere Regan Mandryk Ying Zhang Matthias Finke Sheelagh Carpendale Lyn Bartram Mark Hancock Mani Golparr Fard Madhav Nepal Colin Swindells Petra Neumann J. Karen K. Parker Joel Lanir Mike Blackstock Barry A. Po Rodger J. Lea Zhangbo Liu Ima Hajshirmohammadi 166 Alex Merritt Melanie Tory Yamin Htun Nicole Arksey Rachel A. Pottinger Adam Matthews Lior Berry Sheryl AS Staub-French Tamara Munzner Anthony Tang Garth Shoemaker Lu Yu David Sprague Sherman Lai SPONSORING AGENCIES: Natural Sciences and Engineering Research Council of Canada (NSERC) - "Collaborative Visualization and Interaction in Ubiquitous Computing Environments" - "Network for Effective Collaboration Technologies Through Advanced Research (Orsil Title)" - "ARTIFACT: Advanced Research, Techniques, and Informatics for Future Advantages in Construction Technology" - "Interactive Computer Graphics (Orsil title)" - "Collaboration Technology and Multi-User Interfaces" - "Direct Multi-Touch Interaction for a Very Large Wall Display" Panasonic R & D Co. of America - "Collaborative Visualization and Interaction in Ubiquitous Computing Environments" PROJECT TITLE: ARTIFACT: Advanced Research, Techniques, and Informatics for Future Advantages in Construction Technology EXPIRY DATE OF THIS APPROVAL: June 5, 2008 APPROVAL DATE: June 5, 2007 The Annual Renewal for Study have been reviewed and the procedures were found to be acceptable on ethical grounds for research involving human subjects. Approval is issued on behalf of the Behavioural Research Ethics Board 167 Appendix C: Materials for the Pilot Study C.1 Trip Planning Prototype The trip planning system consists of a Movie Player Application, a Map Application and a Tag Application. Details about these applications are given below. Movie Player Application The Movie Player Application can play a specified video in full screen mode (see Figure C.1.1) or in three-quarter screen mode. In the three-quarter screen mode the video takes up approximately half the area of the entire screen and is located at the top-centre of the screen (see Figure C.1.2). Standard movie controls, such as start, stop and pause, are only available as keyboard shortcuts. 168 Figure C.1.1. The trip planning prototype showing the Movie Player Application in full screen mode on the large shared display. The two application icons are located in the bottom centre of the screen and are overlaid on top of the movie. The Map Application icon is on the left and the Tag Application icon is on the right. 169 Figure C.1.2. The trip planning prototype showing the Movie Player Application in three-quarter screen mode on the large shared display. The two application icons are located in the bottom centre of the screen and are not overlaid on top of the movie. The Map Application icon is on the left and the Tag Application icon is on the right.
Map Application The Map Application allows users to add a marker to a map (see Figure C.1.3). 170 Figure C.1.3. Trip planning prototype in three-quarter screen mode showing an instance of the Map Application in the bottom right hand side of the screen. The Map Application window overlays the bottom right of the movie player. The user simply clicks anywhere on the map and a marker appears. If a user subsequently clicks anywhere else on the map, the marker is moved to the new location (see Figure C.1.4). We did not allow multiple simultaneous markers because we did not want the map to become too crowded with markers. Too many markers might confuse users when they re-opened the Map Application. Each marker is saved for future review, although this feature was not used in the pilot study because we are only concerned with the initial phase of trip planning. The map displayed by the application is pre-selected when the application is configured, and places of interest are labeled on the map so users can easily locate them. The map images were downloaded from Google maps (Google, 2007). A minimal sketch illustrating this single-marker behaviour is given at the end of this section. 171 Figure C.1.4. An example of the Map Application enlarged from the previous figure showing a map of the City of London with labels for locations of interest and a marker on one of them (Aster House Hotel). Tagging Application The Tag Application contains 30 pre-selected tags describing different locations mentioned in the television programs (see Figure C.1.5). 172 Figure C.1.5. Trip planning prototype in three-quarter screen mode showing an instance of the Tag Application on the right hand side of the screen. The Tag Application window overlays the bottom right of the movie player. The tags are listed within different groups and are in black text before a user selects the tag. To select a tag, users click on the tag and the tag changes to bold red to show it has been selected (see Figure C.1.6). Users can select as many tags as they wish. 173 Figure C.1.6. An example of the Tag Application enlarged from the previous figure showing a selection of tags for the City of London. The 'Paintings' tag has been selected by the user and has changed from black to red and become bold to indicate it has been selected. Resizing an Application A small icon was placed on the bottom right corner of the Map Application and the Tag Application to indicate to users which area of the application to drag for resizing the application (see Figure C.1.7). 174 Figure C.1.7. Trip planning prototype in three-quarter screen mode showing the Tag Application open and highlighting the resizing icon in the bottom right side of the Tag Application. Closing an Application To close any of the applications, users click on the "Done" button and the application window closes. Panning Both the Tag Application and the Map Application have the ability to pan. This functionality was important to enable users to use the application without opening it to full size. Users can click up, down, left and right buttons to pan the application. The buttons were placed on the right and bottom to save space (see Figure C.1.8). 175 Figure C.1.8. Trip planning prototype in three-quarter screen mode showing the Tag Application with panning arrows alongside the bottom and right hand side of the Tag Application window.
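To make the Map Application's single-marker behaviour concrete, the following is a minimal illustrative sketch. It is written in Python with Tkinter purely for exposition; the pilot prototype was not implemented this way, and the class name, widget sizes and the plain grey background (standing in for a pre-selected map image) are all hypothetical. Clicking anywhere places the marker, clicking again moves it, and each position is recorded so that it could be reviewed later, mirroring the behaviour described above.

```python
# Illustrative sketch only (not the pilot prototype's code): a map area that
# shows at most one marker at a time. A click places the marker; a later
# click moves it. Positions are also recorded for later review.
import tkinter as tk

class SingleMarkerMap(tk.Canvas):
    def __init__(self, master, width=400, height=300):
        # A plain grey background stands in for the pre-selected map image.
        super().__init__(master, width=width, height=height, bg="lightgrey")
        self.marker = None      # canvas id of the current marker, if any
        self.history = []       # saved (x, y) positions for future review
        self.bind("<Button-1>", self.place_marker)

    def place_marker(self, event):
        if self.marker is not None:
            self.delete(self.marker)          # enforce a single marker
        r = 6
        self.marker = self.create_oval(event.x - r, event.y - r,
                                       event.x + r, event.y + r,
                                       fill="red", outline="black")
        self.history.append((event.x, event.y))

if __name__ == "__main__":
    root = tk.Tk()
    root.title("Map Application sketch")
    SingleMarkerMap(root).pack()
    root.mainloop()
```

Running the file opens a small window in which clicking moves a single red marker, which is the essential interaction the pilot study relied on.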
176 C.2 Travel Interests Table C.2.1 shows the travel interests given to subjects during the Pilot Study for the training movie and the other two movies used during the study. Each of the 3 subjects in a group was given a different interest for each of the travel movies. Movie Interest 1 Interest 2 Interest 3 Berlin (training) Government Parks & Markets Theatre London Paintings, Theatre & Gardens Statues & Churches Food, Drink & Hotels Munich and the Swiss Alps Food & Drink Paintings, Castles & Shopping Outdoor activities & Hotels Table C.2.1.A table showing the 3 different movies and the 3 interests given to the 3 subjects in a group during the Pilot Study. C.3 Munich and the Swiss Alps Quiz for Pilot Study Subject #: Quiz: Munich & the Swiss Alps 1. What does Mary's Square mark ? 2. Where do a lot of people in Munich go to do their shopping? 3. What was Munich's policy when rebuilding itself after the war? 4. Where is a good place to buy fresh produce? 5. How many different breweries does Munich have? 6. Why were beer gardens created? 7. What is Munich's oldest church? 8. Who gave the tomb of Mundita to Munich? 9. What is Munich called in German? 10. Name one interesting artifact found at the Residenz palace? 11. What are some items sold at the Palace's delicatessen? 12. What is a good, inexpensive way to get around the city? 13. How large is the English Garden? 178 14. What is something interesting mentioned about the English Garden? 15. What does the gallery name Alte Pinahothek mean in English? 16. What two religions were at battle in the 1500s and are depicted in some paintings in the Alte Pinahothek? 17. What types of beer can you choose at the beer hall? 18. How long is the drive into southern Bavaria? 19. When did King Ludwig build his own castle? 20. What type of architecture is the Neuschwanstein castle? 21. What village is visited just over the border into Austria? 22. What are the names of one of the ruins mentioned? 23. How high is the Zugspitze mountain? 24. What is the best way to get to the top ofthe Zugspitze mountain? 25. What is on top of the Zugspitze mountain summit? C.4 London Quiz for Pilot Study Subject #: Quiz: London 1. At what tourist spot in London, does the 'Changing of the Guard' occur? 2. Where should you line up to see Britain's government in action? 3. What is the type of architecture of Britain's parliament building? 4. What street does Britain's Prime Minister live? 5. What is the monument called which honor's Britain's war dead? 6. Who is the one-armed statue at the end of Whitehall Street? 7. What animal did the artist of the use to model the paws of the bronze lion statues? 8. What is the wine bar called 2 blocks from Trafalgar Square called? 9. What is the best way to get cash when traveling in Britain? 10. Where are the most amounts of theatres in London? 11. What is a good way to save some money when going to the theatre? 12. What is the old center of London called? 13. How many people commute to the old center of London? 180 14. Who is the architect who designed St. Paul's Cathedral? 15. Which famous couple got married in St. Paul's Cathedral? 16. What are some warnings given mentioned about the Soho area? 17. What three types of restaurants are visited in Soho? 18. Which hotel is mentioned for an affordable place to stay? 19. Where should you buy tickets for the tube in London? 20. What statue is outside the British Library? 21. Name something historically famous that can be found at the British Library. 22. What is the name of the tea museum? 
23. What 'ruined' tea in this 1950's? 24. What river in London can you take a cruise on? 25. What is the name of the gardens mentioned? 181 C.5 Pilot Study Questionnaire Questionnaire Section I: Personal Information 1. In what age group are you? D 19 and under • 20 - 29 • 30-39 • 40 - 49 • 50 - 59 • 60 + 2. Gender: • Male • Female 3. In terms of your current occupation, how would you characterize yourself? • Academic D Professional • Manager • Software Developer • Graduate Student, please specify area of study: D Undergraduate Student, please specify major: • Other, please specify: 4. How many hours a week do you use a computer? • Greater than 40 hours • 20 to 40 hours • 10 to 20 hours • 5 to 10 hours • 0 to5 182 5. How many hours a week do you watch TV? • Greater than 40 hours • 20 to 40 hours • 10 to 20 hours • 5 to 10 hours • 0 to5 Part 2: 1. Please answer the following questions: SD = Strongly Disagree; D = = Disagree N = = Neutral SA = Agree SA = Strongly Agree 'When performing my tasks', I felt I was O S D O D : O N • O A , O S A disturbing the other users. • When others were performing their tasks, it O S D O D O N O A O S A was disturbing to me. It was difficult to follow the movie when O S D O D O N O A O S A others were performing their tasks. It was difficult to follow the movie when I O S D O D •'• O N • O A O S A was performing my tasks. There was enough room on the screen to O S D O D O N O A O S A perform my tasks. It was clear which applications I had to use O S D O D O N O A O S A to complete my tasks. The applications were easy to use. > O S D O D O N O A O S A It was clear what applications belonged to mp O S D O D O A O S A It was clear what applications I could O S D O D O N O A O S A control. It was clear what applications belonged to O S D O D O N . O A O S A other users. It was clear what applications I could not O S D O D O N O A O S A control. 183 In my home, I would like to be able to do OSD H \OD; T ; O N I ^ O A 7 ! O S A more than one task on my TV. ' ^ , (.[•' Part 3: 1. When opening an application to perform a task, please describe why you moved the application to a particular spot on the screen. 2. Did you feel you should use a particular part of the screen to complete your tasks? Why? 3. Please describe how you decided what size the applications should be when performing your tasks. 4. When performing your own tasks, did you prefer when the movie took up the full screen or a smaller part of the screen? Why? 5. When others were performing their tasks, did you prefer when the movie took up the full screen or a smaller part of the screen? Why? 184 6. Would you prefer if the application automatically displayed in a particular location, rather than you have to choose the spot? Why? 7. Would you prefer if the application automatically resized, rather than you have to resize the application? Why? 1 8 5 Appendix D: Materials for the Main Study D.l Large and Small Display Prototype Creating a Screen Shot Task In the Creating Screen Shot Task, users watch a movie on the shared display and create screen shots using their mobile phone. The most current screen shot created is displayed on the mobile phone display. • The movie plays on the whole screen available. • As the movie is playing on the large display, subjects hit the 'Capture' button on the mobile phone to capture a screen shot (see Figure D . l . l ) . Figure D.l.l. Nokia N80 mobile phone display showing a screen shot created from the video playing on the large display. 
The 'Capture' button is on the bottom left of the display. Due to copyright reasons, the image has been removed. The original source is 'Meet the Robinsons' (Borden, 2007). 186 The capture command is sent directly to the large display and the server takes a screen shot and sends it to the mobile phone. The screen shot is displayed on the screen of the mobile phone. After the video is complete, the 'Next Phase' button is selected to view a list of the screen shots created in this task (see Figure D. 1.2). I) , ImaaeCapture > r iilihirt ' fjexl Pltasc Figure D.l.2. Nokia N80 mobile phone display the 'Next Phase' button is on the bottom right of the display. Due to copyright reasons, the image has been removed. The original source is 'Meet the Robinsons' (Borden, 2007). Selecting a Screen Shot Task In the Selecting Screen Shot Task, users select a subset of their personal screen shots to share with the group. The list of screen shots captured is displayed on either a personal area of the large display or on the mobile phone display. In this task, the selection of screen shots can be completed on the large shared display or on the personal display. 187 • When selecting using the large shared display, the screen is divided into 2 panels (see Figure D.l.3). The bottom panel is the individual areas to select personal screen shots to share with the group. The top panel is where the screen shots go once a user chooses to share them with the group. Shared screenshots Personal areas where users select screen shots to share Figure D.l.3. Large shared display during Selecting Screen Shot Task showing three personal areas with each individual subjects photos from the Creating Screen Shot Task. Above the three personal areas is the shared screen shot panel where screen shots are placed after users has chosen to share them. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). When selecting on the large shared display, each user is given an equal amount of screen real estate. Each user's personal area is relative to their seating position (see Figure D.l.4). Our observations from our pilot study suggest user's objects should be grouped together, separated from other user's objects and relative to the seating position. 188 X X Figure D.l.4. The large shared display in the Selecting Screen Shot Task showing three individual user's areas on the left, middle and right. The red users personal area is highlighted. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). • When selecting on the large shared display, the display on the mobile phone is blank. Since we found color was an effective method for indicating individual control in our preliminary study, each personal area on the large shared display had a different color cursor. When selecting using the personal display, the personal screen shots created in the Creating Screen Shot tasks are displayed on phone. • Both the displays utilize a filmstrip interface where the currently selected screen shot is larger and displayed above 3 (the personal display) or 4 (the large shared display thumbnails of other screen shots in the user's collection. • The user navigates through the list of screen shots by using the left and right navigation keys on the mobile phone (see Figure D.l.5). 189 Navigation Keypad Figure D.l. 5. Nokia N80 mobile phone showing the display for the Selecting Screen Shot Task. 
The navigation keypad on the phone is used to move the cursor left or right on the mobile phone and the large shared display. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). For both the large shared display and the small personal display, to select a screen shot to share, users select the middle button on the navigation keypad. Once a screen shot is selected, a small yellow border appears around the screen shot (see Figures D. l .6 and D.l.7) 190 Figure D.l.6. Nokia N80 mobile phone the display for the Selecting Screen Shot Task showing the left thumbnail screen shot has been selected by this user (blue) and is highlighted in yellow. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). Figure D.l.7. A personal area on the large shared display for the Selecting Screen Shot Task showing the left thumbnail screen shot has been selected by this user (blue) and is highlighted in yellow. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). 191 Once a user has selected which screen shots they would like to share, they hit the "Share selected" button (see Figure D.l.8). A copy of the screen shots is added to the top third of the large display (see Figures D . l . 8 and D.l.9). Figure D.l.8. Nokia N80 mobile phone in the Selecting Screen Shot Task showing the 'Share Selected' Button on the bottom left and the 'Next Phase' Button on the bottom right. Due to copyright reasons, the images have been removed,, original source is 'Meet the Robinsons' (Borden, 2007) 192 Shared Screen y Shots Figure D.l.9. The large shared display in the Selecting Screen Shot Task showing the list of shared screen shot on the top of the display. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). • After everybody has completed sharing the screen shots they want, the 'Next Phase' button is selected to view a list of the screen shots created in this task. Building the Outline Task In the Building the Outline Task, subjects select screen shots from the list of shared screen shots to create an outline which describes important elements from the movie trailer they watched in the Creating Screen Shots Task. We implement three different control policies, which differ in the number of cursors and personal areas on the shared display. . The large shared display is divided into 2 panels (see Figure D.l.10). The bottom panel is the area to select shared screen shots. The top panel is the outline created from the screen shots. 193 Figure D.l. 10. The large shared display in the Outline Building Task showing the bottom area where the shared list of screen shots are displayed in the single cursor condition . The top panel is the outline created using screen shots from the shared list below. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). The list of shared photos are combined to a single version of the list Similar to the Selecting Screen Shot task, each subject can navigate through the shared list of photos using the left and right navigation keys on the mobile phones » To select a screen shot to appear in the outline, any subject hits the middle button on the mobile phone navigation keypad. 
The screen shot is added above the shared list of photos in the order in which the screen shot appeared in the movie (see Figure D . l . l 1). 194 3 4 5 6 7 8 y 10 Figure D.I.J 1. The top panel of the large shared display from Figure D.l 7 showing the outline once screen shots have been added to the outline. Underneath each screen shot is a number corresponding to its current place in the outline. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). • Once a screen shot is selected to be added to the outline, a yellow border around the screen shot in the list appears to indicate this screen shot appears in the outline (see Figure D . l . 12). Figure D.l. 12. The bottom panel of the large shared display in the single cursor condition from the Figure D.l 7 showing 3 selected screen shots which will appear in the outline. The selected screen shots have a yellow border around them. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007).. 195 • To remove a screen shot from the outline, users move to that screen shot in the shared list and hit the middle navigation key. The screen shot is removed from the outline For the Building the Outline Task, we implemented three different control policies in order to investigate how different control policies along with a small personal display can effect collaboration. In the first control policy (A), all users share one list of shared photos and one cursor. Since each user has a mobile phone, each user in the group can move the cursor left or right and add screen shots to the outline. The second control policy (B) builds on the first control policy (A) by adding the ability for users to look and control the shared list of screen shots on their personal mobile phone display. Users can control either the single cursor on the shared display or a cursor on their personal display. Screen shots can be added to the outline on both the shared display and the mobile phone. In the third control policy (C), there are three personal areas with a duplicated list of the shared screen shots. Within each personal area each user can control a cursor and add screen shots to the outline. Below is a description of the functionality of the three different control policies used in the Building the Outline Task. A) 1 Shared Cursor • There is one list of shared photos displayed in a filmstrip interface (see Figure D.l.13). . There is one cursor which each all users share control of. This cursor is purple, a color not used for any individual in the selection task (see Figure D. 1.13). 196 Figure D.l. 13. The bottom panel of the large shared display in the single cursor condition from the Figure D.ll the single purple cursor on the left hand side on the thumbnail. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). B) 1 Shared Cursor Plus Personal Display • This option is similar to (A), except users can also navigate and select photos on their personal displays. • Users can switch between controlling the large shared display and controlling the list on their personal display. • When users are controlling the large display, there is nothing displayed on the mobile phone display except for the option to switch to control to the phone. . To switch to control the mobile phone, users hit the "Cell phone" button on their mobile phones (see Figure D . l .14). 197 Figure D.l. 14. 
Nokia N80 mobile phone in the Building Outline Task with the control policy of 1 cursor plus personal display showing the display when controlling the large shared display. To control the phone, the 'Cell Phone' button on the bottom left is selected. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). • To switch the control the large shared display, users hit the " T V " button on their mobile phones (see Figure D . l . 15). 198 Figure D.l. 15. Nokia N80 mobile phone in the Building Outline Task with the control policy of 1 cursor plus personal display showing the display when controlling the personal display. To control the large shared display, the 'TV button on the bottom left is selected Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). • When users are navigating through screen shots on their personal display, they are not changing the cursor on the large shared display. If a screen shot is selected for the outline the screen shot selected will have a yellow outline on both the large display and the personal display C) 3 Individual Cursors . Each user has a copy of the shared screen shot list displayed on a personal area of the screen (see Figure D . l . l 6). 199 Figure D.l. 16.The large shared display in the Building Outline Task in the 3 individual cursor condition showing three personal areas with three corresponding colored cursors. Red on the left, blue in the middle and green on the right. Due to copyright reasons, the images have been removed. The original source is 'Meet the Robinsons' (Borden, 2007). • Each user has their color coded cursor and can only control their personal section. Users can add a screen shot to the outline from selecting the screen shot from their own list. If a screen shot is selected by one user, it shows selected (yellow border) on all lists. D.2 Main Study Questionnaire Video Synopsis Questionnaire Parti Section A: Personal Information 1. In what age group are you? • 19 and under • 20 - 29 • 30-39 • 40 - 49 • 50-59 • 60 + 2. Gender: • Male • Female 3. In terms of your current occupation, how would you characterize yourself? • Academic • Professional • Manager • Software Developer D Graduate Student, please specify area of study: • Undergraduate Student, please specify major: • Other, please specify: 201 Section B: With respect to the last task, please indicate the extent to which you agree or disagree with the following statements: SD = Strongly Disagree D = Disagree N = Neutral A = Agree SA = Strongly Agree Grabbing screen shots from the video was o o o o o easy. SD D N A SA 1 knew what screen shots 1 was supposed to o o o O o grab. SD D N A SA 1 knew when 1 grabbed a screen shot from the o o o o o video SD D N A SA It would be helpful if the screen shot 1 grabbed o o o o O also appeared on the shared display. SD D N A SA It was helpful for the screen shot to appear on o , o o o o your personal display. SD D ! N A : SA It was difficult to switch attention between the o o o o o movie and the personal display. SD D N A SA Section C: 1. What particular aspect(s) of using your personal display to grab the photos did you not like? Why? 2. What particular aspect(s) of using your personal display to grab the photos did you like? Why? 
202 Part 2 Section A: With respect to the last task, please indicate the extent to which you agree or disagree with the following statements: SD = Strongly Disagree D - Disagree N = Neutral A = Agree SA = Strongly Agree Selecting the screen shots 1 wanted to share o o o o o was easy. SD D '. N A SA It was clear which of my screen shots 1 selected o o o o o for sharing. SD D N A SA It was distracting to me when others were o O o o o selecting screen shots. SD D N A : SA 1 felt 1 was distracting others when 1 was o o o o ' o selecting screen shots. SD D • N A SA 1 would like to be able to see what others are O .O o o o doing. SD D N A SA It was easy to control the shared display using o O o o o my personal display. SD D N A SA It was clear how 1 could interact with the large o O O ; o o display using my personal display. SD D : N A | SA 1 understood which screen shots 1 was able to O : o l o o ; o control. SD D \ N A : SA Section B: 1. What particular aspect(s) of using the display to select the photos did you like? Why? 2. What particular aspect(s) of using the display to select the photos did you not like? Why? 203 3. Would you have preferred to make your selections on the large display? Why or why not? Part 3 Section A: With respect to the last task, please indicate the extent to which you agree or disagree with the following statements: SD = Strongly Disagree D = Disagree N = Neutral A = Agree SA = Strongly Agree It was easy to collaborate with others to build O O O O O the outline of the video. SD I D N A SA The mechanism for selecting screen shots to O o O O O build the outline of the video was clear. SD i D N A SA It was clear what mechanism to use to build the O •o O O o outline of the video from the selected photos. SD 1 o N • A SA It was easy to control the shared display using O o O o o my personal display. SD 1 D N ; A SA 1 liked having my own personal display to help O l o o >Q O complete this task. SD i D N A SA 1 understood which screen shots 1 was able to O l o O O o select. SD 1 I> N A SA It was clear what area of the shared display 1 o o o o O shared with others SD D N A SA It was clear how to control the shared display o o O O O using my personal display. SD D N A SA 1 would like to use a cell phone to control my O l o O O O TV at home. SD D N A SA 1 would like to use my TV at home to share o i o o O O photos with other my friends and family. SD 1 D N A SA 1 would be more likely to do this if the size of O o o o o my TV was large. SD D N A SA Section B: 1. What particular aspect(s) of using the personal display did you like? Why? 2. What particular aspect(s) of using the personal display did you not like? Why? 3. What particular aspect(s) of using the shared display did you like? Why? 4. What particular aspect(s) of using the shared display did you not like? Why? Alternative For Part 2 This study is a 2x3between-subjects design. Phase 1 contains 1 factor, Phase 2.consists of 2' factors and Phase 3 consists of„3 factors: Each factor has different questions for the questionnaire. The following page is an alternative for page 3 which will be given to users who complete the second factor of Phase 2. ' , / , The questions which have been-changed from the above page 3 are highlighted with.red boxes. 
206 Part 2 Section A: With respect to the last task, please indicate the extent to which you agree or disagree with the following statements: SD = Strongly Disagree D = Disagree N = Neutral A = Agree SA = Strongly Agree | Selecting the screen shots 1 wanted to share 4 o o o o I was easy. SD D N ; A S A It was clear which of my screen shots 1 selected o o : o O o for sharing. SD D N A S A [ It was distracting to me when others were o o o o o selecting screen shots. SD D N A S A 11 felt 1 was distracting others when 1 was o o o o o i selecting screen shots. SD D N A S A ' - - -i It was helpful to see what others are doing. O O o O o | SD D N A S A I It was easy to control the shared display using o o o o o 1 my personal display. SD D N A S A ; It was clear how 1 could interact with the large O o o o O i display using my personal display. SD D N A S A \1 understood which screen shots 1 was able to o o o O O control on the large display. SD D , N A S A Section B: 1. What particular aspect(s) of using the display to select the photos did you like? Why? 2. What particular aspect(s) of using the display to select the photos did you not like? Why? 3. Would you have preferred to make your selections on your personal display? Why or why not? v ) Alternative #1 for Part 3 This study is a 2x3 between-subjects design. Phase 1 contains 1 factor, Phase 2 consists of 2 factors and Phase 3 consists of 3 factors. Each factor has different questions for the questionnaire. The following page is an alternative for Part 3,. • pages 5-6, which will be given to users who complete the second factor of Phase 3. The questions which have been changed from the above Part 3, pages'5-6, are highlighted with red •boxes'/ 208 Alternative #2 for Part 3 This study is a 2x3 between-subjects design. Phase. 1 contains 1 factor, Phase 2 consists of 2 factors and Phase .3 consists of 3 factors. Each factor has different questions for the questionnaire. 4 ' •. The following page is a the second alternative for -Part 3,.pages 5-6, which will be given to users who complete the third factor of Phase 3. The questions which have been changed from the above Part 3, pages 5-6, are highlighted with red . > , boxes.. 209 Part 3 Section A: With respect to the last task, please indicate the extent to which you agree or disagree with the following statements: SD = Strongly Disagree D = Disagree N = Neutral A = Agree SA = Strongly Agree It was easy to collaborate with others to build O o o o o the outline of the video. SD D N A SA The mechanism for selecting screen shots to o o o o •o build the outline of the video was clear. SD D N A : SA It was clear what mechanism to use to build the O o O O O outline of the video from the selected photos. SD D N A SA It was easy to control the shared display using o o o O o my personal display. SD D N A SA 1 liked having my own personal display to help o o o o o complete this task. SD D N A SA ! 1 understood which screen shots 1 was able to o o o o O i select. SD D N A SA It was clear what area of the shared display 1 o o o o o could control. SD D N A SA | It was clear how to control the shared display o o o o O using my personal display. SD D N A SA 1 would like to use a cell phone to control my o o O o O TV at home. SD D N A SA 1 would like to use my TV at home to share o o o o o photos with other my friends and family. SD D N A SA 1 would be more likely to do this if the size of o o o o o my TV was large. SD D N A SA 210 Section B: 1. 
What particular aspect(s) of using the personal display did you like? Why? 2. What particular aspect(s) of using the personal display did you not like? Why? 3. What particular aspect(s) of using the shared display did you like? Why? 4. What particular aspect(s) of using the shared display did you not like? Why? Appendix E: Study 3 Material E.1 Questionnaire for Recall task Subject #: Condition: Fill in the point on the scale that best indicates your experience with the task. [Each item was rated on an unmarked line between the two anchors shown.] Mental Demand: Low to High. Physical Demand: Low to High. Temporal Demand: Low to High. Performance: Good to Poor. Effort: Low to High. Frustration: Low to High. E.2 Questionnaire for Recognition task Subject #: Condition: Fill in the point on the scale that best indicates your experience with the task. [Each item was rated on an unmarked line between the two anchors shown.] Mental Demand: Low to High. Physical Demand: Low to High. Temporal Demand: Low to High. Performance: Good to Poor. Effort: Low to High. Frustration: Low to High.
