"Science, Faculty of"@en . "Computer Science, Department of"@en . "DSpace"@en . "UBCV"@en . "Himmeto\u00C4\u009Flu, Hikmet G\u00C3\u00B6khan"@en . "2011-11-04T17:24:00Z"@en . "2011"@en . "Master of Science - MSc"@en . "University of British Columbia"@en . "Music listening assumes a number of different forms and purposes for many people who live in a highly digitalized world. Where, how and what songs are listened to can be a highly personalized activity, as unique musical preferences and individual tastes play an important role in choice of music. Today\u00E2\u0080\u0099s portable media devices are high-capacity and easy to carry around, supporting quick access to a nearly unlimited library of media, in many use contexts in nearly any time or place. But these advantages come at a cost. Operating the music player while doing other things can involve a physical and mental demand that ranges from inconvenient to dangerous.\n The Haptic-Affect Loop (HALO) paradigm was introduced by Hazelton et al. (2010) to help users control portable media players by continuously inferring the user\u00E2\u0080\u0099s affective state and the player behaviour they desired through physiological signals. They proposed using the haptic modality to deliver feedback and gathered initial requirements from a single user.\n In this thesis, we present a qualitative participatory design study which broadens Hazelton\u00E2\u0080\u0099s single user participatory design study to include six participants. A more efficient means of obtaining information about a user is developed to support scaling to multiple participants. We then examined these users\u00E2\u0080\u0099 expectations for user-device communication and the functionality of the HALO paradigm, with the objective of identifying clusters of preferred uses for HALO. In this regard, we identified the behaviours of a proposed system that these users would find most useful, and that they would like to interact with. 
Participatory Design of a Biometrically-Driven Portable Audio Player

by

Hikmet Gökhan Himmetoğlu

B.Sc., Koç University, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Master of Science

in

THE FACULTY OF GRADUATE STUDIES
(Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

August 2011

© Hikmet Gökhan Himmetoğlu, 2011

Abstract

Music listening assumes a number of different forms and purposes for many people who live in a highly digitalized world. Where, how and what songs are listened to can be a highly personalized activity, as unique musical preferences and individual tastes play an important role in the choice of music. Today's portable media devices are high-capacity and easy to carry around, supporting quick access to a nearly unlimited library of media, in many use contexts at nearly any time or place. But these advantages come at a cost: operating the music player while doing other things can involve a physical and mental demand that ranges from inconvenient to dangerous.

The Haptic-Affect Loop (HALO) paradigm was introduced by Hazelton et al. (2010) to help users control portable media players by continuously inferring the user's affective state, and the player behaviour they desired, through physiological signals.
They proposed using the haptic modality to deliver feedback and gathered initial requirements from a single user.

In this thesis, we present a qualitative participatory design study which broadens Hazelton's single-user participatory design study to include six participants. A more efficient means of obtaining information about a user is developed to support scaling to multiple participants. We then examined these users' expectations for user-device communication and the functionality of the HALO paradigm, with the objective of identifying clusters of preferred uses for HALO. In this regard, we identified the behaviours of a proposed system that these users would find most useful, and that they would like to interact with. We collectively explored a set of exemplar implicit and explicit interaction scenarios for HALO, finding greater confidence in mechanisms that did not relinquish user control, but openness to trying more implicit control approaches where the priority of music-listening control was lower than that of secondary tasks. The willingness to try more implicit control approaches depends on the reliability of the technology. Finally, we generated a set of interaction design guidelines for the next stage of HALO prototyping.

Preface

All research in this dissertation was conducted under the supervision of Dr. Karon MacLean and Dr. Joanna McGrenere. Study sessions and analysis of the techniques described in Chapter 4 were also assisted by Dr. Charlotte Tang. I am the primary contributor of all work described in the thesis. This work was partially supported by NSERC (Natural Sciences and Engineering Research Council of Canada). Ethics approval for experimentation with human subjects was provided by the Behavioural Research Ethics Board, UBC BREB Number: H10-00783. The author of this thesis was involved in the following related publications:

- Pan, M. K. X. J., Chang, J.-S., Himmetoğlu, G.
H., Moon, A. J., Hazelton, T. W., MacLean, K. E., et al. (2011). Now where was I? Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (CHI '11), p. 363.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
1. Introduction
1.1. Research Problem
1.2. Haptic-Affect Loop Framework
1.3. Thesis Research Goals and Approach
1.4. Contributions
1.5. Outline of the Thesis
2. Related Work
2.1. Mobile and Low Attention Human-Computer Interaction
2.2. Physiological Computing
2.3. Music Recommendation Technologies
2.4. Participatory Design
3. Methodology
3.1. Recruitment and Participants
3.2. Online Pre-Session Homework
3.3. Session Procedure
3.4. Checking Experience with Informational Webpage
3.5. Demonstrations
3.6. Semi-Structured Interview
3.7. Analysis
4. Results
4.1. Credibility Analysis of Transcripts
4.2. Experience with Technology and Current Practice
4.3. Self-reported and Observed Responses
4.4. Desired HALO-enabled Portable Player Behaviour
4.5. Goals of HALO-enabled Portable Audio Player Practice
5. Discussion
5.1. Similarities in Participant Profiles
5.2. Design Implications of HALO
5.3. Self-Critique of Participatory Design Session
5.4. HALO-enabled Portable Media Player Design Guidelines
6. Conclusion
Bibliography
Appendices
Appendix A: Screening Material
Appendix B: Participatory Design Session Material
Appendix C: Participant Sketches

List of Tables

Table 1. Audio Player Usage of Selected Participants
Table 2. Category and Subcategories of Observations with Example Codes

List of Figures

Figure 1. Visualized Haptic-Affect Loop at Portable Media Players
Figure 2. Communication and Goal of Control in HALO-enabled Portable Media Player
Figure 3. C2 Tactor
Figure 4. Haptic Sleeve Display
Figure 5. Emotiv® EPOC Headset
Figure 6. Galvanic Skin Sensors

List of Abbreviations

Abbreviation  Full Name
ECG  Electrocardiography
EEG  Electroencephalography
EMG  Electromyography
GSR  Galvanic Skin Response
HALO  Haptic-Affect Loop
HCI  Human-Computer Interaction
HRV  Heart Rate Variability
P1–P8  Participant 1–8

Acknowledgements

The author of this thesis would like to thank both his supervisors, Karon MacLean and Joanna McGrenere, for their never-ending guidance and support. Working under their supervision has always been inspiring. I always admired their professionalism and energy. In addition to my supervisors, postdoctoral fellow Charlotte Tang also helped me tremendously during my studies, and her guidance was invaluable to me. I would also like to thank Vincent Lévesque for his time and advice as a second reader. My family's unconditional love and support was also a very important element throughout my education and life that indirectly contributed to this thesis. Without their support and help, I would never have imagined coming this far. I would also like to acknowledge my colleagues Tom Hazelton, Matthew Pan, Gordon Chang, AJung Moon, Nuray Dindar, Shailen Agrawal, Yasaman Sefidgar and Steve Yohanan, who offered invaluable advice and friendship throughout the tenure of my Master's studies.
Last but not least, my friends in the community of Marine Drive Residence also deserve special thanks for accompanying me through every moment.

Chapter 1
Introduction

Listening to the acoustic harmony of sounds is an amusing and appealing activity. Until recently, the experience of listening to music involved being present at live performances; then Thomas Edison's invention of the first mechanical phonograph cylinder in 1877 made sounds and music performances reproducible. With this invention, analog records became products that could be bought and sold, enabling the listener to hear the same music over and over again. However, the listener's control over the songs being played required more effort than simply listening: with a stationary gramophone, control over the reproduced music demanded careful handling and technical knowledge, and the gramophone was not very portable. Over the years, gramophones were replaced with vinyl disks, cassette tapes and then digital media (compact disks and MP3s), played on successively more portable media players. Online media streaming, the latest trend in music, is also on its way, and fixed-capacity media players are likely to be replaced in the future too. Progress in technology continues to change our relationship with music and the ways in which we consume and listen to it.

Audio has not only become reproducible, but the devices used to play audio have become easier to carry and to move around with. Advancements in storage technology and the miniaturization of portable electrical circuits have enabled people to carry their music collection from the living room to a party in the backyard, or on a cross-country road trip. The capacity of current players means that users can listen to music for months and never hear the same song twice. The experience of enjoying music is no longer a stationary activity but something that can happen anytime, anywhere.
Today's digital era is also full of media, making it possible to purchase music and video online through digital stores, as well as real-time, on-demand radio and podcasts. There is an overwhelming abundance of choice, whether the goal is to listen to a particular artist or to the most popular songs in a given genre, yet today's user interfaces do not support effective digital media browsing. The use of these interfaces is a highly cognitive task that involves the manual selection and organization of media.

1.1. Research Problem

Computers, including portable audio players, have become so ubiquitous and "minified" today that people can forget that they are carrying them. The owner of a portable audio player can listen to music anywhere, anytime and at any volume, choosing both content and device. However, people have musical preferences and tastes and don't want to listen to just any song at a given time or place. Gaining access to a vast quantity of songs also gives people the chance to listen to the songs they like, instead of sticking with a limited number of songs. The user of a media player is able to select the songs that suit her mental state and physical activity at any given time. People's preference for one song over another varies depending on time, location or even their current activities. For example, someone might want to wake up to a new day with an energetic song, but that same person might want to get ready to fall asleep with a milder and softer symphony, and she might want to listen to songs that focus her concentration while driving. The number of songs that can be stored in portable players is beyond human memory, and the sheer quantity of songs has made searching for songs a daunting and unpleasant task.
In order for media player user interfaces to respond to a user's needs, the user needs to carry out an interaction explicitly communicating the desired change. In the case of listening to music, people depend on the hardcoded hardware and relatively inflexible interface that comes with the portable media player to express themselves. These designs can take a variety of forms, shapes and interactions (number of physical buttons and touch screens, different sizes with graphical screens, variety of quick buttons on the device, tapping, gestures, etc.). One player may support a particular usage more effectively than it supports another. Currently, these user interfaces do not always reflect human capabilities or support the dynamism present in mobile contexts. People's musical desires change quickly based on changes in context, changes in mood, or changes in the volume of information that the listener needs to pay attention to in the surrounding environment. Imagine a person running in the streets of a crowded city and listening to music using a portable audio player. The songs that this person wants to listen to might change depending on his or her musical taste, current mood, phase of exercise or the remaining exercise he or she plans to complete. While running, this person has to watch out for deformed sidewalks and pay attention to traffic signs at the same time. Every additional explicit communication the user makes with his or her media device raises the cognitive and physical demand on him or her. The amount of information available to the user can now arrive at a rate higher than it is humanly possible to process. For this reason, when interacting with the device, people often adopt interrupt-driven or multitasking behaviours. Users are forced to continually re-allocate their mental and physical resources in very small units.
In the scenario above, this means the person has to continue running and interacting with the player while at the same time avoiding hazardous situations (paying attention to all signs and obstacles). Running is only one listening context. Every day, people work and listen to music at the same time. This requires the user to switch between multiple tasks. Executing transitions between separate tasks often requires a secondary attentional switch and time to interact with the device, which increases the cost of multitasking. More importantly, the fragmented nature of the experience results in a diminished music listening experience and lower utility for each of the simultaneously executed tasks. Furthermore, task fragmentation could also cause widespread lower productivity and dissatisfaction in the long term.

Oddly enough, the basic language of music control has not changed much compared to the change in the medium, quantity and role of music over the years. Simple "play" and "pause" buttons, which probably first appeared after magnetic records, are still the two most frequently used buttons on a music player. However, these controls offer only very limited control over songs: not as much as is necessary to effectively control a modern media device within the changing contexts of the listener. One of the improvements in the control of music over the years is more efficient playback: for example, the speed with which a user can change, replay, or search for or through a song. Although this can substantially decrease the time the user spends carrying out a control interaction, it alone is not enough to decrease the cognitive demand on users choosing what to listen to. As well, the number and complexity of operations that are possible to perform is far higher.
One way people alleviate the cognitive burden of portable music players is to organize and personalize their music players and libraries. By creating playlists, picking artists or genres and ordering songs ahead of time, people customize their experiences to accommodate their future needs and personal preferences. A significant issue with this approach is that the customization may not meet the user's needs, because the user may not be very successful at predicting future circumstances or her personal affective state ahead of time. Moreover, all this customization comes with a time and attention cost, which some of us are not willing to pay. The result is that people either listen to music that they don't like as much, or they spend more time with the user interface trying to find the right music to listen to than they spend enjoying the music. In this regard, the Haptic-Affect Loop was introduced by Hazelton et al. as a framework which can be used to tackle some of the problems that exist in current music listening practices [14]. Their work focussed on validating HALO's technological feasibility, as well as developing a feedback-supported language of interaction. This focus and its initial, exploratory nature justified the use of a single highly involved "power" participant. In this thesis, we continue this research and expand the design guidelines on the utility of such Haptic-Affect Loop enabled portable media players in collaboration with multiple potential users.
Our objectives were to build on Hazelton et al.'s work in these ways: (1) developing and deploying a scalable means of obtaining preference and desired customization information from multiple participants, then using it in a medium-sized study; (2) understanding the range of what users would find attractive and useful in a HALO-enabled player interaction and the desired behaviours of such a player for music control, based on a broader sample than Hazelton was able to access; and (3) seeing if it is possible to cluster this range of desired behaviours and functionalities into a smaller set of archetypical patterns. That is, can we describe representative "types" of HALO users, for which a single customization or set of functionality would work fairly well, as opposed to each individual needing or desiring something completely different? Due to resource and time constraints, and because the process as applied to individual participants is still in a formative stage, we chose a sample size that is larger than Hazelton's but still fairly small (six participants). This size, when collected from a specific demographic of interest, is wide enough to obtain feedback on our methodology and some preliminary insights into desired behaviours and potential clustering. A larger sample, using a process iterated to be even more streamlined and validated in the quality of its results, would be required to address Objectives 2 and 3 authoritatively. This iteration and comprehensive validation is beyond our scope.

1.2. Haptic-Affect Loop Framework

The framework we use for tackling the problem mentioned above is a two-way exchange of information between humans and the devices that we use every day. We imagine a system that can align with the user's needs and expectations without requiring the user to pay any voluntary or conscious attention, or physical effort, to it.
Specifically, the solution we investigate in this thesis is an intelligent system that continuously infers its user's affective state and the desired system behaviour through physiological signals. This system would not impose any attentional demand, and would communicate back to the user with eyes-free haptic feedback. In this way, the system could understand and infer user intent, readiness and other relevant parameters of an ongoing interaction, and establish a minimally intrusive, haptic information communication channel. This framework was introduced as a Haptic-Affect Loop (HALO) by Hazelton et al. and is re-visualized in Figure 1 [14]. In this framework, the loop is initiated with a single user's biometric signals. Through the use of these signals, the user's affective state is captured to drive autonomous behavioural decisions. Minimally intrusive feedback of the system status is delivered back to the user via haptics.

Figure 1 Visualized Haptic-Affect Loop at Portable Media Players

This framework proposes that a new interaction paradigm with portable media devices can be established that will complement current explicit interaction. In the case of using a portable media player, the inferred affective state of the user can be used to steer system behaviour implicitly in the direction desired by the user in multiple ways [14]. In this thesis, we continue the study of the HALO framework in a portable media player setting. We propose that the possible goal for control of the songs played can be categorized into two levels of granularity: song level and playlist level. For example, song-level control of the music player could be simply skipping a disliked song, whereas playlist-level control could be automatic generation of a playlist (a list of songs) in relation to the recognized affective state of the user.
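To make the loop's stages concrete, the following is a minimal sketch of one pass around a HALO-style loop: read biometric signals, infer an affective state, choose a player behaviour, and acknowledge the decision with eyes-free haptic feedback. All thresholds, labels and function names here are our own illustrative assumptions, not part of the HALO design; a real system would replace `classify_affect` with trained machine-learning models over signals such as GSR, heart rate variability or EEG.

```python
# Illustrative sketch of one pass around a Haptic-Affect Loop (HALO).
# All thresholds, labels and function names are hypothetical.

def classify_affect(gsr: float, heart_rate: float) -> str:
    """Toy stand-in for an affect classifier over biometric signals."""
    if gsr > 0.8 and heart_rate > 100:
        return "dislike"
    if gsr < 0.3:
        return "enjoyment"
    return "neutral"


def choose_action(affect: str) -> str:
    """Map an inferred affect to an implicit player behaviour
    (song-level here; playlist-level decisions would be analogous)."""
    return {
        "dislike": "skip_song",
        "enjoyment": "continue",
        "neutral": "continue",
    }[affect]


def haptic_ack(action: str) -> str:
    """Close the loop: an eyes-free haptic pattern acknowledging
    the system's decision (represented as a string in this sketch)."""
    return f"vibration-pattern:{action}"


# One pass with sample readings suggesting a strong negative reaction.
affect = classify_affect(gsr=0.9, heart_rate=110)
action = choose_action(affect)
print(affect, action, haptic_ack(action))
```

The key property the sketch illustrates is that the user issues no manual command: the decision is driven entirely by the inferred affective state, and the haptic acknowledgement gives the system the visibility needed for the user to trust it.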
In addition to these two granularities of control with HALO, complementary styles of communication with new interaction techniques are also possible. We project that the design of the communication between the user and the media player could support styles that range between implicit and explicit communication. To give an example, a user's involuntary negative reaction (e.g. verbalized as "ughh" at a disliked song) might constitute an implicit type of message, which HALO could interpret as requesting a change in music. Dislike, joy and other similar human affective states can be detected from changes in multiple biological signals, such as a user's heart rate, blood volume or skin conductance levels [53]. Conversely, the explicit communication style ranges from a manual button press (as is conventional today) to varying degrees of voluntary and conscious, yet biometrically communicated, commands. For example, a user's conscious request to simply change a song (to any other song) could be sensed in the user's voluntary control of his or her biological signals. The Emotiv EPOC headset demonstrates how the position of a virtual box can be controlled through biological signals such as EMG and EEG [10]. Similar biological signals could also be used for controlling music by giving commands such as pause, resume and skip. In a more futuristic example (due to the empirically more difficult recognition required), a user could theoretically request a specific song, like "Beethoven's 8th Symphony". Such a command could still be delivered without the user manually touching any button, if it could be recognized correctly (which for the near future is unlikely).
Although this last example would still require the user's conscious, expressed command, and is thus explicit, it would alleviate the physical dependency of other explicit interactions. This interaction is not technically feasible in the near future, but some participants volunteered interest in it during our study. We will refer to this as voluntary emotional control in later chapters; although it is a form of explicit control, we are not including it in our scope when we refer to explicit control throughout the rest of this thesis. The two aspects of HALO introduced in earlier paragraphs, (1) the goal for control of songs and (2) the desired communication style, are represented as two different axes in Figure 2. The horizontal axis represents the range of goals for the control of music, and the vertical axis represents the range of communication that can be established between the user and the HALO-enabled portable media player. An example of highly explicit control (marked with a cross in this figure) is the manual selection of a song or list of songs directly, using the physical buttons on the device. At the opposite end of this axis, implicit communication (marked with a circle) is the recognition of any involuntary reactions. A third point on the graph (indicated with a triangle) represents a user expressing him or herself to HALO through biological signals by voluntarily and consciously modifying or amplifying his or her emotions. Participants imagined the highly precise but technically unfeasible voluntary emotional control explained earlier in this introduction; this type of control is represented as a hypothetical point (indicated with a white dashed diamond).
Although this control lies on the explicit half of the communication design axis stipulated in Figure 2, it is different from physically pressing a button (a completely explicit control) and is therefore positioned between manual control and conscious alteration of biological signals for control. In a real-world example, a user can express dislike of a particular song through an implicit reaction, and the choice of not listening to any songs from this particular artist in the next couple of hours by amplifying (or continuing) this reaction. These examples are just a few of the possible communication styles and goals for song control which we are particularly interested in exploring further in our study. The goals axis of Figure 2 indicates the granularity with which users might express their desire for songs or lists. This range of granularity can also be seen as users' desired modification of the larger experience of listening to music, where a change of either a single song or multiple songs could be the goal. Users' biological signals (independent of which communication style is chosen) could trigger a particular behaviour for a single song (indicated with a square) or a number of songs (indicated with a hexagon). Imagine a user showing a negative reaction towards a song by a particular artist. Depending on the intensity of this reaction, this can be interpreted either as a command to change the music or to take a stronger measure, such as not playing any music from that artist during the session (control of a group of songs). It is important to note that this particular scenario requires system knowledge of user preferences on granularity, in order to ensure that the correct granularity of music behaviour is applied. This is another area we were particularly interested in exploring in our study, in light of current media player usage.
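The two axes discussed above can be summarized in code. The sketch below encodes the communication-style axis and the control-goal axis as enumerations, and places the marked example points from Figure 2 in that space; the enum names, ordering and example labels are our own shorthand for illustration, not terminology fixed by the thesis.

```python
from dataclasses import dataclass
from enum import Enum


class Communication(Enum):
    """Communication axis of Figure 2, ordered roughly from
    most implicit to most explicit (values are illustrative)."""
    IMPLICIT = 1              # involuntary reactions (circle)
    VOLUNTARY_BIOMETRIC = 2   # conscious control of biosignals (triangle)
    VOLUNTARY_EMOTIONAL = 3   # hypothetical precise emotional command (diamond)
    MANUAL = 4                # physical button press (cross)


class ControlGoal(Enum):
    """Goal axis of Figure 2: granularity of the control goal."""
    SONG = "single song"         # e.g. skip a disliked track (square)
    PLAYLIST = "group of songs"  # e.g. avoid an artist this session (hexagon)


@dataclass
class InteractionPoint:
    label: str
    communication: Communication
    goal: ControlGoal


# Example points, paraphrasing the scenarios discussed in the text.
examples = [
    InteractionPoint("press the skip button",
                     Communication.MANUAL, ControlGoal.SONG),
    InteractionPoint("involuntary 'ughh' changes the song",
                     Communication.IMPLICIT, ControlGoal.SONG),
    InteractionPoint("amplified dislike blocks the artist",
                     Communication.IMPLICIT, ControlGoal.PLAYLIST),
]

for p in examples:
    print(f"{p.label}: {p.communication.name} / {p.goal.name}")
```

The third example shows why the axes are independent: the same implicit reaction can map to either granularity, which is exactly why the system needs knowledge of the user's granularity preferences.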
The HALO loop can be closed with some type of feedback as a response to the user's estimated affective state. This feedback can be used to establish the required visibility of the system and to enable a comfortable trust relationship. We posit that haptic feedback is a good candidate for this modality, being minimally intrusive and eyes-free.

1.3. Thesis Research Goals and Approach

Previous research by Hazelton et al. used multiple biological signals and produced preliminary results of applying out-of-the-box machine learning techniques to the activity of music listening, testing the feasibility of HALO-enabled portable media players [14]. That work also presented three experimental methodologies for gathering requirements for affect detection and classification, as well as for the utility and behaviour of the proposed Haptic-Affect Loop in the setting of music listening. One particular study conducted by Hazelton et al. was a series of in-depth participatory design sessions with a single user. In this thesis, our motivation was to expand that single-user, in-depth participatory design study to further examine user preferences. Namely, our research goals were focused on expanding and working towards generalizing the results of the previous single-user study, and on developing a more efficient means of obtaining information about user preferences in this context. We focused on understanding the expectations that different users had of system behaviour and the preferred interaction styles (explained in Chapter 1.2: Haptic-Affect Loop Framework), independent of their profile and the current brand of player used. Furthermore, we explored connections between different user profiles and users' expectations of the utility and behavioural patterns of the proposed Haptic-Affect Loop.

Figure 2 Communication and Goal of Control in HALO-enabled Portable Media Player
In this thesis, we adopted tactile feedback design guidelines from Hazelton and did not focus our efforts on contributing to this aspect of a HALO-enabled portable player design. We designed and then conducted a series of six participatory design sessions during November and December 2010, one for each of six participants. These participants were recruited to cover a range of player brands and to have a reasonable level of comfort talking about their portable media player usage. Young students (ages 19 to 22) with a balanced male-to-female ratio were recruited. We used these criteria to select participants who would support the discovery of further design considerations for HALO portable music listening use-cases. We report the findings of our study and enumerate a list of design implications for HALO-enabled portable media players that are derived from common design patterns and interactions explored throughout the study.

1.4. Contributions

The contributions of this thesis are:
1. Analysis: a comprehensive synthesis of experienced participant reactions to prospective HALO-enabled portable player communication styles and behaviours.
2. Data: a set of use-case scenarios encompassing a variety of possible player communication styles and anticipated outcomes of a HALO-enabled portable media player, enhanced and confirmed by participants.
3. Data: a set of contexts and situations for HALO-enabled portable media player communication styles and goals of interest to these participants.

1.5. Outline of the Thesis

This thesis reviews related work (Chapter 2) to contextualize and validate the goals of the current efforts of this thesis. Related research in human-computer interaction (HCI), engineering and the social sciences is presented. In Chapter 3, we describe the design and data collection of our participatory design study, including pre-study preparation, study participants, and the structure of the study.
The qualitative analysis methodology used to analyze the interviews is presented. The next section, results (Chapter 4), describes the findings of our qualitative analysis with respect to the current practices of participants, self-reported and observed reactions to demonstrations, and participants' desired practices with the HALO-enabled portable media player. In Chapter 5, we discuss these results in light of our research goals of generalizing previous related results and understanding different groups of desired system behaviour. We derive recommendations for implementing a HALO-style interaction loop in a portable audio system. Conclusions (Chapter 6) are drawn at the end of this thesis based on the findings of the discussion, and suggest relevant areas for future work.

Chapter 2 Related Work

There are four general areas of research related to Haptic-Affect Loop enabled portable media players: mobile and minimally-intrusive human-computer interaction techniques, physiological computing, music recommendation systems, and participatory design. This chapter presents a review of work in these areas that informed our research efforts.

2.1. Mobile and Low Attention Human-Computer Interaction

Interactions with mobile devices are much more ephemeral (short-lived) than interactions with stationary desktop user interfaces. People often press several buttons in quick succession until a certain change occurs. Portable media players are mobile devices that users interact with intermittently to listen to music while simultaneously doing other activities (such as walking, exercising or studying). This means that users carefully allocate their physical and mental resources in order to concentrate and perform each task at a satisfactory level; meanwhile, the external world regularly intrudes, reducing the user's control over, and the predictability of, the listening experience.
Various researchers have shown that it is not easy for people to do this mental allocation [23]. The fragmented nature of attentional resources in mobile human-computer interaction can cause continuous resource competition among multitasked activities. Several researchers have identified that the effects of such frequent competition for limited resources significantly constrain and diminish the quality of mobile interactions [37], [52]. Researchers have proposed various paradigm shifts to better support human cognitive and physical capabilities and to address the widespread problems that originate from mobile human-computer interaction. Lumsden and Brewster proposed a shift to interactions based on gestures, enabling eyes-free use and control of mobile or wearable devices [26]. Similarly, Weiser and Brown defined a "calm technology" approach [57] under which devices "move easily from the periphery of our attention, to the center, and back." They proposed that this approach could alleviate the negative effects of competing attentional demands. Over the years calm technology has received considerable attention and inspired a number of research projects. The design of ambient displays, which enable peripheral notification and support implicit as well as explicit interaction, is a successful example of calm technology and gesture-based communication [55]. The peripheral vision annotation project [18], the mobile interactive environment project [8] and ambient haptics [27] are further examples of successful applications of calm technology and gesture-based communication. The combination of human-computer interactions that follow the principles of calm technology with natural interaction paradigms leads to ubiquitous (also known as pervasive) computing. Together they promise an effective approach for addressing the emergent cognitive challenges of the increasingly digitalized world.
Haptic technologies and feedback research arose to alleviate the high cognitive dependency on the auditory and visual communication channels. Examples of non-intrusive, low-attention haptic user interfaces provide grounded evidence for the advantages of utilizing the sense of touch. Starting with one of the obvious advantages of haptic interfaces, eyes-free communication, researchers have investigated a variety of applications of handheld haptic mobile devices [25], [40], [42], [34], [1] and [56]. Wearable tactile devices [2] and [9] and haptic communication languages [50] and [11] for effective interaction between computers and humans are still active research topics. The studies above have shown that people can perceive and distinguish different tactile renderings in mobile contexts. Several studies have also demonstrated that users can recognize different haptic cues accurately even in the presence of mental distractions [53] and [3] and physical activity [20]. The devices used in these studies take different form factors and show that such rendering can be non-intrusive.

2.2. Physiological Computing

Starting from the most basic signal of health, a heartbeat, the human body carries out many biological processes that are influenced to varying degrees by mental and emotional state and by activity. Modern sensor technology enables the quantification and observation of some of these physiological effects as they occur on or under the skin. Electrical changes such as the conductance of the surface of the skin, measured as galvanic skin response (GSR), and electrical muscular activity, measured by electromyography (EMG), are used in medicine for diagnosis. Monitoring and interpretation of these signals offers valuable information about a person's affective state as well as their health. In physiological computing, these signals are monitored and analyzed to create innovative real-time interactions between humans and computers [15].
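As a minimal illustration of the kind of transformation physiological computing performs, the sketch below flags sudden rises in a skin-conductance trace relative to a short moving-average baseline. It is a deliberately crude stand-in for real orienting-response detection; the window size, threshold and sample values are arbitrary choices for the example, not parameters from any of the cited systems.

```python
def detect_gsr_events(samples, window=5, rise_threshold=0.15):
    """Flag sample indices where skin conductance rises sharply above
    its recent moving-average baseline -- a simplistic stand-in for
    the event detection used in physiological computing applications.

    samples: list of skin-conductance readings (arbitrary units).
    Returns the indices where a sudden rise is flagged.
    """
    events = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if samples[i] - baseline > rise_threshold:
            events.append(i)
    return events

# A flat trace with one sudden rise yields one short event region.
trace = [0.50, 0.51, 0.50, 0.52, 0.51, 0.50, 0.90, 0.88, 0.60, 0.52]
print(detect_gsr_events(trace))  # -> [6, 7]
```

A deployed system would of course need per-user calibration and artifact rejection; the point here is only that a continuous physiological trace can be reduced to discrete events a media player could act on.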
Physiological computing is a strong candidate for supplementing the contextual information currently available to computing systems, enabling them to adapt dynamically to the idiosyncratic needs of the user in the future [12]. Research in this area is shifting the conventional master-slave relationship between computers and humans toward a more collaborative and cooperative relationship. The literature shows that the monitoring and recognition of physiological signals is mature enough to begin building applications that can be directly operated by transforming physiological data into control signals. For example, EMG signals were used by Crawford to control a robotic arm [7] and by Costanza to create more intimate and personal interfaces for mobile interaction [6]. These applications did not make use of intelligent adaptation of the interface or provide the autonomy required to establish a fluid interaction between humans and computers. Norman shared this vision of intelligent adaptation, envisioning next-generation 'smart' technologies that overcome the limited ability of machines and people to communicate with each other [32]. Picard et al. attempted to augment human-computer interactions with biological signals [44]. They classified users' affective states to build emotionally intelligent and natural human-computer interaction. Subconsciously and spontaneously transmitted biological feedback opened new possibilities for human-computer interaction and enabled a rich and trustworthy information channel without any conscious effort from the user. Other researchers have seen the potential of affective computing and have built a variety of applications to begin realizing this potential. For example, Kulic et al. developed affect-based human-robot interaction by capturing human arousal and valence to generate robot arm motions [22]. Riener et al.
monitored heart rate variability (HRV) and electrocardiography (ECG) to recognize driver arousal and provide appropriate feedback during safety-critical parts of a car journey [43]. Pan et al. augmented an audiobook player with GSR signals to automatically create bookmarks for interruption recovery [39]. Solovey et al. used near-infrared spectroscopy sensors to identify four different cognitive multitasking processes that could be used in real-time adaptive human-computer systems [48]. These successful integrations of no-effort affective computing promise significant improvement to the field of HCI. There are relatively few publications that explore applications of affective computing within the music listening context. In our research group, we investigated the feasibility of a haptic-affect interaction loop (HALO) in portable music players [14]. This work explored prospective user needs and gathered requirements for an affect-augmented human-computer interaction in portable music players. In laboratory experiments conducted by Hazelton et al., a recognition rate of 77% was achieved with a time window of nine seconds for three different responses (like, dislike and neutral) towards a song. Janssen et al. built an affective music player that classified songs as negatively, positively or neutrally affecting, based on people's physiological trends during multiple training sessions with three participants [19]. The effects of songs were later validated using participant responses on a 5-point Likert scale that ranged from sad (-2) to happy (+2). The songs that were positively classified were rated significantly more positively than the songs that were negatively classified. Chung and Vercoe [5] built a system that selects enjoyable music on the basis of a user's foot tapping and physiological cues.
Data collected from their experiments provided evidence suggesting that foot-tapping is a useful indicator of a subject's valence response to music stimuli. They reported inconsistencies in the correlation between GSR and arousal level. A purpose-aware automatic playlist generator was developed by Oliver et al. to help users reach exercise-related, self-defined goals on the basis of song meta-data and physiological cues [35]. Oliver et al. qualitatively evaluated their system with 20 participants, who gave positive feedback on listening to automatically generated music during exercise in a comparison study where three conditions were tested (a running session without music, with random music, and with music selected by the system). These works inform the efforts of this thesis as evidence of emerging physiological computing research and applications. Hazelton's is the only work that presents a framework for addressing possible interaction issues that arise from the autonomy of a developed physiological computing system, in addition to new methods of interacting with a media player while listening to music.

2.3. Music Recommendation Technologies

Music recommendation systems are computer software systems that suggest and play songs to a user based on predictions about what songs the user might like to hear. Although researchers are still interested in building better systems [36] and [46], there are systems like Pandora and Last.fm that offer music recommendation services online to the public [38], [24]. Music recommendation systems promise users a personalized and effortless music listening experience, and they are sometimes called online personal radios. In essence, recommendation systems and the HALO-enabled music player (described in Chapter 1.2) envisioned in this thesis have a common end-goal.
They both try to satisfy the user's music needs, but they use technologies and propose solutions that are fundamentally different [47]. Recommendation systems work either by extracting information from the music or by learning the listening patterns of the user through feedback, whereas a HALO-enabled portable player interprets physiological signals both to control the music player and to understand the user's responses to the music. We anticipated that users who already utilize these recommender systems might be potential users of HALO-enabled music players. Our efforts were not intended to substitute these systems but to build on them to get closer to the goal of intelligent adaptation in music listening. In the future, these two complementary mechanisms are likely to be integrated to work together.

2.4. Participatory Design

Participatory design is a design methodology that incorporates end-user feedback early in and throughout the design process by including users in the design team; it has been used widely in the prototyping stages of HCI design since the 1970s [21]. This collaborative approach aims to maximize the likelihood that the end system will be useful and well integrated into the current practices of the user within their environment. Various practices that employ this technique can be found both in human factors research and in real-life application design [31], [28] and [51]. Many possible approaches to conducting participatory design are summarized by Muller [29]. Those that are particularly relevant to our work (a description of our study can be found in Chapter 3.6: Semi-Structured Interview), because of the similarity of the approach taken, use scenarios, sketching and design mock-ups as triggers for conversation, and to illustrate and make more tangible a starting set of design possibilities that may be far outside participants' current experience.
Participatory design is valuable not only because of users' skills and experience, but also because their interests in the design outcome are acknowledged and supported through the design process. In this thesis, participatory scenarios were employed to stage and act out current and future use scenarios of music listening and new technology, in order to understand user needs. This qualitative methodology provides the depth and detail required to obtain and analyze rich information; it encourages both participants and researchers to be open to new information they might not initially consider.

Chapter 3 Methodology

This chapter introduces our participants and describes our study and analysis methodology. All of the study materials, such as questionnaires, can be found in Appendix B, with the exception of the physical prototypes. Photo documentation of the physical devices used in our study is included in this chapter. We first introduce our participants and describe the methodology used to find and recruit those participants. Then we report the approach we took to prepare participants for the study. Next we describe the procedure followed during the study session itself. The procedure section includes a description of how we introduced HALO to our participants and built interaction scenarios. The last section of this chapter introduces the qualitative open coding methodology we used to analyze the data collected.

3.1. Recruitment and Participants

Participants were recruited through advertisements posted at various locations on campus (including the student union building, faculty bulletin boards and student residences). We sought a diverse range of participants. People had to be at least eighteen years old to participate. Individuals who were interested were forwarded an online screening survey (Appendix A).
The survey asked prospective participants for the following information:
• Portable media player device(s) that they currently own
• Date they started using this device
• Amount of time they use their portable player to listen to media in a typical week
• Types of media they listen to (e.g. music, audiobooks and podcasts)
• Familiarity with online music recommendation systems

In one week of recruitment, 35 individuals responded to our survey and we selected 9, of which two were backups and were not used. No participants dropped out of the study. The emails sent to the participants can be found in Appendix A. We selected individuals with extensive music listening experience who, as a group, used a broad range of different player brands. We wanted to include different portable media player brands in order to avoid any bias towards a particular brand, or bias based on limitations that might be prevalent in certain portable players. Additionally, we selected participants who had greater experience in terms of duration of ownership of their player. We expected that these people would listen to music in a variety of locations and at a variety of times. We also expected these participants to spend more time with their players, to be more critical towards the device, and to be able to provide richer examples during the study. The screener asked potential participants about their familiarity with online music recommendation systems (such as Pandora, Last.fm, Grooveshark, Apple Genius). This criterion did not play a strong role during the selection of participants, although we did prefer people with some experience of these tools. Six students from the University of British Columbia participated in our study: 2 females and 4 males. All participants were between 19 and 22 years of age. Participants came from the following disciplines: psychology, engineering, forestry, chemistry, and commerce.
One participant (P5) was a PhD student and the rest were undergraduate students in their second, third or fourth year. Participants received $25 compensation for their time. Table 1 summarizes the audio player usage information collected from our participants through screening.

Table 1 Audio Player Usage of Selected Participants

     Portable player(s) used                Length of ownership of current player    Average hours of usage per week
P1   Sony NW Series                         Over 1 year but less than 2 years        Between 20 and 30
P2   Apple iPod Classic                     Over 6 months but less than 1 year       Between 20 and 30
P3   Apple iPod Touch                       Over 2 years                             Over 30
P4   Apple iPod Touch and Nano              Over 2 years                             Between 20 and 30
P5   Coby                                   Over 1 year but less than 2 years        Between 20 and 30
P6   Apple iPod Nano and Blackberry Storm   Over 2 years                             Between 10 and 20

3.2. Online Pre-Session Homework

A webpage was compiled to inform participants about, and demonstrate, the haptic and biological signal recognition technologies that were used in our portable audio player design. The goal of this webpage was twofold. The first goal was to shorten the duration of the study by familiarizing participants in advance with the relevant definitions and terminology. The second goal was to prepare participants for the types of questions that we were going to ask them during the study. We chose to give a representative sample of these questions because we believed that providing them would get participants to think about them before the study. Definitions and terminology on biological feedback recognition and haptic technology (e.g. haptics, tactile feedback, EEG) were put on the webpage. Selected Emotiv EPOC and haptic technology videos were also embedded on the webpage (described below). These videos provided further explanations of the technology in simple everyday vocabulary (without any need of substantial scientific knowledge) and in clear English.
Besides definitions, participants also had a chance to see the demonstrations included in these videos. A URL for this webpage was sent to each participant a week in advance, and they were asked to take a look at it before coming to the study. The generic emails sent to all of the participants can be found in Appendix A. A text version of the web page and links to the actual videos can be found in Appendix B. The total duration of the 4 embedded videos was 13 minutes. We expected that participants would spend 20 minutes on the website, based on its content and their willingness to do so. The first two videos were about haptic technology. Both of these videos were prepared by Immersion Corporation, a leading haptics research and development company. The first video was an educational video; we provided it to raise participants' awareness of haptics. The second video showed a demonstration of a haptic touchscreen at CES 2009, an international consumer electronics show. The third and fourth videos were about the Emotiv EPOC headset. The first Emotiv headset video was a promotional video. It included a brief introduction to the Emotiv headset, a brief introduction to brain waves, and an introduction to the capabilities of the Emotiv EPOC headset. This video was initially published on the Microsoft Community website. The second Emotiv video was a demonstration of a prototype that used the Emotiv headset. The prototype allowed a user wearing an Emotiv headset to ring a bell by blinking.

3.3. Session Procedure

All study sessions were held in the UBC Computer Science Department building. Sessions took approximately 2 hours and 30 minutes, except for P6's session, which took around 3 hours. The researcher gave participants a small break approximately an hour into the study. Participants were compensated with cash in the amount of $5 per half hour at the end of the session.
Each session was divided into three major sections: demos and technology, a semi-structured interview, and an interaction building exercise with a HALO-enabled portable media player. We referred to the HALO-enabled player as "Remote" or "Remo" throughout all the sessions. The technology demos and semi-structured interview took approximately the first half of the session, around 50 to 60 minutes. After a small break, the researcher introduced Remote and then led the participant through the interaction building exercise. On average this part of the study took 75 minutes. Participants were briefed on their ethical rights and they provided consent at the start of the session. Before starting the session, participants were invited to make themselves comfortable and asked to be candid in their responses. All of the sessions were recorded with a video camera positioned near the corner of the room to capture both the primary researcher and the participant. In addition to the primary researcher, a secondary researcher observed all sessions. The second researcher took independent notes that were later used to validate the observations made by the primary researcher. We had initially planned to run a second study with the same participants. The first study took more time than expected, and provided enough data for analysis to fulfill the scope of the project. We therefore chose to focus our efforts on the analysis of this first study phase and not to run a second phase.

3.4. Checking Experience with Informational Webpage

Each participant was asked if they had visited the pre-session webpage. If participants had not watched all the videos, they were asked to watch them at the beginning of the session. After ensuring that participants had watched the videos, the researcher started a conversation with participants about their impressions of the videos.
The researcher asked participants to comment on things they found memorable, interesting, surprising, strange or confusing.

3.5. Demonstrations

The sessions continued with demonstrations of two haptic devices, the Emotiv EPOC EEG headset [10], and GSR bookmarking [39]. The purpose of the demonstrations was to give participants a chance to test the technologies they were going to be asked to use. Our goal was to get participants to understand and experience a haptic signal, and to see the capabilities of state-of-the-art biological signal recognition. Demonstrating the possible haptic augmentations was particularly important, as novel touch interactions are hard to comprehend without experiencing them. We expected that experiencing these technologies would get participants more interested in participating fully in the study. These particular demonstrations were selected because we had easy access to the devices, either because they were developed by our research team or because they had previously been used by other members of our group who had expertise with the device. Another reason we selected some of these demos (the tactor and the Emotiv EPOC EEG headset) was that they had been used in some of the related work. Lastly, we selected devices that we anticipated could be used for prototyping based on the results of this study.

3.5.1. C-2 Tactor and Haptic Sleeve Display

The haptic devices that we demonstrated to participants were an Engineering Acoustics vibrotactile transducer (the C-2 Tactor, hereafter "tactor") (Figure 3) and a haptic sleeve (Figure 4). The tactor rendered a 50ms, 250Hz square wave followed by 20ms of haptic silence; this pattern repeated indefinitely while participants held the tactor in their palms. Participants were encouraged to test the tactile rendering on different parts of their bodies, such as the back of their hands.
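The burst pattern just described (a 50 ms, 250 Hz square-wave burst followed by 20 ms of haptic silence, repeated) can be expressed as a sample-buffer sketch. This is a reconstruction for illustration only, not the drive code actually used with the tactor; the sample rate and unit amplitude are assumptions.

```python
def tactor_pattern(sample_rate=8000, burst_ms=50, gap_ms=20, freq_hz=250):
    """Generate one period of the demo pattern: a square-wave burst
    followed by haptic silence. Looping this buffer indefinitely
    reproduces the rendering described above."""
    burst_samples = sample_rate * burst_ms // 1000
    gap_samples = sample_rate * gap_ms // 1000
    buffer = []
    for n in range(burst_samples):
        t = n / sample_rate
        # Square wave: +1 for the first half of each cycle, -1 for the second.
        phase = (t * freq_hz) % 1.0
        buffer.append(1.0 if phase < 0.5 else -1.0)
    buffer.extend([0.0] * gap_samples)  # 20 ms of haptic silence
    return buffer

buf = tactor_pattern()
# 50 ms burst + 20 ms gap at 8 kHz = 400 + 160 = 560 samples per period.
print(len(buf))  # -> 560
```

At 250 Hz the burst contains 12.5 vibration cycles, which matches the short, buzzy sensation the demonstration was meant to convey.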
The researcher asked participants what associations they would make with the haptic sensations if those sensations were coming from their portable media player. Their responses were recorded. The researcher answered any questions that participants had about details of its operation.

Figure 3 C-2 Tactor

The haptic sleeve is a prototype tactile display that was built by our research group; it is used by slipping it onto the user's arm. By switching on and off nine actuators located on the sleeve (in a 45cm line extending from wrist to shoulder) at different times and for different durations, dynamic tactile patterns can be rendered on the arm of the user. For our study demonstrations, we used a single pattern due to time constraints: a single point moving from the wrist to the shoulder over a 30 second period, rendered by turning actuators on and off in a linear fashion. After the tactor demonstration, participants slipped the sleeve on and the device was turned on. Participants were given a few minutes to observe the pattern. While participants were still experiencing the sleeve, the researcher asked several questions similar to those asked during the tactor demonstration, and recorded their responses. Next, the researcher asked participants to compare their two tactile experiences. The study then moved on to the demonstrations of biological signal recognition.

Figure 4 Haptic Sleeve Display

3.5.2. Emotiv EPOC EEG Headset and GSR Bookmarking

The primary researcher put the Emotiv headset (Figure 5) on his own head and ran the demo software that came with the product's software development kit. The researcher, rather than the participant, was chosen for this demonstration since it was necessary to train the system. This software included an avatar that mimicked facial expressions such as blinking, frowning and lip movements.
At this stage, the screen was rotated towards the participant. The researcher asked the participant to observe the researcher's face and, at the same time, the avatar mimicking the researcher's facial gestures on the screen. The researcher explained to participants that the Emotiv headset accomplished this mimicking using biological signals. The researcher answered participants' questions and provided further information about how the biological signals were recognized, based on the participant's interest level and time constraints.

Figure 5 Emotiv® EPOC Headset

The second part of this demo used another software demonstration that came with the Emotiv SDK. For this demonstration the researcher continued wearing the Emotiv EPOC EEG headset. The screen displayed a small box floating in the center of a virtual 3D square room. To demonstrate that the box position was controlled by the will of the researcher, he asked the participant to instruct him when to start thinking about pushing the floating box. After the participant told the researcher to push, the researcher imagined pushing the box until it hit the wall of the virtual room. After the researcher stopped thinking about pushing it, the box returned to its original position. This process was repeated as many times as the participant wanted. The researcher also demonstrated pushing the box in the opposite direction (bringing the box closer to the participant) to some participants, based on their interest.

Figure 6 Galvanic Skin Sensors

The final biological signal recognition demonstration given during the study was of a prototype that was developed by our research group. This demonstration included the capturing of GSR signals to recognize participants' orienting responses. Our group initially developed this prototype to support the creation of bookmarks for audiobook listeners [39].
However, due to time constraints the researcher did not have participants listen to an audiobook during the demonstrations; instead, he explained this usage to them verbally. The GSR sensors were first put on by the researcher himself (Figure 6) while he explained GSR and the goal of the prototype. The sensors were then passed to the participants, who were asked to wear them if they were comfortable doing so. During this demonstration, the computer screen faced both the researcher and the participant and displayed a plot of the GSR signals being captured. Throughout the demonstration, the researcher answered participants' questions about GSR signals and how they could be utilized elsewhere.

Participants' reactions to the demonstrations and their questions were recorded during both of the biological signal demonstrations. The researcher concluded this section of the study by asking each participant his/her opinion on each demonstration and then asking if they had any final remarks about the demonstrations.

3.6. Semi-Structured Interview

A semi-structured interview followed the demonstrations. The researcher's goal with this interview was to learn about each individual's usage of portable media players, the roles that music has in his/her life, and the strategies participants employed to select music. We expected this part to take 15-20 minutes; it took longer than 30 minutes due to the richness and usefulness of the information participants provided at this stage. After this section the researcher gave participants a break and offered them snacks or drinks.

3.6.1. Introduction of Remote System

During the third section of the study, the HALO-enabled portable media player, which was called "Remo" or sometimes "Remote" during the session, was introduced to the participant. Researchers chose this nickname because it sounded more familiar than "HALO", and we expected that a familiar name would facilitate the conversation. We avoided using a human name to avoid the anthropomorphic effects reported in related work by Hazelton [14].

The researcher provided participants with a printout that gave them a definition of HALO. In summary, HALO was introduced as a computer system that recognized the user's biological signals and acted based on inferred user behaviour or state. Haptic technology was presented as a preferable communication modality for the player to give feedback to the user.

Two example cases were prepared beforehand to support discussion of any scepticism and doubts that participants might have about the potential usefulness of HALO. The first case was the strong, non-verbal communication that can be established between a pet and its owner; the second was a situation where a user's auditory and visual senses were completely utilized while they were also needed for interaction with a device. Both examples were presented to all participants to provide more understanding of HALO, even when participants were not sceptical or doubtful. The definition of HALO and the two cases used can be found in Appendix B.

After a brief introduction to the proposed HALO-enabled portable media player, the researcher presented two improvements over current interaction: (1) hands-free interaction and (2) the ability to have richer communication with a HALO-enabled portable device compared to their current devices.
The first improvement, hands-free interaction, was proposed primarily for situations where the user's physical movements were constrained; riding a crowded bus or carrying items with both hands were suggested as representative use cases. The second benefit, richer communication, was suggested for users who might be interested in expressing their music preferences in terms of content (music that makes the user emotional, energetic, etc.) rather than using buttons that can only control music streaming.

Next, we introduced an example usage scenario in which an imaginary individual had a problem interacting with her portable music player. This first scenario is presented in detail below; another four scenarios were kept ready as printouts (Appendix B).

"Susie is riding her bicycle and listening to music using her portable player. Her player is mounted on her arm and set to shuffle mode. After the end of an upbeat song that she was enjoying, an economics lecture that her professor had put online for the class unexpectedly begins. Susie becomes annoyed at this change and wants to return to listening to music. She thus stops her bicycle, removes her player from the arm mount, and presses the "forward" button on the player until she finds a song she likes. She then resumes cycling."

A scenario was initially provided to the participants to give them a sample case of a clear, easy-to-relate-to problem in interacting with a media player while listening to music. Through this scenario, participants could see a potential usage of HALO without too much effort. If a participant did not relate to the scenario provided, one of the other prepared scenarios was presented instead.
Based on this scenario and the participant's experience, the researcher asked him or her what possible solutions he or she saw to the problem presented in the scenario if HALO were available, and then discussed the participant's solution with him or her. This exercise was expected to lead to a clarifying discussion of the concept of HALO and to increase participants' level of comfort with the concept. Later, participants were asked to describe their own scenarios and to develop a corresponding HALO interaction.

3.6.2. Interaction Building Based on a Scenario

The researchers asked participants to create scenarios similar to those that had been introduced to them: specifically, at least one scenario in which they thought a HALO-enabled audio player could have a positive impact for them personally. They were also informed that this scenario should be either from their past or one that they felt familiar with. To help participants come up with their own scenarios, the researcher reminded them of the places and times at which they had earlier (in the semi-structured interview) reported using their media players. The researcher provided a pen and paper and encouraged participants to draw a representation of their scenario. Although sketching was not mandatory, some of the drawings made collaboratively with participants can be found in Appendix C. The researcher asked questions about the interaction and any relevant habitual behaviour or other related details that participants did not describe, in order to learn more about the following:

• Location, time, and place of the scenario
• Motivations for listening to music in the scenario
• Relationship of activities that are carried out simultaneously as secondary tasks while listening to an audio stream¹
• Challenges or limitations faced in the scenario
• Physical and mental workload of the activities involved in the scenario

¹ For example, studying (primary) while listening to music (secondary), or listening intently to a music album (primary) while eating (secondary).

The focus of the conversation then turned to the desired behaviour of a HALO-enabled portable player that the participant imagined in one of the scenarios they had discussed. Although participants were not required to come up with a minimum number of scenarios, they were asked to come up with as many as they could. Depending on the time spent on earlier parts of the study session, the researcher guided participants towards discussing rich and promising scenarios, focusing on limitations and specifics of the scenario where an improvement could be made by an intelligent system. The researcher asked participants questions to learn about their expectations of a HALO-enabled audio player and encouraged them to think out loud. Participants were encouraged to think of and describe activities during which they would like to give control of their portable media player to HALO, to gain information about the following:

• Type of behaviour or support that was expected from a HALO-enabled audio player
• Goal(s) of the interaction described in the scenario
• Any dependencies on location and time
• The role(s) of listening to music

Answers to these questions helped us to understand users' needs and the types of support that HALO could provide. The researcher asked participants to imagine interactions and communication they could have with the HALO-enabled audio player.
We discovered which behaviours of the proposed system participants expected and anticipated to be most useful, and their beliefs about how they would like to interact with it. The researcher helped participants elaborate on different aspects of potential system behaviours by referring to the scenarios they had provided earlier. Often, participants felt a need to ask the researcher questions about certain system behaviours in order to understand the technical possibilities (e.g. recognition of a certain user emotion or interaction). Several details of the HALO-enabled audio player usage scenario were discussed:

• Behaviour that HALO was expected to support
• Information that HALO should give feedback about
• Factors that could alter the system's behaviour

If participants drew sketches while describing the scenarios, these sketches were utilized as notes on the interactions. Participants provided details about the medium and frequency of communication in the scenario they discussed with the researcher. Further, conversations took place on the information that participants wanted to receive at any time during the scenario (including system status and notification of actions taken by HALO). The study ended with participants giving a brief summary of the interaction scenarios that they imagined would take place with the support of HALO.

3.7. Analysis

This section describes how an open coding technique was used to analyze the data collected, and reports what measures were taken to control researcher bias in our study. The steps of this analysis are described thoroughly with examples, to demonstrate the process and give the reader an outline of the processes involved. The results of this analysis are reported in Chapter 4. The analysis steps were as follows:

Step 1: Recordings were transcribed.
Step 2: Open coding categories, subcategories and code examples were selected based on the study goals and an initial review of the data.

Step 3: One randomly chosen transcript was coded using the open coding technique (Strauss and Corbin [49]) to identify individual participant behaviours or opinions during each section of our study (namely the demonstrations and the semi-structured interview).

Step 4: The same randomly chosen transcript was coded again, using the same open coding technique, by a second researcher experienced in using open coding for data analysis.

Step 5: Intercoder reliability between these two independently coded transcripts was calculated to measure and control researcher bias in open coding; from this step also emerged an initial set of codes (prior to coding the remaining five transcripts).

Step 6: All remaining transcripts were coded using the finalized code set, with adjustments made as appropriate to the first one.

Step 7: Affinity diagramming techniques, widely accepted for analyzing qualitative data in the social sciences [3], were applied to reveal key emergent themes in the generated codes.

The remainder of this section describes each step of the open coding analysis in more detail. Open coding is a content analysis technique in which "codes", or lists of words, are assigned to observed behaviours or participant comments, and recurrences of these events and opinions are marked (Strauss and Corbin [49]).

Step 1: All the video recordings were transcribed verbatim, with timestamps, for qualitative analysis. Additional notes were taken to capture interesting nuances specific to individual participants.

Step 2: An initial set of main categories and sub-categories for codes was first established from the interview questions and the initial goals of the study, i.e. after the data was collected but before it was rigorously analyzed.
These main categories (categories A, B and C in Table 2, respectively) were "experience with technology and current practice", "self-reported and observed responses" and "desired HALO-enabled portable player behaviour". To make the large number of observations in each category manageable, the three main categories were further divided into three, four and two sub-categories respectively (category A into A1, A2 and A3; category B into B1, B2, B3 and B4; and category C into C1 and C2, as in Table 2), for a total of nine sub-categories. For example, main category A, "experience with technology and current practice", is divided into three sub-categories: A1: "technology experience", A2: "current use of music players" and A3: "pain points".

Steps 3-5: To prevent and control bias on the part of the main researcher in open coding early in our study, a second researcher experienced in the open coding technique assisted in assessing the credibility of the coding analysis. A randomly chosen transcript of a single participant was coded independently by this experienced researcher, in addition to the primary researcher's coding of the same transcript. These two encodings of the same transcript were then compared to measure intercoder reliability between the main researcher and the experienced researcher. We report the results of these measurements in Chapter 4.1.

Step 6: Adjustments to the codes from the transcript used in measuring intercoder reliability were made by incorporating the mismatching codes discovered by the experienced researcher.
The main researcher then went through all the remaining transcripts and assigned codes (each of which automatically determines the sub- and main category that holds it) to the observed opinions or behaviours. Each observation could be assigned multiple codes, and thus multiple main and sub-categories. These codes were used to mark any recurrences of similar events. New codes were created during this stage (Step 6 in the list above) only if existing codes did not fit the observation. While new codes did materialize, we did not need to create any new categories, and no code was left without a category.

As an example of new code creation: a particular participant's comment 'I wasn't familiar with any of those [haptics] technologies before' was assigned the [No previous experience with haptics] code under the "technology experience" sub-category (A1 in Table 2). If similar statements were made by this participant or other participants, the same code was assigned to each occurrence. However, this code did not fully describe comments in which some knowledge of haptic technology was expressed. Thus, a new code [Heard of haptic technology as in concept] was created under the same sub-category to augment the description of the observation. Similarly, a new code [Familiar with biological signals and biological signal recognition] was created to describe a comment in which a participant referred to his/her well-established experience in biological signal recognition during the study. Other example codes and the number of codes used in each sub-category can be found in Table 2.
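The bookkeeping described above can be sketched in a few lines of code. This is a hypothetical illustration, not software used in the study: the example codes and sub-category labels are taken from Table 2, but the data structures, function and variable names are our own.

```python
from collections import defaultdict

# Hypothetical sketch of the coding bookkeeping: each code belongs to exactly
# one sub-category (which implies its main category A, B or C), and each
# observation may carry several codes.
CODEBOOK = {
    "No previous experience with haptics": "A1",
    "Heard of haptic technology as in concept": "A1",
    "Familiar with biological signals and biological signal recognition": "A1",
}

def assign_codes(observations, codebook):
    """Tally code use per sub-category for a list of (observation, codes) pairs.

    Raises KeyError for an uncategorized code, mirroring the rule that no
    code may exist without a category."""
    tally = defaultdict(int)
    for _, codes in observations:
        for code in codes:
            tally[codebook[code]] += 1  # KeyError if the code has no category
    return dict(tally)

tally = assign_codes(
    [("I wasn't familiar with any of those technologies before",
      ["No previous experience with haptics"])],
    CODEBOOK,
)
# tally == {"A1": 1}
```

The per-sub-category tally corresponds to the per-participant counts shown in Table 2.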
Due to the nature of our study, which was guided by a relatively structured set of questions exploring the participants' experiences with mobile music players and their perception of haptic technology, recurrences of the same event or comment within a single participant's transcript were not common. As the study involved personalized scenarios of individual participants' music listening experiences, recurrences of codes between participants were also not common. We were therefore unable to extract reliable quantitative information such as code occurrence counts. Instead, we assigned codes thematically into groups, filtering only the relevant observations and then investigating one specific theme at a time.

Step 7: After coding and categorizing, we applied the affinity diagramming technique (Holtzblatt and Jones [16]; Holtzblatt et al. [17]) to reveal key emergent themes from the subcategories. This categorization revealed several key themes, which are the focus of our results. Chapter 4 reports these detailed results with supporting quotes from the study.

Table 2. Categories and subcategories of observations with example codes. For each participant, the number of codes used in each subcategory is indicated as (P1:21, P2:6, P3:30, ...).

A. Experience with technology and current practice

A1. Technology Experience (P1:21, P2:10, P3:6, P4:9, P5:8, P6:6)
• Familiar with biological signals and biological signal recognition
• Heard of haptic technology as in concept

A2. Current Use of Music Players (P1:79, P2:52, P3:38, P4:62, P5:42, P6:45)
• Listen to music when walking between classes or commuting
• New albums or songs are mentally engaging and distracting

A3. "Pain Points" (P1:19, P2:10, P3:11, P4:18, P5:17, P6:13)
• Shuffling songs is not very reliable and doesn't fit the user's needs
• Losing momentum while cooking is annoying because timing is important

B. Self-reported and observed responses towards study material

B1. Procedural Details (P1:3, P2:5, P3:3, P4:8, P5:1, P6:6)
• Participant asked interesting questions related to the biological signal recognition
• Forgot player at home because he/she could not find it in the morning

B2. Positive Responses to Material Presented (P1:19, P2:15, P3:24, P4:9, P5:17, P6:18)
• Not having to press a button to change songs when listening to music would be nice
• Tactor is preferred over sleeve because it does not have a jump in the signals

B3. Concerns (P1:13, P2:24, P3:11, P4:11, P5:2, P6:14)
• Biological signals can be unreliable
• Physiological headsets are hard to set up

B4. Scenarios (P1:4, P2:9, P3:2, P4:2, P5:3, P6:5)
• Scenario involves him lifting free weights and an unexpected song such as a lecture or a lullaby coming up

C. Desired HALO-enabled portable audio player behaviour

C1. Requirements for a HALO Implementation (P1:21, P2:8, P3:4, P4:10, P5:9, P6:4)
• Bookmarking is more useful if fingertip sensors can be less intrusive
• Established long-time connection is important for her to trust the system

C2. Goals of HALO-enabled Portable Audio Player Practice (P1:64, P2:31, P3:24, P4:43, P5:51, P6:12)
• Ability to override the control is reassuring
• Imagines Remote recognizing the user's intent to switch or skip songs

Chapter 4
Results

This chapter presents the findings of our participatory design sessions.
These results were generated through the qualitative analysis described in Chapter 3: content analysis of the transcripts of the participatory design study, using open coding and affinity diagramming. These insights summarize participants' current practice of portable audio player usage, their responses to the descriptive and exemplary HALO-related material provided (including demonstrations, scenarios and definitions), and the desired HALO-enabled portable audio player system behaviour they articulated during these sessions. Each observation we present in our results is a conceptual combination of responses from multiple participants. We present these concepts with a definition and a supporting quote from a participant that demonstrates the observation. We chose a categorization of results similar to the structure of our analysis, to retain that structure as much as possible and to streamline reporting.

The results are divided into four sections: (1) credibility analysis of transcript coding, (2) experience with technology and current practice, (3) responses toward the demonstration material used in the study and (4) desired or preferred practice. In the Discussion chapter, we draw connections between these results and further discuss our findings.

4.1. Credibility Analysis of Transcripts

Transcripts of each study session were hand coded by the primary researcher. As an outcome of our open coding, the number of observations (or codes) made for each participant transcript was: 224 (P1), 145 (P2), 125 (P3), 165 (P4), 145 (P5) and 146 (P6). To assess the credibility of coding analysis made by a single researcher, we calculated the intercoder reliability of a randomly chosen transcript. The secondary researcher, who attended all of the study sessions, independently coded the randomly chosen transcript of P5.
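The agreement measure used in this comparison can be sketched as simple percent agreement, i.e. matching coding decisions over total decisions. The thesis reports only the resulting percentage, so the exact formula here is an assumption; the function name and the illustrative lists are our own.

```python
def percent_agreement(coder_a, coder_b):
    # Assumed formula: positionally matching coding decisions / total decisions.
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must make the same number of decisions")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Illustrative data at the scale reported below: 122 of 145 decisions agree.
a = ["code"] * 145
b = ["code"] * 122 + ["mismatch"] * 23
agreement = round(percent_agreement(a, b) * 100)  # 84
```

Note that this is plain percent agreement; chance-corrected measures such as Cohen's kappa would give a lower figure for the same data.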
The two coded transcripts (one from the main and one from the secondary researcher) were compared. In total, the 145 coding decisions matched 84 percent of the time (122 codes), which qualifies as "acceptable" intercoder agreement [30]. Among the 23 mismatches, there were only 7 unique codes (4 from the primary and 3 from the secondary researcher) not shared by both transcripts; the remaining mismatches involved codes that both coders used, but not in the same locations. Since acceptable reliability between the experienced coder and the main researcher was found, intercoder reliability was not measured on the other transcripts.

4.2. Experience with Technology and Current Practice

In this section, we report participants' knowledge of haptic interfaces and biological signal recognition technology, as well as their current practice with portable media players. This section has three main subsections. The first covers previous experience with technology; the second describes current practice (including types of players, habitual behaviours and motivations for listening to music with portable media players); and the third reports participants' discontent and issues with portable music players. Previous experience was assessed to understand participants' working knowledge of haptic and affect recognition technology and their attitudes towards them. Knowledge of current use allows us to draw connections between it and desired goals.

4.2.1. Technology Experience

We wanted to learn participants' knowledge of the technologies we used, as well as their previous experiences with and current attitudes towards the technologies they would encounter later in the session.
Therefore, we asked each participant to tell us about their experiences with haptic and biological signal recognition technology, as well as their general interest in new technology. The following points summarize the group's relevant experience with these technologies.

Little previous experience with biological signal recognition technologies

Only P1 expressed previous experience with biological signals and, through interpreting them, their possible areas of usage. P1 described a psychology experiment in which she wore an electroencephalography (EEG) headset (similar to the Emotiv EPOC EEG headset demonstrated in the present study) and commented on the time-consuming placement of the headset. She expressed no other negative or positive feelings towards these technologies. P1 also self-reported familiarity with brain signal measurement, which she had used in her undergraduate studies in psychology.

No previous experience with haptic technologies

None of the participants had firsthand previous experience with haptic technology. Only P4 stated that he had seen a similar notion of haptic technology before, at the E3 game entertainment conference.

Not particularly… [familiar with the technologies before]. I did know little bit about haptic technology. Not the term [but] the concept of it. E3 and something like that, a gaming conference. – P4

Experience with computer-based music recommender systems

Four participants (P1, P4, P5 and P6) expressed that they currently use or have used a music recommendation system. We also asked about their motivations for using these systems. Three of them said that they had used such systems to browse or explore music; only P5 had used one for listening to music continuously, without paying attention to maintaining it. The following is a quote from P5 describing his usage of Last.fm, a music recommendation system.
Yeah, I guess the main one will be the second one [continuous, radio-like music listening]. Just sort of something that can almost pick up on theme and continue with that theme, and sort of make its own playlist. That was pretty good! Especially, you just picked one artist and it built a whole playlist around that. That was really nice and then it is just sort of added benefit of discovering new music. But the mostly the hands free aspect, is kind of nice. – P5

P4 similarly reported that he listened to online radio on his portable music player for an uninterrupted music listening experience.

Behaviour of early adoption

We asked about participants' inclination to invest in new products and technologies, music players in particular. Only P3's responses indicated early adoption behaviour: he typically buys a product immediately after it becomes available.

In case of the music, I wanted to use it before friends and I wanted to use technology. In case of the music – like this iPod Touch – my friends didn't [use to use] iPod Touch but I was really really interested in it. So I bought it and showed it to my friends. Some of my friends bought it as well. – P3

4.2.2. Current Use of Music Players

Results in this section include information about participants' experiences with their music players. This information enables us to understand the background with which each participant came into the study. The three subsections (basics, customizations and roles of music) summarize the portable media devices that participants have used, their typical usage behaviours, and their motivations for listening to music.

Basics

Specifics about the media players that participants own (including brands and capacities) and their frequency of use are reported in this section.
Participants have been using media players for more than 2 years

Four participants (P1, P3, P4, and P6) had been using their current media player for more than two years. P2 had used her current player for less than a year but more than 6 months, and reported previously using another portable media player for more than 2 years. P5 likewise reported using his current player for over a year and a previous player for more than 2 years. The brands of the players used by our participants were: Sony NW series (P1), Apple iPod Classic (P2), Apple iPod Touch (P3), both Apple iPod Touch and Nano (P4), Coby (P5), and Apple iPod Nano and BlackBerry Storm (P6).

Participants use media players of varying capacities

Participants owned players with capacities ranging from 1GB to 60GB. P1, P2, P3, P4, P5 and P6 owned players with capacities of 1GB, 4GB, 8GB, 2GB, 5GB and 4GB respectively; P4 and P6 each also had a second, 16GB player. P5 currently owns a player with only 5GB but had previously used a 60GB media player.

Participants keep media in their player that they listen to infrequently

All participants indicated that they have songs in their players that they listen to infrequently. Each individual had a different reason for not removing those songs, ranging from lack of time to negligence.

There are some songs that I keep in my player which I don't listen to very frequently. They will be useful for some specific situations. There is a one song that I listen when I'm doing yoga, but I can't listen that at bus. I'd only go to yoga every one or two weeks. – P1

Frequent use of music player

P1, P2, P3, and P4 reported between 20 and 30 hours of media player usage per week on average; P5 reported over 30 hours and P6 between 10 and 20 hours per week.
Although verification of such behaviour was beyond the scope of our study, it was reinforced by participants' manner of speaking. For example, here P3 and P5 describe when they listened to music during the day.

Before I go to bed, I listen to music for 2 hours. – P3

Usually either I'm at home I'm queuing it up for some homework or reading or lab reports to grade, that's good for 4 to 5 hours. Sometimes in the lab, I'll put it on and not take it off all day. So about 8 or 9 hours in that case, except for lunch. Maybe take a break half way through. – P5

Carrying portable media player daily

All participants reported that they frequently carry their portable media player in mobile contexts. Examples that many participants gave included commuting, and exercising indoors and outdoors.

Usually I carry it with me. I think most of the time I will carry it. Today is just an exception [that she didn't take it with her today] I couldn't find it and hurried out... – P1

No use of headphones with controls

Participants were asked if there were any accessories that they used with their portable media player, excluding battery chargers. None of the participants reported using any accessories other than varying styles of earphones and headphones. Participants were explicitly asked if they ever used headphones with volume or media controls; only P1 had volume-controlling headphones.

Identified music as a hobby

Three participants (P3, P4 and P5) identified music as a hobby. They explained that they liked to discuss music with their friends and exchange music with each other.

I guess music is sort of my main hobby. Just sort of constantly finding new artists, new bands, analyzing new albums and seeing if an album stacks up against another album.
– P5

P3 reported disc jockeying, selecting and playing music for an audience at school. P4 had experience with music mixing, where a number of recorded sounds are combined to produce a mix.

Customizations

Given music players of different capacities and brands, it is important to understand how participants use these devices in daily life. We found differences in how each participant created playlists and how they chose music to listen to. These behaviours are reported in this subsection.

Carrying media player

P4 and P5 stated that they usually carry their devices in a specific location; for both participants it was either their right or left pocket.

Usually in my pocket. Yeah. Always (emphasis on always) the right pocket in my pants. Probably just because that is where I started putting it, and now it's just entirely... I have to reach for it. It's always been here and if it's not there, then I just get really confused. – P5

P1 and P3 stated that they carried their players where they were easy to reach, to enable quick interaction. For example, P1 reported carrying her player in her hand because it was very handy.

It's a very small one and handy. I sometimes carry it in my hand. – P1

It's easy to reach because I want to change my song. – P3

P2 and P6 did not report using a standard location to carry their media players during mobile usage.

Creating playlists

All participants except P5 said that they maintain at least one playlist. P5 commented that the whole capacity of his device acted as a single playlist: he uploads the albums he wants to listen to every two or three days.

Usually it's like the 5GB that I'm more interested in now. I sort of use it as a single playlist MP3 player I guess. Just because I don't have that much space, I guess.
– P5

Each participant described a unique way of categorizing music into different playlists. Playlists represented different things to different participants: songs were categorized by mood, tempo, genre or chronology.

When I shuffle it, I'm almost asleep. So when very loud songs and speech comes up [songs with lyrics], I totally wake up. So it made me categorize songs into different moods and music – P1

I have 17 playlists. They are sorted based on different themes. Well like time frames… For instance I have a playlist of songs that I listened to when I was 15 [years old] or I listen to [the songs that I listened to during the] winter of last year or spring of this year…. – P2

Shuffle usage

Participants were asked to comment on the shuffle functionality of their player. P1 stated that she used shuffle from time to time when she did not have a particular musical preference. P6 used shuffle to pick songs most of the time and only utilized playlists when he was exercising; he reported the most frequent use of shuffling.

I will usually keep it on shuffle. I'll make a high energy playlist for it and shuffle it. [Researcher asks: What about commuting and studying?]. I haven't bothered to make a playlist for that. So if I ever came across a strange song, I'll keep shuffling, until I find something suitable. – P6

P2, P3, P4 and P5 stated that they tended not to use shuffle to pick songs. Instead, they utilized browsing capabilities and playlists to select the songs they wanted to listen to.

Whenever I do shuffle, I find that I'll listen to one song that I really like and then five songs that I don't like as much. So, I try not to shuffle as much just because if I know I'm listening to playlist I know all and each and every most of the songs back to back.
– P4

Shuffling to alter the experience of a playlist

P2, P3 and P6 stated that they used the shuffle functionality to randomize the order of music in a playlist. Participants explained that they randomized the music order to change the experience of a particular playlist. According to participants, this helped them avoid getting bored with an accustomed experience (well-known and no longer as appealing as it used to be), which they reported happened after listening to a list of songs in the same order repeatedly.

When I listen [to] all of the playlist by order, I use shuffle function to listen them in another order. I don't always want to listen with the same order. When I shuffle it [order of the songs in a playlist] is new for me. – P3

Roles of Music

The reasons participants listen to music are reported in this section. Some of these reasons are simply the direct consequences of hearing and listening to any type of music. Other roles are more obscure and hard to define; in these cases, music listening plays a more important and sometimes deliberate role in participants' lives.

To "Kill Time"

Listening to music to make time pass more quickly was a motivation common to all participants. Although participants defined boredom differently, the phrase "killing time" was a common way of describing this usage. Often, participants listened to music to kill time during short and repetitive activities – for example, riding the bus, walking to work, or working out at a gym. For example, P5, who had to monitor a chemical experiment for several hours, indicated that he listened to music while doing this to avoid boredom. A similar observation can be made from the comments of P1 below.

To change or accommodate mood

All participants agreed that sometimes they used music either to match or to alleviate their mood.
They expressed that listening to particular songs made them more relaxed, comfortable, energetic, productive, or awake. Causality was unclear: it was hard to know whether listening to music was making participants feel this way, or whether participants were listening to this music because they were feeling this way already. P1, P2, P3, P4 and P5 created playlists in advance to match particular moods. P2 explained how she used playlists to evoke a past period in her life.

I have 17 playlists. They are sorted based on different themes. Well like time frames… For instance I have a playlist of songs that I listen to when I was 15 [years old] or I listen to [the songs that I listened at] winter of last year or spring of this year…. – P2

For mood-driven listening, participants seemed to expect the impact on their mood to occur over a period longer than a single song. For example, they might expect to feel more relaxed after 10 songs rather than after a single song. The mood sought was closely related to the context of the activity. P1 and P3 demonstrated this through the playlists they created to listen to before going to sleep or during times when they felt stressed. Similarly, P5 commented that he felt more productive and energized when he listened to music while studying or working out.

If I had bad things before the exam or before the sleep, I'll feel upset. But when [I] listen to the music like comfortable music or slow music, I feel like really comfortable. I could forget it (being in a stressed state), so I could concentrate on test or falling asleep. So I usually... I always (emphasizing always) use these kind of music playlists. – P3

The place I enjoy most, for my mp3 player, would be while I'm studying. Just because, I feel like I'm being productive while I'm listening to music. Certain songs, electro songs, house songs [and] stuff like that... those types of songs.
They have a part called build up. While I'm studying I feel like I'm accomplishing something, like I'm getting a lot done, it really motivates me. – P4

If I want to just relax or something ... and just listen to something good, I will listen to R&B music and stuff like. Because I know the words (lyrics) and the way that makes me feel. So I'll have my favourite R&B songs, I will listen to those. Just be more relax and more easy going stuff like that. – P4

If I'm going to be exercising I want some music that is more higher energy. Something that would get you there right? You wouldn't want to listen to a lullaby while you are exercising because that might just put you in sleep. So, in that sense, I want to listen to songs that keep me moving. – P6

Participants used music to change their mood with varying frequencies. This frequency often depended on how often they engaged in the activities for which they wanted to be in a particular mood. For example, how frequently P5 wanted to feel motivated while working out depended on how regularly he went to exercise.

To cancel noise

P1, P4, and P6 listened to music to mentally block out noise in the environment; these participants expressed that they listened to music specifically for this purpose. The noises they described were conversations in cafes, libraries or on public transportation. A common theme was that ignoring human voices was hard in a variety of contexts. Participants wanted to eliminate this source of distraction by overriding the voices with the sound of music. It is arguable that participants' main motivation was to concentrate better by avoiding outside distractions. Here, we aim to report sound overriding, but we also report the motivation for concentration in more detail under the next observation. P6 described listening to one source of sound when he was studying in the library.
P6's comment describes how he uses music to cancel outside noise that would prevent him from concentrating.

If I'm in an area where it's very distracting and there is a lot of noise, I'll use music to cancel it out. But if I'm in a quite area I can be fine without music. So it allows me concentrate on one particular thing instead of having to filter so many outside noises – P6

To stay mentally engaged

Comments made by P2, P4 and P5 indicate that they tended to listen only to particular songs when they were trying to stay mentally engaged. Participants gave examples of instances when they needed to actively think or direct their attention. In order to do this, they either turned off their player or changed the music, because the music they were listening to did not match the creative mindset or concentration they needed. Some of the participants reported listening to classical songs, or music without any vocals, when they needed to concentrate on work.

I mainly use it for homework like routine math or like algebra solving. More like stuff that I don't really need to think about. But when I need to think about something more then I will turn it off. – P2

Music helps me to focus and concentrate. It makes me concentrate, it makes me not mobile, and it just makes me sit down otherwise I will be getting up a lot. – P2

P4 and P5 compared their ability to pay attention when listening to familiar versus unfamiliar songs. They stated that music they were familiar with required less of their attention than an unfamiliar song. Participants described picking a list of familiar and comfortable songs as a common strategy when they felt they needed to concentrate on something.

I guess if I was listening [to] my playlist.
Those songs are the ones that like but they are songs that I have heard many times. So it's more like a familiarity. So that way I already know what the songs is like that. I don't really have to pay attention much it kinds of sits in the background – P4

However, in the context of exercising the desired effect was different. The selected music was used to help people keep exercising and push their personal limits. Likewise, features of music such as tempo and genre played a critical role in this context.

If I'm working really hard on the treadmill; so I have another kilometer to go [run]. If I'm listening to techno, which has constantly in your face things… Doesn't make me focus more it just make me want to push myself farther. Keep going and keep going. Just cause while I'm listening that song... it's almost as if you are fulfilling the song's mood as well. – P4

Self-reward

A unique use of music, expressed by P4, was to indulge himself after completing a task. He enjoyed listening to a favourite song or two after completing part of a task. This participant also indicated that the time spent listening to music served as a break from ongoing study sessions.

Avoiding motion sickness

P2 used music to avoid getting motion sick. She commented on feeling motion sick when commuting by bus from home to school, and said that she listened to music every time she commuted for longer than 10 minutes. She found that listening to and concentrating on music distracted her from the motion sickness.

4.2.3. "Pain Points"

We asked each participant about his or her "pain points", or parts of the experience of using a portable music player that were unsatisfactory. These negative aspects mostly came out when participants described their current experiences.
The results in this section summarize the most commonly reported pain points, as well as some which were less frequently reported but were also considered important.

Frequent interaction required and too much time spent creating playlists

Many participants said that they wanted consistency in the songs that were being played. Nearly all participants (P1, P3, P4, P5 and P6) disliked needing to interact with their player frequently to pick or skip songs in order to achieve this consistency. The length of each interaction was specific to the action taken: for example, skipping an album took much longer than skipping a single song, because of the player's interface. Browsing songs and modifying playlists (i.e. the planning of later listening) were also identified as inconvenient.

I pick the songs that I like manually when I want to enjoy listening to music and I don't use shuffle. Instead, I pick a song every time. This sometimes becomes annoying or uncomfortable when I don't continue picking songs and unexpected songs come up. It becomes especially annoying when I'm close to falling asleep. – P1

Although creating playlists in advance in front of a computer was identified as a way to alleviate this problem, P5 stated that predicting in advance the songs that would fit his future mood was not feasible:

You can't choose or you can't predict the mood you are going to be all day when you are setting up your playlist. So it's not feasible to expect to perfectly model what you are going to feel like at all times. Even you have a playlist sometimes you don't like the song that is up next in the playlist. – P5

The mood of songs that are played after each other

All participants stated that each song should correspond (in some way which felt consistent to the user) to the one that played before it.
If two consecutive songs did not correspond, it caused annoyance and required a manual switch to a more appropriate song. A disagreement between songs occurred when a new song did not go with the songs previously played. Sometimes there was a clear distinction between two songs, such as a very different genre or tempo; sometimes it was a more obscure property, such as a mismatched mood. For these comparisons it was very hard to form a definition that suited every participant.

You don't want to be listening to those types of songs back to back because it's not the same mood. It's uplifting and another one is more casual and sad type of songs. – P4

Songs that did not fit were commonly associated with shuffling, where all the songs stored on the player were played in a randomized order. Many participants repeatedly reported a frequent inconsistency between songs when shuffling.

Physical constraints on interaction with the device

P2, P4, P5 and P6 reported that a pain point for them was when they wanted or needed to interact with their music player but their hands were already occupied. The most common examples given were cooking, exercising, driving and studying. P5 described such a situation that occurred when he was listening to music while working in a chemistry laboratory.

I had to shake my headphones off while holding on to this flask, so that they would fall - kind of around my neck. So sort of I could talk to him. Because I couldn't hear what he was saying – P5

Participants reported varying degrees of discomfort and inconvenience in these situations, depending on how critically their hands were occupied. P5 was forced to decide between continuing to listen to a disliked song and halting his activity in order to free his hands for interacting. The quote below shows an example of such a scenario.
A disliked song started to play when he was biking along a busy road and it was unsafe to stop.

Depending on how if I was late or not... I'd stop the bike and switch it [media player] off or keep on going if I was late for work or something. – P5

P2 expressed similar discontent.

For example standing on a bus and if it's really crowded and you can't change it. Because you have to listen to the song and you can't do anything about it. You can't turn volume on low; you can't pause [or] stop it. – P2

Undesired effects of pausing the primary activity

Apart from indicating discomfort at the sudden necessity to interact with the device, especially when involved in an activity besides listening to music, five participants (P1, P2, P4, P5 and P6) reported further undesired effects of interaction on the main task at hand. The interaction caused problems for two reasons. First, it required the user's attention, which is mainly coupled with vision, and it required pausing the activity mentally (if the user was in the middle of a problem) and possibly physically (if the user was using their hands or their vision to watch something). Second, it was not always a good time to pause the activity. Participants described having to choose in these situations between continuing to listen to a song they did not like and interrupting their task to find another song.

…and then having to stop what you are doing… and then... finding a new song and then resume …but taking that time to stop. Let's say running for instance... you stop and then you have to get back into the momentum... It's annoying to have to stop to change the music. – P2

If I'm going through exercise, I can't just stop and change the song.
I have to finish with the set or end that set at that point and put the weight back… and then take it [player] …and then resume your set which might ruin the whole thing. It will be just very inconvenient and annoying. – P6

Participants commented that this was not an issue if they were not doing anything other than listening to music. While discussing a commuting scenario, P5 implied that it was not hard for him to imagine other people having problems interacting with their portable player while carrying something like a shopping bag. (P5 is talking about listening to music and commuting by bus.)

It's not too much of a concern, unless you are carrying groceries. I don't [go to] grocery shopping with the bus but I see people grocery shopping and holding bag and stuff. Then it will be a concern. – P5

Inadequate device robustness

P3, P4, and P6 avoided using their music player in situations where the player itself could be harmed. Examples of such environments were the gym or outside on a rainy day. These participants worried that the device would get wet and break down, or that a heavy free weight could fall on it. Participants either avoided taking their player out of their pocket or risked a second player that they felt they could afford to lose.

I don't really care if it drops or something like that. Because I know it's really old. But my iPod Touch, if drop it I don't want to ruin the screen and like that. – P4

Wire tangling

P1 and P6 complained about the wires that connected headphones or earphones to the music player. The problem they found with wires was that they got in the way of interacting with the device.

Battery

P1 and P5 said that they found it annoying when they wanted to use the device but the batteries were drained.

4.3.
Self-reported and Observed Responses

This section first captures small procedural differences that occurred between participant sessions. Then, we summarize participants' attitudes to the concept of utilizing biological signal recognition technology, and to haptic communication with the to-be-designed HALO-enabled music player, as conveyed by the technology demonstrations and scenarios. The next four subsections present these in order:

• Small procedural differences that occurred between participant sessions.
• Participants' positive responses to the material used in the study to facilitate a participatory design discussion, including the definition of the HALO-enabled portable media player.
• Responses to the scenarios that were presented, and scenarios generated by the participants, which facilitated "design" discussions.
• Discussions during the sessions which reflected participants' anxieties and doubts about the technologies and descriptions of HALO.

4.3.1. Procedural Details

During the study some adjustments were made to the protocol which affected subsequent participants. These adjustments were made in response to the facilitator's observations of early participants' behaviour, due to technical difficulties, or when a participant came to the study with a different level of preparation. These details are reported in this subsection. In general, these adjustments should not have any important impact on our results.

Brought own player

After the session with the second participant, we believed that asking participants to bring their media player to the study might help them demonstrate how they use it. Therefore we asked P3, P4, P5 and P6 to bring their portable media player to the study. P3, P4 and P6 brought their players as requested.
Asking participants to bring their portable player to the study turned out to be particularly useful when they described interactions and gave examples of particular usage.

Good understanding of the concept of HALO

The responses of all participants indicate that they understood the definition of HALO presented to them clearly enough to reflect back their ideas, as well as their preferences, for such a HALO-enabled portable media player. A clear understanding of the HALO concept was sought to ensure rich feedback from participants.

Difficulties with the GSR-triggered bookmarking system

The reliability and responsiveness of the GSR signals were low for P1 and P2 during the GSR bookmarking demonstration. The researcher therefore explained how the signals would have worked and how they would appear on the graph, rather than having these participants observe their own GSR signal.

Watched the introductory videos during the study

Before the study, participants were sent a link to a website and asked to watch the 4 videos on it before coming to the study session. The content and purpose of this webpage is explained in the methodology (Chapter 3.4: Checking Experience with Informational Webpage). P3 did not watch the videos before coming to the session; he reported that he had not noticed some of the videos because he had not scrolled down the webpage. P6 did not have any audio when he watched the videos at home. Both these participants watched the videos during the study session. We ended up asking P6 for another 30 minutes of his time to avoid a time constraint; this participant was compensated with an extra $5. The other participants reported that they had watched all the videos on the website.

Problems with English

P3 informed researchers ahead of the study that he was an exchange student and that English was his second language. From time to time he struggled to express himself.

4.3.2.
Positive Responses to Material Presented

In this section we summarize participants' positive responses to the technology demonstrations of haptics and of affect recognition through biological signals, and also to the scenarios and definition material used to describe the HALO-enabled portable media player concept. These materials are described in Appendix B.

Haptic technology

P1, P3, P4 and P5 appreciated the haptic technology and showed interest in the haptic sleeve and the handheld tactor. This was an interesting utilization of HALO that we did not foresee.

Touch instead of the audio and visual… I think that's an excellent feature just sort of the basic being putting cell phone on vibrate. – P5

Participants P2 and P6 were more neutral in their responses to the haptic technologies used in the study.

Haptics enhances the listening experience

P3, P4, P5 and P6 imagined using sleeve displays to help them select music appropriate to their activities.

I think I'd probably use it [haptic sleeve] like as a heart rate monitor. Just the way it feels, like you are like exercising with that pattern will increase and increase – P4

If you go to a concert and you are standing kind of close to the front. You can kind of feel the music almost, especially the bass. If you had sort of even get [make] it so that you can have it pulsing with the base or drum beat or whatever… It sort of feel for the music. – P5

Similar to P3 and P4, P6 also said that he could imagine using the haptic sleeve display to enhance the experience of music listening through tactile renderings that somehow related to the music or activity he was involved in. Later in the study session, however, P6 commented that this functionality wasn't necessary and would be redundant.

Sleeve is more natural

All participants found the haptic sleeve display more expressive than the tactor.
If I use it [iPod Touch] when it vibrates like this [tactor] it's kind of strange but this [haptic sleeve] is kind of musical. I think when I use iPod Touch and this kind of auditory, I feel like connected to music. So I feel like its vibrating according to the music and adapted to the music. – P3

Sleeve is weird. [Smiles and laughs] I feel like I'm in a hospital actually. I think it's better than the tactor because when there are many of these points of vibrators it's more fluid. Sleeve feels like music, where the tactor feels like a single beat and it [tactor] is monotone. It [haptic sleeve feelings] reminds me of sea waves of the ocean. – P1

Compared to that [tactor] it's a lot more... It's infrequent in that sense. It's not vibrating constantly. It's pulsing on in intervals… It's more rhythmic. If it was coming from my player, I can imagine that it could be pulsing along the rhythm of the song... or something like that. I can see that. – P6

Positive and negative reactions to biological recognition demos

P1, P3, P4, and P5 found the GSR bookmarking and the Emotiv EPOC headset interesting and useful. They commented both positively and negatively about these technologies.

I think that would be very interesting both for audiobooks and even for music. Obviously less important for music because you can pick up halfway through but especially if you know if you had a setup such that it would just instantly pause whenever you are distracted. I think that would be incredibly useful. – P5

During the Emotiv EPOC EEG demonstration, P3 expressed his astonishment when the avatar on the digital screen mimicked the researcher's facial gestures.

AAAa Woow (P3 crosses his arms and sits back; he seems surprised) It's quite amazing. Is it possible?! (with a surprised tone).
– P3

The other two participants, P2 and P6, did not think that utilizing emotions or biological signals would be valuable.

As for the Emotiv ones... I can't really say that I see any use for it. [P6 laughs] But they have given some suggestions as gaming and that sort of thing but I don't really see it being popular. – P6

HALO in music players beneficial

All participants immediately suggested ways of utilizing HALO in media players (including being more convenient and seamless, and offering increased control). Participants imagined using a HALO-enabled portable media player to control music without getting distracted, and to hand over control of repetitive, tedious tasks such as generating playlists to suit their mood or activity. They saw advantage in spending less time interacting with the player and in being able to concentrate on other tasks. A few particularly liked the personalized response of HALO, stating that they would likely have a deeper connection with their media player if it had such functionality.

I like the concept just because it [HALO] just makes things so much easier. You don't have to bother fiddling around and like you know trying to do those types of things. – P4

These kinds of situations sometimes happen to me as well but my situations are not that wrong [or] challenging as in this situation [written scenario given to her]. In this case, the Remote is very useful. – P1

A much more extensive discussion of how HALO functionality matched individual participants' usage goals can be found in Chapter 4.5: Goals of HALO-enabled Portable Audio Player Practice.

4.3.3.
Scenarios

This subsection summarizes the participants' responses to the provided scenarios (see Appendix B: Participatory Design Session Material for a full description of the scenarios), and gives a short summary of the additional scenarios that participants came up with during the course of the study.

Provided scenario familiar

The first prepared scenario (described earlier in Chapter 3.6: Semi Structured Interview) was presented to the participants and they were asked if they could relate to it. We sought to ascertain participants' familiarity with the situation depicted in the scenario, to ensure the quality and richness of the feedback in the ensuing "design" conversations. In this scenario, Susie paused riding her bicycle while listening to music in order to change the music. All of our participants were able to relate to this scenario and stated that they had been in similar situations.

I have been in Susie's place. I have been where Susie is. Well not like bicycling but the same scenario. – P2

Participants also found this scenario useful for comparing the problems that they faced regularly. A few participants even found some of the situations described in the scenarios more extreme than those they faced in daily life. For instance, P1 said that she would find HALO useful even though the situations she faces are less challenging than the one in this scenario.

These kinds of situations sometimes happen to me as well but my situations are not that wrong [or] challenging as in this situation. In this case, the Remote is very useful. – P1

Appropriateness of the scenario for building a discussion

We asked all of our participants to tell us scenarios in which they thought a HALO-enabled portable music player, capable of recognizing and acting upon human affect, would provide the most value for them.
Most of them did not have a hard time coming up with one. Researchers were careful not to lead participants when participants were presenting their scenarios. During this stage participants were encouraged to generate and describe as many scenarios as they felt like. Considering the possibility that some participants could have a hard time creating a scenario, we kept two additional scenarios on hand to give them as examples. The only participant who initially had slight problems coming up with a scenario was P6.

Like I can't really think of particular situation but if the main purpose is to help you make decision while reducing the effort for you to make decisions. I mean wouldn't you want that! It seems to be a good idea. It's not really concrete on how you will apply it. – P6

Later he found multiple advantages of utilizing HALO in the scenario we provided to him.

Then if that's the case she wouldn't need to stop. She wouldn't need to remove the player she wouldn't have to press the button on the player and then she could continue cycling without stopping and at the same time the Remote she uses will allow her to do all these actions without actually taking the player and I find that out very useful. – P6

Since none of the participants struggled to come up with a scenario that made sense to them, the researcher did not need to use the second and third scenarios, which had been prepared for the case where a participant was not able to come up with one.

Range of scenarios

In the paragraphs below, a non-exhaustive but representative sample of the scenarios participants came up with during interaction is presented. These samples are short summaries of the scenarios participants imagined and discussed with the facilitator, extracted from the video transcriptions made for each study session.
More importantly, these scenarios were referred to and used primarily to facilitate the "design" discussions (described in Chapter 5) of the HALO-enabled portable audio player.

Participant 1

P1 described a scenario where she listened to music in bed. Her motivation was either to relax and slowly fall asleep or simply to listen to music for pleasure. In the case where her goal was to sleep, she was either picking the songs she wanted to listen to or using a playlist that she had created beforehand. The two main problems she faced in this scenario, both described as things that prevented her from sleeping, were: the disruption caused by looking at the bright screen of the music player, and the physical and mental attention required for continuously selecting songs to maintain the harmony of songs played one after another. This participant imagined a HALO-enabled portable player playing songs she would like to hear while avoiding energetic and loud songs that would prevent her from sleeping. She later described this behaviour as HALO choosing songs to match her mood and filtering out songs that she did not wish to listen to at that time.

Participant 2

P2 described listening to music while commuting on the bus as one of her regular activities. Her primary goal in listening to music in this situation was to avoid the motion sickness that could occur while commuting for long durations. In addition, P2 also told us that physical access to her portable player was often constrained by an umbrella or a bar that she held in her hands. She imagined a HALO-enabled portable player would help her stay excited about the music she was listening to by recommending and playing songs that she might like. Generally, she desired HALO to take actions to maintain the joy she got from listening to music.
As an example, putting a song on repeat when it was enjoyed and taking it off repeat when it was no longer appreciated were two typical behaviours she desired from a HALO-enabled audio player.

Participant 3

P3 described several scenarios where he would like to see HALO working. P3 imagined HALO would understand his moods and play songs that he wanted to listen to according to his mood. He imagined HALO helping him play songs that were comfortable and relaxing, instead of loud and noisy songs, when he was listening to music before falling asleep or in situations where he felt stressed, such as before entering an exam. He also imagined HALO would recognize that he wanted to listen to up-tempo songs when he was walking outside or exercising. Both of these scenarios were based on actions that he currently performs by physically interacting with his media player. HALO would therefore be used to replace the physical actions that he currently needs to take.

Participant 4

P4 described several scenarios in which HALO could be utilized. He thought HALO would be useful when he was exercising at the gym and wanted to listen to songs that would keep him motivated. The second scenario was the one that he spent the most time discussing with the facilitator. In this scenario he imagined studying in the library and listening to music to keep his concentration and to feel productive. He imagined that HALO would adjust the volume, choose the types of songs that he would like to hear to keep him engaged, and play his favourite songs when he was taking a break from studying.

Participant 5

P5 was very interested in using HALO anywhere as long as it was hands-free. The main scenario he presented was working in his laboratory, where his hands were occupied with conducting chemical experiments. This participant indicated that the greatest value of HALO would be controlling the music player hands-free. He also described a similar value when listening to music while grading exams.
Finally, this participant described the music he listened to when walking to football practices as motivational and stimulating.

Participant 6

P6 imagined using HALO to help him skip songs when he was annoyed with the song that was playing. The situations in which he described wanting this behaviour were when he was studying in the library or working out in the gym, either running or lifting weights. He also reported that he listened to music in his car using an auxiliary jack to connect it to the car's speaker system. Since driving involves a safety concern, the experimenter directed the participant to discuss the interaction design scenario of studying instead.

4.3.4. Concerns

Participants thought critically during the session and raised some anxieties and doubts about the technologies, descriptions of HALO and scenarios presented to them. We report these concerns in this section.

Annoyance and distraction from haptic sensations

P1, P2 and P6 stated some concerns about the haptic technologies presented. They found the tactile sensations that came from the haptic displays to be unnatural and were not able to relate them to their previous experiences. Besides the unfamiliar haptic feeling, several participants also found the sensations distracting.

For me, if I have an MP3 player with this these kinds of vibrations I may be not like it very much because when I feel the vibration I don't start to pay more attention to music. When I feel this, I have to put more attention on my sense of touch and it will distract me. – P1 (while using the tactor)

It feels like when I'm taking my blood pressure. It's a little bit annoying especially if it was on for a long time.
It would get annoying at first and I won't feel it after some time – P2 (when wearing the haptic sleeve)

Adjustment and training time

P2 and P4 were concerned about the time required to get comfortable with haptic signals and the time required to train the biological recognition system. P2 expressed that she would need time to establish a relationship with a HALO-enabled portable player, similar to one between friends.

If we established that connection at the beginning for a long time before the scenario ... I guess it wouldn't be annoying then. If it's like friend who notices that you're tired or whatever and says "oh cheer up" or something. It will be nice to know that it's not just a device and you but more like two entities interacting with each other. – P2

Intrusiveness of GSR and EEG sensors

The GSR bookmarking and Emotiv EPOC headset demonstrations gave participants a chance to try the GSR and EEG sensors. P1, P2, P3, P4 and P6 all directly or indirectly commented or suggested that they found the sensors, or the setup of the sensors, to be intrusive. A few participants implied this opinion through comments on aesthetics and by suggesting possible alternative forms for the sensors. Other participants directly asked whether it would be possible to hide the sensors so that they would not be externally noticeable at all.

Umm...I guess if you first look at it [P4 is looking at the headset], just the design of it. It's kind of it seems like a bit intrusive to put it on. So I guess I prefer having a closed system around like a helmet or something like that. That way, as a user, I wouldn't see what exactly it's doing. – P4

Participants found the finger-strap GSR sensors less intrusive and easier to set up than the Emotiv EPOC EEG headset.

Accuracy, reliability and detail

Accuracy and reliability of biological signals emerged as one of the most prominent concerns of the participants.
P1, P2, P4, P5 and P6 were sceptical as to the reliability of the recognition of biological signals. They stated that their concerns were based not only on their understanding of biological recognition technology, but also on the risk of misinterpreting complex human emotions. Many of the participants were particularly worried about a false recognition occurring in situations where a wrong recognition was most likely.

I have concerns about privacy and miscommunication of the emotion. There could be multiple feelings and they are not boxed into any particular mood. Sometimes they are very complex and entangled to each other. It's much more complicated than that. – P2

I think that the idea is good but then... I think there are also a lot of problems… Like how accurate it is going to be? ... Like is it actually what you are feeling?... Could there be any disconnect between how you are feeling and how machine thinks you are feeling? – P6

Another important aspect of reliability was the detail of the biological signal recognition. The granularity and limitations of the HALO reasoning intelligence were critical for participants when imagining interactions with the device. Participants did not hesitate to give examples of cases where they believed a misinterpretation of the emotional signals would be more likely.

One thing though, would be, I guess; the fine tuning of it [remote]. Being able to detect what I want, but accurately. So like say for example; I want to listen to something uplifting and it gives me the wrong type of song or something like that. – P4

Losing control and overriding

Participants sometimes expressed that they might feel as if they were not in control of the music.
Similarly, P1, P2, P4, P5 and P6 expressed mixed reactions to leaving control of their music to HALO, especially when talking about implicitly controlled scenarios in which HALO was imagined to pick songs based on the user's affect instead of his or her direct commands. These doubts presumably pushed several participants to express a desire to test the system before deciding whether to adopt it. On the other hand, knowing that the user's manual commands (e.g. using physical buttons on the portable device) could always override the decisions made by HALO reassured most participants.

Not uncomfortable enough to not try it out I think. I'd feel comfortable that … I think giving that (music) control to a device when it comes to something things like that that are less important technic – P5

So I'm not concerned about losing control because you are still making the decisions it's just with less effort. I think it's quite a useful device – P6

Safety and privacy

Participants raised concerns about the safety of the HALO-enabled audio player because it would be mounted on the body. P2, P3 and P5 were concerned about both safety and privacy. P3 expressed that he did not like the idea of wearing any sensor mounted on his head, or anything closely related to his brain; he was afraid of being injured by an electronic device. Moreover, P2 was more worried about her emotions and the implications for her privacy, even though an externalized indication of the sensors' readings was not part of any of the experimenter-supplied scenarios.

Is it harmful to human body? I though it's harmful to human body because it looked like really bad or harmful to human because machine can understand the brain.
I just think it's installed on my brain… I people wear on brain it's scary [and] looked like harmful – P3

My initial reaction was "Woww this really neat kind of thing!" I probably would not use it [Emotiv] at home or daily life because of privacy. Although I'm with my family it would still be a problem to show my emotions to them. I tend to keep my thoughts and emotions to myself personally. – P2

4.4. Desired HALO-enabled Portable Player Behaviour

Results in this section summarize participants' expectations and desires for how a HALO-enabled portable media player would work. They described these while imagining themselves using HALO in the scenarios that they came up with. Results are divided into two subsections: specifications and constraints on HALO behaviour, and desirable uses of HALO. These specifications and constraints were the requirements that needed to be satisfied to ensure that participants' primary goals for HALO could be achieved. In the second subsection we report the desired functionality, motivation and behaviour for HALO-enabled portable media devices. These findings are based on the study participants' desired usage scenarios.

4.4.1. Requirements for a HALO implementation

The results presented here are the requirements, dependencies and barriers that HALO would need to fulfill in order to satisfy participants. Without these minimal capabilities, participants thought that HALO would be too annoying, too compromising of privacy, or unsafe to use. Some of these requirements arose from a concern about the technology and have therefore appeared previously in the concerns over material provided to the participants (Chapter 4.3.4: Concerns). Here, the points raised were specifically related to HALO.
Participants were asked to assume that these problems had been dealt with as much as possible when considering scenarios of HALO-enabled portable player usage.

4.4.2. Less intrusive biological sensors

Some participants found the current biological sensors intrusive and aesthetically unappealing. P2, P3 and P4 wanted sensors that would be easier to put on and less noticeable from the outside. For example, P3 expressed this desire by wondering if one could use alternative finger sensors instead of the more intrusive sensor headset.

It is impossible to install from fingers, I wanted to push the box? Because I think it's more comfortable in my finger. I want to use it using the [finger sensor] not the brain [Emotiv EPOC headset]. – P3

Umm...I guess if you first look at it [P4 is looking at the headset], just the design of it. It's kind of it seems like a bit intrusive to put it on – P4

4.4.3. Time commitment

P2 and P3 identified two concerns related to the time involved in using HALO. P3 wondered how long it would take to train HALO to accurately recognize individual signals. P2 thought it would take her a long time to get used to using HALO. Both of these time commitments are requirements that need to be taken into consideration.

If we established that connection at the beginning for a long time before the scenario, I guess it wouldn't be annoying then. If it's like friend who notices that you're tired or whatever and says "oh cheer up" or something. – P2

4.4.4. Effect of context

P1, P5 and P6 raised the importance of HALO understanding the context in which people were using it. Participants imagined that in particular contexts, a user's biological signals might be harder to pick up or might require a context-specific response from HALO. A conversation about music with friends was an example given by two participants.
In this scenario, these participants thought that HALO would be more likely to make a mistake than in contexts where a user's intentions are more straightforward and there is no activity that could interfere with biological signals. The issue of context was also raised by P1 in regard to her own response: she felt that she would not be tolerant of HALO's mistakes when she was feeling sensitive.

It's very important for me to listen to correct mood of music (sequence of songs that are right for the context and mood the user is in) especially when I'm very emotional. – P1

For example if you say "next song" while talking and player starts to play next song, it's a mistake. Maybe you should be able lock the player someway. It must be reliable while using it. – P1

4.4.5. Need for reliability and detail of understanding, and notification

The detail of biological affect recognition played a crucial role when participants were designing the behaviours of the proposed system. All participants envisioned HALO behaviours differently, but all expected high reliability in the recognition of their biological signals. Some of the participant-envisioned behaviours are not technically possible in the near future, because they consisted of far more advanced commands than near-term technology can offer. This misconception of affect recognition sometimes caused participants to imagine capabilities close to "mind reading", such as HALO starting to play a song that they wanted to hear. Such misinterpretations were observed most strongly in P2, P3, and P6.

I was thinking if you can think of 'push' you could also think of names as well right? So that's how I imagined how interaction would be. You will envision the name of the artist and then name of the song.
That's how I see it. That's probably how I want it think of an artist, think of a song and it will go to it. – P6

I think the most basic thing to have to be able to skip to next song but then a more advanced functionality would be to be able to go a specific song. – P6

Participants' perceived utility of HALO was highly dependent on the accuracy of the biological recognition. Those who imagined more detailed and reliable recognition of their intention via affect (in some cases beyond what will be feasible for some time) placed a higher value on a HALO-enabled portable media player. On the other hand, those who imagined frequent false recognition of the user's affective state placed a lower utility on it. The uses that participants imagined for HALO were dependent on the presumed affect recognition accuracy of a HALO-enabled portable audio player. For example, P6 preferred a confirmation prompt only when the affect recognition reliability was low, because he did not want to be overwhelmed with frequent requests.

If it works 100 percent there is not really need of a confirmation. If it works well enough I really don't see a need for notification. But if it's really inaccurate let's say 10 percent of the time there is no way I would not want a notification. – P6

Desire for notification was also connected to the complexity of the command, as illustrated by this imagined (infeasible) command.

I think I definitely want some sort of response after I say 'Beatles. Abbey Road'. It would be somehow asking in return like 'is this correct' like 'have I skipped to the right place?'. That would be sort of confirmation would be nice because it's a complicated command. – P5

4.4.6.
Privacy

P2 and P5 were critically concerned about the privacy of their emotional states. They expected the information gathered through the biological signals to be kept personal, or to not be visible to others unless explicitly permitted.

It's scary because they could track your emotions and if you have really have strong emotions of something that will be portrayed on the screen. It is like reading a mind almost in a way. What you are thinking is now projected on visible screen and everyone can see it. If sometimes I want to hide something it will be visible to everyone – P2

4.5. Goals of HALO-enabled Portable Audio Player Practice

This subsection reports the objectives and expectations for HALO-enabled portable media players as explored during the study.

4.5.1. Enhancing music experience using new technology

P1 and P4 imagined that the experience they had listening to music could be directly enhanced by haptic signals; for completeness we report these ideas even though they fall outside the scope of HALO as a vehicle for implicit control. They imagined signals that would work as natural cues complementing the music. The goals they had for these cues varied. Ideas included copying an audio cue in the haptic modality to enhance the experience of listening to music, and cues to help them monitor their heart rate while exercising.

But for the purpose of MP3 player perhaps [this] could this fall into same concept as … Like you know there are programs if you are listening to music it will be giving you a visual show. Could this fall into same principle? It could be vibrating, reflecting what's happening there. Cause that is pretty neat. – P4

4.5.2. Controlling music player to instruct traditional play/pause/skip commands

All participants imagined HALO understanding and executing traditional media control commands (play, pause, resume, skip, etc.). Some participants also imagined browsing albums, artists or songs using HALO.
If I'm doing homework and listening to something and so I'm writing and then I say "Oh I wanted to switch song", so I don't have to click physically, put down my pen, go to the iPod to fix it. I just think "Oh I wanted to switch songs because I feel like listening to this" and then I just do it all happily. Actually which is pretty neat because you can keep going and you don't worry about it. And then you if like say "stop", it will just stop. That's pretty cool. (Laughs). – P2

There is a correlation between the types of commands that participants imagined using and the shortcuts or commands that are enabled in many popular media players (e.g. play, pause, previous song, and next song). Some participants, such as P5 in the example below, imagined other, less traditional commands. An example is skipping to a particular artist, which would currently require a user to execute a series of commands on any media player.

You can have 'skip to artist', where it starts at the beginning of the artist and goes through. 'Skip to artist' where you have a random assortment of that artist. Same thing with album, you can skip through a specific album or skip to any point in that album sort of more of a shuffle and skip the song, just sort of self-explanatory. – P5

4.5.3. Understanding mood and user preference trends

P1, P3, P4 and P5 wanted HALO to support playing music that corresponded to a particular mood or helped to change their mood. Participants imagined HALO playing the right music to get and keep them relaxed when getting ready to sleep, energized when working out at the gym, focused when studying in the library, and entertained while doing a boring activity such as riding a bus or watching an experiment.

Secondly, it can choose music to my mental situations. If I'm stressed it will play relaxing music instead of rock things.
There might be some cases that it might be useful where it makes a decision for me. – P1

It [HALO] help me to listen to comfortable music, because it [Remote] can choose the comfortable songs for me before going to the bed it can choose all of the comfortable songs; not the noisy songs. If this can choose it and it can understand my feeling, it is possible to help me, not distract me. So it's very helpful for me. – P3

So while listening to it, it [Remote] could somehow see that you wanted to change the song or do something like that. That would be very interesting. So you don't have to go to touch the screen or something like that. You just listen music or you can program it to know what songs you like or what mood you are feeling it will be pretty cool – P4

I don't know how it would be done but... interpreting mood into that [Remote] would be very very interesting. Especially just because that's such a complicated command that right now I have to build a playlist based on which is a lot of individual commands to the computer, like which songs which order which artists. But if you could somehow do it so that it would respond to your mood and create a playlist and adjust that playlist depending on your mood… that would be very interesting. – P5

If it's a slow song coming on and I'm starting to lose focus or it would be nice to switch over to something a little more upbeat and kind of wake me up and keep me going again to do whatever is that I'm doing or have done. But the mood switch would be nice. – P5

Although mood was most often tied to particular goals, HALO was also expected to understand trends in any activity, as well as trends in the music being listened to. Participants expected that HALO would not play 'completely random' songs one after another.
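This expectation can be read as a simple filter over the upcoming play queue, keeping only songs whose character matches the inferred mood. The sketch below is purely illustrative: the song attributes, BPM ranges, and function name are invented for exposition and are not part of any participant's description or of the HALO design.

```python
def filter_queue(queue, mood):
    """Illustrative only: drop queued songs whose tempo clashes with the
    inferred mood. The BPM ranges here are invented placeholder values."""
    bpm_ranges = {"relaxing": (0, 100), "energetic": (110, 250)}
    low, high = bpm_ranges[mood]
    return [song for song in queue if low <= song["bpm"] <= high]

# A toy queue: with an 'energetic' mood inferred, the lullaby would be
# filtered out rather than following a high-tempo song.
queue = [
    {"title": "Mellow Lullaby", "bpm": 72},
    {"title": "High-Energy Anthem", "bpm": 160},
]
```

A real system would of course infer the mood from physiological signals and use richer song features than tempo alone; the point is only that "not completely random" reduces to constraining each next song by the inferred listening context.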
One of the participants' requirements was for HALO to play songs that corresponded to the previous songs in some respect. A representative example was to keep playing energetic, high-tempo songs one after another, instead of following a high-energy song with a mellow lullaby. This expected behaviour was also sometimes described as the portable audio player autonomously filtering out upcoming songs that do not suit the mood of the user.

It should know that you don't want to listen to a lecture, especially when you are riding a bike or something. You are doing an activity like that you are enjoying yourself. So you want to listen to a type of music that matches your mood, you don't want to drop off to a sad song. – P4

Actually I choose 'all of the shuffle' function but I really don't want to shuffle all of the songs because I want to listen all of the comfortable songs. I want to filter this kind of up tempo songs, to suit my feeling. – P3

4.5.4. Partial notification of system status

System status was important for P1, P3, P4 and P5. They wanted to be able to verify that HALO was making correct decisions. However, notifications alone did not always satisfy them in this respect, because they wanted these notifications to be subtle or easy to ignore. Participants expressed a desire to understand the status of the system through either a haptic or a visual display.

If he could change the music because of my feeling; I want to know what happened or something. I want to know what happens to the system or to the songs – P3

A feedback would make me more comfortable with the device. Sort of a notification of that is doing something. It wouldn't have to be – you know a large amount of information or anything – just like a vibration kind of like a phone.
Very small amount of information but it's enough that you notice it and you process what's going on and you can ignore it or address it depending on what you feel like. Especially in this kind of device like this it would be nice – P5

4.5.5. Minimizing attention and distraction

All participants imagined that a HALO-enabled device would require only a minimal amount of their attention. Participants frequently expressed that they would like these devices to recognize the user's ongoing activity in order to provide appropriate feedback, for instance in the case of an interruption. They wanted HALO to pay attention to their state and respond helpfully to such interruptions.

If you had it setup such that it would just instantly pause whenever you are distracted. I think that would be incredibly useful. Especially on the situations like airplanes or bus rides or public transit and you have to interact with someone all of a sudden or there are things like that. I think that would be very interesting – P5

Participants were concerned about possible distractions caused by the notifications and confirmations involved in communicating with HALO. They wanted these communications to cause as little distraction as possible, and suggested that either a haptic or a visual display would be best. Participants imagined communication requests from HALO as subtle signals that could be ignored easily. P1, P3, P4 and P5 stated this requirement.

I guess a small notification would be ideal then. Something that is not distracting but sort of notifying you that it has to do something. – P5

HALO asking for confirmation and informing users of its actions are the two main types of communication that were investigated in the study. Participants saw both notification and confirmation as potential distractions. Since confirmation required a conscious response, it was seen as an unavoidable distraction.
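Taken together, the preferences reported in this section and in Chapter 4.4.5 suggest a confidence-dependent communication policy: act silently when recognition confidence is high, give a subtle, ignorable notification at moderate confidence, and ask for explicit confirmation only at low confidence. The sketch below is a hypothetical reading of those preferences; the thresholds and function names are invented, not part of HALO's specification.

```python
def choose_communication(confidence, confirm_below=0.6, notify_below=0.9):
    """Hypothetical policy sketched from participant preferences:
    high confidence -> act silently; moderate -> act but give an
    ignorable haptic cue; low -> interrupt and ask for confirmation.
    The numeric thresholds are invented placeholder values."""
    if confidence < confirm_below:
        return "ask_confirmation"   # risky inference: worth interrupting
    if confidence < notify_below:
        return "act_and_notify"     # subtle cue the user can ignore
    return "act_silently"           # reliable enough that no prompt is needed
```

This captures P6's comment that a 100-percent-accurate system would need no confirmation at all, while a highly inaccurate one should always notify.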
Participants imagined this communication to be necessary when HALO had low confidence. The utilization of confirmation and notification was therefore highly dependent on the accuracy of HALO. Two participants voiced this opinion, commenting that they would not need to be notified if the accuracy and recognition rate of HALO were high.

If it works 100 percent there is not really need of a confirmation. If it works well enough I really don't see a need for notification. But if it's really inaccurate let's say 10 percent of the time correct there is no way I would not want a notification. – P6

I guess if it was really accurate I would like it to be totally automated but for more practical standpoint. I'm trying to think of it mostly being hands free. If there was some sort of signal almost asking or something like that nature 'do you want me to pause?'... I guess a small notification would be ideal then. Something that is not distracting but sort of notifying you that it has to do something. – P5

4.5.6. Preventing mistakes or taking measures

In relation to their concerns about reliability, P1, P2, P4 and P5 thought of measures that could be employed in a HALO-enabled portable audio player to prevent mistakes. One measure that came up frequently was the use of confirmation when HALO had lower confidence in its recognition (as discussed above). Beyond this, participants suggested that HALO should evaluate the risk of possible misunderstanding due to the context the user was in.

For example if you say "next song" while talking and player starts to play next song, it's a mistake. Maybe you should be able lock the player someway. It's just must be reliable while using it. – P1

Sometimes the mistake could come not only from HALO but from the user.
In the example below, one user (P2, who clung to the idea of a more advanced, explicit command recognition mode for HALO that is not feasible) does not have a clear decision for HALO to follow.

Sometimes feedback could be useful. For example if I thought that I wanted to turn it off when I didn't. I accidentally thought that I wanted to turn it off but I didn't want to. I wanted to keep it playing. – P2

4.5.7. Overriding control

For P2, P4 and P5, the capability to override HALO's control was reassuring. Participants understood that HALO could misinterpret their emotions or decisions, so they wanted to be able to take control of the player at any time.

I think probably my reflex would be to go for a manual approach as soon as there is an error, some sort of button set that would override whatever the remote use was. I can let it go and if it makes a mistake I can manually switch it myself. – P5

4.5.8. Addressing privacy, safety, and intrusiveness

Participants suggested solutions to their concerns about the privacy, safety and intrusiveness of sensors. For example, P2 was concerned about the privacy of her emotions, especially after she observed the Emotiv EPOC headset projecting her emotions on the screen. A device with the capability to read her emotions was less appealing to her; she would want to be able to restrict this capability of the device.

It would be nice and better if it is possible to filter, not show, some of the things that I'm feeling or thinking. This would reassure me a bit but still there is a chance that the computer can go haywire and make some mistakes. – P2

P3 was uncomfortable with an electrical device connected to his brain, including EEG sensors. He suggested using GSR sensors instead of the headset sensors for the same purpose, finding GSR sensors less intrusive than EEG sensors and easier to attach.
Although there is no known health risk in using EEG sensors, P3 had concerns about the safety of the headset sensor. Based on these concerns, he wondered about the possibility of using alternative sensor types or sensor locations without losing any recognition power.

Is it impossible to install from the finger? I wanted to push the box? Because I think it's more comfortable on my finger. I want to use it using the [finger sensor], not the brain [headset]. It's less scary and less strange [with finger sensors]. I think I just feel scary. Because it's brain… It's connected with the person. More important part of the body. On the finger it's more easy to attach, not so scary. I think it's really really interesting. – P3

Another concern, about the aesthetics and visual intrusiveness of the sensors, was raised by both P3 and P5. Both participants wanted to be able to hide these sensors by positioning them in garments or accessories; examples they gave included t-shirts, bike helmets, and rings.

I guess if you first look at it [P5 is looking at the headset], just the design of it. It's kind of, it seems like a bit intrusive to put it on. So I guess I prefer having a closed system around [your head], like a helmet or something like that. – P5

4.5.9. Hands-free control

The most prominent advantage of the HALO-enabled music player identified by participants was the capability to control the device hands-free. P2, P3, P4, P5 and P6 repeatedly expressed that they would benefit from HALO when their hands were occupied with a task, such as holding on to a bar on the bus or lifting a free weight while exercising.

See, if I'm doing homework and listening to something and so I'm writing and then I say "Oh, I wanted to switch song", so I don't have to click physically, put down my pen, go to the iPod... to fix it.
– P2

Even when you have a playlist sometimes you don't like the songs that are up next in the playlist. Skipping without using your hands would be cool. – P5

4.5.10. Control of repetitive tasks

P3 and P4 stated that they would be willing to give up control of their player during two different activities. P3 would modify a playlist at his computer by repeatedly deleting the songs in it that he didn't like, tuning a new playlist to suit his musical preferences. He saw HALO as an opportunity to help him modify these kinds of playlists: it could identify such songs by monitoring his affect while he listened to them. P4 said that he often skips songs, particularly when the remainder of a song is very similar to its earlier part. He suggested that HALO could skip to the next song when the current song reached this point.

Knowing when to change songs maybe, just cause there is always like the last 45 seconds – for vocal songs mostly – where it's not really anything and I don't want to listen to that. I just skip that part. I always have to skip just that last part of the songs. Last 30 seconds of the song where it's more like a repeat, like a lead off. I always skip those parts just because I want a new song now. – P4

4.5.11. Achieving deeper connection and control through personalization

P2, P4 and P5 felt that an outcome of using HALO would be a deeper and tighter connection with their personal media player. These three participants commented that they would become more dependent on their device over time, imagining HALO and the user as two entities interacting with each other.

That would be a better communication because you are not limited, I guess. You and your device are more connected in a way. So then you are not limited to the physical buttons. You can express almost anything! You are not limited to play, pause... whatever.
– P2

Chapter 5

Discussion

In this discussion our main purpose is to organize the feedback received through our participatory HALO-enabled portable audio player design process (detailed in Results, Chapter 4) into meaningful categories. We must first inform this transition by revisiting our initial research goals. We conducted the present study with the goal of extending Hazelton et al.'s single-user but highly in-depth participatory process [14] to multiple potential users. The study reported here was intended to assess the kind of variability (in terms of environments, reasons for listening to music, different audio player models, music player capabilities used and points of dissatisfaction with current usage) we might find in a larger sample of participants who used portable media players frequently and heavily. Furthermore, we wished to understand the relationship between the behaviours people envision for a HALO-enabled portable media player and their current media player usage and music listening habits. The purpose of this chapter is to translate the rich feedback collected from six participants into a set of user requirements and personal goals that could or should be accommodated by a HALO-enabled portable player.

In the following sub-sections, we first consider points of similarity and diversity in our users' profiles. Next, using the HALO system behaviours imagined or proposed by participants – a few common to most participants, others unique – we discuss elements of HALO systems that were explored during the sessions. From there we move on to the question of how to design HALO to support a larger user space – e.g. by employing one vs. multiple designs, or elements that customize adequately to an individual user profile, scenario or activity in order to achieve satisfactory utility.
At the end of this chapter we present an initial set of design guidelines and enumerate some design requirements which future researchers can use to create the next generation of low- and high-fidelity prototypes. Finally, we critique the variant of qualitative methodology we used in this study.

5.1. Similarities in Participant Profiles

We found many similarities in our participants' general attitudes towards music players and their customizations, independent of brand, capacity (storage volume) and contexts of use. All participants stated that they spend on average at least 20 hours a week with a music player and carry their player daily in various mobile and stationary contexts, such as exercising, walking, using public transportation, at work and in bed. This suggests that using portable media players for listening to music is a common activity for our participants and one that they enjoy, an observation also supported by the fact that a few of them indicated music as their main hobby. Reporting an average time spent with a media player is not an easy question to answer, and our study did not try to verify participants' answers; participants' self-reported hours of usage should therefore be considered approximations. Participants' responses to our other questions convinced us that they had the experience with their music players needed for our study.

Participants engaged in scenario sharing or development in two ways: they described their own current portable usage, and were asked to give reference scenarios during design discussions of the prospective HALO-enabled portable player. In both cases, all participants exhibited a similar range in terms of mobility and multitasking while using their portable player. They described listening to music in active contexts such as exercising or public transportation, as well as in less active contexts such as sleeping or studying in the library.
All participants identified music listening as an occasional secondary activity in various contexts. Commuting, exercising and studying/working were the most common contexts in which participants reported listening to music; less frequently they mentioned simply enjoying music by itself on a couch or in bed. In some multitasking cases, however, it was not clear whether listening to music or the other task was more important. Extensive music listening experience was one of our recruitment attributes, but these patterns are consistent with observations in the literature on music consumption, e.g. the deeper study of what, where and why people listen to music by North et al. [33].

5.1.1. Pain Points in Interacting with Music

Participants described similar issues in using portable audio players, independent of the brand and storage volume of their players. We previously presented the pain points (Chapter 4.2.3: "Pain Points") reported by all participants, such as having to interact frequently with the player to control music or maintain playlists, inconsistency in the mood of consecutively played songs, the undesired effects of pausing a primary activity to interact with the player, and physical constraints on interaction with the device. These issues can be partially explained by participants needing to make frequent changes while listening to music, or by their anticipating the need for change and therefore spending effort to create playlists. For example, our participants responded positively to the question: "Are there songs that you still keep in your player or playlist although you usually skip them when they come up?" This suggests that participants sometimes misjudge which songs they will actually want to listen to in the future, an annoyance that is likely to persist until the user is motivated enough to remove the song from the player or playlist.
Participant responses suggest that song choices can become unfavourable when the experience of a particular song is no longer appealing (an effect that can occur over time). In either case, it would seem that portable players are not successfully addressing desired real-time song selection, making them painful to use. Playlists do not solve the problem completely because they need frequent manual maintenance and adjustment. High storage capacity negatively impacts both the mental (selection decisions) and physical (browsing or searching) effort of selection. Thus, selecting songs will become harder and more time-consuming as larger player capacities and online music streaming become available, making selection an increasingly attractive target for automation.

5.1.2. Pain Points in Active or Multitasking Contexts

We heard similar complaints from participants who used different player brands (i.e. the user interface varied) across different environments, suggesting that there may be an intrinsic problem in conventional approaches to portable media player interfaces, one which the HALO alternative could help to solve. For example, P2, P4, P5 and P6 reported being unable to interact with the device because their hands were busy, and the undesired effects of pausing the music, as major pain points. These participants identified scenarios where their hands were not free – such as commuting on the bus or lifting weights while exercising – as contexts in which they listen to music or have issues with their music player. It is not surprising that these participants suffered from the unavailability of their hands, since they need their hands for a number of tasks (Chapter 4.2.3). Some users did not identify this pain point. This could mean that they are satisfied with the player; but it also might be because use in these contexts is considered a relatively minor inconvenience that seems less important compared to other challenges with the music player.
Or, these users might simply avoid using their player in contexts that require other physical interaction because it is too difficult. The latter explanation is supported by the results: the contexts in which the participants who did not complain about the unavailability of their hands use their music player (Chapter 4.3.3) suggest that they listen to music in contexts that compete less for physical interaction.

In summary, the most important pain points expressed by users appear to relate to interactions made with the player for music selection, and to problems that arise in contexts where users multitask.

5.2. Design Implications of HALO

5.2.1. Explicit and Implicit Player Control

We asked participants to imagine behaviours of a HALO-enabled portable player as a way of controlling music in response to the recognition of biological signals. Their responses to this request are referred to as "control" in the rest of this chapter. Our investigation of the scenarios that participants designed for portable media playback suggests that control can be delivered explicitly or implicitly (communication styles), and that it can operate over a list of songs or over an individual song (as previously introduced in Chapter 1.2: Haptic-Affect Loop Framework). Participants were generally creative in imagining scenarios in which they would find utility. Different participants perceived and valued these functionalities differently; these differences are probably related to their current utilization of music and are discussed below.

Variants on explicit control imagined by participants

Although the experimenters defined "explicit control" to participants as a deliberate manual interaction – like pushing a button – participants came up with another version of explicit control.
This involved voluntary, conscious control over their physical processes in a way that could be sensed – for example, giving an explicit command to HALO by tensing oneself to alter galvanic skin response or heart rate. These controls are examples from the communication style axis described earlier in Chapter 1.2, where this behaviour falls on the explicit side. This communication style of music control was envisioned as useful by all participants for at least one of their scenarios. Compared to traditional explicit control (i.e. manual), which was one of the main pain points in current usage, this mode was attractive because it did not suffer from physical interaction constraints. Additionally, participants' equal interest in a traditional set of commands that control playback (such as pausing and skipping music) and in non-traditional controls that operate on a list of songs (mood and user preference trends) suggests that a wide range of granularity in music control would be useful.

Voluntary emotional control: A few participants overestimated the capabilities of biological signal recognition by envisioning scenarios where a HALO-enabled portable device was capable of making highly detailed inferences (see "Need for reliability and detail of understanding" and Chapter 4.4: Desired HALO-enabled Portable Player Behaviour). Sometimes the voluntary emotion-driven control participants described was close to "brain power" controlled music or "mind reading". We described this unrealistic version of explicit control earlier in the spectrum of communication with HALO in Chapter 1.2: HALO Framework.
Although the researchers did not suggest this type of control to participants, the fact that they imagined such functionality suggests the value participants attach to high detail and control, as well as the importance of hands-free interaction; both could be achieved with voluntary emotional control. Their belief that HALO could offer this type of control is understandable given that they were not expected to have any knowledge of biological signal recognition. We believe that participants' exposure to the Emotiv EPOC headset may also have played a role in encouraging them to consider their biological signals being used in this way.

Direct musical control goals

Musical control goals ranged from simple (e.g. skip an album or artist, go to a playlist) to complex (having an effect over a group of songs). All participants showed interest in simple commands that changed the playback (e.g. skip, pause, next, repeat), independent of the communication style chosen, and found them valuable. The common pain points of simple manual interaction with traditional players in Chapter 4.2.3: "Pain Points" (e.g. inconsistency of sequential songs, unavailability of hands, undesired effects of pausing the song or another task) could be alleviated by the participant-envisioned voluntary emotional control (if such highly detailed recognition and reliability were possible). The controls that participants developed in their scenarios suggest that they wanted to decrease the time and discomfort involved in manually interacting with the music player and to remove the dependency on hand interaction. The "voluntary emotional control" that participants envisioned would make switching, pausing or resuming more convenient as long as user affect was recognized correctly.
It would not solve inconsistency between sequences of songs, nor would it alleviate the mental burden associated with maintaining consistent enjoyment from listening to music or creating playlists. Some participants (namely P1, P3, P4 and P5) wanted other types of control mechanisms. Participants considered implicit biological control of music less attention-demanding than voluntary emotional control. They saw implicit biological control as consisting mainly of long-term effects on music: changes which would adapt the music played to a particular mood or affective state. Participants thought of implicit control as recognition of their involuntary biological reactions to music, such as like or dislike, as well as their longer-term affect towards a list of songs over an hour-long period. The participants' scenarios suggest that another goal for a listening session is to change one's affective state (e.g. altering mood, increasing arousal, etc.); they said that this is a goal that can be achieved only through a listening session and not with just a single song (Chapter 4.5: Goals of HALO-enabled Portable Audio Player Practice). Participants who perceived different advantages for implicit control than for voluntary emotional control were interested in the possibility of HALO providing implicit control. The participants who saw no attraction in implicit control were interested only in using emotion control for skipping songs or artists; they were not willing to go any further toward the implicit side of the emotion control spectrum.

Perceived requirements for functioning implicit control and shared control

In participants' view, implicit communication required HALO to have more sophisticated affect recognition, and a better understanding of the trends in their moods and music listening habits, than explicit communication (these communication styles are described in detail with examples in Chapter 4).
This view was a reason for much of the scepticism about the reliability and specificity of physiological recognition that participants expressed in conversations relating to implicit control. But this wide range of imagined affect recognition capabilities tells us: (1) participants found different system behaviours and ways of interacting with HALO useful, and (2) the definition of HALO that we gave our participants was descriptive and clear enough to create space for creativity without constraining their imagination. Implicit control was perceived as a more subconscious medium and therefore a less clearly defined mechanism (an implicit mechanism is bound to be noisier). Those participants who did express an interest in implicit control, namely P1, P3, P4 and P5, perceived HALO as a more fundamental improvement in their music listening experience – an improvement which could span a longer duration, e.g. an entire sequence of songs or a listening session. Participants were less able to imagine scenarios involving implicit control, where ownership of control is shared between the user and HALO. Participants appeared more comfortable imagining the use of voluntary control than implicit control, and were able to identify easily and clearly what they wanted the HALO system to recognize. Participants who did not prefer implicit control may have hesitated to try this style of communication because of its less clearly defined ownership of control.

Higher-level roles of implicit control: "leading" versus "sustaining" mental state

Implicit control can be understood as broader and more inclusive than explicit control, and it might require a deeper understanding of human physiological signals.
Scenarios presented by participants who were more interested in implicit control generally involved multitasking, and demanded that the user's attention not be fragmented – e.g. studying, work (an intellectually demanding task) or moderate to intense exercise. Participants indicated these activities as their primary goals in these scenarios, where their music listening goal was more comprehensive than simply controlling music. Participants interested in implicit control had the goal of concentrating on work or sustaining their performance in exercise. They wanted music to adapt to this goal and were less interested in fine control of each individual song (Chapter 4.5: Goals of HALO-enabled Portable Audio Player Practice). These users welcomed the idea of not having to attend to music selection, and saw implicit control alleviating other reported problems, including irritation from needing to interact frequently with the device, inconsistency of the songs played and the time required to prepare playlists. In addition, participants imagined implicit control of a music player taking over other routine control activities that they reported as repetitive, intrusive or tedious. These activities included adjusting the music volume or categorizing songs by mood, functions whose definition and implementation as implicit controls are not yet well defined. We assume that some implicit control functions (such as keeping a user who is exercising or working in an aroused state) may be more difficult to design in practice than others, as their purpose is to influence the state of the user in a relatively unobtrusive fashion. A common use of implicit control in participant scenarios was for HALO's music choice to achieve, or lead the user to, a desired emotional state – we refer to this as a "leading dynamic" for HALO control.
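Such a leading dynamic can be sketched as a simple set-point rule; the 0-to-1 arousal scale, tolerance value and names below are illustrative assumptions for the sake of the sketch, not anything specified by participants or implemented in this work.

```python
# Hypothetical sketch of a "leading" vs. "sustaining" dynamic: while the
# user's estimated arousal is below the requested level, lead with energetic
# music; once the target is reached, sustain it with neutral selections.
# The arousal scale and tolerance are assumptions for illustration only.

def music_mode(arousal, target, tolerance=0.1):
    """Choose a selection strategy from estimated and desired arousal."""
    if arousal < target - tolerance:
        return "lead_up"     # upbeat, lively music to raise arousal
    if arousal > target + tolerance:
        return "lead_down"   # calmer music to let arousal settle
    return "sustain"         # neutral songs that maintain the state

# e.g. a sleepy user (arousal 0.3) who asked to feel awake (target 0.8):
print(music_mode(0.3, 0.8))   # lead_up
print(music_mode(0.8, 0.8))   # sustain
```

The tolerance band prevents the controller from oscillating between leading and sustaining around the target level.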
For example, if one desires to feel motivated or awake (a state of arousal that would manifest as a high heart rate and blood volume pulse) when feeling unmotivated or sleepy, a HALO controller could be given this directive and choose upbeat, lively music as a result. Conversely, once the desired level is reached, control could switch to more neutral songs to maintain the user's current affective state.

Relating interest in implicit control to users' current practices

What explains the great variance in participant interest in implicit interaction? We speculate that it might be related to users' current practices, the role of music in their lives, and their degree of scepticism about either the psychophysiological validity or the state of the technology of affect recognition. For some roles of music (e.g. "killing time" or cancelling environmental noise), implicit control lends little extra value. Users who are not inconvenienced by the need for frequent music player interaction or, conversely, are not very concerned with the choice of music, are likely a mismatch for implicit control. Likewise, even voluntary emotional control may not appeal to users who like full control of their music, or who have a low tolerance for mis-gauged automation. Additionally, it was the participants who indicated previous use of music recommendation systems for continuous personalized music (P4 and P5) who imagined HALO implicitly controlling their music based on their mood or affective state. Although this evidence is not enough to validate a connection between preferences in goal and communication style, it raises an interesting idea to explore; the connection between implicit control and use of music recommendation systems is therefore left as an open question for future studies.

5.2.2. Improvements on the Experience

The availability of affect recognition and implicit biological input, together with a haptic display modality, inspired some users to think of applications that we did not foresee. Participants imagined enhancing their music listening experience, and perhaps also their affect, with tactile feedback. They imagined haptic renderings that would be in harmony with the music playing (for example, matching its tempo or rhythm) or with the role of the music and the context of the activity. For example, P4 imagined receiving haptic feedback through the wearable sleeve display while listening to music during his exercise. There are studies which have already investigated positive effects of the haptic modality on exercising [13]. Although enhancing or complementing the music or activity is one possible application where music and haptics are used together, it was outside the scope of our study and we did not investigate it further.

5.2.3. User Representation and System Visibility

In our study we explored how HALO should communicate with the user, in particular how it should communicate its own status and actions and its representation of the user's state or desires. Conventional music players afford a clear master-slave relationship, in which the device is explicitly controlled by the user and there is no ambiguity about ownership of control (although commands might still be confused). A HALO-enabled music player involves a shared-control relationship: it attempts to model the user's emotional state or desires, chooses an autonomous action based on the estimated user intent, and thereby participates in system control. The resulting cooperation should benefit from greater visibility in both directions. Some participants understood this value and wanted more visibility of system status in HALO, especially when they imagined music being controlled implicitly.
Although implicit control is still user-driven, it is indirect and its outcome cannot be counted on, raising the need for users to have some assurance that the system is recognizing their emotions correctly. Participants felt notifications would be unnecessary if the system's affect classification worked with high reliability and accuracy. Altogether, this is a predictable reaction to a situation in which trust in an automatic controller must be earned. The desire for feedback depends on whether participants were more interested in implicit control mechanisms or in voluntary emotional control. Participants who were more interested in voluntary emotional control were less interested in increased system visibility than those interested in implicit control. This could be explained by more rapid anticipated responsiveness: with voluntary emotional control, feedback to the user would arrive quickly as the anticipated change in the media – a more direct and predictable outcome (given that recognition is made correctly) than the response generated by implicit control.

5.2.4. Respecting User Privacy and Ethics

Participants raised concerns about the implications for privacy and ethics of modelling representations of a user's affective state in HALO. Some participants indicated concerns about accidentally sharing private information about themselves with the outside world. Although the definition of HALO given to the participants took privacy into consideration, their attention to the privacy of emotions suggests that HALO should be designed to prevent any information leakage to the outside world, unless a user specifically desires to share that information.

5.2.5. Notifications and Confirmations in HALO

Delving deeper into the visibility issue, we explored two specific methods of communication from HALO to the user: notifications of a completed change, and requests for confirmation of a suggested change.
We asked why, how and when participants imagined they would use each in interactions with HALO. Participants' tendency to prefer notifications over confirmations suggests that they did not want a dialogue that required a follow-up action. By contrast, some participants' comments indicated that confirmation requests could be preferable in situations where the probability of HALO's change being correct is low, giving the balance of control back to the user and thus minimizing disruption relative to reversing an incorrect change. Looking at participant concerns regarding HALO's reliability and confirmations, we saw that users with high scepticism imagined interactions with more confirmation requests. The strong dependence of confirmation and notification use on reliable affect recognition suggests that design decisions around these interactions require more information about affect recognition and its performance in naturalistic settings.

5.2.6. Haptic Communication and Distractions

Some participants saw HALO as a solution to particular problems in interactions with music players, but it also raised new concerns. Participants' desired practices suggest that they were concerned about distraction that could arise from tactile feedback. In our related work section we described research on the careful design of minimally intrusive haptic signals. The form and duration of this study did not allow us to probe this issue deeply, and our sessions did not have time to study the information that should be communicated through the haptic modality. Between the two tactile displays presented during the study, participants exhibited more interest in the wearable tactile arm display and found it more expressive than the handheld tactile display.

5.3. Self-Critique of Participatory Design Session

We understood the limitations of our study from the start.
Our goal was to scale Hazelton's single-user participatory design study to multiple participants using a reasonable increment of per-participant time. The selected number of participants was therefore large enough to explore a range of expected HALO use-cases with multiple subjects; however, due to time constraints, it was not large enough to cluster users into archetypes according to their needs and desires. The demographic was only moderately diverse, focusing on a particular age group, education level and life-stage. Other uses of HALO and other types of HALO users might lead to different results, and a broader study could help researchers better understand HALO's design requirements. Interviewed participants' tendency to wish to please experimenters is a known issue [41]. Participants' reactions to different aspects of HALO during the study ranged from strongly positive to strongly negative, suggesting that the efforts we took (briefing participants to be candid in their responses) were successful and that the data we collected reflects their opinions with minimal influence. The qualitative methodology used to collect data and draw conclusions here is known to be prone to researcher bias [44]; we took measures to prevent and control this bias at both the study and analysis stages by employing intercoder reliability measures. The use of only six participants to obtain these data makes the analysis inconclusive as scientific evidence, but it certainly provides enough basis for suggesting worthwhile design directions. We were also aware upfront, through the work by Hazelton [14], of the potential obstacle of getting participants to even consider the possibility of a successful emotional control technology.
We attempted to overcome this problem by demonstrating related technology to participants, showing them that reliable biological and affect recognition were possible. Our efforts appeared to be partially effective: participant attitudes during the study towards online affect recognition quality were generally positive, but there was still high variance between participants. Concerns regarding reliability, accuracy and privacy were voiced by almost all participants; P2 and P6 were strongly sceptical, which made it harder for them to imagine satisfactory HALO scenarios. The study researchers are also aware that a follow-up study would be very beneficial to validate and build upon the research results of this thesis. Although we initially planned a second study with the same participants to accommodate this follow-up, it proved to be beyond the scope of the project.

5.4. HALO-enabled Portable Media Player Design Guidelines

A primary goal of the study was to compose an initial set of HALO-enabled system design guidelines, to inform future studies and explore the design space for prototypes or applications of HALO-enabled systems. On the basis of our results, a first set of such design guidelines is proposed here:

• Provide access to system status: Clearly communicate and represent the recognized emotion, user status or system action, and allow users to monitor HALO's state changes continuously. The system should be easy to monitor, but should inform its users in a non-intrusive fashion at a reasonable frequency. Messages should not leave users in doubt. Adaptations of the system should be clearly defined and predictable according to the goal of the user.

• Maximize affect recognition reliability: The affect recognition algorithms which feed into a HALO-enabled portable player should be designed to keep recognition reliability high, if need be by avoiding excessively noise-prone contexts.
Designing or testing system functionality in contexts where users' biological signals are noisy or hard to interpret should be avoided. Contexts that involve distinct emotional states or biological signal responses should be explored first, rather than use-cases involving external sources that may affect users.

• Allow modes of voluntary emotional control versus implicit control: Implicit control could be designed for multitasking use-cases, or for when the user requires uninterrupted mental or physical space to engage in activities other than listening to music (such as intellectually demanding tasks or exercising). Voluntary emotional control, as far as affect recognition technology allows, could instead be employed by users who like to be more directly in control of their music.

• Provide an explicit overriding mechanism: Provide mechanisms for manual control override and error recovery. Interaction design should help users prevent and recover from system mistakes easily. Keeping manual interaction techniques is promising for this purpose, but alternative mechanisms such as confirmations could be considered and tuned.

• Use confirmations vs. notifications appropriately: Subtle haptic notifications can be used whenever biological signal recognition confidence is high; when confidence is low, prompt users for confirmation. A confirmation dialogue can prompt users to accept or reject changes, avoiding the disruption that might be caused by wrong behaviour.

• The HALO system itself should not distract: Haptic displays should be chosen for minimal intrusiveness and distraction while keeping their expressiveness high. Avoid large, intrusive system interruptions and large variations in behaviour unless the system has very high confidence in their utility.
• Protect privacy: Information about users that is recognized by, or provided to, the system should be kept private. Any information that is communicated to the user should not be shared with the outside world unless the user specifically desires it.

Chapter 6

Conclusion

This work investigated the similarities and differences in the envisioned behaviour of the proposed Haptic-Affect Loop. Using participatory design methodology, we studied highly experienced users, in terms of their current media player usage and music listening habits, to gather design guidelines for carrying HALO prototyping forward. We also aimed to understand the different use cases in which a HALO-enabled portable player addresses specific "pain points" in the current paradigm of music listening. Participatory studies of this sort are used to include user feedback early in design iterations, to clearly understand the consequences of design decisions, and to support key usages. Our results suggest that residual scepticism about technological abilities impacted some participants' willingness to envision HALO functionality. Moreover, our base of six participants was not enough to make clear and confident distinctions that separate users into groups for distinct HALO-enabled portable media player uses. Instead, a range of scenarios was explored which shed light on several design decisions needed as HALO development proceeds. Our findings for future HALO design can be summarized in two key points:

• Traditional means of precise, quick, low-level user control must be part of any type of music player and should be supported at all times. However, some categories of users, when finding themselves in contexts which involve a high degree of multitasking, cognitive and physical concentration, or a lower priority of music control, see a potential for implicit styles of communication.
They value its potential for protecting their mental or physical capacity to engage in other activities. The explored scenarios involve different use-cases where two forms of physiologically recognized control (voluntary emotional and implicit) could be advantageous.

• Independent of any design and use case, reliable affect recognition, visibility of system status for the necessary sense of control, overriding mechanisms and low intrusiveness are crucial to a usable and trust-inspiring HALO system.

The next steps in this design process must validate our preliminary design guidelines on control dynamics, and the proposed approaches for high reliability and visibility of system status, through use of a variety of functional prototypes. This work will likely need to be performed in follow-up iterative design and testing sessions with careful characterization of stakeholders using quantitative metrics. The present preparatory work provides insight on desired behaviours of a HALO-enabled design, and provides a list of important design guidelines for performing this follow-up work. Finally, recognition technologies must be highly reliable, sufficiently portable and minimally intrusive, and must be tested effectively as the backbone of this technology. Ongoing miniaturization of non-intrusive sensors and the decreasing cost of hardware make the implementation of a HALO system more plausible in the future. We hope that this work brings us closer to this goal: closer to a world where everyday devices read and adapt to human needs seamlessly.

Bibliography

[1] Amemiya, T., & Sugiyama, H. (2009). Haptic handheld wayfinder with pseudo-attraction force for pedestrians with visual impairments. Proceedings of the eleventh international ACM SIGACCESS conference on Computers and accessibility - ASSETS '09 (p. 107). New York, New York, USA: ACM Press. doi: 10.1145/1639642.1639662.

[2] Baumann, M. A., MacLean, K. E., Hazelton, T. W., & McKay, A. (2010).
Emulating human attention-getting practices with wearable haptics. 2010 IEEE Haptics Symposium (pp. 149-156). IEEE. doi: 10.1109/HAPTIC.2010.5444662.

[3] Berg, B. L. (2001). Qualitative research methods for the social sciences. Allyn and Bacon.

[4] Chan, A., MacLean, K., & McGrenere, J. (2005). Learning and identifying haptic icons under workload. First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (pp. 432-439). IEEE. doi: 10.1109/WHC.2005.86.

[5] Chung, J.-W., & Vercoe, G. S. (2006). The affective remixer. CHI '06 extended abstracts on Human factors in computing systems - CHI '06 (p. 393). New York, New York, USA: ACM Press. doi: 10.1145/1125451.1125535.

[6] Costanza, E., Inverso, S. A., & Allen, R. (2005). Toward subtle intimate interfaces for mobile devices using an EMG controller. Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '05 (p. 481). New York, New York, USA: ACM Press. doi: 10.1145/1054972.1055039.

[7] Crawford, B., Miller, K., Shenoy, P., & Rao, R. (2005). Real-time classification of electromyographic signals for robotic control. AAAI '05 Proceedings of the 20th national conference on Artificial intelligence, 2 (pp. 523-528).

[8] Davies, N., Friday, A., Newman, P., Rutlidge, S., & Storz, O. (2009). Using bluetooth device names to support interaction in smart environments. Proceedings of the 7th international conference on Mobile systems, applications, and services - MobiSys '09 (p. 151). New York, New York, USA: ACM Press. doi: 10.1145/1555816.1555832.

[9] van Erp, J. B. F., van Veen, H. A. H. C., Jansen, C., & Dobbins, T. (2005). Waypoint navigation with a vibrotactile waist belt.
ACM Transactions on Applied Perception, 2(2), 106-117. doi: 10.1145/1060581.1060585.

[10] Emotiv Systems. (2011). Emotiv EPOC Neuroheadset. [Online]. http://www.emotiv.com/apps/epoc/299/

[11] Enriquez, M., MacLean, K., & Chita, C. (2006). Haptic phonemes. Proceedings of the 8th international conference on Multimodal interfaces - ICMI '06 (p. 302). New York, New York, USA: ACM Press. doi: 10.1145/1180995.1181053.

[12] Fairclough, S. H. (2009). Fundamentals of physiological computing. Interacting with Computers, 21(1-2), 133-145. Elsevier B.V. doi: 10.1016/j.intcom.2008.10.011.

[13] Ferber, A. R., Peshkin, M., & Colgate, J. E. (2009). Using kinesthetic and tactile cues to maintain exercise intensity. IEEE Transactions on Haptics, 2(4), 224-235. doi: 10.1109/TOH.2009.22.

[14] Hazelton, T. W. (2010). Investigating, designing, and validating a haptic-affect interaction loop using three experimental methods. Unpublished M.Sc. thesis, The University of British Columbia. Retrieved July 18, 2011, from https://circle.ubc.ca/handle/2429/27813

[15] Hettinger, L. J., Branco, P., Encarnacao, L. M., & Bonato, P. (2003). Neuroadaptive technologies: applying neuroergonomics to the design of advanced interfaces. Theoretical Issues in Ergonomics Science, 4(1-2), 220-237. Taylor & Francis. doi: 10.1080/1463922021000020918.

[16] Holtzblatt, K., Wendell, J. B., & Wood, S. (2005). Rapid contextual design: A how-to guide to key techniques for user-centered design. San Francisco, CA: Morgan Kaufmann.

[17] Holtzblatt, K., & Beyer, H. R. (1995). Requirements gathering: the human factor. Communications of the ACM, 38(5), 31-32. doi: 10.1145/203356.203361.

[18] Ishiguro, Y., & Rekimoto, J. (2011). Peripheral vision annotation. Proceedings of the 2nd Augmented Human International Conference - AH '11 (pp. 1-5). New York, New York, USA: ACM Press. doi: 10.1145/1959826.1959834.

[19] Janssen, J. H., van den Broek, E. L., & Westerink, J. H. D. M. (2009).
Personalized affective music player. 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (pp. 1-6). IEEE. doi: 10.1109/ACII.2009.5349376.

[20] Karuei, I., MacLean, K. E., Foley-Fisher, Z., MacKenzie, R., Koch, S., & El-Zohairy, M. (2011). Detecting vibrations across the body in mobile contexts. Proceedings of the 2011 annual conference on Human factors in computing systems - CHI '11 (p. 3267). New York, New York, USA: ACM Press. doi: 10.1145/1978942.1979426.

[21] Kensing, F., & Blomberg, J. (1998). Participatory design: Issues and concerns. Computer Supported Cooperative Work (CSCW), 7, 167-185. Springer Netherlands. doi: 10.1023/A:1008689307411.

[22] Kulic, D., & Croft, E. A. (2007). Affective state estimation for human-robot interaction. IEEE Transactions on Robotics, 23(5), 991-1000. doi: 10.1109/TRO.2007.904899.

[23] Kristoffersen, S., & Ljungberg, F. (1999). "Making place" to make IT work. Proceedings of the international ACM SIGGROUP conference on Supporting group work - GROUP '99 (pp. 276-285). New York, New York, USA: ACM Press. doi: 10.1145/320297.320330.

[24] Last.fm free internet radio. Retrieved from http://www.last.fm.

[25] Luk, J., Pasquero, J., Little, S., MacLean, K., Levesque, V., & Hayward, V. (2006). A role for haptics in mobile interaction. Proceedings of the SIGCHI conference on Human Factors in computing systems - CHI '06 (p. 171). New York, New York, USA: ACM Press. doi: 10.1145/1124772.1124800.

[26] Lumsden, J. (2003). A paradigm shift: Alternative interaction techniques for use with mobile & wearable devices. Proc. of the 13th Annual IBM Centers for Advanced Studies Conference - CASCON '03. Retrieved June 13, 2011, from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.116.2943.

[27] MacLean, K. E. (2009). Putting haptics into the ambience.
IEEE Transactions on Haptics, 2(3), 123-135. IEEE Computer Society. doi: 10.1109/TOH.2009.33.

[28] Moffatt, K., McGrenere, J., Purves, B., & Klawe, M. (2004). The participatory design of a sound and image enhanced daily planner for people with aphasia. Proceedings of the 2004 conference on Human factors in computing systems - CHI '04 (pp. 407-414). New York, New York, USA: ACM Press. doi: 10.1145/985692.985744.

[29] Muller, M. J. (2009). Participatory design: the third space in HCI. Human-Computer Interaction: Development Process, 165, 1-32. Boca Raton, FL: CRC Press.

[30] Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage Publications.

[31] Neustaedter, C., & Bernheim Brush, A. J. (2006). "LINC-ing" the family. Proceedings of the SIGCHI conference on Human Factors in computing systems - CHI '06 (p. 141). New York, New York, USA: ACM Press. doi: 10.1145/1124772.1124796.

[32] Norman, D. A. (2007). The design of future things. New York: Basic Books.

[33] North, A. C., Hargreaves, D. J., & Hargreaves, J. J. (2004). Uses of music in everyday life. Music Perception, 22(1), 41-77.

[34] Oakley, I., & O'Modhrain, S. (2005). Tilt to scroll: Evaluating a motion based vibrotactile mobile interface. First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (pp. 40-49). IEEE. doi: 10.1109/WHC.2005.138.

[35] Oliver, N., & Flores-Mangas, F. (2006). MPTrain. Proceedings of the 8th conference on Human-computer interaction with mobile devices and services - MobileHCI '06 (p. 21). New York, New York, USA: ACM Press. doi: 10.1145/1152215.1152221.

[36] Ogihara, M. (2009). Music recommendation based on acoustic features and user access patterns. IEEE Transactions on Audio, Speech, and Language Processing, 17(8), 1602-1611. doi: 10.1109/TASL.2009.2020893.
[37] Oulasvirta, A., Tamminen, S., Roto, V., & Kuorelahti, J. (2005). Interaction in 4-second bursts. Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '05 (p. 919). New York, New York, USA: ACM Press. doi: 10.1145/1054972.1055101.

[38] Pandora Internet Radio. Retrieved from http://www.pandora.com.

[39] Pan, M. K. X. J., Chang, J.-S., Himmetoglu, G. H., Moon, A., Hazelton, T. W., MacLean, K. E., et al. (2011). Now where was I? Proceedings of the 2011 annual conference on Human factors in computing systems - CHI '11 (p. 363). New York, New York, USA: ACM Press. doi: 10.1145/1978942.1978995.

[40] Pasquero, J., Stobbe, S. J., & Stonehouse, N. (2011). A haptic wristwatch for eyes-free interactions. Proceedings of the 2011 annual conference on Human factors in computing systems - CHI '11 (p. 3257). New York, New York, USA: ACM Press. doi: 10.1145/1978942.1979425.

[41] Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method bias in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903.

[42] Poupyrev, I., Maruyama, S., & Rekimoto, J. (2002). Ambient touch. Proceedings of the 15th annual ACM symposium on User interface software and technology - UIST '02 (p. 51). New York, New York, USA: ACM Press. doi: 10.1145/571985.571993.

[43] Riener, A. (2009). Sensor-actuator supported implicit interaction in driver assistance systems. PhD thesis, Department for Pervasive Computing, Johannes Kepler University Linz, Austria.

[44] Picard, R. W., Vyzas, E., & Healey, J. (2001). Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10).

[45] Sackett, D. L. (1979). Bias in analytic research. Journal of Chronic Diseases, 32(1-2), 51-63.
Retrieved July 18, 2011, from http://www.ncbi.nlm.nih.gov/pubmed/447779.

[46] Sarwar, B., Karypis, G., Konstan, J., & Riedl, J. (2001). Item-based collaborative filtering recommendation algorithms. Proceedings of the tenth international conference on World Wide Web - WWW '01 (pp. 285-295). New York, New York, USA: ACM Press. doi: 10.1145/371920.372071.

[47] Shardanand, U., & Maes, P. (1995). Social information filtering: Algorithms for automating word of mouth. ACM Conference on Computer Human Interaction (CHI).

[48] Solovey, E. T., Girouard, A., Jacob, R. J. K., Lalooses, F., Chauncey, K., Weaver, D., et al. (2011). Sensing cognitive multitasking for a brain-based adaptive user interface. Proceedings of the 2011 annual conference on Human factors in computing systems - CHI '11 (p. 383). New York, New York, USA: ACM Press. doi: 10.1145/1978942.1978997.

[49] Corbin, J. M., & Strauss, A. C. (2007). Basics of qualitative research: Techniques and procedures for developing grounded theory (3rd ed.). Thousand Oaks, CA: Sage.

[50] Swerdfeger, B. A., Fernquist, J., Hazelton, T. W., & MacLean, K. E. (2009). Exploring melodic variance in rhythmic haptic stimulus design. Proceedings of Graphics Interface 2009 (pp. 133-140).

[51] Svanaes, D., & Seland, G. (2004). Putting the users center stage. Proceedings of the 2004 conference on Human factors in computing systems - CHI '04 (pp. 479-486). New York, New York, USA: ACM Press. doi: 10.1145/985692.985753.

[52] Tamminen, S., Oulasvirta, A., Toiskallio, K., & Kankainen, A. (2004). Understanding mobile contexts. Personal and Ubiquitous Computing, 8(2), 135-143. doi: 10.1007/s00779-004-0263-1.

[53] Takahashi, K. (2004). Remarks on SVM-based emotion recognition from multi-modal bio-potential signals. ROMAN 2004: 13th IEEE International Workshop on Robot and Human Interactive Communication (pp. 95-100). IEEE. doi: 10.1109/ROMAN.2004.1374736.
[54] Tang, A., McLachlan, P., Lowe, K., Saka, C. R., & MacLean, K. (2005). Perceiving ordinal data haptically under workload. Proceedings of the 7th international conference on Multimodal interfaces - ICMI '05 (p. 317). New York, New York, USA: ACM Press. doi: 10.1145/1088463.1088517.

[55] Vogel, D., & Balakrishnan, R. (2004). Interactive public ambient displays. Proceedings of the 17th annual ACM symposium on User interface software and technology - UIST '04 (p. 137). New York, New York, USA: ACM Press. doi: 10.1145/1029632.1029656.

[56] Yatani, K., & Truong, K. N. (2009). SemFeel. Proceedings of the 22nd annual ACM symposium on User interface software and technology - UIST '09 (p. 111). New York, New York, USA: ACM Press. doi: 10.1145/1622176.1622198.

[57] Weiser, M., & Brown, J. S. (1995). Designing calm technology. Accessed from http://www.ubiq.com/hypertext/weiser/calmtech/calmtech.htm, December 1995.

Appendices

Appendix A: Screening Material

Online Screening Interview Questions

1. First name:
2. Last name:
3. Are you a student at the University of British Columbia?
4. Do you currently own a portable audio player (Apple iPod, Microsoft Zune, etc.)?
5. Which brand/device do you currently own? Please check all that apply.
• Apple iPod (touch or iPhone), Apple iPod shuffle, Apple iPod classic, Zune HD, Samsung, Creative Zen, iRiver, Coby, Sony, Toshiba, Other
6. How long have you used or owned your current digital audio player? (If you own more than one player, please answer for the one that you use most frequently.)
• Less than 6 months
• Over 6 months but less than 1 year
• Over 1 year but less than 2 years
• Over 2 years
7. Which types of media do you listen to with your portable audio player? (Please check all that apply.)
• Music (Never, Rarely, Occasionally, Frequently)
• Audiobook (Never, Rarely, Occasionally, Frequently)
• Podcast (Never, Rarely, Occasionally, Frequently)
• Other...
8. In a typical WEEK, how many hours do you use your mobile audio player? If your usage patterns vary drastically from week to week, comment on this in the box below.
• More than 3 hours but less than 6 hours
• More than 6 hours but less than 10 hours
• More than 10 hours but less than 20 hours
• More than 20 hours but less than 30 hours
• Over 30 hours
9. Are you familiar with any music recommender systems, or are you using one currently? (Pandora, Last.fm, Grooveshark, iTunes Genius, etc.)
10. Enter your name and surname:
11. Enter your email:

Emails to Confirmed Participants

Hello Ms./Mr. X,

I can confirm that you have been selected as a participant in the Portable Audio Player Participatory Design Sessions study. To find the most suitable date and time for our first session, please visit: http://[doodle or google calendar link]

Please contact me if you have any questions.

Participant Encouragement Email

Hello Ms./Mr. X,

Our first session is going to take place on DATE in BUILDING/LOCATION. You can find directions to the room here: (room/floor diagram from website). This session will take 2 hours and you will be compensated with $20. During the first 30 minutes of our session, we will spend some time watching YouTube videos and discussing some definitions which I believe will help you become more familiar with the research area I am working on. For convenience, I have assembled those videos into a single web page, and I highly recommend that you take a look at them before coming to the study. If you watch these videos before coming to the session, we may skip watching them together and finish our session earlier.
I have also uploaded some materials (diagrams, images and some vocabulary) that I am going to use in our session to this webpage. Although I don't expect you to understand them, please feel free to take a look, and contact me if you have any questions.

Appendix B: Participatory Design Session Material

Text Version of the Introductory Webpage

This webpage has been compiled to introduce and demonstrate the related technologies that our participatory portable audio player design makes use of. By the end of this webpage tutorial we hope you will:
a. Learn about haptic technologies and how they are used
b. See interesting demonstrations of biological-feedback recognition
c. Have some idea of the questions that will be asked during the session

Our goal is to find out your preferences for such a hypothetical player and to design a customized prototype. Browsing this webpage will take approximately 12 minutes. These videos have been selected to help you understand a few technologies which are going to be introduced to you during the session. There are four videos (shown below) which relate to two different topics: Sense of Touch (Haptic) Technology and Biological-Feedback Recognition. In the first session, we will ask you to comment on the products and technologies shown below. Don't hesitate to mention if you have strong positive or negative feedback about any of the products, as our research group is not affiliated with any of these technological examples.

Keywords: You will also find a few keywords under each topic's short explanation. If you're not familiar with a word, please click on it to view a short explanation, in order to understand the videos more clearly.

Mimicking the Sense of Touch

Haptic technology, or haptics, is a tactile feedback technology that takes advantage of a user's sense of touch by applying forces, vibrations, and/or motions to the user.
You can find basic examples of this technology in most of the cell phones and video game entertainment systems currently on the market.

Keywords: haptics, tactile feedback, vibration

Video 1: Haptic Technology
• http://www.youtube.com/watch?v=bZq3bCGlrjA (Haptic Technology) Tactile touch screen. Duration: 6 minutes.

Video 2: Touch Screen Demo
• http://www.youtube.com/watch?v=lUWy2GW7XQ4 (Haptic Touchscreen Demo Complete) Haptic touchscreen at CES 2009, demonstrated by Immersion Corporation. Duration: 1 minute 3 seconds.

Biological-Feedback Recognition, a New Computer Interface

Biological signals in the human body carry complex and dynamic information which can be interpreted into human emotional states. Body temperature, skin moisture, heart rate, eye movements, and electrical muscle and brain activity are some examples of such signal channels. Slight or dramatic changes in these signals can be recognized and used as direct commands to a computer. You will now see a few examples of such human-computer interaction in the videos below.

Keywords: SDK, Electroencephalography (EEG)

Video 3: Emotiv Headset
• http://www.youtube.com/watch?v=GXu2hEfg6gE (Emotiv - On10) This is a video from Microsoft TV. A description of the Emotiv EPOC Headset is given and its capabilities are explained, along with two demonstrations of its usage. Duration: 5 minutes.

Video 4: Emotiv Headset Bell Demo
• http://www.youtube.com/watch?v=No4oXJxNmP4 (Emotiv Epoc Bell) This video is user-generated content. The Emotiv EPOC headset is used to control the ringing of a bell by recognizing eye blinks through the sensors. Duration: under a minute.

This page is prepared to give you an idea of the structure of our session. A representative set of questions is presented here to give you a rough idea of what to expect. It is highly likely that the same or similar questions will be directed to you during the session.
The information collected during the first session will be used to design, build and test the prototype in our second session.

Environment
a) Could you describe the places where you use your portable player the most? Are they noisy, crowded?
b) Is there any location where you wish you could use your mobile player more easily?
c) Do you listen to music in non-mobile environments, such as sitting in front of your computer?

Interaction
a) Do you ever want to skip a song? How often? How would you like to tell your player to do this, other than using buttons?
b) Do you have a strategy for sorting/arranging the tracks that you want to listen to?
c) Do you multitask while using your mobile player? Any examples?

If you had a magic wand...
a) What would you change about your current mobile player?

Use Case
a) Can you relate to the use case below?
"John is listening to a podcast of a talk show while he works in the garden. He is deeply immersed in the show. His neighbour suddenly interrupts him to borrow hedge trimmers, causing John to become startled and remove his headphones. After retrieving the hedge trimmers for his neighbour, John puts his headphones back on to find that he has missed an important part of the show. He iteratively rewinds and plays back the podcast in order to find his place. Eventually he recognizes some of the content and begins listening to his media again from that point. Due to his neighbour's interruption, his level of immersion in the show is reduced to almost nothing."

Music Assistant
a) If there were someone else who knows your music listening habits well and who was going to decide on the songs to play (e.g., a best friend, a roommate, or a spouse), do you have any obvious habits that this person would use to pick music for you?

Definition of HALO Provided to the Participants

I want you to imagine an assistant.
This assistant knows you so well that it is very good at interpreting your feelings in certain situations; sometimes, it can even predict your wishes a little before you are aware of them yourself. It will follow your direct orders, but sometimes also does things that it thinks you want but haven't explicitly asked for. In these cases, it might also ask you to confirm before it takes that action. As a good assistant, it avoids distracting you unless it's very important; but you know that it is always there to serve only you, and does not share what it knows with anyone or anything else. The assistant we are talking about today is a computer system that understands you in certain ways and can infer some of your actions. However, instead of predicting your behaviour, it will only act upon your intention and will try to decrease the effort you need to make. I like to call this system a Remo, as shorthand for a TV remote, because it reminds me of dealing with things remotely: it makes changing channels easier. Remo is a computer system that can recognize biological feedback, gather information to make inferences, and communicate through sense-of-touch technology. Instead of using the hearing and seeing channels, Remo senses and communicates through touch, which is the least distracting sensory channel for conveying information. We believe that this communication channel will be particularly useful in situations where our auditory and visual senses are highly occupied. Our objective is to understand where, when and how Remo can be useful to you. In order to find answers to these questions, we will first look at a scenario. Whether or not you are already familiar with the example scenario from your daily life, I want you to try to be open-minded during our conversation, because we're working on a new paradigm that you have probably never experienced before.
Please try not to limit your imagination by what you know or don't know about the technology. Similar to Pandora, which offers music discovery and automatic playlists, Remo also offers several improvements to the current interaction and music listening experience. We are curious to know which category of benefit is more important to you, or which one will increase your satisfaction most. Our research group hypothesized various potential advantages of Remo. One evident improvement that Remo brings is being able to control your player without physically interacting with buttons. For instance, while your hands are occupied with the grocery bags you're carrying, Remo can recognize that you didn't like the music and skip it without your manual intervention. What value do you think this improvement has? In addition to giving simple commands to your player, such as pause and resume, you will also be able to give more complex and richer commands. For example, you could communicate that you want to listen to a particular artist for the next song.

Two Anticipated Improvements

1. Hands-free Interaction
One evident improvement that Remo brings is being able to control your player without physically interacting with buttons. For instance, while your hands are occupied with the grocery bags you're carrying, Remo can recognize that you didn't like the music and skip it without your manual intervention. What value do you think this improvement has?

2. Rich Interaction
In addition to giving simple commands to your player, such as pause and resume, you will also be more expressive by being able to give richer commands. For example, you could communicate that you want to listen to a particular artist for the next song.

Scenarios Prepared

Scenario 1
Susie is riding her bicycle and listening to music using her portable player.
Her player is mounted on her arm and set to shuffle mode. After the end of an upbeat song that she was enjoying, an economics lecture that her professor had put online for the class unexpectedly begins. Susie becomes annoyed at this change and wants to return to listening to music. She thus stops her bicycle, removes her player from the arm mount, and presses the "forward" button on the player until she finds a song she likes. She then resumes cycling.

Scenario 2
Monique is resting in bed, listening to calm, ambient music on her iPod to block external distractions. She drifts off to sleep, and the iPod continues to play. After waking up refreshed 4 hours later, she discovers that her player is out of batteries.

Scenario 3
Theresa is listening to her portable audio player while waiting for a bus on a serene corner of her neighbourhood. Her player is in her purse. Once on the bus, she can no longer hear her music due to a noisy group of passengers. Frustrated, she reaches for her player in her purse to adjust the volume, which involves unlocking her player using its touch screen interface. The noisy passengers exit the bus a few stops later, and Theresa wants to reduce the volume of her player, again requiring her to reach for it and unlock it.

Scenario 4
Brian is going for his daily morning run. As he warms up, he prefers to listen to relatively slow-paced, happy pop. After his warm-up, however, he much prefers driving, intense, Euro-infused electronica. Knowing his preferences, Brian makes an appropriate playlist of music ahead of time, but on his run, he discovers that the lengths of the songs in this list do not match up with the schedule of his exercise routine, requiring him to manually advance through the playlist after his warm-up and before his cool-down.

Scenario 5
Janet is a choreographer and gives dance lessons to a group of students.
In order for her students to learn the dance in synchrony, she repeats parts of the dance over and over. She goes over to her player, pauses it, and navigates backward to the specific time in the song, repeatedly during her lessons.

Study Interview Script

Welcome again XXX. Please take a seat. We can sit side by side, or I can sit in front of you, whichever you prefer. Also, please put your phone in silent mode.

~INTRODUCE: My name is Gokhan; I'm a second-year Master of Science student in the Department of Computer Science. We are conducting a participatory design study to investigate the design of a portable media player. And this is Charlotte. She is another researcher working in our group, and she is going to observe our session. It is possible that she might ask some questions or join our conversation from time to time. However, I'll be the person leading our conversation most of the time. Our main goal in this study is, first, to understand what a few people would like to see in portable media players; second, to prototype a preliminary version of a tool which addresses some of those needs. We invited you here because we are interested in including potential users such as yourself in the design process, following the participatory design approach. During our first session today, I will ask questions to start conversations about certain things, in order to understand your current music listening experience. I will then use this information to build a prototype for our next session, which you will be able to try out.

~CONSENT FORM: Here is the informed consent form that I need you to sign before we start. Please take your time to read it, and ask me questions if you have any. You have the right to stop and leave at any time during the study if you feel uncomfortable in any way.
I am recording this session with a camera so that I can concentrate on our conversation rather than taking notes, although I may also jot down some quick notes. All of the information collected, including the video recording, will be kept confidential and secure.

~START RECORDING

Our schedule for today is going to be like this:
1. Informative Webpage
2. Demos & Technology
3. Starter Semi-Structured Interview
4. Exploring Preferences
   a. Introducing the Remote
   b. Looking at a scenario
   c. Discovering personal scenarios
5. Wrap-up

You might already be comfortable with, or might never have heard of, the technologies and products that I will show you today. If you feel overwhelmed at any time, please don't hesitate to stop me and ask questions. Also, if you feel that something needs rephrasing, please say so. Our conversation is going to be quite casual, although there may be some moments when I will ask you to concentrate on certain things. During our conversation, please try to be honest and personal in your thoughts. Please say exactly what you think. The goal of these sessions is to have YOUR opinion. Nothing is right or wrong here.

DEMOS & TECHNOLOGY (20 MINUTES)

Did you have a chance to visit the website that I emailed you? Y/N
Were you already familiar with any of these technologies? Y/N
What did you find most memorable from these videos?
Video: Comments: _______________________________________________________
Did you find any of them particularly interesting or hard to comprehend?
_______________________________________________________
After watching these videos, what was the most interesting thing for you?
_______________________________________________________

I would like to have you try out some of these technologies in person now. Is this okay?

(HAPTIC TECHNOLOGY) FIRST DEMO: TACTOR / HAPTIC SLEEVE [5 MIN]
Comments:
TACTOR: _______________________________________________________
SLEEVE: _______________________________________________________

(BIOLOGICAL SIGNAL RECOGNITION) SECOND DEMO: EMOTIV HEADSET [10 MIN]
Comments:
_______________________________________________________

(COMBINATION OF BOTH TECHNOLOGIES) THIRD DEMO: AUDIO-BOOK PLAYER [10 MIN]
Comments:
_______________________________________________________

SEMI-STRUCTURED INTERVIEW (15 MIN)

I want to start by asking some general questions about you,
and then we will continue with questions that will help me understand your listening habits.

1) What are you studying at UBC? ______________
2) Was there something particular about the study description that caught your interest?
_____________________________________________________
3) What type of portable audio player do you own? __________________
   a. How long have you been using it? ______________
   b. Do you usually carry it with you when going outside? Y/N
   ________________________________________________
   c. Which features do you find most useful?
   ________________________________________________
   d. Do you own any accessories for your portable player, besides usual things like headphones and extra batteries? Does your headphone have quick buttons for adjusting volume or skipping songs?
   ________________________________________________
4) How long do you typically listen to something each time you start using it?
_____________________________________________________
5) In what sort of situations do you use your portable audio player most?
_____________________________________________________
   a. Can you think of common properties of these environments?
   ________________________________________________
   b. Are there particular environments in which you always have problems using your portable player?
   ________________________________________________

6) What strategies do you find yourself using when deciding on the order of the files/songs you listen to on your portable audio player?
_____________________________________________________
   a. Do you make playlists in advance? Y/N
   b. Do you shuffle all of your songs? Y/N
   c. What is your immediate reaction when you want to skip a song?
   ________________________________________________
   d. Are there songs that you still keep in your player although you usually skip them whenever they come up? (Are there songs in your player that you don't listen to at all?) Y/N
   ________________________________________________
7) Mood and Purpose:
   a. Are there particular moods in which you're most likely to want to listen to some type of music?
   ________________________________________________
   b. How important is music to achieving these purposes, and does the music have to be "exactly right"? What does this depend on?
   ________________________________________________
8) Have you used or tried any music recommendation or discovery tools before? Y/N
   a. Do you have a recommender that is already implemented in your player? Y/N
   b.
What do you think about getting recommendations? Were you satisfied with the service?
   ________________________________________________
9) Generally, are you the kind of person who prefers to let others work out the glitches in new technology, or are you usually among the first to try out new technology?
_______________________________________________________

EXPLORING PREFERENCES (60 MIN)

INTRODUCING THE REMOTE

OK, I have a general idea about your music listening experience. Now I want to talk about the idea we want to develop. I want you to imagine an assistant. This assistant knows you so well that it is very good at reading your feelings in certain situations; sometimes, it can even predict your wishes a little before you are aware of them yourself. It will follow your direct orders, but it sometimes also does things that it thinks you want but haven't explicitly asked for. In these cases, it might also ask you to confirm before it takes that action. As a good assistant, it avoids distracting you unless it's very important; but you know that it is always there to serve only you, and it does not share what it knows with anyone or anything else. The assistant we are talking about today is a computer system that understands you in certain ways and can infer some of your actions. However, instead of predicting your behaviour, it will only act based upon your intention and will try to decrease the effort you need to make. I like to call this system the Remote, shorthand for a TV remote, because it reminds me of dealing with things remotely: it makes changing channels easier.
Remo, or the Remote, is a computer system that can recognize biological feedback, gather information from it to make inferences, and communicate with you through the sense of touch.

Comments:
_______________________________________________________

Instead of using the hearing and seeing channels, Remo senses and communicates through touch, which is the least distracting sensory channel for conveying information. We believe that this communication channel will be particularly useful in situations where our auditory and visual senses are highly occupied. Our objective is to understand where, when and how Remo can be useful to you. In order to find answers to these questions, we will first look at a scenario. Whether or not you are already familiar with the example scenario from your daily life, I want you to try to be open-minded during our conversation, because we're working on a new paradigm that you have probably never experienced before. Please try not to limit your imagination by what you know or don't know about the technology.

~ Think about the relationship between a dog and its owner. How does the owner come to know what her dog needs and wants without using any verbal communication? How does the dog know what the human wants? USED?
Y/N
Comments:
_______________________________________________________

~ Think about cases where your sight and hearing are already being used, or the times when interacting with your audio player is just annoying because it has done the wrong thing. These are the cases where we believe a system like this will help. USED? Y/N
Comments:
_______________________________________________________

Remo also offers several improvements to the current interaction and music listening experience. We are curious to know which category of benefit is more important to you, or which one will increase your satisfaction most. We hypothesized various potential advantages of Remo. One evident improvement that Remo brings is being able to control your player without physically interacting with buttons. For instance, while your hands are occupied with the grocery bags you're carrying, Remo can recognize that you didn't like the music and skip it without your manual intervention. What value do you think this improvement has?
Comments:
____________________________________________________________

In addition to giving simple commands to your player, such as pause and resume, you will also be more expressive by being able to give richer commands. For example, you could communicate that you want to listen to a particular artist for the next song.

Comments:
____________________________________________________________

SCENARIO EXERCISE (10 MIN)

Let's take a look at the example scenario before building our own scenario:

"Susie is riding her bicycle and listening to music using her portable player. Her player is mounted on her arm and set to shuffle mode. After the end of an upbeat song that she was enjoying, an economics lecture that her professor had put online for the class unexpectedly begins. Susie becomes annoyed at this change and wants to return to listening to music. She thus stops her bicycle, removes her player from the arm mount, and presses the 'forward' button on the player until she finds a song she likes. She then resumes cycling."

• Did this scenario feel familiar, or represent your own experiences?
_______________________________________________________

• Even if you haven't been in this situation before, did it still seem realistic? Before: Y/N Realistic? Y/N
_______________________________________________________

The scenario might not seem familiar at first glance; but what if you change some of its settings? For instance, imagine snowboarding instead of cycling.
__________________________________________________

• Do you think that you can give me a scenario like this from your own experience?
_______________________________________________________

CREATING USE-CASE & FEATURE RANKING (40 MIN)

To prototype a preliminary version of a tool which addresses some of your needs, I would like to hear more about your own scenarios for how you would like to use your Remote. I will ask you some questions to help you imagine the situation. [If the participant struggles to come up with their own scenarios, give them our scenarios one by one to help them out.]

SCENARIOS USED: _X_1 __2 __3 __4 __5

A. ENVIRONMENT
A.1. Please think about when, where and how you most enjoy listening to music.
When: ________________________________________________
Where: ________________________________________________
How: __________________________________________________
A.2. What are you doing? What is your main goal?
____________________________________________________
Are there secondary goals that are less important?
Y/N
_____________________________________________________
What is the purpose of music listening in this scenario?
_____________________________________________________
A.3. Do you ever find yourself in a situation where it is challenging to use your portable device? Y/N
_____________________________________________________
Is there something in the environment that might distract you, frequently or occasionally? Y/N
_____________________________________________________
A.4. Are you usually deeply engaged in this situation? Y/N
____________________________________________________
Are you often multitasking? Y/N
____________________________________________________
Do you get startled or surprised sometimes? Y/N
____________________________________________________
Do you tend to be working hard, either mentally or physically?
____________________________________________________
A.5. Sketching a diagram sometimes helps me verify what I have understood. I would like to draw a workflow of the actions/decisions you take, together with the properties of that environment; could you help me?
____________________________________________________

B. TASK / BEHAVIOUR
B.1. What type of assistance would you like to get from the Remote?
_____________________________________________________
B.2.
What types of information/feedback would be useful in this case?
_____________________________________________________
Are there things (either simple or complex) that you have to do over and over, that you wish could be handled without your explicit (direct) input?
_____________________________________________________
B.3. Can you think of cases when you have to interact with your player a lot / excessively?
_____________________________________________________
What are you trying to achieve at that time?
_____________________________________________________
How do you get it done?
_____________________________________________________
B.4. Does this solution change when you are in another time/place?
_______________________________________________________
B.5. Do you ever use music for a purpose, e.g. to change your mood, or to get yourself into or stay in a particular state of mind? Y/N
_____________________________________________________
If so, do you associate any of these purposes with a particular task or place?
_____________________________________________________

C. DETAILS OF BEHAVIOUR & WORKING
C.1. How can the Remote help you during these situations?
_____________________________________________________
C.2. What are the things that you want the Remote to communicate to you?
_______________________________________________________
C.3. Could you now walk me through your scenario again, but include the help of Remo?
_______________________________________________________
Using a similar approach, sketch the diagram assuming the best possible (perfect) recognition.
_______________________________________________________
C.4. How should the system notify you about its predictions?
_______________________________________________________
C.5. What are the factors that would cause the system's action to change?
________________________________________________________
How would you communicate differently in this case?
_____________________________________________________
How would Remo communicate differently to you?
_____________________________________________________
C.6. What are the benefits of this system for this task? Revisit accessibility of the device and richer interaction.
_____________________________________________________

WRAP-UP
It's been a great session and I have heard a lot of interesting comments. We're unfortunately going to have to wrap up for today, but if you think of something on your way home that you really want to tell us, please send us an email with your comments.

Appendix C: Participant Sketches

This page contains the sketches from the participatory design session with P1.

This page contains the sketches from the participatory design session with P2.

This page contains the sketches from the participatory design session with P5.