Personalizing Haptics: From Individuals' Sense-Making Schemas to End-User Haptic Tools

by

Hasti Seifi

B.Sc., The University of Tehran, 2008
M.Sc., Simon Fraser University, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Doctor of Philosophy
in
THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Computer Science)

The University of British Columbia
(Vancouver)

April 2017

© Hasti Seifi, 2017

Abstract

Synthetic haptic sensations will soon proliferate throughout many aspects of our lives, well beyond the simple buzz we get from our mobile devices. This view is widely held, as evidenced by the growing list of use cases and industry's increasing investment in haptics. However, we argue that taking haptics to the crowds will require haptic design practices to go beyond the one-size-fits-all approach common in the field, to satisfy users' diverse perceptual, functional, and hedonic needs and preferences reported in the literature.

In this thesis, we tackle end-user personalization to leverage the utility and aesthetics of haptic signals for individuals. Specifically, we develop effective haptic personalization mechanisms, grounded in our synthesis of users' sense-making schemas for haptics. First, we propose a design space and three distinct mechanisms for personalization tools: choosing, tuning, and chaining. Then, we develop the first two mechanisms into 1) an efficient interface for choosing from a large vibration library, and 2) three emotion controls for tuning vibrations. In developing these, we devise five haptic facets that capture users' cognitive schemas for haptic stimuli, and derive their semantic dimensions and between-facet linkages by collecting and analyzing users' annotations for a 120-item vibration library. Our studies verify the utility of the facets as a theoretical model for personalization tools.

In collecting users' perceptions, we note a lack of scalable haptic evaluation methodologies and develop two methodologies for large-scale in-lab evaluation and online crowdsourcing of haptics.

Our studies focus on vibrotactile sensations as the most mature and accessible haptic technology, but our contributions extend beyond vibrations and inform other categories of haptics.

Preface

In conducting my PhD research, I benefited from collaboration and feedback from several others. In particular, all aspects of the research were conducted under the supervision of, and with feedback from, my PhD supervisor, Prof. Karon MacLean, who also assisted with preparing the conference and journal publications that resulted from this research. Also, my PhD committee, Prof. James Enns and Prof. Tamara Munzner, provided feedback on different components of this thesis as needed. Further, several components of this thesis were the result of collaboration with other individuals. I acknowledge the collaborative nature of the work by using the pronoun "we" throughout the thesis. In addition, in this preface I clarify my contribution(s) to each component, present the resulting publications and demos, and note high-level pragmatic points about the language and structure of the thesis.

Statement of Co-Authorship

Chapter 1 and Chapter 8, namely the Introduction and Conclusions chapters, are framed and written by myself, with feedback from my PhD supervisor and committee members.

Chapters 2 and 3 provide the grounding for my PhD proposal. The work presented in Chapter 2 is based on the RPE (Research Proficiency Evaluation) component of my PhD program.
I proposed the project to my PhD committee and carried out all aspects of the research independently (study design, data collection, analysis, and write-up), with supervisory input from Dr. MacLean. The work was published and presented at World Haptics 2013.

Seifi and MacLean. (2013) A first look at individuals' affective ratings of vibrations. Proceedings of the IEEE World Haptics Conference (WHC '13).

For Chapter 3, I supervised and worked closely with a summer undergraduate research assistant, Chamila Anthonypillai. I devised the five design parameters and three personalization mechanisms. Anthonypillai designed the paper prototypes of the three mechanisms, helped with vibration and study design, and conducted the user study. I provided high-level feedback on those aspects, contributed the data analysis and paper writing, and developed medium-fidelity prototypes of the three personalization mechanisms, with feedback and guidance from Dr. MacLean on all aspects of the work. I presented the paper and demonstrated the prototypes at Haptics Symposium 2014.

Seifi, Anthonypillai, and MacLean. (2014) End-user customization of affective tactile messages: A qualitative examination of tool parameters. Proceedings of the IEEE Haptics Symposium (HAPTICS '14).

Seifi, Anthonypillai, and MacLean. (2014) [D69] End-user vibration customization tools: Parameters and examples. HAPTICS '14 Demos.

Chapter 4 followed my proposed research questions and PhD trajectory. The VibViz interface was designed in close collaboration with Kailun Zhang, a former M.Sc. student, as the final project for the Information Visualization course by Prof. Tamara Munzner. The interface was programmed by Zhang and later refined by myself. Specifically, I replaced the tag cloud filters in the initial design with a structured group of buttons, added search functionality, and removed a few bugs. In a follow-up exploratory study of the interface, I contributed the study design, data collection, and analysis. Finally, I led the paper writing efforts and presented the paper and a demo of VibViz at World Haptics 2015, receiving supervision and feedback from Dr. MacLean on all aspects. Dr. Munzner offered additional supervision and guidance in designing the VibViz interface but declined to be listed as a co-author after being invited.

Seifi, Zhang, and MacLean. (2015) VibViz: Organizing, visualizing and navigating vibration libraries. Proceedings of the IEEE World Haptics Conference (WHC '15).

Seifi, Zhang, and MacLean. (2015) VibViz: An Interactive Visualization for Organizing and Navigating a Vibrotactile Library. WHC '15 Demos.

I was the primary contributor to all aspects of the work in Chapter 5 (study design, data collection, analysis, and paper writing), under Dr. MacLean's supervision. Oliver Schneider, a PhD candidate, and Kailun Zhang provided annotations for the library as haptic experts. At the time of this writing, the resulting paper is to appear in the International Journal of Human Computer Studies, Special Issue on Multisensory HCI.

Seifi and MacLean. (2017) Exploiting Haptic Facets: Users' Sense-making Schemas as a Path to Design and Personalization of Experience. To appear in the International Journal of Human Computer Studies (IJHCS), Special Issue on Multisensory HCI.

Chapter 6 was a collaboration with three other students: Oliver Schneider, Matthew Chun, a summer undergraduate research assistant at the time (an M.Sc. student at the time of this writing), and Salma Kashani, an M.Sc. student at UBC's ECE department.
Schneider provided overall leadership on the project, but we divided the work evenly and contributed intellectually to all of it, with Schneider leading the low-fidelity proxy design while I led the design of the visual proxies. Kashani and Chun designed the visual and low-fidelity vibration proxies, respectively, and jointly ran the user studies. Study design was a joint team effort based on feedback from all members. I joined the discussions remotely via Skype for the last few months of the project, while doing an internship in the United States. Thus, Schneider led the data analysis and paper writing with close feedback from me and other group members, and guidance from Dr. MacLean. The paper was published at CHI 2016 and presented at the conference by Schneider. My main interest in the paper is its methodological contribution to haptic studies where small-scale lab-based evaluation is insufficient. Schneider, in contrast, examined the role of crowdsourcing in the design process, providing a method to disseminate haptic design widely. Thus, we both incorporate the work in our theses. Given our distinct thesis goals and the context of the surrounding chapters, different aspects of the work become of interest and contribution to the readers of each thesis.

Schneider, Seifi, Kashani, Chun, and MacLean. (2016) HapTurk: Crowdsourcing Affective Ratings for Vibrotactile Icons. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '16).

For Chapter 7, I worked closely with Matthew Chun, with feedback and supervision from Dr. MacLean. Salma Kashani was involved for a limited time, during which she helped with data collection for one of the user studies (Study 1). I planned the project direction and next steps. Chun and I worked closely on defining the relevant vibration parameters and designing the studies. Chun designed the stimuli for the studies and conducted the pilots and the final study (Study 2). I led the data analysis and paper writing, and conceptualized the three example prototypes. Chun provided help and feedback on the above and developed the medium-fidelity versions of the prototypes. At the time of this writing, this paper is in final preparation for a journal submission.

Since the majority of the above thesis components were previously peer-reviewed in international conferences and journals, we present them as separate chapters with minimal formatting and (occasionally) wording changes. We include a brief preface at the start of each chapter to further link it to the overall story of the thesis.

Evolution of Key Thesis Terminology

During the course of my PhD work, our understanding of the concepts and components of this thesis evolved, which sometimes led to changes in the language and labels we used to refer to them. In particular, we initially referred to users' cognitive schemas for haptic sensations as taxonomies (Chapters 4 and 6), which we later revised to facets (Chapters 5 and 7). Also, we initially labelled our three personalization mechanisms as choice, filter, and block, but later opted for choosing, tuning, and chaining as more semantically expressive names. Finally, in our papers, we used customization and personalization interchangeably, similar to the literature. To provide a consistent thesis document, we replace our earlier language with our final phrasing but note the changes in the footnotes of each chapter.

All research involving human participants was reviewed and approved by UBC's Behavioural Research Ethics Board, #H13-01646.
Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication
1 Introduction
  1.1 Motivation
  1.2 Situating Our Work
  1.3 Approach - The Chronological View
  1.4 Contributions
2 Linking Emotion Attributes to Engineering Parameters and Individual Differences
  2.1 Overview
  2.2 Introduction
  2.3 Related Work
  2.4 Design of Setup and Assessment Tools
  2.5 Study
  2.6 Discussion
  2.7 Conclusion and Future Work
  2.8 Acknowledgements
3 Characterizing Personalization Mechanisms
  3.1 Overview
  3.2 Introduction
  3.3 Related Work
  3.4 Conceptualization of Haptic Personalization Tools
  3.5 Methods
  3.6 Results
  3.7 Discussion
  3.8 Conclusion and Future Work
  3.9 Acknowledgments
4 Choosing From a Large Library Using Facets
  4.1 Overview
  4.2 Introduction
  4.3 Related Work
  4.4 Library & Facet Construction
  4.5 VibViz: An Interactive Library Navigation Tool
  4.6 User Study
  4.7 Results
  4.8 Discussion
  4.9 Conclusions and Future Work
  4.10 Acknowledgments
5 Deriving Semantics and Interlinkages of Facets
  5.1 Overview
  5.2 Introduction
  5.3 Related Work
  5.4 Approach
  5.5 Data Collection and Pre-processing
  5.6 Analysis and Results
  5.7 Discussion
  5.8 Conclusion
6 Crowdsourcing Haptic Data Collection
  6.1 Overview
  6.2 Introduction
  6.3 Related Work
  6.4 Sourcing Reference Vibrations and Qualities
  6.5 Proxy Choice and Design
  6.6 Study 1: In-lab Proxy Vibration Validation (G1)
  6.7 Study 2: Deployment Validation with MTurk (G2)
  6.8 Discussion
  6.9 Conclusion
  6.10 Acknowledgments
7 Tuning Vibrations with Emotion Controls
  7.1 Overview
  7.2 Introduction
  7.3 Related Work
  7.4 Starting Points: Use Cases, Initial Vibrations and Linkages
  7.5 User Studies
  7.6 Results
  7.7 Discussion
  7.8 Conclusion
8 Conclusion
  8.1 Personalization Mechanisms
  8.2 Facets as an Underlying Model for Personalization Tools
  8.3 Large Scale Evaluation for Theory and Tool Development
  8.4 Future Work
  8.5 Final Remarks
Bibliography
A Supplemental Facet Analysis
  A.1 List of Tags and Their Disagreement Values
  A.2 Tag Removal Summary
  A.3 Rating Correlations
  A.4 Multidimensional Scaling Graphs on Tag Distances
  A.5 Individual Differences in Vibrations
  A.6 Between-Facet Tag Linkages
B Consent Forms

List of Tables

Table 1.1 The mapping from contributions to thesis chapters
Table 2.1 ANOVA results on the five affective rating scales
Table 3.1 Characterization of the choosing, tuning, and chaining concepts
Table 3.2 Configurations of each filter setting in the tuning tool
Table 4.1 Five vibrotactile facets used in the study
Table 4.2 VibViz user interface view descriptions
Table 4.3 VibViz study scenarios
Table 5.1 Vibration facets used in Chapter 5
Table 5.2 Definition of our analysis metrics
Table 5.3 Facet dimension analysis
Table 5.4 Final facet dimensions and their most frequent tags
Table 5.5 Factor analysis outcome
Table 5.6 Summary of our annotation dataset
Table 7.1 Three emotion attributes and their linkages to sensory attributes
Table 7.2 Influential sensory attributes, their definition and implementation
Table 7.3 Our hypotheses for the tuning studies
Table 7.4 Participant definitions for agitating, lively, and strange vibrations
Table A.1 Sensation_f tags and disagreement scores
Table A.2 Emotion_f tags and disagreement scores
Table A.3 Metaphor_f tags and disagreement scores
Table A.4 Usage_f tags and disagreement scores
Table A.5 Percentage of tags removed by normal users
Table A.6 Correlation of the five rating scales

List of Figures

Figure 1.1 Experts and lay users' mental model of haptic sensation
Figure 1.2 Conceptual sketch of individual differences in affective perception
Figure 1.3 Conceptual sketch of three haptic personalization mechanisms
Figure 1.4 Conceptual sketch of the choosing approach with VibViz
Figure 1.5 Conceptual sketch of the five vibration facets, their dimensions, and linkages
Figure 1.6 Conceptual sketch of haptic crowdsourcing
Figure 1.7 Conceptual sketch of an emotion tuning control
Figure 2.1 Individual differences in affective perception of vibrations
Figure 2.2 Actuator, prototype, and setup for the study
Figure 2.3 Rhythm patterns for the affective ratings, and the tactile tasks
Figure 2.4 Interface for the frequency matching task
Figure 2.5 The user interface for the affective ratings
Figure 2.6 Distribution of total scores in the tactile tasks
Figure 3.1 Conceptual sketch of personalization mechanisms
Figure 3.2 Design space of the personalization tools and three personalization mechanisms
Figure 3.3 Prototypes of three personalization mechanisms
Figure 3.4 Apparatus and setup for the study of personalization mechanisms
Figure 3.5 Seven rhythm patterns in the choosing prototype
Figure 3.6 Participants' rankings of the three personalization prototypes
Figure 4.1 Conceptual sketch of VibViz
Figure 4.2 An intuitive interface for navigating a vibration library
Figure 4.3 Using Audacity for visual comparison of vibrations
Figure 4.4 The VibViz interface and C2 wristband
Figure 4.5 Average filter and space usage per participant
Figure 4.6 Average filter and space usage per scenario
Figure 4.7 Subjective ratings of the vibrotactile facets
Figure 5.1 Conceptual sketch of haptic facets
Figure 5.2 Design, evaluation, and personalization scenarios
Figure 5.3 Vibration facets, their dimensions, and linkages
Figure 5.4 Expert annotation interface
Figure 5.5 Validation interface for lay user annotation
Figure 5.6 Eigenvalue plots for the four facets
Figure 5.7 Vibration distribution across the sensation dimensions
Figure 5.8 Vibration distribution across the emotion dimensions
Figure 5.9 Vibration distribution across the metaphor dimensions
Figure 5.10 Vibration distribution across the usage dimension
Figure 5.11 Co-occurrence of the sensation and emotion tags
Figure 5.12 Tag disagreement scores in the four facets
Figure 5.13 Disagreement scores for a subset of the vibration library
Figure 6.1 Conceptual sketch of haptic crowdsourcing
Figure 6.2 Source of high-fidelity vibrations and perceptual rating scales
Figure 6.3 Vis_dir visualization, based on VibViz
Figure 6.4 Visualization design process
Figure 6.5 Vis_emph visualization guide
Figure 6.6 Vibrations visualized as both Vis_dir and Vis_emph
Figure 6.7 Example of LofiVib proxy design
Figure 6.8 Confidence intervals and equivalence test results for Study 1
Figure 6.9 Rating distributions from Study 1
Figure 6.10 Confidence intervals and equivalence test results for Study 2
Figure 7.1 Conceptual sketch of an emotion tuning control
Figure 7.2 Mapping emotion controls to vibration engineering parameters
Figure 7.3 Use cases for tuning vibrations' characteristics
Figure 7.4 Ten basis vibrations for the tuning mechanism
Figure 7.5 Functional implementation of the vibration engineering parameters in the tuning studies
Figure 7.6 An example of vibration derivatives in the tuning studies
Figure 7.7 Experimental setup for the tuning studies
Figure 7.8 Boxplot of agitation, liveliness, and strangeness ratings
Figure 7.9 Average emotion ratings across the 10 basis vibrations
Figure 7.10 Instagram for vibrations
Figure 7.11 Emotion toolbox
Figure 7.12 Haptic palette generator
Figure 8.1 Summary of our contributions to personalization mechanisms
Figure 8.2 Summary of our contributions to affective haptics
Figure 8.3 Summary of our contributions to large scale evaluation
Figure A.1 Spatial configuration of sensation_f tags
Figure A.2 Spatial configuration of emotion_f tags
Figure A.3 Spatial configurations of metaphor_f tags
Figure A.4 Spatial configurations of usage_f tags
Figure A.5 Vibration disagreement scores - part A
Figure A.6 Vibration disagreement scores - part B
Figure A.7 Co-occurrence of sensation_f and emotion_f tags
Figure A.8 Co-occurrence of sensation_f and metaphor_f tags
Figure A.9 Co-occurrence of sensation_f and usage_f tags

Acknowledgments

First and foremost, I would like to thank my PhD supervisor, Dr. Karon E. MacLean, for all the research and career mentorship I received during these years, and for her support and compassion during the most difficult times in my PhD studies.

I am grateful to my supervisory committee, Dr. Tamara Munzner and Dr. James T. Enns. I thank Dr. Munzner for her great information visualization course, which shaped part of this thesis, and for her constructive feedback on this research. I thank Dr. Enns for agreeing to be part of my PhD and M.Sc. studies and for his thought-provoking questions and feedback in our meetings. My decision to pursue a PhD was in part the result of working with him in my M.Sc. studies. I am also grateful to Drs. Seungmoon Choi, Machiel Van der Loos, and Ronald A. Rensink for agreeing to be on my examination committee and for their comments on this thesis.

I would like to give special thanks to my student collaborators. In particular, Dr. Oliver Schneider, for being a wonderful research buddy and for all our productive discussions and teamwork. Also, special thanks to Kailun Zhang, Matthew Chun, Salma Kashani, Chamila Anthonypillai, and Dilorom Pardaeva for all their contributions to this work. These collaborations enabled me to aim higher and achieve more than what would be possible for me as an individual.

My special gratitude and friendship go to Dr. Mona Haraty for our discussions, and for the countless times I have benefited from her experience and advice in my studies.

I thank the faculty members and the students of the MUX and SPIN research groups for all their great feedback on my research, paper drafts, and practice talks. I have learned a lot by being a part of the MUX team. In particular, I thank the faculty members, Drs. Joanna McGrenere, Tamara Munzner, and Kellogg Booth, and the MUX students, Kamyar Ardakani, Paul Bucci, Laura Cang, Matthew Chun, Jessica Dawson, Francisco Escalona, Anna Flagg, Dr. Brian Gleeson, Dr. Mona Haraty, Izabelle Janzen, Dr. Idin Karoui, Salma Kashani, Soheil Kianzad, Juliette Link, Dr. Syavash Nobarany, Antoine Ponsard, Yasaman Sefidgar, Dr. Oliver Schneider, Andrew Strang, Diane Tam, Dilan Ustek, and Kailun Zhang.

I am grateful for having many wonderful friends who made these years a memorable experience. In particular, I thank Saeedeh Ebrahimi Takalloo, Reza Shahidi-Nejad, Dr. Maryam Saberi, Marjan Alavi, Mahsa Khalili, Neda Tayebati, and Dr. Pooyan Fazli for many joyful memories, and their emotional support in this journey.
I thank UBC's Four Year Doctoral Fellowship (4YF) program and the Natural Sciences and Engineering Research Council of Canada (NSERC) for providing the funding for this work.

Lastly, I have immense appreciation and love for my parents, Dr. Mansoureh Tadayoni and Dr. Yahya Seifi, and my siblings, Dr. Mozhdeh Seifi and Soroush Seifi, for their emotional support and for being my best mentors and friends even though we are apart.

Dedication

To my parents, Mansoureh and Yahya.

Chapter 1
Introduction

1.1 Motivation

With today's early state of haptic technology and of consumer exposure to its potential, personalization of haptic experiences may seem premature: few have experienced haptics beyond the binary on/off buzz delivered by a phone or watch. However, far greater possibility is waiting in the wings, with the haptics industry projected to expand dramatically in the coming years [178] and industry practitioners seeking guidelines for how to design rich, expressive sensations [104].

In fact, a primary motivation for research in haptic personalization is that, first, broad uptake of the haptic modality is unlikely without personalization, because of major differences in how individuals perceive, prefer, and (very likely) will ultimately utilize it. Secondly, supporting personalization is not straightforward because so little is understood of how people cognitively interpret and remember haptic sensations. Beginning to address this causality dilemma is our present purpose.

Leveraging Haptic Utility

Haptic signals can convey rich information. Although most people's everyday exposure to haptics is limited to simple binary buzzes from their cellphones, studies show that rich sensations and high utility are possible [16, 20, 77, 97]. Haptic signals can serve purely functional and informational purposes (e.g., facilitate time tracking [164], provide navigation information and guidance [16, 86, 134], support remote collaboration [20]) or enhance the realism and aesthetic experience of entertainment media (e.g., multimodal interfaces [97, 98], games [6], and storytelling [77]).

However, the utility of haptic signals depends on their match to users' cognitive schemas. Although people can learn arbitrary meaning-mapping schemes [35, 158], signals that "make sense" are easier to learn and memorize, and have higher aesthetic appeal [48, 77]. In everyday physically and cognitively demanding scenarios (e.g., presentation, meeting, exercising), these characteristics either drive wide adoption of haptics or constrain their use to a niche group of people.

Unfortunately, designing intuitive haptic signals is a challenge [139, 140]. Due to hardware limitations, a large portion of the design space is not aesthetically appealing, and many points in this space are perceptually similar. Further, despite ongoing research efforts, limited guidelines are available on affective and intuitive design. Designing intuitive signals remains an art, requiring extensive design experience as well as constant evaluation and refinement. Individual differences in experiencing haptics amplify the problem.
Decades of research suggest that people differ on several levels, from tactile acuity and receptors to tactile information processing and memory, as well as preference for and description of sensations [26, 68, 98, 100, 128].

To have effective signals despite the above challenges, individuals must be able to improve personal salience by altering available designs aimed at an average user. While adjusting signal strength can address differences in tactile acuity, tweaking can go beyond that to adjust information density, signal-meaning assignment, and aesthetic qualities of the signals.

To achieve these, personalization tools must be simple and efficient. Tools that are difficult to use may seem powerful, but they lose out in users' cost-benefit trade-offs and fall into disuse. In contrast, there are many examples of well-designed tools for self-expression finding a large audience. According to the personalization literature in other domains, take-up improves with a sense of control and identity, frequent usage, and ease of use and comprehension in personalization tools, and is hindered by difficult personalization processes [10, 101, 105, 118, 120]. Color and photo editing tools are good examples where wide suites of tools available for selection and editing (e.g., color swatches and gamuts, preset photo filters and sliders) have led to wide adoption by end-users.

In haptics, however, a large knowledge and motivation gap divides haptic professionals and lay users. Existing design and authoring tools support the former group by providing control over low-level engineering parameters. For wide adoption by novice users, haptic personalization tools must be far easier to use, and this entails operating in users' perceptual and cognitive space (Figure 1.1). We anticipate that such improvements will be valuable to haptic professionals as well: despite having the knowledge to derive haptic sensations by controlling indirect parameters, having perceptually salient "knobs" to turn will add creativity and efficiency to their process.

Figure 1.1: A large gap exists between experts and lay users in thinking about and describing haptic sensations. Experts think in terms of engineering parameters, whereas lay users describe the sensations according to their sensory and affective connotations.

Informing Haptic Design and Evaluation

Last but not least, research on personalization can inform haptic design practices and tools. Developing simple yet effective personalization tools requires a deep understanding of common patterns in users' perception, which in turn enables effective and rich vibration design for a large audience. Further, simple and efficient authoring tools are useful for design; they enable rapid sketches and refinements, and facilitate the creative design process. The tools and guidelines we developed in this thesis are motivated by and contribute to both the design and personalization domains. Finally, designing tools for a diverse audience requires haptic evaluation at a large scale. This requirement, in turn, highlights gaps in tactile evaluation methodology that are not faced in typical small-scale lab-based studies. Solutions devised for those gaps expand the suite of haptic evaluation methodologies available to designers and point to future directions for research and development.

Thus, this thesis has three main research themes. The goal of this thesis is to support haptic personalization (1).
In doing so, we also investigate common patterns in users' perception of haptic signals (2), and devise methodologies for large-scale evaluation of haptics (3). We focus on vibrotactile stimuli as the most mature, ubiquitous, and accessible type of haptic feedback for end-users. Technological advances and research on psychophysical attributes, design tools, and applications for vibrotactile stimuli enable investigation of affective qualities for these sensations.

In the following, we first outline past progress in the above theme areas (Section 1.2), then summarize the components of this thesis with a chronological lens (Section 1.3). Finally, we present our high-level contributions to each of the themes (thematic view in Section 1.4) and highlight the links between the chronological structure (i.e., thesis chapters) and contributions in Table 1.1.

1.2 Situating Our Work

Here, we present a brief overview of the related literature on the three main themes of this thesis. Focused related work sections in the following chapters build upon this first pass, each emphasizing literature pertinent to its research question(s).

1.2.1 Supporting Personalization

There has been substantial personalization research in other modality and application domains, providing insights on effective mechanisms, in contrast to minimal efforts to date for haptic experience personalization.

Personalization mechanisms in other domains - Henderson and Kyng described three approaches for changing the behavior of a software tool: 1) choosing between pre-defined behaviors, 2) constructing new behaviors from existing pieces, and 3) altering an artifact through modifying the source code [60]. These approaches vary in the background knowledge and time investment required of users. In the first approach, a settings panel allows users to choose between existing configurations and add/remove toolboxes and features from the interface [130]. In the second approach, the interface provides users with a set of building blocks that they can combine for new behaviors [45, 53]. The last approach requires end-user development and programming and is typically facilitated by visual programming languages or lightweight scripting [99].

Existing commercial interfaces deploy and expand upon the above mechanisms. A suite of tools exists for choosing and adjusting colors, including pre-designed palettes, color pickers, and color gamuts for choosing from a set, as well as sliders to change RGB values, brightness, hue, etc. In the photo editing domain, one can make detailed modifications (e.g., crop, select, move or rotate a region, adjust color for an individual pixel or groups of pixels) or apply overall effects to a picture. Instagram [39], Adobe Lightroom [38], and Adobe Photoshop [2] include a suite of tools and sliders for these manipulations. Similarly, in games and virtual worlds, users can modify features of a single character (e.g., appearance, power, etc.), or configure components of an environment by choosing from sets of alternatives or adjusting sliders [30, 92]. These instances highlight the prevalence of pre-designed collections and simple tuning mechanisms for personalization in other domains.

Haptic personalization - In comparison, there exists very little support for personalization in haptics. iOS 5.0 and later versions offer users a short list of (fewer than 10) vibration patterns to choose from. In addition, users can create a custom vibration by tapping a pattern on the interface [176].
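To make concrete how low the interaction cost of this kind of end-user "design" can be, the sketch below shows one way a tapped rhythm could be converted into a replayable on/off vibration pattern. It is our own illustration, not Apple's implementation; the function name, parameters, and quantization step are hypothetical.

```python
# Illustrative sketch: turn tap press/release timestamps (in seconds) into a
# list of (on_ms, off_ms) pairs that a vibration API could replay.
from typing import List, Tuple

def taps_to_pattern(press_times: List[float],
                    release_times: List[float],
                    resolution_ms: int = 10) -> List[Tuple[int, int]]:
    """Quantize tapped press/release times into on/off durations (ms)."""
    pattern = []
    for i, (press, release) in enumerate(zip(press_times, release_times)):
        on_ms = round((release - press) * 1000 / resolution_ms) * resolution_ms
        if i + 1 < len(press_times):
            gap_s = press_times[i + 1] - release
            off_ms = round(gap_s * 1000 / resolution_ms) * resolution_ms
        else:
            off_ms = 0  # no trailing silence after the last tap
        pattern.append((max(on_ms, resolution_ms), off_ms))
    return pattern

# Three quick taps followed by one long press:
print(taps_to_pattern([0.00, 0.25, 0.50, 1.00], [0.10, 0.35, 0.60, 1.80]))
# -> [(100, 150), (100, 150), (100, 400), (800, 0)]
```

Even this minimal mechanism gives users some expressive control over rhythm, one of the most salient vibration design parameters, without exposing any other engineering detail.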
Besides these, two haptic collections were introduced in the last few years, offering a wide range of pre-designed sensations, each with a unique interface and organization schema.

Pre-designed haptic collections and their structure - In March 2011, Immersion Inc., a multinational company specializing in haptic technology, released a library composed of 120 vibrotactile effects and an API for accessing them [72]. Two Android applications showcase Immersion's vibration library and API to users. The first application, released in 2011, provides a list view of the effects, grouped based on their functionality or signal content (e.g., vibrations with "two clicks" are grouped together) [72]. "Haptic Muse" was the second application, temporarily released in 2013, to showcase usage examples of the vibrations in the context of simple multimodal game scenes [71]. In 2014, Disney Research introduced their FeelEffects library, which is composed of 54 sensations grouped into six families of metaphors (e.g., rain, explosion) [77]. Vibrations in each family can be accessed through a set of presets (e.g., heavy rain, downpour, sprinkle) and sliders (e.g., drop strength, size, frequency). FeelMessenger is an instant messaging application prototype, based on the FeelEffects library, that allows users to accompany their text messages with customized vibration sensations [76].

To fill the large personalization gap in haptics, a first step is to develop effective mechanisms and tools for haptic personalization, which can in turn enable future research in the area.

Adaptive approaches - A closely related topic is research on adaptive interfaces, which can automatically adjust their functionality and/or content or provide recommendations based on users' preferences, interaction history, or state (e.g., location, activity, etc.) [44, 46, 79]. While adaptive interfaces eliminate the personalization effort for users, research suggests that users prefer easy-to-use personalizable systems and perceive themselves to perform better with them [44]. Further, improper automatic adaptation can, in fact, lower users' performance and increase their cognitive load compared to using a static one-size-fits-all interface [43, 110]. In haptics, limited understanding of users' preferences and of suitable adaptation targets for different individuals makes proper adaptation particularly challenging. Thus, haptic personalization research takes precedence over adaptive approaches. Our work informs future efforts on adaptive haptic systems by characterizing users' cognitive and affective schemas for haptics (Chapters 4 and 5).

1.2.2 Understanding Common Patterns and Individual Differences

The haptics community has established foundations of haptic design. Past studies have outlined psychophysical properties of vibrations (e.g., just-noticeable difference and detection thresholds for different body locations) [64, 83, 87, 129, 157], identified a set of design parameters (e.g., rhythm, energy, envelope) [15, 63, 102, 166, 174], and provided guidelines for designing a set of perceptually distinct vibration sensations [102, 103]. However, few guidelines exist on translating high-level design descriptions (e.g., intended emotions, metaphors, or usage examples) to the sensory or engineering parameters available in the authoring tools.
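To ground what "engineering parameters" means here, the hedged sketch below renders a vibrotactile waveform from a handful of such parameters: carrier frequency, amplitude, an on/off rhythm, and an attack/decay envelope. It is our own illustration of the kind of low-level controls that current authoring tools expose, not code from any specific tool, and the parameter names and defaults are hypothetical.

```python
# Illustrative only: a vibration "design" expressed as low-level engineering
# parameters, rendered to a waveform that could drive a voice-coil actuator.
import numpy as np

def render_vibration(freq_hz: float = 250.0,            # carrier frequency
                     amplitude: float = 0.8,             # 0..1 drive level
                     rhythm_ms=((120, 80), (240, 0)),    # (on, off) pairs
                     ramp_ms: float = 20.0,              # attack/decay length
                     sample_rate: int = 8000) -> np.ndarray:
    """Return a mono waveform for a rhythmic sequence of sine bursts."""
    segments = []
    for on_ms, off_ms in rhythm_ms:
        n_on = int(sample_rate * on_ms / 1000)
        t = np.arange(n_on) / sample_rate
        burst = amplitude * np.sin(2 * np.pi * freq_hz * t)
        # Linear attack/decay ramps soften the onset and offset of each burst.
        n_ramp = min(int(sample_rate * ramp_ms / 1000), n_on // 2)
        if n_ramp > 0:
            ramp = np.linspace(0.0, 1.0, n_ramp)
            burst[:n_ramp] *= ramp
            burst[-n_ramp:] *= ramp[::-1]
        segments.append(burst)
        segments.append(np.zeros(int(sample_rate * off_ms / 1000)))
    return np.concatenate(segments)

wave = render_vibration()
print(wave.shape)  # (3520,) samples: 120 ms on, 80 ms off, 240 ms on
```

The gap this thesis targets is precisely the absence of a principled path from a high-level description such as "a calm, rain-like notification" to settings of parameters like these.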
Here, we outline efforts on devising affective guidelines and categorize various instances of individual differences reported in the literature.

Devising guidelines for affective design - Previous studies in this area have simplified the question to characterizing the link between the engineering parameters of vibrations (e.g., frequency) and the two emotion attributes of pleasantness and arousal [91, 139, 184, 186]. Vibrations with longer duration, higher energy, roughness, or envelope frequency are perceived as less pleasant and more urgent [139, 184]. A sine waveform is perceived as smoother than a square waveform, and ramped signals feel pleasant [123, 139]. Few or no guidelines exist on designing for other emotion or qualitative attributes. Further, little is known about users' cognitive schemas for vibrations, the range of qualitative and affective attributes perceived for the signals, and their underlying semantic structures.

Characterizing users' language - Users' descriptions of haptic sensations provide a window onto the signals' affective attributes. Recent studies in this domain suggest that people use a mixed language for describing haptic sensations [28, 52, 119, 139]. Sensory and emotion attributes are used most often; Guest et al. collected a dictionary of sensory and emotion words for tactile sensations and proposed comfort and arousal as the underlying dimensions for the tactile emotion words. For tactile sensation words, the results of their MDS analysis suggested rough/smooth, cold/warm, and wet/dry orthogonal dimensions [52]. Others reported using metaphors (e.g., boat, car), usage examples (e.g., warning, stop), engineering attributes (e.g., high frequency), or vocalizations (e.g., beooo, dadada, Zzzz) for describing vibrations [28, 119, 139, 175].

We developed these into haptic facets (categories of attributes related to one aspect of an item) that can encapsulate users' sense-making schemas for vibrations (Chapters 4 and 5) and thus offer an effective theoretical grounding for personalization tools.

Characterizing individual differences - Besides generalizable guidelines, designing for a large audience requires an understanding of the type(s) and extent of variations that exist around an average, aggregated perception. At least three categories of individual differences are reported in the haptic literature:

• Sensing and perception: Sensitivity and signal resolution of mechanoreceptors can vary among individuals, leading to differences in tactile acuity, threshold, and difference detection [100, 156, 157]. These differences are more pronounced for subtle sensations such as programmable friction and can impact the perceptual space of sensations. In an early study of natural textures, Hollins et al. reported a 2D perceptual space for some participants vs. a 3D space for others [68]. Individual differences in this category are commonly investigated with psychophysical studies and avoid the use of subjective components such as language terms.

• Tactile processing and memory: People vary in their ability to process and learn tactile stimuli [26, 36, 47, 98]. As an example, an early Indiana University study of the Optacon, a tactile reading device for blind individuals, suggested two groups of "learners" and "non-learners" in a spatio-temporal tactile matching task [26]. In a longitudinal study of tactile icon learning, participants had different learning trajectories over time [158]. Similarly, recent studies with a variable friction interface show notable differences in a set of tactile tasks [98].
Others suggest that people vary in the extent to which they rely on touch for information gathering or hedonic purposes [128]. Haptic processing abilities can improve with practice; visually impaired individuals develop exceptional tactile processing abilities regardless of their degree of childhood vision [49].

• Meaning mapping and preference: People commonly need to map abstract haptic signals to a meaning. In the absence of shared cultural connotations for haptics, mapping meaning to abstract haptic signals relies on personal experiences and sense-making schemas. Differences in how people describe and prefer haptic stimuli, reported in the literature, suggest individualized schemas for meaning mapping [4, 98, 139].

The last category has been studied less than the other two in the literature, contributing to the challenge of designing meaningful and aesthetic haptic icons. In this thesis, we contribute to the last category by reporting on the variations observed for the above meaning-mapping facets.

1.2.3 Evaluating at Scale

Developing generalizable themes and design guidelines is hard, if not impossible, with small-scale studies. In contrast, much more can be learned by collecting data on a wide range of sensations from a large and heterogeneous group of users. Despite ongoing progress in haptic evaluation methodologies and metrics, there is little literature on supporting tactile evaluation at scale. Past researchers have adopted or revised existing methodology in the haptic and other domains to fill this gap. Here, we focus on studies of large sets and large participant pools.

Collecting data for a large set - Studies of large sets (>40 items) are rare in the haptic literature, partially due to the lack of an effective data collection methodology. When studying large sets, feedback is commonly limited to a few ratings, and/or items are divided into smaller subsets evaluated in different sessions [155, 166, 172]. In particular, Ternes et al. devised a methodology for collecting extensive multidimensional scaling (MDS) data for a large set (84 items), and established a mathematically sound procedure for merging the results together [165, 166]. We expand on these ideas in our proposed evaluation procedure.

Crowdsourcing - In other domains, user perception is collected through online platforms such as Amazon's Mechanical Turk (MTurk) [5]. Initial studies in these domains established the validity of the data collected and best practices with MTurk [59, 89, 106], enabling a wide range of studies to collect data in a fraction of the time and cost of lab-based studies [22, 152, 179]. Haptic studies, however, are left out due to the need for specialized hardware, not available to "crowds". To utilize the MTurk platform, we need a workaround for existing hardware limitations as well as studies validating data collected with remote platforms.

In this thesis, we propose efficient methodologies for collecting data for a large haptic set in both lab-based (Chapter 5) and remote settings (Chapter 6).

1.3 Approach - The Chronological View

Here, we describe the components of this thesis in chronological order, with each chapter motivating and contributing to the next.
In Section 1.4, we list the thesis contributions and link them to the work reported in individual chapters (most of which are published papers) in Table 1.1.

Chapter 2 - Linking Emotion Attributes to Engineering Parameters and Individual Differences

Figure 1.2: Conceptual sketch of individual differences in affective perception of vibrations

The first step of this research was motivated by our interest in affective design and further confirmation of the gap by the literature and industry. In a review of the haptic literature, we noted few studies on affective attributes of synthetic haptic stimuli and several reports of individual differences in haptic perception and affect. At the same time, Vivitouch (a subsidiary of Artificial Muscles Inc.) contacted our lab with an interest in designing aesthetically pleasing vibrations. Together, these shaped our first research question: What parameters contribute to affective perception of vibrations?

To address this, we investigated the impact of vibrations' engineering properties (specifically rhythm and frequency) on affective perception of the signals. Further, we tested whether individuals' characteristics (e.g., demographics, tactile performance) could account for differences in their perception. Results from our lab-based study showed a significant impact of engineering parameters on ratings of energy, roughness, rhythm, urgency, and pleasantness, but no link to individuals' characteristics. Further, we noted that individual differences in haptics are nuanced and cannot be easily modelled or prescribed for in design.

Chapter 3 - Characterizing Personalization Mechanisms

Figure 1.3: Conceptual sketch of three personalization mechanisms for haptic sensations

To support affective design given individual differences, we proposed a pragmatic approach: enabling people untrained in haptics to personalize their everyday haptic signals (e.g., notifications) to their taste and utilitarian needs. Thus, we asked: What characteristics will make a vibration personalization tool usable?

Based on a review of existing tools in haptics and other domains, we proposed five design parameters for haptic personalization tools and varied these parameters within low-fidelity prototypes of three mechanisms: a) choosing (called "choice" in the original conference manuscript): users select from a list of pre-designed vibrations; b) tuning (called "filter" in the original manuscript): users adjust high-level characteristics of a vibration by changing the value of a control; and c) chaining (called "block" in the original manuscript): users combine short pre-designed tactile building blocks (e.g., by sequencing them) to create a new vibration sensation.

Results from a Wizard of Oz (WoZ) study with paper prototypes of the tools suggested tuning to be the most preferred approach for being "fast" and "effective", and for providing a sense of "control". Chaining was "fun" but required "time" and "a good mood", and thus was less practical for everyday scenarios. Finally, choosing was the least preferred for its limited "control" but was rated as the easiest to use. Based on the results from this study, we focused on further developing choosing and tuning as the most practical mechanisms for personalization tools.

Chapter 4 - Choosing From a Large Library Using Facets

We conjectured that the low preference ratings for the choosing approach were due to the limited set of vibration options, i.e., limited control and choice.
Thus, we focused on providing a wide range of vibration sensations to satisfy various tastes and needs, and on facilitating simple and efficient access to the library.
People unconsciously use a multiplicity of cognitive schemas to make sense of and describe qualitative and aesthetic attributes of vibrations [119, 139]. Facets and faceted browsing, from the information retrieval and library sciences literature, can encapsulate these multiple schemas. A facet includes all properties or labels related to one aspect of or perspective on an item and offers a categorization mechanism.
1 called "choice" in the original conference manuscript
2 called "filter" in the original conference manuscript
3 called "block" in the original conference manuscript
Figure 1.4: Conceptual sketch of the choosing approach with VibViz
We compiled five haptic facets4 based on the literature and the expertise in our research group: 1) physical attributes of vibrations that can be objectively measured, such as duration, rhythm structure, etc.; 2) sensory properties such as roughness; 3) emotional connotations; 4) metaphors that relate the vibration's feel to familiar examples; and 5) usage examples or events where a vibration fits (e.g., speed up). In parallel, we designed a library of 120 vibrations with a wide range of characteristics, and developed VibViz, an interactive visualization interface that provides multiple pathways to navigate the library through the above facets.
Results from a lab-based study confirmed the utility of VibViz for searching and exploring our library. The majority of participants used and preferred the emotion view/facet the most, but we found an interesting variation, with some preferring the other facets (e.g., usage example), and several asking for access to multiple facets.

Chapter 5 - Deriving semantics and interlinkages of facets
Having confirmed the facets' utility for end-users, we further investigated haptic facets to go beyond a flat list of attributes and understand their underlying semantic structures as well as the linkages between different facets.
First, we collected annotations (ratings and tags) for the 120 vibrations in a two-stage methodology, where data from both haptic experts and lay users were combined into a final validated dataset. Next, we analyzed the annotations for their underlying semantic structure(s) and interlinkages. Specifically, we applied Multidimensional Scaling (MDS) analysis to our validated dataset, resulting in 4 sensory, 3 emotion, 2 metaphor, and 1 usage example dimension(s).
4 called "taxonomies" in our original conference publication
Figure 1.5: Conceptual sketch of the five vibration facets and their underlying semantic dimensions and linkages
Further, we investigated the linkages between the dimensions in different facets using factor analysis, as well as linkages between the tags based on their co-occurrence rate in our dataset. We also reported variations, representing individual differences, in the ratings and tags for the four facets. Finally, we discussed how these results can inform three common scenarios in design and personalization of affective haptic sensations. Our dataset, source vibrations, and proposed facet dimensions were publicly released for future investigations.

Chapter 6 - Crowdsourcing haptic data collection
Figure 1.6: Conceptual sketch of crowdsourcing data collection for high fidelity vibrations
Our two-stage data collection methodology allowed us to collect rich information for a large library.
However, it still required considerable time and effort, as well as access to haptic experts. We could collect data from a large and diverse group of users at a fraction of the time and cost if we had access to crowdsourcing platforms such as Amazon Mechanical Turk. Unfortunately, haptic studies rely on specialized hardware and thus cannot be crowdsourced.
In this project, we investigated the feasibility of crowdsourcing haptic data collection using vibration proxies. A proxy is a sensation that communicates key characteristics of a source vibration within a bounded error. We asked: Can proxy modalities effectively communicate both engineering properties (e.g., duration) and high-level affective properties (roughness, pleasantness)? Can they be deployed remotely?
To address these questions, we developed two visual proxies and a low-fidelity vibration proxy and examined them in a local lab-based as well as an online MTurk study. Results suggested that proxies are a viable approach for crowdsourcing haptics and highlighted promising directions and challenges for future work.

Chapter 7 - Tuning vibrations with emotion controls
Figure 1.7: Conceptual sketch of an emotion tuning control and its mapping to engineering attributes of vibrations
Among our three personalization mechanisms, users preferred the tuning mechanism the most for its "ease of use" and "sense of control" (Chapter 3). Thus, in this chapter, we investigated the feasibility of designing emotion controls that allow tuning (i.e., moving) vibrations in a facet space. We chose agitation, liveliness, and strangeness, the three underlying dimensions of the emotion facet (Chapter 5), as our targets for emotion controls and asked: Can we find a continuous mapping between a vibration's specific emotion property (e.g., liveliness) and its engineering parameters that applies to a diverse set of vibration patterns?
Results from two user studies, where participants rated vibration alternatives relative to the corresponding unaltered base vibrations, suggested the existence of a mapping between emotion and engineering attributes for a wide range of base vibrations. We show, based on these results, that emotion controls are automatable and discuss three example interfaces enabled by them.

1.4 Contributions
We started by looking at individual differences and factors that contribute to affective perception of vibrotactile stimuli, and that led us to the central goal of this thesis: enabling personalization of haptic sensations for end-users. We investigated haptic facets as a theoretical grounding for effective personalization tools and further developed the choosing and tuning personalization tool approaches. Through our studies, we faced challenges and shortcomings in tactile evaluation methodology and devised mechanisms to overcome them.
Our work has four major contributions: the first three pertain to the themes of supporting personalization, understanding common themes and individual differences, and evaluating at a large scale identified in Section 1.1. The last contribution comprises public and open-source tools and datasets resulting from our work.
We outline these contributions here, but elaborate on them in Chapter 8 (Conclusion). Table 1.1 illustrates the interleaved mapping between the chapters and contributions.

Table 1.1: The mapping from contributions to thesis chapters
(Columns: I- Personalization Mechanisms; II- Facets & Individual Differences; III- Evaluation Methodology; IV- Tools & Datasets)
Chapter 2: II- Individual differences in emotion perception
Chapter 3: I- Design space & three mechanisms: choosing, tuning, chaining; IV- Demonstration of choosing, tuning, chaining
Chapter 4: I- Choosing with VibViz; II- Five vibrotactile facets; IV- VibViz interface & source code
Chapter 5: II- Facet dimensions, linkages, & individual differences; III- Two-stage evaluation with experts & lay users; IV- VibViz library & annotation dataset
Chapter 6: III- Crowdsourcing with proxies
Chapter 7: I- Tuning with emotion controls; II- Emotion to engineering mapping; IV- Three example tuning interfaces

I - Effective mechanisms for haptic personalization
We propose a design space for vibrotactile personalization mechanisms and develop the theoretical grounding and prototypes for two distinct mechanisms of choosing and tuning, which we found to be most practical for personalization. Concrete outcomes of our progress are:
• A design space for personalization mechanisms outlined with five parameters (Chapter 3);
• Three distinct mechanisms in the above design space: choosing, tuning, and chaining (Chapter 3);
• Development of the choosing mechanism: an interactive library navigation interface (VibViz) and a first evaluation of its effectiveness (Chapter 4);
• Development of the tuning mechanism: a technical proof-of-concept on the feasibility of emotion controls and three example interfaces that can incorporate such controls (Chapter 7).

II - Haptic facets encapsulating common patterns and variations in affect
Realizing that facets could effectively structure users' cognitive processes for haptics, we compile five facets for vibrations, and characterize their attributes, underlying semantic dimensions, interlinkages, and individual differences.
Our concrete contributions include:
• Five facets that encapsulate people's cognitive schemas for describing and making sense of haptic stimuli (Chapters 4 and 5);
• Empirically derived semantic dimensions of four vibrotactile facets (Chapter 5)5;
• Between-facet linkages at dimensional and individual tag levels, and discussion of their implications for vibrotactile design process and tools (Chapter 5);
• Mapping between emotion and engineering attributes of vibrations (Chapters 2 and 7);
• Quantification and analysis of individual differences in rating and annotating vibrations (Chapters 2 and 5);
• Preliminary findings on the effect of demographics, NeedForTouch (NFT) score, and tactile task performance on individual differences in affective ratings (Chapter 2).
5 One facet is left out of the analysis as it pertains to engineering attributes of vibrations.

III - Methodology for evaluating haptic sensations at a large scale
We contribute to the tactile evaluation methodology for two cases: a) collecting rich feedback for a large stimuli set, and b) accessing crowds efficiently:
• A two-step methodology for annotating large sets of vibrotactile effects, and data on its validity and reliability (Chapter 5);
• A way to crowdsource tactile sensations (vibration proxies), with a technical proof-of-concept (Chapter 6).

IV - Tools and datasets
Our work resulted in three open-source application packages and a public dataset that serve to demonstrate our contributions and support future research and developments in the area:
• Prototypes of the three personalization mechanisms for an Android phone (Chapter 3);
• VibViz (tool): A web-based interactive library navigation interface (Chapter 4);
• VibViz (dataset): Dataset of our 120-item vibration library including the vibrations' source files (.wav), annotations (facet attributes), and characterization according to the facet dimensions (Chapter 5).

Chapter 2
Linking Emotion Attributes to Engineering Parameters and Individual Differences

Figure 2.1: Individual differences in affective perception of vibrations

Preface:1 Here, we made a first attempt at developing guidelines for affective vibration design. Specifically, we investigated if vibrations' ratings of pleasantness and arousal could be linked to their engineering parameters as well as characteristics of individuals providing the ratings (e.g., demographics, tactile memory). Our results suggested a link between emotion and engineering parameters. However, we noted that individual differences in emotion perception are nuanced and cannot be modelled based on user performance or background.
1 The content of this chapter was published as: Seifi and MacLean. (2013) A first look at individuals' affective ratings of vibrations. Proceedings of IEEE World Haptics Conference (WHC '13).

2.1 Overview
Affective response may dominate users' reactions to the synthesized tactile sensations that are proliferating in today's handheld and gaming devices, yet it is largely unmeasured, modelled or characterized. A better understanding of user perception will aid the design of tactile behavior that engages touch, with an experience that satisfies rather than intrudes. We measured 30 subjects' affective response to vibrations varying in rhythm and frequency, then examined how differences in demographic, everyday use of touch, and tactile processing abilities contribute to variations in affective response.
To this end, we developed five affective and sen-sory rating scales and two tactile performance tasks, and also employed a published‘Need for Touch’ (NFT) questionnaire. Subjects’ ratings, aggregated, showed sig-nificant correlations among the five scales and significant effect of the signal con-tent (rhythm and frequency). Ratings varied considerably among subjects, but thisvariation did not coincide with demographic, NFT score, or tactile task perfor-mance. The linkages found among the rating scales confirm this as a promisingapproach. The next step towards a comprehensive picture of individuals’ patternsof affective response to tactile sensations entails pruning, integration, and redun-dancy reduction of these scales, then their formal validation.2.2 IntroductionTouch is an important means of obtaining information about objects, but it is alsohighly connected to our emotions [42]; as a consequence, affective reactions areinfluential in the many small decisions we make about the objects that surroundus. Only a few studies have investigated affective response to touch stimuli of anykind [37, 115, 159, 163]; but affective study of synthetic tactile stimuli such asvibrations or variable friction is even more sparse.While the programmable synthetic stimuli available to interaction designersare currently far less expressive than natural textures, growing attention to surface20interaction in recent years means tactile technology is evolving rapidly. Alreadydesigners need to optimize its affective potential. However, we lack relevant mea-sures and methodology for quantifying tactile affect. A multidimensional pictureof subjects’ opinions will help reveal patterns of preference more effectively thancan a single preference measure.There is also a dearth of data on individualized responses. Affect studies havetypically reported only responses averaged over subjects [37, 186]. There is tanta-lizing evidence that such variances may be substantial: e.g., Levesque et al.’s find-ings for subjects’ preference for different patterns of variable friction [98]. Tactiledesigners must understand this variation’s extent and driving factors.Evidence from the literature and our own early analyses suggest that differ-ences in everyday touch behavior, tactile abilities, and demographics might ex-plain substantial affective response variation. A recently developed scale (‘Needfor Touch’ (NFT)) assesses individual differences in extracting and using hapticinformation for everyday pleasure or utility evaluation [128]). Tactile task perfor-mance, employed as an indicator of tactile memory and processing resources, alsocan vary considerably across subjects [23, 36, 98]; are functional touch ability andhedonic preferences linked?Together, these factors raise questions about the relation of demographics, NFTscores and tactile task performance to variations in affective response. Long-term,we aim to optimize and validate a set of rating scales which reflect relevant dimen-sions of subjective response to tactile sensations; link affective and sensory per-ception of tactile technology parameters (e.g., frequency, amplitude); and assessthe individual differences in affect and perception and parameters that contributeto these differences.Here, we more specifically ask: what are the relevant dimensions for measuringaffective response, and can we integrate multiple rating dimensions? How does thevibration design space impact affective response? 
How is affective response linkedto demographics, NFT scores, and tactile task performance? Below, we discussthese questions in light of our study results.For maximum vibrotactile expressivity, we used a recent electroactive polymer(EAP) display from Vivitouch [8]. We examined 30 subjects’ affective ratings of 1svibrations (e.g., alerts and notifications). The rating scales, tactile stimuli and tasks21were drawn from the literature and refined via pilot studies. The main study usedfive rating scales to examine the effect of the vibration parameters and individualdifferences on the subjective ratings for vibrations. The contributions of this workare:• An initial examination of five proposed affective and sensory dimensions forrating tactile sensations (thorough validation requires further study);• Qualitative and quantitative data on the effect of rhythm pattern and fre-quency on affective and sensory ratings;• Quantitative data on individuals’ variation in time and frequency matchingperformance;• Preliminary findings on the effect of demographic, NFT, and tactile task per-formance on variations in affective ratings.In the following we describe our apparatus, and the design and selection of the vi-brations, tactile tasks and affective and sensory rating scales we used (Section 2.4).We report the main study and its results (Section 2.5), then discuss our findings andoutline future work.2.3 Related Work2.3.1 Affective EvaluationThe touch literature lacks a consistent vocabulary for affective response. Guestet al. recently collated a large list of emotion and sensation words describing tac-tile stimuli [52]; then, based on Multi-Dimensional Scaling (MDS) analysis ofsimilarity ratings, proposed comfort and arousal as underlying dimensions for thetactile emotion words, and rough/smooth, cold/warm, and wet/dry for sensation.We founded our affective rating scales on these words.Study of affective reaction to natural stimuli [37, 115, 159] revealed dependen-cies on many factors, such as materials and body sites, preventing generalizations[37]. Swindells et al. obtained valence and arousal response to touching variousnatural materials. Comparing self report ratings and physiological recordings from22subjects’ bodies (EMG and skin conductance), they found self report more sensi-tive in discriminating the subtle affective variations to these stimuli [159]. Oth-ers have examined affective reaction to synthetic stimuli in a variety of contexts[98, 186]. Most relevantly, Takahashi et al. studied feelings of pleasantness and an-imacy for low frequency vibrations (0.5 to 50 Hz) applied to finger tips and wristsof six subjects [163]. They found a significant effect of frequency on animacy butno effect on pleasantness. They also found an inverted-U relation between ratingsof pleasantness and animacy. Swindells et al. studied the link between the utility ofvarious haptic feels and subjects’ preference for the feel, in the context of a Fitts’law targeting task and without it. In some cases, subjects preferred the feedbackproviding inferior task utility [159]. In contrast, here we examine the relation ofaffective ratings to human tactile abilities rather than feedback utility.2.3.2 Vibrotactile StimuliPast studies have examined the impact of several parameters on information trans-fer, salience, and learnability of vibrotactile icons; these include frequency, rhythm,waveform, and texture [65, 165]. 
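As a concrete illustration of how such parameters combine, the short sketch below renders a rhythm pattern at a chosen carrier frequency as a 1 s audio-style waveform of the kind a sound-driven vibrotactile actuator accepts (Sections 2.4.1-2.4.2). The function and parameter names are ours and the code is only an illustration; it is not the stimulus-generation software used in this study.

# Illustrative sketch (not the study's code): combining a rhythm pattern and a
# carrier frequency into a 1 s waveform for a vibrotactile actuator driven by
# an audio signal.
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def render_vibration(rhythm, carrier_hz, duration_s=1.0, amplitude=1.0):
    """rhythm: list of 0/1 flags, one per equal-length time slot (1 = vibration on)."""
    n_samples = int(duration_s * SAMPLE_RATE)
    t = np.arange(n_samples) / SAMPLE_RATE
    carrier = amplitude * np.sin(2 * np.pi * carrier_hz * t)
    envelope = np.zeros(n_samples)
    slot_len = n_samples // len(rhythm)
    for i, on in enumerate(rhythm):
        if on:
            envelope[i * slot_len:(i + 1) * slot_len] = 1.0
    return carrier * envelope

# Example: one hypothetical 8-slot pattern rendered at two base frequencies.
pattern = [1, 1, 1, 0, 1, 1, 1, 1]
low = render_vibration(pattern, carrier_hz=75)
high = render_vibration(pattern, carrier_hz=175)

Such a waveform could then be written to a .wav file or streamed to the actuator; waveform shape and texture would require additional terms in the synthesis.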
These parameters are also promising candidatesto evaluate in terms of their affective properties.2.3.3 Tactile TasksBoth sensory acuity and tactile processing resources, such as tactile working mem-ory, contribute to a person’s tactile abilities. Examination of tactile acuity fordifferent demographics and for various body locations has shown that acuity islower in sighted individuals and declines in old age [94]. However, acuity and JustNoticeable Difference (JND) studies did not report major individual differences[54, 94]. On the other hand, tactile individual differences were reported in somestudies involving remembering or processing of tactile stimuli [23, 36, 98]. Thus,we focused here on the tasks involving tactile working memory.Most short-term or working memory evaluation has focused on visual (iconicmemory) and auditory (echoic) stimuli. A few studies have investigated timeand capacity constraints of haptic working memory using tasks such as delayedmatching-to-sample task or n-back task (see [85] for a review). These report 5-10s23of sensory memory, which is consistent with our observations.2.3.4 Individual Differences in Tactile Task PerformanceConsiderable individual differences in tactile tasks have been reported in the liter-ature [23, 36, 68, 98]. An early study on vibrotactile pattern recognition with theOptacon [23] found four distinct groups based on subjects’ performance in threetactile tasks and their overall pattern of learning. The grouping remained consistentacross the tasks and two participant pools. Another study reported two groups oflearners and non-learners in a spatio-temporal pattern matching tactile task [36].Non-learners showed little improvement over four task sessions (400 trials), whilelearners had better initial performance and improved. Another study with variablefriction feedback showed considerable individual differences in task performanceand found various preferences for different friction patterns [98]. Finally, there isevidence of individual differences in texture perception [68]. An MDS analysison a texture similarity rating task suggested a three-dimensional space for someparticipants, two-dimensional for others.In everyday life, people vary in the extent that they seek information throughtouch or use it for sensory pleasure [128]. ‘Need for Touch’ (NFT) is a 12-itemquestionnaire developed for consumer research that measures these differences ondimensions of pleasure (Autotelic) and information (Instrumental) touch [128]. Anexample Autotelic item on the questionnaire is “Touching products can be fun”,whereas, “I place more trust in products that can be touched before purchase” is anInstrumental item. 
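To make the instrument's output concrete, the toy sketch below sums item responses into Autotelic, Instrumental, and total NFT scores and median-splits respondents into high and low NFT groups (the split used later in Section 2.5.2). The response values and the assumption that the first six items form the Autotelic subscale are ours for illustration; the published scoring procedure in [128] is authoritative.

# Toy sketch (assumed item coding): scoring a 12-item NFT questionnaire and
# median-splitting respondents into high/low NFT groups.
import statistics

# Hypothetical responses: 12 numeric item scores per subject; we assume the first
# six items form the Autotelic subscale and the last six the Instrumental one.
responses = {
    "s01": [3, 2, 3, 1, 2, 3, -1, 0, 2, 1, 0, 2],
    "s02": [-2, -1, 0, -3, -2, -1, -2, -3, -1, 0, -2, -1],
    "s03": [1, 0, 2, 1, 1, 0, 1, 2, 0, 1, 1, 0],
}

def nft_scores(items):
    autotelic = sum(items[:6])
    instrumental = sum(items[6:])
    return autotelic, instrumental, autotelic + instrumental

totals = {sid: nft_scores(items)[2] for sid, items in responses.items()}
cutoff = statistics.median(totals.values())
groups = {sid: ("high" if total > cutoff else "low") for sid, total in totals.items()}
print(totals, groups)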
NFT is based on motivational differences among individuals inusing touch, whereas scores on a tactile task show tactile ability differences amongindividuals.Later studies have shown that higher NFT individuals have greater memoryaccess to haptic information, seek and use it more for forming judgments [128].These NFT studies used a relatively large number of subjects (60-100); our 30-subject exploratory trial provided less power than it required, but we included theNFT questionnaire to get an estimate of its effect size and to determine its utilityfor future research.24Figure 2.2: Actuator(a), and prototype and setup (b) for the study2.4 Design of Setup and Assessment ToolsIn this section, we describe our apparatus and the vibrations, tactile tasks and ratingscales used in our main experiment.2.4.1 ApparatusWe used an EAP vibrotactile actuator from Vivitouch, a subsidiary of ArtificialMuscles Inc. [8]. The module translates an input audio waveform to a tactileoutput, with an effective range of 20 Hz-200 Hz. Biggs et al. empirically modeledthe actuator performance and the resulting fingerpad and palmar sensations [9]),estimating a palmar stimulation of approximately 22 dB for 75 Hz and 175 Hz, and29 dB at 125 Hz, with a peak of 32 dB at 100 Hz. For our prototype (Figure 2.2), wesandwiched the actuator between two thin rectangular plastic plates, each 0.5mm×12.5cm×6cm; and encased the assembly in a protective case with same size, shapeand markings of a smartphone. The prototype’s total mass was 64 grams.2.4.2 Stimuli DesignFocusing on vibratory stimuli, we wanted to know which parameters could mostimpact subjective response and to choose a relevant range. In pilots, subjectsshowed some patterns of preference for longer vibrations (1s for alerts and noti-25Figure 2.3: Rhythm patterns for (a) the affective ratings, and (b) the tactile tasks chosen from [165].Filled slots represent a vibration; unfilled slots represent silence or pause.fications) compared to no preference among various short vibrations (0.1-0.3s forkeypress feedback). Thus, we focused on 1s signals. Follow-up pilots with a largeset of simple and complex waveforms suggested the importance of frequency andtemporal (rhythmic) pattern on subjects’ preference. Base frequencies of 75 Hzand 175 Hz captured variations in subjects’ preference for different actuator fre-quencies in pilots; for rhythmic pattern, we drew from a perceptually validated setof rhythmic icons [165].For our main study, we chose seven representative patterns from this rhythmset [165] (Figure 2.3-a). The patterns were each 1s, rendered in two frequencies(75 Hz and 175 Hz), and repeated twice (7 patterns× 2 frequencies× 2 repetitions= 28 ratings per subject).2.4.3 Tactile Task DesignWe wanted to know if subjective ratings for vibrations would be affected by tactileabilities. Studies in other domains (e.g., music) have shown that proficiency withstimuli influences an individual’s pattern of preference for the stimuli [122]. Also,research in processing fluency indicates a link between information processing andaffective response [4]: people provided more positive affective ratings for easier-to-process stimuli, e.g., with slightly higher contrast. In addition, our post-hocanalysis of data from [98] suggested that subjects preferred friction patterns thatthey were better at detecting; and subjects with better performance provided twiceas many positive ratings as lower-performing subjects. 
Clearly, tactile processingabilities may contribute to affective response.26For our purpose, a tactile task must predominantly detect tactile abilities (asopposed to general cognitive abilities, such as intelligence); i.e., have construct va-lidity. It must engage tactile memory and processing resources since simple tactileacuity or JND tasks did not show considerable performance variations among sub-jects in past studies (Section 7.3). Finally, it must have a difficulty level that revealsindividual differences, and be reliable enough to allow between-subject compari-son. We are not aware of a standard battery of tasks that satisfies these criteria.There is one, however, for visual processing [33], and thus our task design wasguided by this as well as the touch literature.We examined rhythm, amplitude, time, and frequency matching tasks in whichsubjects matched a vibration to an available choice. Choices varied in rhythm, am-plitude, time, or frequency. In pilot studies, rhythm matching did not rely on tactileabilities (lack of construct validity) and amplitude matching performance revealedvery small individual variation. Time and frequency matching more closely metour criteria.In our main study, tactile tasks comprised stimulus sets and a protocol. Thestimulus set for both time and frequency matching tasks consisted of five rhythmpatterns (Figure 2.3-b). Time matching task (two alternative forced choice, 2AFC):each rhythm was rendered at 75 Hz and durations of 1s and 1.3s (pilots suggested0.3s difference was appropriately difficult). Frequency matching task (3AFC): thesame five rhythms were each rendered at 75, 125 and 175 Hz and a duration of 1s.The same procedure was used for both tasks. For each choice we asked subjectsto indicate their confidence in the answer by choosing “Maybe” (for a score of 1 or-1, for correct and incorrect matching respectively) or “Sure” (2 or -2) (Figure 2.4)[17]. In each trial, subjects could feel the stimulus and the matching choices exactlyonce and were instructed to go through the choices from left to right to maintaincontrol over order effects. Stimuli were presented in a random order and subjectswere told that their choices differed in the feeling (frequency) or the timing of thevibrations.27Figure 2.4: Interface for frequency matching task (similar interface for the time matching task butwith two selection buttons)2.4.4 Affective Rating Scales DesignMost affective haptics studies have used a single measure of affective response(e.g., liking, pleasantness) or a set of self-selected scales [37, 90, 115]. An idealaffect measurement scale for our purpose must capture important dimensions of af-fect and perception, allow integrated analysis of those dimensions and examinationof individuals’ variations from average patterns of ratings, and ideally accommo-date diverse tactile sensations including synthetic and natural stimuli. An inte-grated rating scale could also guide the design of new tactile sensations by reveal-ing unexplored parts of the affect and sensation space based on subjects’ ratings. Inour discussion, we outline our progress towards these criteria, and identify futuresteps required for validation and further development of the scales. Nevertheless,the criteria for a desirable scale evolve as we further study affective response totactile sensations. 
In the following, we use 'rating dimensions' and 'scales' interchangeably.
As a first step towards such an integrated scale, we designed an initial set of subscales based on the touch vocabulary derived by Guest et al. (see Related Work [52]). We chose a representative word from each part of their resultant emotion and sensation spaces, resulting in unpleasant/pleasant, uncomfortable/comfortable, and boring/exciting for emotion. From their sensation space, after removing words which our hardware cannot literally render (e.g., cold/warm, and wet/dry), we were left with smooth/rough and soft/hard. We added weak/strong and non-rhythmic/rhythmic to better capture the characteristics of our vibrations. This resulted in eight initial scales: weak/strong, smooth/rough, soft/hard, non-rhythmic/rhythmic, boring/exciting, unpleasant/pleasant, uncomfortable/comfortable, dislike/like.
Figure 2.5: The user interface for the affective ratings
In a pilot, 6 subjects (4 males) used these scales to rate the vibrations described in Section 2.4.2, using the interface shown in Figure 2.5. We removed the liking and comfort dimensions because of their high correlation with pleasantness (r=0.8). We also removed the soft/hard dimension as subjects had difficulty attributing hardness to the vibrations. Further, we relabeled boring/exciting as calm/alarming to achieve neutral valence and avoid inconsistent interpretations. Although not deliberate, the unpleasant/pleasant and calm/alarming dimensions map to the well-known valence and arousal dimensions for emotions.
This resulted in five dimensions employed in the main study: three sensory (weak/strong, smooth/rough, non-rhythmic/rhythmic) and two affective (calm/alarming, unpleasant/pleasant).

2.5 Study
2.5.1 Procedure
30 subjects participated in a one-hour, 3-part study and were compensated with $10. (1) Subjects completed a general information questionnaire and the 'Need for Touch' survey; then (2) rated 28 vibrations (Section 2.4.2), each on five affective and sensory scales. Vibration presentation order was randomized across subjects. On the rating interface, labels were randomly placed on the left or right side of each scale for each subject to reduce rating bias. (3) Subjects completed two rounds of the time and frequency matching tasks (Section 2.4.3). Time and frequency tasks were interleaved and their order counterbalanced among subjects. Subjects held and felt the cell phone prototype in the non-dominant hand and listened to white noise to mask actuator noise.

2.5.2 Results and Analysis
Subjects were diverse. All subjects were students between 18-45 years old; 15 were female, 3 left-handed, 15 from computer science and 15 from psychology, arts, chemistry, etc. Sixteen participants were from North America or Europe, 14 from Asia and the Middle East. Fourteen participants had more than two years of musical background, six had less than two years, and ten reported none. Eleven used eye glasses, and no one reported a tactile deficiency. Touch tablets and smart phones, guitar, piano, Wii, and Dictaphone were mentioned as frequently used touch devices. NFT scores varied from -25 to +30. Following the same procedure as [128], we used a median split on NFT scores to divide the subjects into high and low NFT groups.
Rating scales revealed correlations. Overall, smooth/rough, calm/alarming, and unpleasant/pleasant ratings were significantly correlated.
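One way the correlation analysis reported next could be computed is sketched below, assuming the ratings are tabulated with one row per (subject, vibration) pair and one column per scale; the file and column names are invented for illustration and this is not the original analysis script.

# Illustrative sketch: pairwise Pearson correlations among the five rating scales,
# pooled over all subjects and vibrations. File and column names are hypothetical.
import pandas as pd
from scipy import stats

ratings = pd.read_csv("ratings.csv")  # one row per (subject, vibration)
scales = ["strong", "rough", "rhythmic", "alarming", "pleasant"]

corr_matrix = ratings[scales].corr(method="pearson")  # full 5x5 matrix
r, p = stats.pearsonr(ratings["rough"], ratings["alarming"])  # one pair, with p-value
print(corr_matrix.round(2))
print(f"rough vs. alarming: r = {r:.2f}, p = {p:.3f}")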
The bivariate Pearson correlation of the five ratings for all subjects showed medium significant correlations between smooth and pleasant (r=.53), rough and alarming (r=.42), unpleasant and alarming (r=.39), and strong and alarming (r=.38). Directionally, subjects found rougher patterns more alarming and unpleasant. Stronger patterns were perceived as more alarming, and rhythmic patterns were more pleasant (r=.2).
Stimulus composition influenced subjective ratings. On average, rhythm significantly impacted ratings for all scales, while frequency only impacted the calm/alarming ratings. To examine the effect of rhythm and frequency on ratings, we ran five separate within-subject ANOVA tests with each rating scale as the dependent factor and rhythm and frequency as two independent factors. All reported effects were significant at p<0.01. Rhythm had a main effect on all five scales (see Table 2.1). The long continuous vibration (pattern 1) was perceived as strongest, smoothest, and most non-rhythmic. The pattern with several very short vibrations (p6) was the roughest, most alarming and most unpleasant. The long vibration with one short silence (p4) was most pleasant and among the strongest. Patterns with few short vibrations (p3, p7) were the weakest and most calm. Frequency only had a main effect on the calm/alarming scale (Table 2.1): 175 Hz vibrations were more alarming than 75 Hz. There was an interaction effect of rhythm*frequency for the weak/strong scale, i.e., 75 Hz was perceived as stronger or weaker than 175 Hz depending on the pattern.

Table 2.1: Summarized results of the ANOVA tests on the five affective rating scales
Rating Scale | Significant Factors | F Value, Effect Size
Weak/Strong | Rhythm | F(3.07, 107.44) = 49.46, η2 = 0.58
Weak/Strong | Rhythm*Frequency | F(6, 210) = 7.5, η2 = 0.18
Smooth/Rough | Rhythm | F(2.8, 100.83) = 6.44, η2 = 0.15
Non-rhythmic/Rhythmic | Rhythm | F(3.11, 112) = 25.94, η2 = 0.42
Calm/Alarming | Rhythm | F(3, 109) = 10.64, η2 = 0.23
Calm/Alarming | Frequency | F(1, 36) = 10.62, η2 = 0.23
Unpleasant/Pleasant | Rhythm | F(2.75, 99) = 4.1, η2 = 0.1

Individuals' affective and sensory ratings varied. The average ratio of mean to standard deviation for each of the five scales was: weak/strong: 0.71; smooth/rough: 0.27; non-rhythmic/rhythmic: 0.87; calm/alarming: 0.45; unpleasant/pleasant: 0.22. Thus, reactions varied most for unpleasant/pleasant, smooth/rough, and calm/alarming respectively, two of which are affective dimensions.
Individuals deviated from overall affective/sensory scale correlations. Since examining the complex patterns of all correlations for each subject is a large task, as a first step we analyzed the correlations for one pair of scales (pleasant and alarming). Post-experiment comments had suggested differences in subjects' opinions for these two dimensions, making it a promising place to look for evidence that differences exist. Alarming and unpleasant ratings did not correlate for 11 subjects (r<0.35 and non-significant), but were highly correlated for seven other subjects (r>0.7 and significant). Such a large variation in affect justifies further examination. In future analysis, we will investigate the complex patterns of correlations among all dimensions; for example, MDS and factor analysis may better reveal the structures in individuals' ratings.
Variation in subjective ratings did not correspond to demographics or NFT. For each scale, we ran a between-subject ANOVA using the sum of ratings for that scale as the dependent variable. Gender (two levels), culture (two), music background (three), and NFT category (two) were the between-subject factors.
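A minimal sketch of how such a between-subject test could be run is shown below, assuming a table with one row per subject that holds the summed rating for one scale plus the factor labels; the file and column names are ours and the sketch is not the original analysis code.

# Illustrative sketch: between-subject ANOVA on per-subject summed ratings for one
# scale, with demographic and NFT-group factors. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

subjects = pd.read_csv("subject_sums.csv")  # one row per subject;
# columns assumed: pleasant_sum, gender, culture, music_bg, nft_group

model = smf.ols(
    "pleasant_sum ~ C(gender) + C(culture) + C(music_bg) + C(nft_group)",
    data=subjects,
).fit()
print(anova_lm(model, typ=2))  # F and p value per between-subject factor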
We did not find a significant effect of these factors on the ratings. The effect size of NFT was very small (less than 0.1), which would not be practically significant even with a larger sample size.
Task performance varied, but variation did not coincide with affective ratings. Total score in each task, calculated as the sum of negative and positive scores for all items, varied from 50% to 85% for both tasks. However, all subjects performed above chance (>50% in the time task and >33% in the frequency task). Also, the distribution of our task scores did not show distinct groups of performance, in contrast to previous individual difference studies [23, 36, 98]. The distribution for the time task suggested three overlapping normal distributions, which we used to divide subjects into three groups. The distribution for the frequency task was even flatter. For consistency, we divided subjects into three groups of low, medium and high scores (see Figure 2.6); these groups held different members than for the time task. However, variations in subjective ratings did not correspond to time and frequency task performance in our study.

2.6 Discussion
We now relate our study results to our near-term research questions.
Figure 2.6: Distribution of total scores in time and frequency tasks; orange boxes show one possible grouping for the tasks.

2.6.1 Dimensionality and Utility of Affective Response
What are the relevant dimensions for measuring affective response, and is there utility in multiple rating dimensions?
We derived five affective and sensory dimensions for rating vibrations using the literature and pilot studies (Section 2.4.4). Here we point to the findings that emerged from analyzing crosslinkages between affective and sensory dimensions.
Ratings showed a structure in affect and sensory ratings that might extend to other modalities. Based on the correlation among ratings, the vibrations were mostly perceived as rough, alarming, and unpleasant; or smooth, calm, and pleasant. This organization can point to the inherent association of these attributes in subjects' minds. Future work can examine whether this structure holds for other vibrations and even other modalities.
Our stimulus set largely bypassed the positive valence/positive arousal region of the emotion response space. On average, few alarming vibrations received pleasant ratings. However, exciting rhythms (positive valence and arousal) are conceivable for vibrations and seem to be a relatively unexplored part of our vibration set. Thus, ratings on multiple dimensions can guide future stimuli design.
Affective and sensory ratings showed how individuals' patterns of preference deviated from average. Based on the correlation matrix for each subject, several subjects deviated from the overall correlation between unpleasant and alarming ratings. The integrated set of affective and sensory dimensions also enables investigation of more complex structures in the future.
This initial set of scales needs further development and validation. As a first step, their utility in describing synthetic stimuli (e.g., various vibrations and tactile technologies) must be developed. Eventually, the proposed dimensions must evolve to support rating of natural stimuli, as a means to compare users' response to synthesized and natural stimuli. We also need to determine how accurately these dimensions can reflect human affective response in real-world contexts.
One pos-sibility is to test how well the rating instrument assists haptic designers in creatingtactile stimuli that are indeed preferred by users in real-world scenarios. Anotheris to use neuroimaging studies to compare brain patterns for ratings to those fornatural pleasant stimuli, e.g., fur.2.6.2 Vibration ParametersWhat parameters from the vibration design space impact affective response,and how?On average, rhythm pattern (duration of vibrations, number and timing of pauses)influenced subjective ratings for all five affective and sensory scales. Frequencyonly significantly impacted calm/alarming. Overall, rhythm pattern impacted theratings the most. Drilling down: vibration duration directly influenced weak/strongratings and the number of pauses determined smooth/rough and calm/alarmingratings. Overall, longer vibrations with fewer pauses were perceived as smooth andpleasant. Several short vibrations were considered rough, alarming and unpleasant.The affective range in response to these vibrotactile stimuli is more limited thanwhat we would expect to find for natural stimuli. However, even this small studyfound distinct preference for some vibrations over others. This suggests that having34a scale can help designers now using this relatively inexpressive media in avoidingnegative affect and designing more acceptable feedback. With improved renderingtechnology, we can expect to move towards more engaging touch sensations.Some individuals’ ratings diverged considerably from these overall trends, asindicated by the average ratio of mean ratings to standard deviation. Rating varia-tions were especially high for unpleasant/pleasant, smooth/rough, and calm/alarm-ing scales which were also highly correlated. In future, using a composite valuebased on ratings for the three dimensions might reveal different clusters of subjectsand preferences.2.6.3 Demographic, NFT Score and Tactile PerformanceWhat is the link between affective response and demographics, NFT scores,and tactile task performance?Subjective ratings did not coincide with demographics, NFT scores, or tactile abil-ities. Our results are consistent with past studies which also did not find any con-siderable effect of demographics. Regarding NFT, we had determined a priori that30 subjects would not have enough power to detect an effect (Section 7.3), but weincluded the NFT questionnaire to assess its sensitivity. Our results suggest a verysmall effect size for NFT (less than 0.1 on subjective ratings). Regardless of powerof a later study, such a small effect on subjective ratings does not have practicalsignificance. NFT might not be sensitive enough to account for the affective rangeof synthetic stimuli. We thus plan to exclude the NFT in future work with syntheticstimuli and focus on tactile performance. For natural stimuli with a larger range ofaffective response, NFT might prove a more useful instrument.To assess our results for tactile performance, we need to answer two questions:1. How well did the time and frequency tasks reflect tactile abilities? Ouranalysis suggested that the frequency task better reflected tactile abilities (reason-able validity and reliability) but the reliability of the time task needed improve-ment. First, both tasks had a reasonable difficulty level to generate a low to highperformance range (50% to 85% of correctly matched items). 
Second, our analysissuggests that the tasks relied on tactile sensory memory (subjects’ scores in the twotasks did not correlate with their report of using pitch or rhythm for matching the35stimuli). As a future test of discriminant validity, we can compare subjects’ per-formance in auditory vs. tactile matching tasks. Finally, the correlation betweenthe two rounds of the frequency task (r=0.67) and the two rounds of the time task(r=0.37) indicated a reasonable reliability for the frequency task, while the timetask needed improvement. Convergent validity of the tasks must be established infuture, e.g., by using time and frequency discrimination tasks.2. Do individuals exhibit considerable differences in tactile processingability? Although task score distributions showed some variations in performance,they did not suggest obvious groupings. In contrast, past studies reported distinctgroups of performers. What was the reason for these different results? Are therereal differences in people’s tactile abilities? In retrospect, almost all studies report-ing huge individual difference in task performance involve a spatial component[23, 36, 98]. So it could be that people are different in some aspects of tactileabilities and not in others. If so, a battery of tasks is needed to measure tactile abil-ities. Moreover, most of those past studies used a specific instrument (Optacon),and their tasks had a cognitive component involved: subjects needed to map a tac-tile pattern to its visual representation. Both the instrument characteristics and thecognitive element could cause the variations in performance. A next step wouldbe to study the potential differences in spatial tactile tasks by eliminating thoseconfounds.Based on past work, we started with the hypothesis of considerable differencesin tactile abilities; we did not see this in these particular conditions. Now, the ques-tion is: Do people vary substantially in their processing of tactile stimuli; if so,in what respect? Does learning account for those differences? Only after answer-ing these questions we can examine links between tactile abilities and affectiveresponse.2.7 Conclusion and Future WorkWe have examined affective response to vibrations for a handheld device. We pre-sented our progress towards an integrated set of rating scales for measuring variousdimensions of affect and perception, specifically weak/strong, smooth/rough, non-rhythmic/rhythmic, calm/alarming, and unpleasant/pleasant. Using these scales,36we measured subjective response to rhythm pattern and frequency of vibrations.The correlation of ratings indicated that subjects found smooth patterns and rhyth-mic patterns more pleasant. Rougher patterns as well as stronger vibrations wereperceived more alarming. According to the overall ratings, pleasant and alarmingvibrations were relatively underrepresented in our vibrations and can be exploredfurther in future. Within-subject ANOVA on the subjective ratings showed a maineffect of the rhythm on all five rating scales, a main effect of frequency on thecalm/alarming ratings, and interaction of rhythm*frequency for the weak/strongscale. Ratings varied considerably among subjects for unpleasant/pleasant, smooth/rough, and calm/alarming dimensions. However, demographics, NFT scores andtask performance did not coincide with these variations.This study was a first step towards our long-term objectives. 
Future steps are guided by questions such as: 1) Measurement tools: Do affective responses to naturalistic stimuli differ qualitatively from those to synthetic stimuli, like vibrations; and can the same assessment tools uncover both types of responses? 2) Key Attributes: To what extent do the effects of rhythm and frequency generalize to other tactile technologies? What other signal parameters are affectively important? 3) Individual Differences: How can we quantify individuals' deviation from the overall patterns of ratings for affect and sensation? Can we cluster people based on these patterns? To what extent do individuals vary in other tactile tasks, e.g., tactile spatial tasks? What is the role of learning?
Answering these questions not only provides a better picture of affect and perception of tactile sensations but can also guide the criteria for further development of the proposed set of scales.

2.8 Acknowledgements
We would like to thank Vivitouch/Artificial Muscles Inc. and Dr. Colin Swindells for providing the actuator and for their feedback. This work was funded in part by Canada's Natural Sciences and Engineering Research Council.

Chapter 3
Characterizing Personalization Mechanisms

Figure 3.1: Conceptual sketch of three haptic personalization mechanisms

Preface:1 In Chapter 2, we found that individual differences in affect cannot be simply modelled based on users' tactile performance or background. To improve perceptual salience of haptic signals despite individual differences, here we set out to enable haptic personalization. As a first step, we investigated the design space for personalization mechanisms, introduced three distinct mechanisms of choosing, tuning, and chaining for haptic personalization, and examined their utility in a Wizard-of-Oz study. Results informed our path for the rest of this thesis, by suggesting choosing and tuning to be the most practical mechanisms for end-user personalization.
1 The content of this chapter was published as: Seifi, Anthonypillai, and MacLean. (2014) End-user customization of affective tactile messages: A qualitative examination of tool parameters. Proceedings of IEEE Haptics Symposium (HAPTICS '14).

3.1 Overview
Vibrotactile signals are found today in many everyday electronic devices (e.g., notification of cellphone messages or calls); but it remains a challenge to design engaging, understandable vibrations to accommodate a broad range of preferences. Here, we examine personalization2 as a way to leverage the affective qualities of vibrations and satisfy diverse tastes; specifically, the desirability and composition of vibrotactile personalization tools for end-users. A review of existing design and personalization tools (haptic and otherwise) yielded five parameters in which such tools can vary: 1) size of design space, 2) granularity of control, 3) provided design framework, 4) facilitated parameter(s), and 5) clarity of design alternatives. We varied these parameters within low-fidelity prototypes of three personalization tools, modeled in some respects on existing popular examples. Results of a Wizard-of-Oz study confirm users' general interest in customizing everyday vibrotactile signals. Although common in consumer devices, choosing from a list of presets was the least preferred, whereas an option allowing users to balance vibrotactile design control with convenience was favored.
We report users’ opinionof the three tools, and link our findings to the five characterizing parameters forpersonalization tools that we have proposed.3.2 IntroductionIncreasingly present in consumer electronics, vibrotactile stimuli generate mixedreactions. Genuine utility is possible, yet a given user may find the stimuli them-selves unsuitable in their context, but cumbersome if not impossible to modify. Acommon example is call or message notifications in cellphones, generally providedwith a limited set of basic vibrations (or perhaps just one) that cannot accommodatethe broad range of user preferences.This problem is not merely aesthetic: mappings between stimuli and theirmeanings can be hard to learn when mnemonic links are not apparent, and mean-2called “customization” in the original conference publication39Figure 3.2: Study paradigm: Five proposed personalization tool parameters (top left) and three per-sonalization tool concepts (low-fidelity prototypes) which capture variance in these param-eters.while users may wish to deploy salience (e.g., due to amplitude, duration and rep-etition) according to an intensely personal scheme. When mappings and saliencedo not work well for an individual, utility is overwhelmed by irritation; the signalsare relegated to minimal roles or disabled altogether.In this research, we are exploring the further premise that appropriately lever-aging affective qualities of haptic stimuli in interface design could change this.Not only might “design for affect” add to the variety, pleasure and fun of usingelectronic devices, it could be exploited to enhance functional benefits by makingindividual signals more intelligible and memorable.However, incorporating affect into haptic design is not easy. Affective re-sponses to synthetic haptic stimuli are not yet well catalogued, precluding a heuris-tic approach at this time. Individual differences in both perception and affect fur-ther complicate the matter [98, 128]. While academic and industry experts are pro-gressing towards a better understanding of affective response and design principles,we consider a different approach: empower ordinary users, having no previous de-sign knowledge, to design or personalize haptic feedback for their own preferenceand utilitarian needs.A first question is thus: (Q1): What characteristics will make a vibrotactile40personalization tool usable? The design space for vibrotactile stimuli appearslarge if we consider all combinations of the controllable variables (e.g., frequency,amplitude, waveform and even rhythmic presentation). Yet, many are percep-tually similar when rendered, and this further depends on device characteristics[165]. A typical user, with a limited conceptual model of this structure and itsnon-independence, would get little traction if given these comprehensive, low-levelcontrols. Thus, we investigate the productivity and desirability of a diverse set oftools that might support typical end-users in personalizing haptic effects, with thedual hope of such utilities leading to better tools for haptic designers as well.The second question is whether given a manageable tool, this is desirable.Specifically, (Q2) Do users want to personalize vibrations for their everyday de-vices?Finally, as a step towards understanding affective preferences themselves, wewonder (Q3): What kind of vibrations do people design when given the opportu-nity?In this chapter, we focus on Q1, and establish insights and future directionsfor Q2 and Q3. 
We identified parameters that characterize existing personaliza-tion tools, then evaluated their manifestations in three haptic tool concepts viaa Wizard-of-Oz (WoZ) study where we asked participants to design urgent andpleasant cellphone notifications (Figure 4.2). Our contributions include:• Five dimensions for vibrotactile design and personalization tools;• Three tool concept prototypes that capture this variation;• Quantitative and qualitative data on user opinions of the three concepts,viewed in context of the proposed tool parameters;• Informal qualitative data on vibrations designed by users.3.3 Related Work3.3.1 Haptic DesignHaptic effects can take many forms, the most common of which is vibrotactile (alsothe focus of our work). By “haptic design”, we refer to creating haptic effects to41be rendered by a haptic display. Existing haptic devices vary considerably in theircapabilities, leading to a tight coupling of effect design to device development.Haptic designers must intimately understand technical device parameters, and cur-rently must usually design within that technical space. For example, vibrotactiledesigners can typically vary frequency, waveform, amplitude, duration and rhythm[103, 165]. Documentation of a mapping from technical space to users’ perceptualspace for tactile stimuli is underway [13, 165, 174]. Here, we have structured ourproposed tools in an intuitive and perceptual rather than a technical control space,positing that this will lead to more satisfying results, particularly for inexperienceddesigners.Vibrotactile effects have been designed both to communicate information (see[103] for a survey) and affect [21]. To ensure effective design, haptic designerstypically use iterative design and user evaluation of haptic stimuli [103]. However,this approach has been less successful for haptic effects with affective qualities;convergence is difficult in the absence of adequate evaluation metrics, and in theface of notable individual preference differences (Chapter 2).3.3.2 Haptic Design ToolsThe haptic community has proposed a number of design tools in the past decade,each aiming to reduce technical knowledge required for design and thus openingthe domain to a wider audience.Categorization of Tools: Paneels et al. [125] categorizes haptic design tools basedon their support for one vs. multiple actuators; and type of representation: a directsignal (e.g., Haptic Icon Prototyper [160], and Immersion’s Haptic Studio [73])or an indirect, metaphor-based view (e.g., VibScoreEditor [95], TactiPed [125]).We find that this organization does not adequately differentiate tools for end-userpersonalization. For example, all of our prototypes use indirect representation andcurrently support one actuator, yet vary in other substantive ways.Creation and Modification: All the tools we have seen are primarily concernedwith creating haptic effects. For example, to create vibrations, Hong et al. [69]mapped user touch input (e.g., pressure, location) to amplitude and frequency, anapproach found useful for prototyping and demonstration but not suitable for modi-42fication of effects. Other tools support both creation and modification of the effects.The Haptic Icon Prototyper provides more flexibility by allowing users to combineshort haptic snippets in a sequential or parallel form along a timeline [160]; one ofour three concepts (chaining) uses a similar approach. With a focus on creation andmodification, all the above tools provide fine-grained control over stimuli. 
For amodification-only tool, the importance of various tool requirements can shift – forexample, convenience might outweigh design control. Here, we are also primarilyinterested in modification or personalization of pre-existing templates, as it couldbe a more practical approach for users without design knowledge.Audience: Existing tools differ in the design knowledge they require and thususability for ordinary users. Some (e.g., VibScoreEditor, TactiPed) specificallytarget ordinary users; but despite their promising evaluations, they have remainedin the academic domain. A notable exception is the iPhone tapping tool for creatingcustomized vibrations for a user’s contact list [176].3.3.3 Challenges & Potentials of End-user PersonalizationWhile these tools typically aim to be accessible to ordinary users, these users’ abil-ity to design has rarely been investigated. Oh and Findlater [120] studied customgesture creation by this group, and found they were able to create a reasonableset of gestures but tended to focus on variations of familiar gestures. Personaliza-tion might suit at least some end-users better than creation, affording satisfactioninstead of frustration.We can gain insight from personalization literature in software engineering onfactors involved in end-user personalization of software applications. Sense of con-trol and identity, frequent usage, ease-of-use and ease-of-comprehension in toolsallowing personalization engender takeup [105] while personalization is discour-aged by lack of time or interest, and difficulty of personalization processes [101].3.4 Conceptualization of Haptic Personalization ToolsAs a first exploratory attempt to conceptualize haptic personalization tools, weexamined, brainstormed and discussed characteristics of existing design tools inthe haptic and other domains. As a result, we propose five parameters along which43design and personalization tools can vary, including: 1) size of design space, 2)granularity of control, 3) provided design framework, 4) facilitated parameter(s),and 5) clarity of design alternatives (Table 3.1). We posit that these parameterscan influence users’ perception of flexibility and effort to design haptic effects andconsequently, their preference and tool choice.Although desirable, dependencies among the parameters make it infeasible tostudy the effect of each parameter in isolation or to examine users’ opinions aboutall variations of the parameters in a meaningful study. Existing tools co-vary onmany of these parameters and a realistic study would need to examine many to-gether. Thus, we define three haptic personalization tool concepts that are consid-erably different, capture variations along all tool parameters, and are practicallyinteresting. Our concept prototypes borrow from existing tools in haptic and photoediting domains.3.4.1 Three Personalization ToolsWe begin by describing our three proposed tool concepts, implemented as paperprototypes, then use these and existing tools to explain our proposed tool charac-terization parameters. We chose to evaluate manually operated low-fidelity pro-totypes because a tool concept can be implemented in various ways differing ininterface elements or interaction style and we wanted to avoid reactions focusedon those differences. In contrast, a paper prototype allows users to flexibly inter-act with the tool concept, thus we could obtain reactions focused on conceptualdifferences of the tools.1. 
Choosing3 (baseline: minimal personalization, focuses on convenience):This tool models a conventional way of personalizing ringtones and other auditoryalerts on consumer electronics, wherein users are provided with a list of vibrationsto choose from. Our prototype (Figure 3.3a) lists the vibrations in a tabular struc-ture where rhythm varies by row and vibrotactile frequency by column. The userplaces the Play button over each vibration number to signal to the experimenter(acting as a computer) to play the vibration. The Remember Me buttons are usedto mark some vibrations and facilitate future comparison and choice.3called “choice” in the original conference publication44(a) Choosing concept: a 7x3table of vibration pre-sets liesbeneath blue Play and orangeRemember Me buttons. In thispaper prototype, moving theblue or orange sticker to oneof the vibration cells repre-sents (in a real device) cursor-selection of a vibration andthen the execution of that func-tion on it. In our WoZ study,the experimenter executed thisresponse manually, s.t. the par-ticipant felt the selected vibra-tion on the display device.(b) Tuning concept: user canapply 3 filters (bottom) to 5rhythm presets (top); the pre-sets cannot otherwise change.The roughness and strength fil-ters have three settings each,and the symmetry filter hastwo. The blue Play buttonagain selects a preset. Here,the movable orange Level cir-cles show the current filter set-tings for playback (shown: de-fault setting).(c) Chaining concept: lowerarea visualizes the time se-quence for 5 initial rhythms(purple indicates vibration-on,and white is silence, over a500ms period). Users can mod-ify the rhythm itself by se-lecting and overlaying a differ-ent block structure (top mid-dle) and an available block sen-sations (colored rectangles ontop right). The 3 small coloredcircles (top left) allow usersto try the 3 block sensations(45Hz, 75Hz, 175Hz) beforeusing them.Figure 3.3: Three personalization tool concepts2. Tuning4 (more power, still emphasizes convenience by allowing high levelcontrol): Inspired by color adjustment filters in photo editing tools like AdobePhotoshop, users have a small initial set of vibrations and three perceptual filtersto vary roughness, strength, and symmetry. These dimensions have repeatedlyemerged as the most salient and important [165]. Tuning’s paper prototype (Fig-ure 3.3b) includes five initial vibration patterns in the upper rows, and three slidersrepresenting the filters at the bottom. To feel a vibration, users need to choose arhythm at the top with a particular setting of the filters at the bottom.3. Chaining5 (trades off convenience for greater control over the stimuli): De-rived from the Haptic Icon Prototyper [160], a vibration is made of a sequence of4called “filter” in the original conference publication5called “block” in the original conference publication45vibration blocks and to modify a vibration, users change the individual blocks inthe sequence using the available vibration blocks. With our prototype (Figure 3.3c),users can start from one of the five vibrations at the bottom, then choose a blockstructure (silence, half vibration, and full vibration) and one of the three block sen-sations from the top and place it at the desired location along the chosen vibrationsequence. 
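Before turning to how users test their designs, the contrast between the three concepts can be made explicit with a minimal data-model sketch. The field names and the example block sequence below are illustrative, not taken from the prototypes (which were paper-based and operated manually by the experimenter): choosing selects one cell of a fixed grid, tuning applies holistic filter settings to a preset, and chaining edits individual sub-blocks.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ChoosingSelection:
    # Choosing: pick one cell of a fixed rhythm-by-frequency grid (7 x 3 here).
    rhythm_index: int          # 0..6, one of the preset rhythm patterns
    frequency_hz: int          # one of 45, 75, 175

@dataclass
class TuningSelection:
    # Tuning: a preset rhythm plus holistic, whole-vibration filter settings.
    rhythm_index: int                   # 0..4, one of five preset rhythms
    roughness: str = "default"          # "smooth" | "default" | "rough"
    strength: str = "default"           # "weak"   | "default" | "strong"
    symmetry: str = "symmetric"         # "symmetric" | "asymmetric"

@dataclass
class ChainingDesign:
    # Chaining: a 500 ms pattern of four 125 ms blocks (repeated in playback).
    # Each block pairs a structure with a sensation; silence has no frequency.
    blocks: List[Tuple[str, Optional[int]]] = field(default_factory=lambda: [
        ("full", 175), ("silence", None), ("half", 75), ("full", 45)])
```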
They can test their design by putting the blue circle (Play button) beside the vibration.

3.4.2 Proposed Tool-Characterization Parameter Space

We were able to identify five parameters that described the variation we observed during our review of existing personalization tools. Table 3.1 relates these parameters to our three concept prototypes (choosing, tuning and chaining personalization). These parameters are not orthogonal or independent: for example, providing finer control over stimuli will increase the size of the design space.

1) Size of Design Space Accessed by the Tool: The size of the design space refers to the number of distinct stimuli that a tool can create; it depends on the design tool and a rendering haptic display. The tool's "perceptual size", meaning the number of perceptually distinct stimuli that it can create, is also important but harder to quantify. For example, if people can only distinguish a subset of stimuli designed by a tool and rendered by an actuator, that subset is the perceptual space for that tool and actuator. The size of the design space increases from choosing to tuning to chaining.

2) Granularity of Control: The smallest unit of a stimulus that a user can directly manipulate with a tool can vary from holistic (coarse) to local (fine) control. With choosing and tuning, users could control a whole 2s vibration by selecting it, but with chaining they had control over 125ms sub-blocks (by modifying or replacing them).

3) Provided Design Framework: Any design tool inevitably imposes an outline or framework on design. This structure will, to some degree, impose on the user some organization of the design space. Our choosing tool provides the tightest structure, by only allowing users to choose from a list of sorted vibrations. Tuning conveys a perceptual organization of the design space, via the three axes provided. Chaining provides a discrete, block-based outline for the design and organizes building blocks into 3 structures (rhythm management) and 3 sensations (frequencies). As another example, the iPhone tapping tool provides very little structure: vibrations are viewed as variable-length touches to the screen.

4) Facilitated Parameters: The degree and ease of control that a given tool affords for each parameter may vary. Some parameters are promoted by the tool for creation or manipulation of stimuli and take little effort to manipulate. Chaining facilitates control over the rhythm or structure of a vibration, while tuning facilitates control of feel or sensation. Both of these tools allow some control over both structure and feel, but one is more prominent than the other. Choosing allows limited control over both feel and rhythm.

5) Visibility or Clarity of Design Alternatives: Tools vary in the extent to which alternative designs are provided to users vs. discovered. Visibility of design alternatives decreases from choosing (all stimuli are listed) to tuning (all filter combinations are apparent) to chaining (the outline and building blocks are apparent, but many versions are possible; traversal of the design space in a reasonable time must involve discovery).

Table 3.1: Embodiment of proposed parameters: characterization of choosing, tuning and chaining concepts. Each row lists the values for choosing / tuning / chaining.
1. Size of Design Space (for C2 tactor [34]): Technical: 21 / 90 / 2400; Perceptual: 21 / ~45-90 / <2400
2. Granularity of Control: Holistic (Coarse) / Holistic (Coarse) / Detailed (Fine)
3. Provided Design Framework: List / Perceptual / Building Blocks, Outline
4. Facilitated Parameter(s): Feel, Rhythm / Feel / Rhythm
5. Visibility of Alternatives: High / High / Low

Figure 3.4: Study apparatus. (Left) C2 tactor and amplifier. (Right) Setup showing a participant working with a prototype and the experimenter playing back the vibrations.

3.5 Methods

We ran a WoZ study with paper prototypes to examine users' interest in personalization and their opinions of our tool concepts.

Setup: We delivered vibrotactile effects with a C2 tactor [34], controlled via a control computer's audio channel and audio-amplified; signal and amplification levels were held constant. To maximize dynamic range, participants held the actuator between the thumb and index finger of the dominant hand and worked with one prototype at a time (Figure 3.4). They used movable paper pieces to specify vibrations; when they pressed the movable blue Play button, the experimenter played back those vibrations to them. Participants could not see the control laptop screen.

Stimuli: All vibrations in the study lasted 2 seconds. Vibration duration and other choices for the parameter values were determined based on pilot studies and prior work. We used 7 rhythm patterns (Figure 3.5) from a larger rhythm set [165]. Initial vibrations and possible alternatives varied for each tool:

1. Choosing: 7 rhythms (Figure 3.5) were rendered in 3 frequencies (45Hz, 75Hz, 175Hz), chosen based on pilot studies. Thus, participants could choose from a total of 21 vibrations arranged in a table: the vibrations with different rhythms in rows and those with different frequencies in columns (Figure 3.3a).

2. Tuning: We rendered the first 5 rhythms in Figure 3.5 at 75Hz to represent the middle setting on the strength and roughness filters and the symmetric setting on the last filter. Participants could choose from 18 filter settings, for 5 × 18 = 90 possible vibrations. Entries of Table 3.2 show changes relative to the default settings, determined by pilot studies and prior work in our group to match the perceptual filter labels.

Figure 3.5: Seven rhythm patterns: each row represents a vibration pattern which is repeated 4 times in a 2 second stimulus.

Table 3.2: Configurations of each filter setting in the tuning tool (setting: change from the default vibration).
Default: no change (75Hz, first 5 rhythms from Figure 3.5)
Smooth: 45Hz, de-amplification of 3dB
Rough: 5 ms silence added to the middle of each 50 ms vibration
Weak: de-amplification of 6dB
Strong: amplification of 6dB
Asymmetric: removal of 2/3 of the vibrations in the first second

3. Chaining: The first 5 rhythms in Figure 3.5 were initial templates for chaining personalization. To make a new vibration, one could choose one of the 3 block structures (silence, half vibration, and full vibration) with one of the 3 block sensations (45Hz, 75Hz, 175Hz). Each block had a 125ms duration; the full pattern was 500ms, to be repeated 4x in playback. This left (2 vibration structures × 3 sensations + 1 silence structure)^4 − 1 = 7^4 − 1 = 2400 design alternatives.

Participants: 24 university students (9 male) participated in a 1 hour study for $10. They came from many fields (engineering, science, management, arts, etc.) and a range of ages (16 [19-29 years], 4 [30-39], 3 [40-49], 1 [>50]). 20 used cellphones or game controllers with haptic feedback on a daily basis. 7 had basic design experience with Photoshop and other video editing software.

Design: We used one independent within-subject factor (prototype, three levels) and counterbalanced order of interface with a Latin square. We also counterbalanced order of designing urgent vs.
pleasant notifications, though for each partici-pant, kept the order the same across the three prototypes. We collected: 1) ratingson personalization interest (1-5 Likert scale), 2) rankings of the tools on ease-of-use, design control, and preference, 3) comments from participants, 4) time spenton each tool, 5) vibrations designed with each tool for pleasant and urgent notifi-cations.Procedures: Study sessions took place in a quiet room. Participants completeda questionnaire on demographics, experience with haptic feedback, and previoushaptic, auditory or visual design experience. The experimenter then briefly ex-plained the first prototype and asked the participant to use it to design an urgentand a pleasant notification; repeated this for each tool (about 15 minutes each); andadministered the post-questionnaire above. We also asked which tools they woulduse if they had all three tools on their cellphone and for what purpose; if they hadenough time to design vibrations, and if the labels in the tuning tool matched thevibrations.3.6 Results3.6.1 Comparison of the ToolsWe use separate Friedman tests to compare the rankings of the tools on ease-of-use, design control and preference. In the cases of statistical significance, we reportfollow-up pairwise comparisons using a Wilcoxon test and controlling for the TypeI errors across these comparisons at the .017 level, using the Bonferroni correction.Ease-of-Use or Usability: Ranking of ease-of-use did not differ significantly acrossthe three interfaces (χ2(2) = 0.8, p = 0.67), suggesting that the usability of thetools were reasonably similar.Design Control: Participants ranked how well each tool allowed design of anurgent and of a pleasant notification. There was a significant difference of in-terface for both types of messages (urgent: χ2(2) = 10.94, p = .004, pleasant:50Figure 3.6: Participants’ rankings of the three tools. Chaining was the most powerful while tuningwas the most preferred.χ2(2) = 6.02, p = .049). For both types of messages, post-hoc tests indicated thatchaining was significantly ranked more powerful than choosing, (urgent: p= .003,pleasant: p = .041). Rankings for tuning did not significantly differ from chainingand choosing (urgent and pleasant p > 0.5).Preference: Rankings for preference were significantly different for the tools(χ2(2) = 9.69, p = .008). Post-hoc comparisons showed tuning was significantlypreferred over choosing (p=.006) and chaining (p = .012).Design Time: According to the post-questionnaire, participants generally hadenough time; three participants wanted more time for chaining, the most complex.The average time spent on chaining (M∼12.5m, SD∼5m) was higher than for tun-ing (M∼7, SD∼2.5) and choosing (M∼6, SD∼2.5). This time included creationand playback of the vibrations by the experimenter. As we knew that vibration cre-ation was more time-consuming for chaining, we did not analyze the timing datastatistically. Our observations during the study sessions support the timing datai.e., participants needed more time to think, change, and compare the generatedvibrations with chaining.Choice of Tools: In response to our question “Which tools would you use if youhad all three tools on your cellphone?”, 20 participants (83%) chose tuning, 10chose chaining (42%), and 8 chose choosing (33%). Unsurprisingly, many partic-ipants mentioned design flexibility and required time as two factors in their deci-sion. According to their comments, tuning is “simple and fast...yet gives flexibilityto choose and customize” (P16). 
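For completeness, the ranking analysis described at the start of this section (a Friedman omnibus test followed by pairwise Wilcoxon comparisons judged against a Bonferroni-corrected level of .017) can be carried out with standard statistical packages. The sketch below uses fabricated ranks purely as placeholders; it is not the study's data.

```python
from scipy import stats

# Fabricated preference ranks (1 = most preferred) for 24 participants x 3 tools.
choosing = [3, 2, 3, 3, 2, 3] * 4
tuning   = [1, 1, 1, 2, 1, 1] * 4
chaining = [2, 3, 2, 1, 3, 2] * 4

# Omnibus comparison of the three related rankings.
chi2, p = stats.friedmanchisquare(choosing, tuning, chaining)
print(f"Friedman chi2(2) = {chi2:.2f}, p = {p:.3f}")

# Pairwise follow-ups, judged against a Bonferroni-corrected alpha of .05/3 ~ .017.
pairs = [("tuning vs. choosing", tuning, choosing),
         ("tuning vs. chaining", tuning, chaining),
         ("chaining vs. choosing", chaining, choosing)]
for name, a, b in pairs:
    w, p_pair = stats.wilcoxon(a, b)
    print(f"{name}: W = {w}, p = {p_pair:.3f}")
```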
Interestingly, some participants described chain-51ing as being “fun” (P11), or for when they are in a “good mood” (P15): “When Ifeel that I have too much time and have a good mood, I may like to design a specialpattern using the chaining personalization. If I don’t have any mood or feel lazy, Imay use the choosing or the tuning one.”(P15)A majority (20/24) felt that the filter labels in tuning personalization matchedthe sensations. Three said that asymmetric and symmetric vibrations were not verydifferent and one had a similar comment for the strength and roughness filters.When we asked about the iPhone tapping tool, only three participants had triedit for making custom vibrations, none of whom found it useful. P24 doubted his/herability to make nice vibrations: “At first, I thought it would be fun making your owncustom vibration, but once I tried the interface, I was not really into it since thevibrations I created were not as nice as the already customized vibrations on myphone.”P9 wanted some vibration or structure to start from: “It’s simple and not somuch patterns to choose from.”P5 did not find the input mechanism adequate for his/her needs: “It was reallyeasy to use, but my fingers don’t move fast enough to create the rapid vibrationI would want to use for urgent messages. And it was hard to make the vibrationsymmetrical.”3.6.2 Interest in PersonalizationOn average, participants stated mild interest in personalizing their vibration noti-fications (M = 3.42, SD = 1.14 on a 1-5 Likert scale). Lack or minimal use ofvibrations was the main reason for not being interested in personalization whilerecognizing different types of alerts, being unique, adjusting the sensation levels,and concerns about repetitive exposure to unpleasant vibrations were the main rea-sons for personalizing their cellphone notifications.3.6.3 Vibrations Designed by Participants24 participant each designed 6 vibrations (one pleasant and one urgent with eachtool) resulting in 144 in total. We provide an informal summary of the vibrations.We imagine that participants might have made different choices if designing for52real use, and the WoZ study approach could also have impacted the extent that theyexplored alternative designs. This might also be the reason for some inconsisten-cies in the vibrations designed with the three tools.Overall, participants chose and modified the first three rhythms (R0, R1, andR2 in Figure 3.5) the most. The order of rhythms on the paper prototypes was thesame for all participants and all interfaces. Although this result can be partiallydue to the presentation order, the same rhythm preferences stood out in anotherexperiment (Chapter 2). Unexpectedly, in many cases participants did not choosemarkedly different rhythms for pleasant and urgent messages. We are interested inknowing if a similar pattern of choices would hold in real life.With choosing and chaining, over 20 participants (83%) used higher or thesame frequency for urgent notification than for pleasant notifications. With tuning,over 17 participants (70%) used the strong and symmetric settings for both pleasantand urgent messages. The participants varied the rough/smooth and rhythm settingsthe most to differentiate pleasant and urgent messages. 
Only 8 participants (33%)used the asymmetric setting, and 5 of them used it only for urgent notifications.3.7 Discussion3.7.1 Desirable Characteristics (Q1)Not surprisingly, perception of design flexibility and low effort are the main factorsin participants’ choices.Design space accessed and flexibility afforded by tool framework impacts users’perception of Design Control. The perceived size of the design space is largerfor chaining. Also, chaining only provides building blocks for designing vibra-tions, and thus affords a more flexible structure compared to tuning and choosing.According to the rankings, tuning provides reasonable design control (not signifi-cantly lower than chaining) and choosing has the least design control.Holistic control over stimuli and visibility of design alternatives can reduce theperception of Effort. On average, participants took much less time with choosingand tuning compared to chaining. Also, post-questionnaire comments from par-ticipants indicate that they perceived tuning and choosing faster and easier than53chaining. Control granularity and visibility of design alternatives appear to con-tribute to perceived effort; these parameters were similar for choosing and tuningbut different for chaining.Preference is a function of the perceived Design Control, Effort, and Fun. Thechoosing personalization, which is the most common tool for customizing soundand visual effects in consumer devices, was the least preferred option in our studyas it provides minimal sense of control and flexibility. The participants foundchaining time-consuming but tuning provided enough design control (not signif-icantly different from chaining) and required little effort. Thus, it was preferredthe most. Also, many found its perceptual structure of the design space intuitiveand convenient. Also, we hypothesize that a low ratio of perceptual to actual sizeof the design space could cause disappointment, since many efforts could eventu-ally feel similar. In tuning, these two sizes were very close (ratio∼1) compared tochaining.Some participants described chaining as fun, suitable for when they are in agood mood; i.e., gamelike. Chaining’s “Fun” may arise from a sense of discoverydue to its less structured design alternatives.Finally, we note that tools such as the iPhone tapping tool provide very littlestructure for users. Comments suggest that ordinary users (in contrast to designers)prefer some degree of structure and outline to restrict the design space and guidetheir design. P9 specifically stated that “It (iPhone tapping tool) is simple, and notso much patterns to choose from”.3.7.2 Value and Outcomes (Q2, Q3)Do users want to personalize vibrations? Overall, users registered interest inpersonalizing their notifications and playing with personalization tools on their mo-bile devices (Q2). The majority did not require detailed, fine control and preferredquicker holistic changes with more perceptual impact. Factors that typically im-pact software personalization behavior also appear to hold for haptics, includingextent of usage, sense of control and identity, required time, and ease-of-use andcomprehension of personalization tools. Other factors such as creativity, fun andavailable sensations could be more specific to personalizing stimuli. To further54address this question, we need to investigate various everyday scenarios for usingvibrations and survey users’ interest in personalizing vibrations in each case.What do users create or choose? 
Fully categorizing what people choose whengiven the opportunity (Q3) will be a major, and context-dependent endeavor. As astart, we found some general trends, such as associating urgency to signal energyand preference for some rhythms which are consistent with prior work (Chapter 2).However, the designed vibrations vary not only across individuals but also in somecases across the tools which is very likely due, at least partially, to our lab-basedWoZ approach. A longitudinal study with the developed tools can provide a morecomprehensive answer to this question.3.7.3 Wizard-of-Oz ApproachFollowing our goal of focusing on personalization concepts with the low-fidelityprototypes, our WoZ prototypes and evaluation appeared to elicit natural feedbackin most cases. Nonetheless, it is possible that the unrealistic delay between in-dicating a command and feeling the sensations skewed certain data; specifically,making it difficult for the participants to compare urgency and pleasantness. How-ever, the impact of this on tool preference should be minimal. Participant ques-tionnaire responses suggest that they understood and responded to the paradigmfor each tool.“[I prefer] tuning for first time exploring [the] available or defaultchoices...[and] chaining for advanced personalization”(P20). Further, this delayshould negatively impact the preference for chaining as it had the greatest delay;but despite this, many rated chaining as their first or second choices.3.8 Conclusion and Future WorkIn this work, we examined the desirability and practicality of personalizing every-day vibrations by ordinary users. We proposed five parameters that can impactusers’ perception of personalization tools including: 1) size of design space, 2)granularity of control, 3) provided design framework, 4) facilitated parameter(s),and 5) clarity of design alternatives. We used cellphone message notification asan example application and prototyped three concepts varying in these parameters,namely, choosing, tuning and chaining personalization.55Overall, our participants showed interest in personalizing vibrotactile effects.According to the results of a WoZ study, all three tools were reasonably usable. Theparticipants preferred tuning over both choosing (current practice) and chainingbecause it provides some degree of design control but requires little design effort.Chaining personalization was the most demanding of time and effort but also themost powerful. Despite almost unanimous preference for the tuning interface, ourresults indicate that individuals’ weights for design control, effort, and fun of a toolis different. Thus, an effective personalization tool needs to incorporate a suite ofeasy-to-use tools with different design controls and affordances to accommodatediverse personalization needs.We did not conduct controlled studies to examine the effect of each parameterin isolation, since the parameters are not orthogonal and all combinations of themare not practically interesting. Instead, we defined three practical personalizationtool concepts to capture the variability along those parameters. The proposed pa-rameters were useful in understanding users’ opinions of our tools and the iPhonetool. We think the actual size of the design space and flexibility of the designframework impacts perception of design control. Holistic control over stimuli andvisibility of design alternative can reduce the perception of effort. 
Preference is afunction of the perceived design control, effort, and fun of the interface.Ongoing questions are whether our proposed parameters can adequately char-acterize new personalization approaches and their use for other scenarios as wellas users’ reactions to them; if there is an optimal subset of the parameters for char-acterizing the tools, and even a single optimal set of parameter values. These meritfurther study; however, we predict the last will be unproductive. Instead, we en-courage tool designers to consider variations of their tools along these parametersto find the best parameter combination for their case, and to consider diversity inuser preferences.Our next step is to implement and test our tools on potential target devices(e.g., mobile phones and tablets) to investigate the effect of form factor and directcontrol over creation of haptic effects. We can then conduct longitudinal studies ofpersonalizing vibrations for truly personal use. Moreover, we would like to furtherinvestigate the specific benefits of personalization for users. Does personalizationincrease likeability, learning and usage of the vibrations?56In terms of easing the personalization task, we see two immediate opportuni-ties. The first is to use filters for stylizing or branding haptic effects, an approachused extensively in photo editing software and preferred by our participants. Whatproperties do users want to change (e.g., emotion, sensation, or physical proper-ties)? How much does it depend on the design case? How can one design anemotion or sensation filter? The second is to gamify design. Some participantsthought using chaining was fun. We do not know of any haptic design games;these could increase interest in haptics and lead to crowd-sourced designs.At minimum, intuitive end-user tools will allow professional designers to em-ploy participatory practices. More inclusive tools and processes will expose users’criteria and desires for haptic effects, which is a significant current challenge inprofessional haptic design.3.9 AcknowledgmentsThis work was supported by NSERC and approved by UBC’s Behavioural Re-search Ethics Board, #H13-01646.57Chapter 4Choosing From a Large LibraryUsing FacetsFigure 4.1: Conceptual sketch of the choosing mechanism with VibVizPreface:1 In Chapter 3, we studied the concept of choosing as a practical mech-anism for haptic personalization and found it to be easy-to-use but lacking a senseof control. Here, we further developed the choosing mechanism into an interface1The content of this chapter was published as:Seifi, Zhang, and MacLean. (2015) VibViz: Organizing, visualizing and navigatingvibration libraries. Proceedings of IEEE World Haptics Conference (WHC ’15).58that improves its sense-of-control and diversity while keeping it simple and ef-ficient. To achieve these, we investigated people’s cognitive schemas for hapticsensations and introduced five facets – flat lists of related vibration attributes de-rived from users’ language. We utilized these facets in building VibViz, an interfacefor accessing a 120-item vibration library. With VibViz, users can quickly locate,search, or browse for their desired vibrations in a faceted space. Our small-scalestudy of VibViz suggested that facets provide effective means for structuring hapticsensations and warranted further investigation of the haptic facets.4.1 OverviewWith haptics now common in consumer devices, diversity in tactile perception andaesthetic preferences confound haptic designers. 
End-user personalization out of example sets is an obvious solution, but haptic collections are notoriously difficult to explore. This work addresses the provision of easy and highly navigable access to large, diverse sets of vibrotactile stimuli, on the premise that multiple access pathways facilitate discovery and engagement. We propose and examine five disparate organization schemes (facets, called "taxonomies" in the original conference publication), describe how we created a 120-item library with diverse functional and affective characteristics, and present VibViz, an interactive tool for end-user library navigation and our own investigation of how different facets can assist navigation. An exploratory user study with and of VibViz suggests that most users gravitate towards an organization based on sensory and emotional terms, but also exposes rich variations in their navigation patterns and insights into the basis of effective haptic library navigation.

4.2 Introduction

Vibrotactile technology appeared in mainstream consumer culture over a decade ago, first in buzzing pagers, cell phones, and game controllers. However, despite improvement in the quality and expressiveness of consumer-grade tactile display, user appreciation and adoption have remained low.

Figure 4.2: Users need an intuitive interface for navigating a vibrotactile library.

One culprit is slow growth in the value added by haptics, e.g., "informative" uses wherein different stimuli have different assigned meanings [11, 102]. Low utility interacts closely with low liking: whether a user finds a tool hard to use or just dislikes it, he/she often responds to the consequent irritations, learning difficulty and incomprehensibility by minimizing or disabling it. The high incidence of online user posts for haptic features asking how to "turn it off" suggests one or both of these are in fact happening with haptics.

Individual differences in haptic perception and preferences may be at the root of this problem. Underscoring this premise is the emerging theme of a need to recognize user diversity in end-user haptics research [71, 72, 77, 137, 164]. Would "turn-it-off" individuals see more value in tactile feedback if it met their own specifications?

Diverse example sets, or libraries, are an obvious way to assist a user with personalization [77, 137]; but now we face the navigation challenge. Unlike visual images, vibrations must be scanned serially with most displays. Feeling and finding the entire contents of a sizable library is tedious and physiologically infeasible, as the first few vibrations quickly numb tactile receptors. Users may want to compare or choose multiple stimuli for their applications, but comparing and selecting from a rich multidimensional set is daunting. Confused and exhausted, users soon give up.

We are inspired by approaches taken in other domains to achieve highly navigable access to large, diverse collections. This includes principles such as offering multiple organizational schemes, informative and distinct visual representations, highlighting adjacencies between items and engaging users.
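As a concrete illustration of "multiple organizational schemes" applied to a vibration collection, the sketch below filters one small, hypothetical library by tags drawn from more than one descriptor family at once. The entries, field names and tags are invented for illustration and are not taken from VibViz or its library.

```python
# A few library entries, each annotated under several independent descriptor
# families (physical measurements plus tag sets). All names/values are invented.
library = [
    {"name": "heartbeat", "duration_s": 1.2, "energy_rms": 0.31,
     "tags": {"emotion": {"calm"}, "usage": {"reminder"}, "metaphor": {"pulse"}}},
    {"name": "alarm", "duration_s": 0.6, "energy_rms": 0.70,
     "tags": {"emotion": {"urgent"}, "usage": {"stop"}, "metaphor": {"siren"}}},
    {"name": "cricket", "duration_s": 2.0, "energy_rms": 0.12,
     "tags": {"emotion": {"calm", "playful"}, "usage": {"time left"},
              "metaphor": {"cricket"}}},
]

def matches(item, wanted_tags):
    """True if the item carries every requested tag, whichever family it lives in."""
    return all(any(tag in family for family in item["tags"].values())
               for tag in wanted_tags)

# "I want something calm that works as a reminder" -- two pathways, one query.
print([v["name"] for v in library if matches(v, {"calm", "reminder"})])
# -> ['heartbeat']
```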
While some publiclyavailable vibrotactile libraries exist, the accessibility of this valuable resource isobstructed by the general absence of these elements.Approach: The present research explores how organization and representa-tion of a vibrotactile collection can best support users in finding their desired vi-brations. Specifically, we identified five potential ways (“facets”) for organizingeffects. We created a library of 120 vibrations (for a single actuator), large enoughto pose significant navigational needs, annotated it by the facets, and created Vib-Viz, an interactive visualization interface with the goals of supporting both end-usernavigation and our investigation of our five facets’ utility and engaging qualities.Finally, we conducted a preliminary evaluation of VibViz and the five facets usingour vibrotactile library, in a user study with 12 participants where we triangulatedquestionnaire and observation data. Our contributions include:• a process for creating a large (120 items) vibrotactile library• identified challenges for large tactile library design• five potential organization schemes (facets) for vibrotactile effects, drawnfrom literature• an interactive library navigation interface (VibViz)• a first evaluation of VibViz and the five facets4.3 Related Work4.3.1 Vibrotactile LibrariesSome large collections of vibrotactile effects exist, including Haptic Effects pre-view and Haptic Muse by Immersion (124 vibrations) [71, 72], and FeelEffects byDisney Research (>50 vibrations for a haptic seat pad) [77]. Each uses a single or-ganizing principle: FeelEffects are grouped into 6 types of sensations or metaphors(e.g., rain, travel, motor sounds) and Haptic Muse by gaming use cases (sports,casino).61Other examples organize items on multiple dimensions simultaneously, butthese axes occupy the same domain; e.g., van Erp (59 vibrotactile melodies) [174]and Ternes & MacLean (84 items varying on note length, rhythm, frequency, andamplitude) [165]. Relevantly, Ternes used MDS to translate a purely physical de-sign space into perceptual dimensions [165], to facilitate “spacing out” its elementsfor maximum perceptual diversity given a device’s capabilities.Here we further hypothesize that restructuring a library over different domainswill not only help optimize perceptual item packing for a given hardware’s expres-sive capability, but also make it more accessible via multiple, qualitatively differentmeans of exploring and understanding it.4.3.2 Vibrotactile FacetsVibrotactile effects can vary in many ways. Most examined are physical charac-teristics, including intensity, duration, temporal onset, rhythm structure, rhythmevenness, note length, and location [103], all measurable from the vibration signal.Research on tactile language suggests that users often describe vibrations with sen-sory and emotional words [119, 139, 174], motivating Guest et al.’s sensory andemotional dictionary for tactile sensations [52]. Schneider & MacLean found thatpeople use familiar examples or metaphors (e.g., whistle, cat pawing) for describ-ing vibrations [136]. 
Vibrations may also be characterized by their usage context(e.g., double click vibrations [72]) and example (cellphone vibrations).We synthesized the above literature into five initial facets for vibrotactile ef-fects, intended for structuring and accessing a large vibrotactile collection: 1) Phys-ical characteristics – e.g., duration, energy (“1 second long”), 2) Sensory charac-teristics – e.g., roughness (“feels rough or changing”), 3) Emotional characteristics– e.g., pleasantness, arousal, and other emotion words (“feels urgent”), 4) UsageExamples – types of events for which a stimulus could be used (“good for a re-minder”), and 5) Metaphors – familiar examples that resemble the effect in someway (“feels like snoring”).624.3.3 Inspiration from Visualization and Media CollectionsResearch on books and other media suggests that multiple visual pathways to alibrary can promote exploration and engagement, and increase serendipitous dis-covery [171]. Musicovery, an online music streaming service, visualizes its col-lection based on music mood and emotional content and allows filtering by genre,date, artist and activity [114]. However, unlike books and music, the most relevantalternative facets for vibrotactile stimuli have not been clearly identified.Our library interface borrows many guidelines from the information visualiza-tion (InfoVis) domain, including using multiple views and linking their content. InInfoVis terminology, “filtering” refers to reducing the number of elements shownon the screen to a smaller subset of interest and a “glyph” can refer to any com-plex visual item, in contrast to single geometric primitives such as dots and squares[113].4.4 Library & Facet ConstructionOur library includes 120 vibrations, a size chosen to require an effective organiza-tion scheme. Elements range from 0.1s to 14.6s in duration and 0.05 to 0.734 inenergy (vibration signal Root Mean Square or RMS). In the present study, stimuliare rendered by a C2 actuator [34]. In the following, we describe how we designedthe library and specified our five facets, and discuss obstacles we encountered.4.4.1 Library PopulationOur library required significant and diverse representation across all of our eventualfacets to the extent possible given available physical parameters. We “sourced”effects through a variety of methods, including:• collected a repository of effects from our past studies and collaborations withindustry,• systematically generated a large set of vibrations by varying the rhythm, fre-quency, and envelope structure,• asked our haptics colleagues to design vibrations for a given list of metaphors63(e.g., a dog, a spring, panting) with a rapid prototyping tool called mHive[139],• constructed vibrations based on the Apple iPhone’s sound icons, either mim-icking timing and frequency changes, or directly applying low-pass filteringto them.• for all of above, iteratively generated variants on existing vibrations andpruned overly-similar instances.To balance facet representation, at several points we annotated the library’s con-tents according to the current description of our facets. This in turn led us to refineour facet descriptions, with the final result in Table 4.1.Table 4.1: Final vibrotactile facets used in study1. 
Physical: Properties of a vibration that can be measured.1) duration (msec), 2) energy (RMS), 3) tempo or speed (annotator-rated), 4) rhythmstructure.For (4), we categorized stimuli by rhythm following [165]:a) short note: all pulses <0.25sb) medium note: all pulses 0.25s<0.75sc) long note: all pulses >0.75sd) varied note: combination of short, medium, and long pulsese) constant: single pulse2. Sensory: Vibration perceptual properties.1) roughness, 2) sensory words from touch dictionary [52].3. Emotional: Emotional interpretations of vibration.1) pleasantness, 2) arousal, 3) dictionary emotion words [52].4. Usage Examples: Types of events which a vibration fits.We collected and consolidated a set of usage examples for presentation timing andexercise tracking (Tam et al. [164]).5. Metaphor: Familiar examples resembling the vibration’s feel.With a questionnaire, we collected a set of metaphors for our list of usage examples,asked colleagues and friends to provide metaphors for our vibrotactile effects, andused the NounProject website [168] for brainstorming on metaphors.4.4.2 Visualizing and Managing Diversity During GrowthAs the library grew, it became harder to assess progress towards a goal of evenlydistributed diversity; to compare existing effects, prune similar ones, and find gaps.64Figure 4.3: Using Audacity for visual comparison of vibrationsWe responded with several organization and visualization mechanisms.1) We built a database of existing vibrations in a spreadsheet; each row repre-sented one vibration. Columns indicated vibration properties for each facet, andcould be filtered. Despite addressing our most immediate needs, this approach hadseveral drawbacks including limited filtering functionality, slow vibration play-back, lack of a visual representation for the vibration patterns to support quickvisual scanning.2) To improve visual inspection, we stacked subsets (about 30) of vibrationwaveforms in Audacity, an audio authoring tool, for quick vibrotactile modifica-tion and playback [107](Figure 4.3). The improved visualization qualities easedidentification of near-duplicates and omitted vibration structures.3) Finally, we plotted vibrations according to their emotional (pleasantness andarousal) and physical characteristics (energy, duration, tempo, etc.) to enable suc-cessive pruning and filling along each dimension.These mechanisms eventually conveyed us to an adequate result, but were cum-bersome; worse, their fragmented nature hindered iteration, sometimes guidingmodifications in conflicting directions. However, the experience of building thislibrary gave us direct insight into the situation faced by any user in navigating alarge, unstructured and poorly visualized set of items. The specific problem of65navigation emerged as a primary obstacle to its use, whether for personalization orany other kind of design, and inspired us to turn to other interactive visualizationmediums to craft a better solution.4.5 VibViz: An Interactive Library Navigation Tool4.5.1 RequirementsWe needed our library interface to do two jobs, in the context of personalizationtasks: 1) support novice end-users in vibration discovery (for example, in an onlineor local vibration library); and 2) allow us to study the utility and appeal of our fivevibrotactile facets.To support end-users, the interface must be easy to use without training. 
It needs to support both search and exploration; we anticipate that sometimes users will want to search with a set of characteristics in mind, and other times explore with minimal direction. It must support discovery of vibrations that resemble or contrast to a reference. It should provide multiple pathways, a key to serendipitous discoveries; and its use should be engaging enough to invite curiosity-driven exploration [171]. As a research tool, the interface needed to provide clear separation of the facets, allowing us to study user interactions by facet and users to articulate their opinions.

4.5.2 VibViz Interface

Designed based on these requirements, VibViz is an interactive visualization with three views (Physical, Sensory/Emotional and Metaphor/Usage Example – Table 4.2), each with a screen area containing vibration representations and filter controls (Figure 6.2a). Several features bear notice:

Linked views- All views show the same vibration subset at any time: a filter applied to one controls the others, and hovering over a vibration in one highlights that vibration elsewhere. Hovering over a tag in the tagclouds highlights associated vibrations in all three views.

Thumbnail design- A vibration glyph automatically highlights central characteristics of each vibration waveform and renders it as a thumbnail. The glyph encodes vibration frequency with colour saturation and a darker stroke envelope to highlight the vibration pattern over time.

Figure 4.4: The VibViz interface and C2 wristband that renders the vibrations. (a) VibViz interface: hovering over a tag in any of the tagclouds (here, the "agitating" tag, circled in red) highlights the associated vibrations on all three views. This is done with more saturated colors in views A and B, and with a dark frame in view C. The labels "View A, B, C" are included for explanation and were not visible to participants. (b) Detailed vibration popup in VibViz (top) and C2 wristband (bottom).

Drill-down and marking- A left or right click on a vibration respectively opens a detail popup (Figure 4.4b), or bookmarks it. Marked vibrations have a highlighted border.

VibViz is best displayed on screen sizes equal to or larger than 12 inches and is designed for a single actuator. For multiple actuators, the user can playback one vibration simultaneously on several actuators or rely on the target application program to synchronize timings of the vibration notifications on multiple actuators.

4.5.3 Dataset

To use our vibration library in VibViz, each vibration had to be annotated for all five facets. We measured vibration duration, energy, and pulse structure. Three researchers annotated the other vibration properties; one annotated all and two half of the library. We averaged ratings and removed any pairs of contradicting tags.

Table 4.2: VibViz user interface view descriptions.
General Characteristics:
- Views A, B and C occupy the upper left, upper right and lower regions of the interface screen, respectively (Figure 6.2a).
- We combined the Sensory and Emotional facets due to tag overlap (View B). The Metaphor and Usage Example facets share the vibration glyph on View C to save screen space.
- Hovering over a dot (Views A-B) or row (View C) shows a visual thumbnail of the vibration pattern (glyph) and plays the vibration on the tactile display.
A.
Physical View: Provides an overview of all the vibrations, each represented by acoloured dot, according to axes of energy (vertical) and duration (horizontal).Filters: 1) Tempo – slider for speed.2) Pulse structure – checkboxes, with colours matching associated dots, for short note,medium note, etc.3) Horizontal zooming – click & drag on the Physical space zooms on the horizontalduration axis.B. Sensory and Emotional View: Each vibration appears as a dot in a 2D arousal–pleasantness space.Filters: 1) Roughness slider and 2) Sensory and Emotion words tagcloud. Changingthe roughness range or clicking on the tagcloud selects vibrations having a roughnesslevel in the specified range, and all of the currently selected tags.C. Metaphor and Usage Example View: A central, scrollable list of vibrations isflanked by Metaphor and Usage Example tagclouds. Each row has three columns: thevibration’s Metaphor tags, its glyph, and its Usage Examples.Filters: Clicking on tags in either tagcloud reduces the displayed list to vibrations thathave the specified tag(s).4.6 User StudyWe ran a small user study to investigate two questions:Q1) Does VibViz satisfy its design requirements? (Research tool; supportsnovice use, search, exploration, finding similar/contrasting items, serendipity, mul-tiple pathways).Q2) How useful is each facet for personalization? How interesting is each forend-users? As pathways to exploring the library, does their multiplicity providesignificant utility and interest over a single view?Participants and Procedure- We recruited 12 participants (7 female) usingflyers and social media posts, for a 1-hour study and $10. The majority (8 out68of 12) of the participants did not have any prior vibrotactile background beyondtheir cellphone vibration notifications. Three participants had attended vibrotac-tile demos or user studies in the past and one had experience in designing vibra-tion patterns. We audio-recorded sessions and asked participants to verbalize theirthoughts throughout.In a pre-questionnaire, participants wrote down 1-2 daily activities and theirpreferred notifications (e.g., activity: running; notification: start and end of eachinterval). They then explored VibViz (displayed on a 14 inch laptop screen) for 10minutes to get a sense of its features, while wearing a C2 tactor held in a wrist-band (Figure 6.2a-c); the experimenter answered any questions about the interface.Participants next completed 9 scenarios (one at a time, 4 warm-up and 5 complex– Table 4.3), with random ordering in each set (≤ 3 min per scenario). Warm-upscenarios were clearly linked to one facet; complex scenarios were open-ended butcommon tasks in personalizing real world vibrotactile notifications and thus, weresubject to interpretation. For example, the like/dislike scenarios were included tomimic situations where users’ knowledge of the desired vibrotactile notificationis purely implicit and visceral. Finally, participants filled a post-questionnaire.Throughout the session, the experimenter sat beside the participant and used anobservation sheet to record confusions, comments, and actions taken to completeeach scenario.Table 4.3: Study scenarios. 
Green/warm-up; blue/complex.Scenario DescriptionSc (Physical) Find a vibration that is “short” in duration, “strong”, and“fast”.Sc (Emotional) Find a vibration that is “urgent” and “pleasant”.Sc (Metaphor) Find a vibration that feels like a “fly or bee”.Sc (Usage Example) Find a vibration that is good for both “start” and “stop”notifications.Sc (Like) Find a vibration that you like.Sc (Not like) Find a vibration that you do not like.Sc (Pre-Q) Find a vibration for the notification you wrote on the pre-questionnaire.Sc (Combined) Find a vibration that feels “natural”, catches your atten-tion, and is good for “every 5 minute notification”.Sc (Similar) Find a vibration similar to the last vibration you chose.69Data and Analysis- Our data consisted of demographics and notification typesfrom pre-questionnaire, the experimenter’s notes on confusions and list of actionsfor each scenario, and ratings and comments from the post-questionnaire. Duringthe study, we noticed that sometimes participants used the List, Physical, or Senso-ry/Emotional spaces to explore the vibrations without using the characteristics ofthat facet. Thus, we analyzed participants’ actions on filters and spaces separately.Due to the study’s small size and interesting variations among participants, we relyon summary statistics such as counts and percentages instead of statistical tests.4.7 ResultsWe structure this section according to our research questions.4.7.1 Q1) Does VibViz Satisfy Our Design Requirements?1- Serve as a research tool for vibrotactile researchers: VibViz provided ad-equate separation to allow us to observe and log participants’ actions by facets.With the current design, however, one would need a combination of software log-ging and eye-tracking to automatically collect meaningful data.2- Support novice users: Participant comments indicated that several termsand controls were confusing during initial exploration: Rhythm structure (10 par-ticipants), Arousal dimension (5), AND/OR filter operation (4). Also, none of theparticipants discovered the ability to bookmark vibrations or perform a zoom onthe Physical view until they were told. 4 and 3 people respectively did not noticelinked filtering or linked highlighting of vibrations across all views.3- Support end-users in search and exploration tasks: According to post-questionnaire data, 9 participants followed “an explicit search” and 9 “a less-focused exploration” strategy, “many times” or “always”, to find the vibrations.e.g., P1 stated that “finding vibrations always started with an explicit search up tothe point that I filtered everything that I thought might not be the proper ones forthe scenario. Then I explored among the available filtered options”.4- Support users in finding similar vibrations: 6 participants used the vi-sual vibrotactile glyphs and List space, 4 used proximity on the Sensory/Emotionalspace and 2 used Metaphor or Usage Example tags to find similar vibrations.705- Facilitate serendipitous discoveries: Based on the definition of serendipityin [171], the frequency of finding a vibration “by accident” or “by a less-focusedexploration” can be a measure of serendipitous discoveries. 
8 participants found aninteresting vibration “by accident”, 9 found the scenario vibrations “by accident”,and 11 found them “by a less-focused exploration” for at least “a few times”.6- Provide multiple pathways to the vibrotactile library: Based on the per-centage of actions (Figure 4.5), 7 participants used elements of at least two sepa-rate facets in more than 20% of actions. Participants also varied in their preferredfilter and space combinations; e.g., P4 never used the List space, while P9 usedit frequently (62%). All participants used different pathways for different tasks(Figure 4.6). In our observations, these percentages also reflected the time theparticipants spent on the different parts of the interface.Figure 4.5: Average filter and space usage per participant. Tan, yellow, and green colors denote low(< 10%), medium (< 20%), and high (> 20%) usage frequency.4.7.2 Q2) How Useful and Interesting Is Each Vibration Facet?Facets interest and utility- According to post-questionnaire data (Figure 4.7), theparticipants found the combination of all views most interesting, followed by Sen-sory/Emotional. Physical and Usage Example were least interesting. Similarly, allthe views were perceived as useful, led by the full combination and Sensory/Emo-tional.71Figure 4.6: Average filter and space usage per scenario. Tan, yellow, and green colors denote low(< 10%), medium (< 20%), and high (> 20%) usage frequency.Frequency of facets use- In response to the question “Which of the followingviews would you use most often?”, 8/12 participants chose Sensory/Emotional, 3 ofwhom wanted it in combination with the Metaphor or Physical views. Accordingto P6, “they are all useful for different things...I think I can use the Metaphor andEmotional view most of the time and occasionally switch to the other ones for aspecific task”. P8 had a similar comment. Among others, 3 selected the UsageExample and 1 the Physical view. Our observation data generally aligned withpost-questionnaire data. On average, Sensory/Emotional filters were used most(22%), followed by Physical (15%), Metaphor (9%) and Usage Example filters(8%).Mismatches- Post-questionnaire responses from P2, P7, and P9 conflicted withour observations. P2 chose Usage Example on the post-questionnaire but usedSensory/Emotional most often (26%). This difference was likely due to her stateddislike for the tagcloud design for the Usage Example filters. P9 chose Sensory/E-motional but mostly used the List space (62%) during the scenarios, noting that “Iwant to go through them all, don’t wanna miss some by filtering.” Most curiously,P7 chose Usage Example but used it the least during the study. We cannot speculateon the reason. We did not notice any differences in the usage patterns of the fourparticipants who had attended vibrotactile demos or user studies or had vibrotactiledesign experience.Other useful features- Visual vibrotactile glyphs were appreciated (9/12 ratedthem as somewhat or very useful). In our observation, they were especially helpful72Figure 4.7: Interest, usefulness, and ease of use for the vibrotactile facets based on the post-questionnaire datafor finding a previously seen/felt vibration, and for finding similar vibrations. Ac-cording to P4, “Based on the visual pattern, I started to realize which ones I likeand don’t like.” The List space was also used frequently (22%) for going throughall the remaining vibrations. 
Also, P3, and P9 mainly used the List space for thecomplex scenarios since they felt that their perception of vibrations did not matchsome of the tags.4.8 Discussion4.8.1 Interface RequirementsOur study results suggest several features that are important for a vibrotactilelibrary navigation: 1) filtering functionality, 2) visual vibration pattern, 3) spatialand tabular presentations, 4) bookmarking, and 5) simple vibrotactile authoringtools.We found that filters supported the search task and helped users narrow down toa vibrotactile subset that matched their criteria, while the visual vibration glyphs,list (tabular), and spatial representations were most useful for exploration. Thespatial and tabular representations allowed the users to flexibly sample the library,but the visual vibration glyphs made this exploration quicker and also assisted insimilarity search. In some cases, participants wanted to adjust the sensation of73a vibration; this calls for incorporating simple authoring tools into vibrotactilelibrary navigation interfaces (Chapter 3).4.8.2 Vibrotactile FacetsKeep all, show a subset, allow switching- Although the majority of users founda combination of facets most interesting and useful and used all the facets at somepoint, most often each only used about two views. Thus, we think the librarynavigation interface could show a subset of views to the users but allow them toswitch to other views as needed. Reducing the number of views frees up screenspace for other useful functionality (e.g., a personal view for a favorite vibrationsubset or for temporary comparison) and makes the tool viable for smaller screensizes.Support personalization- Users appear to vary in which subset of the viewsthey prefer. Thus, supporting personalization of default views is an important re-quirement. If only a single facets can be incorporated, our results suggest that theSensory/Emotional view is a reasonable default.4.9 Conclusions and Future WorkWe developed and studied five organization and navigation schemes (vibrotactilefacets) for a library of 120 vibrations. We designed VibViz, an interactive librarynavigation tool, to: 1) support novice end-users in personalizing vibrotactile notifi-cations, and 2) serve us as a research tool for studying the utility and appeal of thefacets. Our user study with 12 participants found greatest interest in the Sensory/E-motional facets, but also interesting variations among participants in preference forall the facets. Our results revealed the importance of visual scanning (tabular andspatial overview, and visual vibrotactile pattern) for efficient library navigation.Our next step is to collect library annotations from a large group of users andstudy variations in their ratings and usage, and extend VibViz to support additionalpersonalization tasks, such as vibration set creation and item comparisons. In thelong term, we plan to conduct a field study on end-user personalization of vibro-tactile applications using our library and an improved VibViz interface.744.10 AcknowledgmentsWe thank Prof. 
Tamara Munzner for her feedback on the design of VibViz, and Oliver Schneider for annotation support.

Chapter 5
Deriving Semantics and Interlinkages of Facets

Figure 5.1: Conceptual sketch of the five vibration facets and their underlying semantic dimensions and linkages

Preface:¹ Having verified their utility for personalization tools in Chapter 4, we further developed the concept of haptic facets: we started from a flat list of ratings and tags collected from users for our 120-item vibration library, then identified their within-facet semantic structures with a set of dimensions. Finally, we derived four factors (urgency, liveliness, roughness, novelty) that can describe the between-facet linkages. We discuss how these results provide guidelines for haptic design, facilitate evaluation, and enable development of personalization tools. Further, we note a lack of scalable evaluation methodology for haptics and present our new data collection methodology for in-lab large-scale haptic studies.

¹ The content of this chapter was accepted for publication as follows: Seifi and MacLean. (2017) Exploiting Haptic Facets: Users’ Sensemaking Schemas as a Path to Design and Personalization of Experience. To appear in International Journal of Human Computer Studies (IJHCS), special issue on Multisensory HCI.

5.1 Overview

Our poor understanding of the connection between haptic effect engineering – using controllable parameters like frequency, amplitude and rhythm – and the way in which sensations are comprehended by end-users hinders effective design. Haptic facets (categories of attributes that characterize collection items in different ways) are a way to describe, navigate and analyze the cognitive frameworks by which users make sense of qualitative and affective characteristics of haptic sensations. Embedded in tools, they will provide designers and end-users interested in customization with a road-mapped perceptual and cognitive design space. We previously compiled five haptic facets based on how people describe vibrations: physical, sensory, emotional, metaphoric, and usage examples.

Here, we report a study in which we deployed these facets to identify underlying dimensions and cross-linkages in participants’ perception of a 120-item vibration library. We found that the facets are crosslinked in people’s minds, and discuss three scenarios where the facet-based organizational schemes, their linkages and consequent redundancies can support design, evaluation and personalization of expressive vibrotactile effects. Finally, we report between-subject variation (individual differences) and within-subject consistency (reliability) in participants’ rating and tagging patterns to inform future progress on haptic evaluation.
Thisfacet-based approach is also applicable to other kinds of haptic sensations.5.2 IntroductionDespite growing interest in and availability of haptic technology in consumer mar-kets, even its most common manifestation of vibrotactile feedback is still limited77(a) Design Guidelines andRefining: Designers oftenneed to translate aesthetic re-quirements specified in emo-tion, metaphor, and usagespaces (e.g., surprise) to sen-sory and engineering parame-ters (e.g., frequency); and torefine candidates.(b) Evaluation: Assessing oraccessing the perceptual andaesthetic qualities of vibra-tions, created by manipulatingengineering parameters, allowsdesigners to use them appro-priately.(c) Personalization: End-users can more efficientlyselect and tune vibrations in aperceptual and aesthetic spacethan in an engineering space,requiring the further capabilityof repositioning sensationswithin cognitive spaces.Figure 5.2: Three scenarios in vibrotactile design, evaluation, and personalization that facets cansupport when fully instantiated in design tools.in everyday use, generally appearing as a dull, undifferentiated and often annoyingbuzz. While a dearth of expressive hardware is one obvious cause, there are com-parable difficulties in designing with even the hardware we already have for bothvibrotactile and other haptic display modalities [104].Design is difficult for many reasons, not least due to large variances in indi-viduals’ preference and interpretation of how vibrations feel and what they suggest[68, 98, 100, 128]. Here we highlight two gaps in support which we propose arecentral.A Lack of Guidelines and Tools: When making (sketching, refining) and eval-uating sensations, designers often identify requirements in terms of usage exam-ples (e.g., allowing presenters to track time during their presentations), intendedemotions (sadness, surprise), or accompanying media (a racing car in a game)[19, 77, 161, 164, 185], but are forced to design with engineering parameters (Sce-nario 1, Figure 5.2a). In other cases, designers have a set of vibrations (whethernewly created or accessed within an existing collection) and wish to evaluate theiraesthetic and qualitative characteristics (Scenario 2, Figure 5.2b). The ability to uselow-level engineering parameters to construct or evaluate for affective and quali-78tative characteristics is tacit knowledge that haptic designers build over years andthrough extensive contact with users. It is hard to communicate, incorporate intools or transfer to others.Perception is Personal but Personalization is Unsupported: Past studies of vibro-tactile applications in real-world contexts indicate the necessity of end-user per-sonalization [77, 164]. However, there is a dearth even of effective expert tools forfar more accessible and perceptually understood engineering parameters like vibro-tactile amplitude and frequency; easy and practical mechanisms that would makesense to end-users are rare indeed. Unsurprisingly, previous work suggests thatpersonalizing based on engineering parameters is beyond end-user capacity andwillingness. When given tools in their own language domain, users can quicklyaccess and modify their desired vibrotactile notifications (Scenario 3, Figure 5.2c,and Chapters 3 and 4).5.2.1 Facets: Aligning Content Access with Mental FrameworksPeople unconsciously use a multiplicity of cognitive frameworks or schemas todescribe qualitative and aesthetic attributes of vibrations [119, 139]. 
Sometimespeople describe a vibration based on its similarity to something they have experi-enced before (this is like a cat purring), on emotions and feelings (this is boring),or intended usage (this tells me to speed up). These schemas, themselves composedof many attributes (Figure 5.3a) are in users’ minds: shaped by their past experi-ences and training, they provide a cognitive scaffolding on which people rely forsense-making.Facets, a design concept originating from the information retrieval domain[40, 57, 58, 153, 180], capture the multiplicity and flexibility of users’ sense-making schemas for physical and virtual items. A facet encapsulates the propertiesor labels related to one aspect of or perspective on an item and offers a catego-rization mechanism. For example, examples of alternative facets for a collectionof architectural images are people (such as designer, agency, historical figure),time periods, geographical location (GPS coordinates, province, neighborhood),and structure types (function, architectural elements). For a collection of clothingitems they might be garment type (top, bottom, inner, outer, accessories), color,79brand, formality, season [57, 180]. A given facet may be composed of a singleproperty (e.g., brand) or a set of diverse elements that reflect that perspective –e.g., lists of descriptive words (tags), numerical scales, binary or multicategoryattributes (e.g., province). The facet characterization varies by domain and relieson a user’s knowledge and conceptual mapping of that domain. Multiple facets canbe used flexibly together to describe or examine different aspects of a given itemin a collection, or alternatively, explore those aspects in light of other collectionitems.In Chapter 4, we identified five facets for vibrations based on the literaturewhich captured: 1) physical attributes of vibrations that can be objectively mea-sured such as duration, rhythm structure, etc. 2) sensory properties such as rough-ness, 3) emotional connotations, 4) metaphors that relate the vibration’s feel tofamiliar examples, and 5) usage examples or events where a vibration fits (e.g.,“speed up”). We implemented these facets in an interactive graphical visualizationand navigation tool, VibViz (Chapter 4).Here, we revise these into four facets: sensation, emotion, metaphor, and usageexamples (Table 5.1). For consistency with past haptic literature [166], we nowrefer to dimensional attributes that can be objectively measured (e.g., duration,frequency) as engineering space. The sensation facet now includes the subjectivedimensional attributes energy and tempo, previously in the physical facet.These facets provide unique ways to assign a familiar meaning to a haptic sen-sation. For example, the metaphor and usage facets rely on previously experiencedsensations and usage contexts to make sense of vibrations (see [149] for more de-tails). We implemented these facets in an interactive graphical visualization andnavigation tool, VibViz [149], and denote them and related concepts here with aspecial font and subscripts (as explained in Figure 5.3).While not meant to be a unique or complete delineation of the possible vibro-tactile facet space, this set does provide a practical sense of what facets can offer todesign. Because a given vibration can be located in the context of any and all, eachhighlighting a particular aspect, they can organize a messy hodgepodge of inconsis-tent language and mixed models into a powerful tool that leverages perception andanalogy. 
The interactive visualization tool VibViz allows untrained users to perusea large vibrotactile collection by viewing items in multiple facets simultaneously80Table 5.1: Vibration facets used here, taken with minor alterations (†) from Chapter 4. These facetproperties are combinations of ratings (quantitative attributes such as i,ii, iii for sensationfacet) and tags (list of words iv). For example, in the sensation facet, i, ii and iii are singleattributes on which an item can be rated, while iv is a list of descriptive tag words that mightapply to sensations when considered from this viewpoint. Modifications: (1) Omitted thephysical facet. For consistency with past haptic literature [165], we now refer to dimensionalattributes that can be objectively measured (e.g., duration, frequency) as engineering space.(2) The sensation facet now includes the subjective dimensional attributes energy and tempo,previously in the physical facet.Facet Attributes1. SensationPerceptual properties ofvibration.i) energy†ii) tempo or speed†iii) roughnessiv) Sensory words: 24 adjectives from touch dictionary [52].2. EmotionEmotional interpreta-tions of vibration.i) pleasantnessii) arousaliii) Emotion words: 26 adjectives from touch dictionary [52].3. MetaphorFamiliar examples re-sembling the vibration’sfeel.Metaphor words: We collected a set of 45 metaphors for ourlist of usage examples, asked colleagues and friends to providemetaphors for our vibrotactile effects, and used the NounProjectwebsite [169] for brainstorming on metaphors.4. Usage ExamplesTypes of events which avibration fits.Usage example words: We collected and consolidated a set of24 usage examples for presentation timing and exercise tracking[164].and dynamically.These multi-facet views thereby become rich, layered descriptions which in-form design. For example, VibViz’s linked facets show how an individual item mayhave different perceptual near-neighbors and contrasts in the different facets.From Browsing to Manipulating in Facet Space: In its primary form, a facet isjust a flat list of attributes like tags and ratings (Figure 5.3b). Thus, it only allowsus to browse existing, defined elements (as VibViz does). What if a designer oruser wants to change an element, or find points in between existing library items(Figure 5.2 scenarios)? A semantic dimension offers a structure for the facet; itprovides a continuous perceptual parameter along which one can move vibrationsor characterize them (Figure 5.3c). Imagine a slider that makes a vibration moreor less “exciting”, “alluring” or “bell-like” – in contrast to ones that change itsbase frequency or amplitude. Such sliders would allow both trained designers and81untrained end-users to manipulate (sketch, ideate, personalize) vibrotactile signalsmore directly by offering handles in a language framework relevant to their pur-pose.However, to allow continuous movement along cognitively useful dimensions,a tool must do far more than locate discrete sensations within facet space: it mustidentify and present a topologically continuous mapping between the facets andengineering spaces, so that every point of the slider’s range can be rendered.Further, VibViz already hints at considerable redundancy between facets – whena dimension in one facet is very similar to that of another, but goes by a differentname. 
Facets are not independent spaces, but alternative views of the same thing.Mapping connections specifically will enable designers to translate or formulate re-quirements from one facet space (e.g., emotional or application-driven constraints)into more actionable sensory and engineering spaces (Scenario 1, Figure 5.2a) orevaluate aesthetic characteristics of a set of vibrations given their sensory proper-ties (Scenario 2, Figure 5.2b).5.2.2 Research QuestionsA major objective of this research is to establish a means of finding such mappings.As a first step, we have pursued three questions:(Q1) Within-Facet Substructure: What are the underlying dimensions of thefacets that dominate users’ reaction to vibrations? For example, for the emotionfacet one could then design or identify the most emotionally distinct vibrations.These dimensions are the first step towards perceptually salient continuous “slid-ers”, such as roughness.(Q2) Between-Facet Linkages: How are attributes and dimensions in differentfacets linked with each other? A specific mapping will allow for translation of re-quirements from one facet to another (e.g., emotion to sensation and vice versa)and provide the basis for a topologically continuous mapping between the facet di-mensions and engineering parameters. Designing a “surprising” sensation is muchsimpler if one can access its sensory characteristics to be irregular, ramping up, andrough. Our format convention for vibration tags or attributes highlights points in afacet space, as opposed to dimensions.82(a) People use mixed language to describe andmake sense of vibrations, which is highly de-scriptive; but its disorganization makes it hardto use in design.(b) Facets organize users’ descriptions into cat-egories of labels, each describing and orientingelements according to one aspect that the labelsin that facet share.(c) The underlying semantic dimensions ofeach facet (shown as black arrows) structures itsattributes, and exposes axes along which there iscontinuity.(d) Factors are conceptual constructs that candescribe the linkages between dimensions ofthe four facets (red arrows)).Figure 5.3: Concept sketch showing haptic facets, dimensions and their linkages. Central elements(denoted throughout the chapter with a special font and subscripts) include (1) tag: a la-bel/word that people use to describe an attribute of a haptic sensation (e.g., soft, exciting);(2) facet f : a framework that binds related attributes of haptic sensations into a descriptivecategory; (3) dimensiond : a continuous parameter that delineates variations in a facet; and(4) factor f act : a conceptual construct underlying linkages among different facets (deducedhere using factor analysis).83(Q3) Individual differences in facets: To what extent do people coincide or dif-fer in their assessment of vibration attributes? Facets are based on frameworksin users’ mind which can vary greatly, for example due to past experiences andculture. Understanding this variation can shed light on individual differences inpreferences and meaning-mappings, and inform development of robust haptic eval-uation instruments.5.2.3 ScopeWe used the VibViz vibration library and the concept of facets to investigate thesequestions. We first collected an extensive set of user annotations (selections ofadjective ratings and tags) for library elements to situate the vibrations within thefour facets (Chapter 4). 
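To give a concrete sense of the shape of these annotations, the following is a minimal sketch of what a single vibration's record could look like once its ratings and validated tags are gathered. The field names and example values here are illustrative assumptions for exposition only, not the schema of the released dataset [145].

```python
# Hypothetical annotation record for one vibration in the 120-item library.
# Names and values are illustrative; they do not reproduce the released dataset.
vibration_annotation = {
    "id": "V042",
    "ratings": {                 # five bipolar 7-point scales, coded -3..+3
        "energy": 2,
        "tempo": 1,
        "roughness": -2,
        "pleasantness": 2,
        "arousal": -1,
    },
    "tags": {                    # validated tag lists, one list per facet
        "sensation": ["smooth", "continuous", "simple"],
        "emotion": ["calm", "comfortable"],
        "metaphor": ["heartbeat"],
        "usage": ["pause", "get ready"],
    },
}
```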
We obtained this data in a two-step process adapted fromdata collection methods in the music domain [173], first with three experts and thenwith 44 lay users.In our subsequent analysis, we derived semantic dimensions of each facetthrough Multidimensional Scaling (MDS) analysis [25], and investigated between-facet linkages using factor analysis [170]. With this data, we updated and furtherpopulated Table 5.1’s descriptions to include our derived facet dimensions and theirlinkages. Our analysis occurred at multiple levels: we examined low-level proper-ties and linkages of individual tags (tag level), and then semantic facet dimensionsobtained from MDS analysis (dimensional level), and finally compared these acrossthe four facets (facet level). Thus, our novel contributions include:1. Empirically derived semantic dimensions of four vibrotactile facets;2. Between-facet linkages at dimensional and individual tag levels, and discus-sion of their implications for vibrotactile design and tools;3. Analysis of individual variations in rating and annotating vibrations;4. A two-step methodology for annotating large sets of vibrotactile effects, anddata on its validity and reliability; and5. A publicly available dataset of 120 vibrations and their annotations and di-mensions [145].84In the remainder of the chapter, we present the related literature on tool devel-opment, perceptual dimensions of vibrations, and haptic evaluation methodology(Section 5.3), and highlight important aspects of our approach (Section 5.4) fol-lowed by data collection details (Section 5.5) and analysis procedure and results(Section 5.6). In Section 5.7, we describe how our results support the design andevaluation scenarios outlined above (Figure 5.2) and compare our facet dimensionsand linkages to any existing dimensions in the literature. We finish by reviewingour data collection and analysis methodology and presenting interesting directionsfor future work.5.3 Related WorkThe design process for haptic sensations will inevitably vary substantially depend-ing on designers and use cases, but it usually involves several rounds of design,evaluation, and fine tuning of the stimuli and usage scenarios [16, 19, 104, 187].To support this process better, we need effective authoring tools, design knowledgeand guidelines, as well as evaluation methodology and metrics. Below we describeprogress in these areas and how our work builds on them.5.3.1 Tools for Vibrotactile Design and PersonalizationWith their crucial role in the design process, haptic authoring tools have receivedan increasing attention in the last decade. Design tools by nature facilitate use ofsome parameters and approaches while limiting access to others; e.g., pre-designedthemed color sets vs. full-spectrum palettes – an example of parameter-limiting; orfine tuning and precision vs. rapid prototyping and creative flow, i.e., approach-limiting. Existing haptic tools are built around the most important design param-eters and approaches identified in the literature or by practitioners. For example,to support design around rhythm or temporal pattern, the tools facilitate precisemodification and referencing of vibrations on a timeline [133, 140, 161]. Recentinstances promote use of examples and design by demonstration as well as rapidprototyping by allowing easy modification of design parameters [69, 140, 141].However, to our knowledge these tools currently provide access only to low-levelengineering parameters. 
Perceptual and affective controls over vibrations are miss-85ing, and this slows design.Content design and manipulation are no longer done only by a specific group ofusers [99]. In several other domains (e.g., photo and video editing, music mixing,configuring software), a spectrum of tools exist for various expertise levels [38, 55,135]. Haptic design tools are catching up: while past tools have mostly focusedon experts, recent trends, published during this PhD work, have targeted end-userhaptic content creation and personalization [77, 185].Our work informs design of higher level controls, which can be thought of astuning sliders or knobs and might be implemented as such in a design interface.These will benefit both expert design tools and end-user personalization interfaces.5.3.2 Knowledge of Perceptual and Qualitative Attributes ofVibrationsA body of work has investigated perceptual dimensions of natural (e.g., textures)and computer-rendered synthetic haptic stimuli (e.g., vibrations), and users’ lan-guage for touch [28, 52, 68, 102, 119, 121, 174]. In our own previous work, VibViz,we compiled five vibrotactile facets based on dimensions and properties known inthe literature for vibrations and users’ language (Chapter 4).Several tactile perceptual studies exist on natural textures (e.g., fabrics, flu-ids and various surface materials) due to their higher availability and wider rangeof sensations (see [121] for a survey). However, the resulting dimensions (suchas warm/cold) are not easily translated to computer-rendered synthetic sensations.Others examine prominent vibrotactile attributes based on users’ similarity group-ings or ratings for small to large sets of vibrations. They report energy, roughnessand rhythm as the most important design parameters [15, 63, 166, 174]. Whilethese studies give insights into vibration perception, they tend to be organized interms of engineering or sensory parameters and are not linked to language attributesin users’ minds.Recent studies examine users’ tactile language and descriptions as a windowonto understanding prominent properties of touch. Notable among these is Guestet al.’s collection of touch-related English vocabulary [52]: based on MDS anal-ysis of word similarities, the authors propose three dimensions for sensory words(roughness, dryness and warmness), and three for emotional words (comfort, arousal86and sensual quality). We use this collection of sensation and emotion words in ourfacets; however, the identified dimensions are not validated for synthetic hapticsensations. Further, other aspects of users’ languages such as metaphors and usageexamples are not examined.Our own facets were previously constructed based in part on this literature;here, we further confirm, refine and add to these dimensions and attributes by ana-lyzing users’ perception of a large library of vibrations collected through the facets.5.3.3 Methodology for Evaluating Qualitative Attributes ofVibrationsPrevious research in related areas typically adapts methodology from other do-mains for haptic studies, or refines existing haptic evaluation methodology to bemore time- and cost-effective. For example, MDS studies in haptics were origi-nally adapted from the auditory domain to investigate perceptual distances betweentactile sensations [25, 51, 68]. Other researchers use phenomenology to obtainricher language-based descriptions of haptic sensations [119, 139]. 
However, phe-nomenological studies are time-consuming and thus are only practical with fewparticipants and small sets of sensations. In Chapter 6, we examine the feasibilityof using crowdsourcing platforms (e.g., Amazon’s Mechanical Turk) for vibrationevaluation. Despite promising results, the methodology is mainly tested for Likertscale evaluation and is not yet verified for richer, language and annotation-basedhaptic studies.Despite some progress in haptic evaluation approaches, it remains singularlydifficult for a researcher to collect rich feedback from lay users in a manner thatscales to large stimuli sets. Our data collection methodology, adapted from themusic domain, by necessity has had to fill this gap. Here, we report its executiondetails and examine its validity and reliability.5.3.4 Instruments for Evaluating Haptic SensationsAs haptic effects are designed for a wide variety of use cases and requirements,researchers frequently must devise a custom evaluation instrument for every study.Recent investigations have laid the foundations for devising a standard yet flex-87ible instrument for vibrations through examining users’ language and compilingimportant vibration properties and common metrics across past studies.Most relevantly, Guest et al. provide a linguistic instrument for tactile sensa-tions called the “touch perception task” (TPT) [52]. TPT is composed of 26 sensoryratings and 14 emotional ratings and was tested by its authors on natural textures.Here, we have re-used the annotation instrument we previously developed forvalidating and populating VibViz, built around language and metrics found in theliterature. Specifically, (a) four of our five Likert scale ratings (strength/energyd ,roughnessd , pleasantnessd , and arousald) are commonly used metrics; while (b) oursensation f and emotion f tag lists are based on Guest et al.’s sensation and emotionvocabulary [52]. We introduced the tempod rating scale as well as the metaphor fand usage example f tag lists in the previous chapter on VibViz (Chapter 4). Whenused to annotate a large vibrotactile library, this more comprehensive annotationinstrument can generate results that will inform future vibrotactile evaluation in-struments by identifying the redundant facet attributes and providing an estimateof users’ reliability and variation in responses.5.4 ApproachTo investigate the semantic dimensions of these facets and their linkages, we beganwith VibViz’s source vibrations and its comprehensive but efficient evaluation in-strument (Chapter 4). We report the scalable and robust methodology that alloweda comprehensive annotation of our vibration library and use standard dimension-ality reduction methods to analyze the resulting dataset. Below, we describe eachaspect of our approach in more detail.5.4.1 Rich Source VibrationsTo identify underlying dimensions and linkages of facets, we used a large and var-ied set of vibrations. In Chapter 4, we described our various tools and inspirationsincluding systematically changing vibration parameters (e.g., rhythm, frequency),modifying audio files to serve as vibrations or using audio files as reference fordesigning vibrations, and running pilot design studies where our lab colleagues de-signed vibrations for a given set of metaphors (see Chapter 4 for more details on88our library design process). 
Our design process was intertwined with developingthe four facets and their annotation instrument and resulted in 120 vibrations witha wide range of qualitative and affective characteristics.5.4.2 Inclusive and Concise Annotation Instrument, for a FlatDescriptor SetFor an accurate picture of the vibrations, we needed an inclusive and non-redundantannotation instrument. If an important rating or tag is not included, we would beunable to identify the corresponding dimension (exclusion risk). In contrast, re-dundant ratings or tags can introduce noise. As the set of ratings and tags grows,users’ (even experts’) ability to consistently characterize a vibration decreases (re-dundancy risk).We developed our ratings and tags to reduce both risks. We included quanti-tative rating scales that are frequently utilized in the literature and incorporated asmany relevant tags as possible in our evaluation’s first step with experts (mitigatingexclusion risk), and after the expert annotation phase, removed and consolidatedredundant items in a discussion session (mitigating redundancy risk). The ratingscapture users’ perception on attributes that are previously identified to be salientfor vibrations, while the tags allow us to derive salient dimensions not known be-fore. The results of the process are five bipolar 7-point Likert scale ratings and fourlists of candidate tags (see Table 5.1 for an overview, and Section A.1 for a full listof tags proposed for each facet).5.4.3 Scalable and Robust Data Collection MethodologyWe needed a comprehensive ‘gold standard’ annotation set that covered the fullVibViz library. Ideally, annotations would be applied by individuals who rated theentire facet space for all the items. This would require individuals rate and tag 120vibrations, each according to five scales and 121 candidate tags. In piloting, wefound this was too mentally and physically demanding to be suitable for lay userswith varying levels of commitment, confirmed by poor signal-to-noise propertiesof that pilot data. We therefore devised a new collection method that could bespread across multiple participants (scalable) and would be robust to outliers, i.e.,89the occasional low-commitment participant – or at least, to clearly identify these.Music annotation literature provides interesting alternative approaches for datacollection, such as a panel of experts: Pandora Internet Radio uses experts to an-notate its music dataset, constructing a “gene sequence” for each music piece thatis used for music recommendations [124, 173]. Alternatively, services such asLast.fm crowdsource the annotation task, incenting end-users to add free-form tex-tual tags to songs from which it derives music “folksonomies” [81, 93, 173]. How-ever, our access to haptic experts is limited and the literature lacks a set of standardattributes for vibrations. Furthermore, we can not yet fully crowdsource vibra-tion annotations, in large part due to hardware limitations and lack of a validatedmethodology (Chapter 6).We therefore adapted these two approaches into a two-stage evaluation system.In the first expert annotation stage, three haptic designers rated and tagged thevibrations employing initial rating scales and tag lists, with encouragement to beliberal in application of tags to stimuli. 
In the lay user validation stage, a larger number of participants with no haptic background adjusted the experts’ ratings and tags for subsets of the library – principally by removing tags which they felt did not apply, since this proved to be mentally easier than applying new ones; although tag addition was also allowed. The first stage resulted in consistent annotations across the library that were relatively free of the noise introduced by participants’ fatigue and lack of commitment, but reflected only a small number of subjective opinions. In the second stage, we pruned the potentially overpopulated annotation dataset by bringing in additional, but potentially less committed, perspectives. We fully detail the process in Section 5.5.

This methodology does have a bias risk: participant perceptions of vibrations in the second stage can be influenced by the rating values and tag assignments that they are shown. We devised mechanisms in our experiment design to mitigate this bias, and evaluated its impact on our final dataset.

5.4.4 Data Analysis Methods

We used Multidimensional Scaling to identify the underlying dimensions for the tags (but not the rating scales or values) in each facet, and factor analysis to investigate constructs that link dimensions (including rating scale data) between various facets.

Multidimensional Scaling is a dimensionality reduction technique that is commonly used to derive and visualize a low-dimensional perceptual space from a high-dimensional dataset [25]. We used Matlab’s classical MDS implementation (a.k.a. Principal Component Analysis or PCA), where the distances among the items (vibrations or tags) are Euclidean – as opposed to ordinal, as in a non-metric MDS [167].

Factor analysis is typically used to identify underlying variables (a.k.a. factors) that connect and describe a set of observed but correlated quantitative variables [170, 183]. For example, factor analysis is usually applied to surveys with several Likert-scale questions to find connected questions. We applied factor analysis to our derived facet dimensions and the ratings collected for our five scales.
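As a concrete illustration of this factor-analysis step, the sketch below fits a four-factor model to a matrix of vibration scores laid out as reported later in Section 5.6.2 (120 vibrations × 15 variables: the five rating scales plus the ten facet dimensions). It is a minimal sketch under stated assumptions: the placeholder data, the use of scikit-learn, and the varimax rotation are illustrative choices, not the software or settings used in our analysis.

```python
"""Minimal factor-analysis sketch, assuming 120 vibrations (rows) scored on
15 variables (columns). The random matrix, variable names, and rotation are
placeholders for illustration, not the thesis's actual data or settings."""
import numpy as np
from sklearn.decomposition import FactorAnalysis

variables = ["energy", "tempo", "roughness", "pleasantness", "arousal",
             "SensationD1", "SensationD2", "SensationD3", "SensationD4",
             "EmotionD1", "EmotionD2", "EmotionD3",
             "MetaphorD1", "MetaphorD2", "UsageD1"]

rng = np.random.default_rng(0)
X = rng.normal(size=(120, len(variables)))   # stand-in for the real score matrix
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize so loadings are comparable

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0).fit(X)
loadings = fa.components_.T                  # shape: (15 variables, 4 factors)

# Report loadings the way Table 5.5 does: suppress |loading| < 0.3 and
# flag |loading| >= 0.45 as a high contribution to that factor.
for name, row in zip(variables, loadings):
    cells = ["  --  " if abs(v) < 0.3
             else f"{v:+.2f}{'*' if abs(v) >= 0.45 else ' '}" for v in row]
    print(f"{name:<13}" + " ".join(cells))
```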
5.5 Data Collection and Pre-processing

Here, we detail the collection of ratings and tags for the vibrations in the two stages described above – expert annotation, and novice validation; then describe dataset pre-processing and define the metrics with which we analyzed its tags and ratings.

5.5.1 Stage 1: Annotation by Haptic Experts

We required expert annotators who had experience with a wide range of haptic and/or vibrotactile sensations, were familiar with our vibrotactile library and facets, and could commit to annotating all or a large subset of the vibrotactile library within a short time span of a few days. Within-subject annotation of the entire vibration set would produce consistency and breadth in our initial annotation dataset; however, it did impose a substantial cognitive load on the expert annotators, and thus we utilized experts with some commitment to the research and group. Given the nature of the task, we did not feel this closeness to the research could bias the results, but leveraged it for motivation.

Expert backgrounds: Three haptic researchers, including the first author, provided expert annotations. The first author, a vibrotactile researcher who developed the vibration library and annotation instrument, rated and tagged all the vibrations, while the second and third experts each annotated half of the vibrations (randomly assigned to them). The second annotator is a haptic researcher at the University of British Columbia with extensive experience in designing and evaluating vibrotactile sensations and haptic design tools. The third annotator is a Human-Computer Interaction researcher who co-designed VibViz with the authors and had extensive exposure to all the vibrations in the library before participating in this study. The second and third annotators received a $50 honorarium for their participation.

Initial dataset: The 120 vibrations from the VibViz library were randomly divided into 10 groups with 12 vibrations in each group. These groups remained fixed for all three expert annotators.

Figure 5.4: Expert annotation interface. (a) The first tab shows all five rating scales. (b) The other four tabs show a list of potential tags in each facet and a textbox at the top for extra tags. One can play a vibration many times and move between the tabs representing the required ratings and tags for the vibration, but cannot go back to previous vibration(s).

Apparatus and procedure: The annotation interface was a web-based wizard that gradually disclosed the available ratings and tags for the vibrations on subsequent tabs. The first tab disclosed five rating scales (7-point Likert scales) for the vibrations (Figure 5.4a, Table 5.1). The four other tabs had the list of tags for the sensation, emotion, metaphor, and usage example facets, plus a textbox for any additional tags from the experts (Figure 5.4b). In each session, the experts first played a fixed set of representative vibrations for calibration purposes, then proceeded to annotating a group of 12 vibrations (randomized presentation order). During the annotation process, the experts could play a vibration several times and move between different tabs for one vibration, but they could not go back to previous vibration(s), even within that group. At the end of each group, a review page showed all the expert’s ratings and tags for the vibrations, which could be further edited. This procedure encouraged the experts to focus on the demanding task of annotating each vibration individually but also allowed for cross-comparisons and consistency adjustments afterwards.

Annotating a group took about 45-60 minutes. Experts were given the choice of annotating their groups in a single session or spread over several sessions, but were not permitted to interrupt a single group’s annotation. Expert 1, the first author, evaluated 10 groups over 5 sessions within 6 days, while Experts 2 and 3 evaluated their 5 groups over 3 sessions in 8 days and 4 sessions in 4 days, respectively. The experts were allowed to revisit their previously annotated groups (but never requested to do so). The total time spent by each expert was approximately 8 hours for Expert 1, and 4-5 hours for Experts 2 and 3.

Pre-processing and tag consensus and consolidation: After collecting all the annotations, the first author examined all the tags for each vibration and highlighted conflicting tags (e.g., a smooth tag by one expert and rough by another, or angry vs. happy). In a follow-up session, all three experts played and felt the vibrations with contradictory tags again and came to consensus on one of the conflicting tags or on removing both.
Further, they could and did adjust wording (e.g., dynamic insteadof changing), and combined tags under one wording (e.g., jaggy and grainy werereplaced by grainy).935.5.2 Stage 2: Validation of the Dataset by Lay UsersOur sole requirement for our Stage 2 participants was to have no background inhaptics beyond normal everyday exposure to vibration sensations (e.g., via cell-phone usage).Participants and compensation: We recruited 44 participants (24 female, 19-60years old, with 40 of the participants under 36 years old) through advertising ona North American university campus. All participants were university studentsexcept for three who did not declare their occupation. Participants were permittedto participate in more than one session but tag different vibrations in each session(up to a maximum of 4 sessions covering all 120 vibrations) and six participantsdid so. Participants were compensated $10 for a one-hour session.Initial dataset: Our dataset was composed of the 120 vibrations with the averageexpert ratings and the combined and consolidated tags for each vibration, randomlydivided into 12 groups of 10 vibrations. This grouping remained fixed for all theparticipants.Mitigating bias and noise in the validation stage: We anticipated that the existingratings and tags could bias participants’ perception of the vibrations and/or suggesta lower need for their attention. Following literature guidelines on detecting invalidresponses [27, 82], we mitigated this by making additions to the database whichwould expose non-diligent participants, and warned participants of the possibilityof inconsistencies to encourage diligence, while added negligible cognitive load tothe annotation task.Specifically, we included intentional errors in the dataset, duplicated some ofthe vibrations, and presented the existing annotations to the participants as “datafrom other users that can include noise”. To identify the highly-biased participantsor those who did not pay close attention to the experimental task, we included twointentional errors, one in the ratings and one in the tags, in each vibration group.For the rating error, we modified the energyd rating for one of the vibrations fromvery high (+3 on a 7-point likert scale) to very low (-3) or vice versa. For thetags, we added an invalid tag to one of the vibrations in each group (e.g., added“long” to a vibration with the short tag) resulting in two clearly contradicting tagsfor the vibration. These changes were clearly different from the characteristics and94Figure 5.5: Validation interface gave access to all 11 vibrations at the same time and could removetags and adjust ratings. Participants could see the existing (expert) ratings in blue, and theirown adjusted ratings in green. They could remove a tag by clicking on it (graying it out),and re-add it by clicking it again.other ratings and tags for the vibration, thus added minimal cognitive load to theannotation task. Also, we duplicated one of the 10 vibrations in each group (for atotal of 11 vibrations) to assess the participants’ rating and tagging reliability.Finally, as part of the experiment instructions, we told the participants that theexisting ratings and tags were provided by other people and we were running thisstudy to remove the noise in that data.Apparatus and procedure: The validation interface was composed of two webpages, for calibration and annotation pages respectively (Figure 5.5). 
An experiment session took about 1 hour and the participants went over 2-3 vibration groups (22-33 vibrations) depending on their annotation speed. After the initial instructions, participants went through all the calibration vibrations for that session (33 vibrations). Then, they proceeded to the annotation page where they could see all 11 vibrations for one group (randomized order). They could change the ratings, remove tags, or add additional tags; the initial ratings and tags were visible at all times. After completing a group, the experimenter loaded the next group of vibrations and the participant went through the calibration and annotation pages for that group. At the end of the session, participants filled out a short post-questionnaire for demographic information and any other relevant comments.

Table 5.2: Definition of our analysis metrics

Tag removal threshold: The number of participants that must remove a tag from a vibration before we eliminate the tag from our validated dataset. For example, we use a tag removal threshold of 4, meaning that every tag that is removed by 4 or more participants from a vibration’s list of tags is eliminated from the validated dataset.

Vibration distance: The extent to which two vibrations are described differently according to a given metric. In our study, the metrics are our facets. We calculate the distance between two vibrations in a facet (F_k) as the number of tags that are different between the two vibrations divided by their total number of tags in the given facet. We use this metric in our MDS analysis of the vibrations.

\[ \mathrm{Distance}(V_i, V_j, F_k) = \frac{N_{\mathrm{tags}}\big[(V_i, F_k) \ominus (V_j, F_k)\big]}{N_{\mathrm{tags}}(V_i, F_k) + N_{\mathrm{tags}}(V_j, F_k)} \tag{5.1} \]

Tag co-occurrence and tag distance: Co-occurrence is the number of times two tags are used together to describe the vibrations in our dataset. We calculate this value for two tags by counting the number of vibrations that have both tags and dividing it by their total frequency in our dataset.

\[ \mathrm{Cooccurrence}(T_i, T_j) = \frac{2 \times N_{\mathrm{vibrations}}(T_i \cap T_j)}{N_{\mathrm{vibrations}}(T_i) + N_{\mathrm{vibrations}}(T_j)} \tag{5.2} \]

Tag distance: We define the distance between two tags (“tag distance”) as one minus their co-occurrence value. We use these tag distances in our MDS analysis on the tags.

\[ \mathrm{Distance}(T_i, T_j) = 1 - \mathrm{Cooccurrence}(T_i, T_j) \tag{5.3} \]

Tag disagreement score: An estimate of the amount of controversy among the participants in keeping or removing a tag. We measure it based on the number of participants that disagree with the majority of taggers (about removing or keeping a tag for a vibration), divided by the total number of times the tag is presented to the participants in our dataset. For example, if for all the occurrences of a tag in our dataset only one of the participants has a different opinion from the rest, the formula results in a disagreement score of 0.11. The highest disagreement is 0.44 (the lowest is 0), meaning that for all the vibrations, the tag is approved by half of the participants and removed by the other half.

\[ \mathrm{Disagreement}(T_i) = \frac{\sum_j N_{\mathrm{MinorityParticipants}}(V_j, T_i)}{N_{\mathrm{vibrations}}(T_i) \times N_{\mathrm{participants}}(V_j)} \tag{5.4} \]

Vibration disagreement score: The amount of difference in the participants’ descriptions of a vibration according to a criterion. In our study, we calculate vibration disagreement per rating and per facet. For the ratings, we use the standard deviation of the ratings by the participants. For each facet (i.e., tag set), we define our metric to be similar to the standard deviation but applicable to the tags.
Specifically, for a vibration, we count the number of tags that are different between a participant’s approved tags and the validated tag list for the vibration, and divide it by the total number of tags the experts provided for that vibration. We average the value over all taggers for that vibration.

\[ \mathrm{Disagreement}(V_i) = \frac{\sum_j N_{\mathrm{tags}}\big[(V_i, P_j) \ominus (V_i, \mathrm{Validated})\big]}{N_{\mathrm{tags}}(V_i, \mathrm{Experts})} \tag{5.5} \]

Unreliability score: Rating unreliability is the absolute difference in the ratings for a vibration and its duplicated version (for example, for energy ratings, the unreliability is defined as R(V_i, energy) = |energy(V_i) − energy(V′_i)|). Tag unreliability is the percentage of removed tags that are different between a vibration and its replica. Specifically, it is the number of tags removed from a vibration or its replica (but not from both) divided by the total number of tags removed from each.

\[ \mathrm{TagUnreliability}(V_i, F_k) = \frac{N_{\mathrm{RemovedTags}}\big[(V_i, F_k) \ominus (V'_i, F_k)\big]}{N_{\mathrm{RemovedTags}}(V_i, F_k) + N_{\mathrm{RemovedTags}}(V'_i, F_k)} \tag{5.6} \]

5.5.3 Pre-Processing of the Dataset

Prior to full analysis, we handled outliers and then averaged and incorporated our Stage 2 annotators’ input to prune tags as planned.

Outlier removal: We used participants’ performance on the intentional rating and tag errors to identify outliers with high bias or low attention to the experimental task. Specifically, if a participant only modified the rating errors, we removed their tagging data, and if they adjusted less than 1/3 of both the rating and tag errors, we removed all their data from the dataset. As a result, each vibration in the dataset has data from 9 taggers and 9-13 raters (5 rating outliers, and 13 tagging outliers).

Constructing the validated dataset: To derive the validated ratings for a vibration, we averaged all the participants’ ratings for that vibration. We eliminated tags removed by more than 1/3 of the participants (≥ 4 out of 9). In this way, we removed tags that were commonly marked as irrelevant, yet did not excessively limit the dataset (to the tags approved by everyone), to allow for more interesting analysis and results.

5.5.4 Definition of Analysis Metrics

To address our research questions, we devised a set of metrics that are applicable to ratings and free-form tags and used them as the basis for our analysis. Table 5.2 summarizes all the metrics with mathematical formulas. In the formulas, V_i and V′_i denote the ith vibration and its replica respectively, T_j refers to the jth tag, F_k to one of the four facets, and N_items to the number of items (e.g., tags, vibrations, participants). ∩ and ⊖ denote the intersection and symmetric difference, respectively, of two tag sets.

5.6 Analysis and Results

We provide our analysis procedure and results, focusing on our three research questions in turn, followed by a summary of our dataset characteristics.

5.6.1 [Q1] Facet Substructure: What Are the Underlying Facet Dimensions That Dominate User Reactions to Vibrations?

To interpret and verify the underlying dimensions for the facets, we analyzed the data in four steps:

1. Ran a first MDS analysis on the vibration distances (Table 5.2) in each facet to determine the number of underlying dimensions for the facet;
2. Determined an initial interpretation of the dimension semantics based on frequent and contrasting tags at the ends of each dimension (Table 5.4);
3. Visualized the distribution of the vibrations along each MDS dimension, color-coded based on the existence (or lack) of related tags, to verify our interpretation of the dimension (Figures 5.7, 5.8, 5.9, 5.10);
4. Examined the results of a separate MDS analysis on tag (in contrast to vibration) distances as a test of convergent and discriminant validity (Appendix A.4).
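Before walking through the facets, the sketch below illustrates how Step 1 can be computed: the facet-wise vibration distance of Equation 5.1 feeds a classical (Torgerson) MDS, the same family of algorithm as the Matlab classical MDS noted in Section 5.4.4. The NumPy implementation, data layout, and toy tag sets are illustrative assumptions rather than our released analysis code.

```python
"""Minimal sketch of Step 1, assuming each vibration's validated tags for one
facet are stored as a Python set. Eq. 5.1's distance feeds classical MDS."""
import numpy as np

def facet_distance(tags_a, tags_b):
    """Eq. 5.1: differing tags (symmetric difference) over the total tag count."""
    total = len(tags_a) + len(tags_b)
    return len(tags_a ^ tags_b) / total if total else 0.0

def distance_matrix(tag_sets):
    """Pairwise facet distances for a list of per-vibration tag sets."""
    n = len(tag_sets)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = facet_distance(tag_sets[i], tag_sets[j])
    return D

def classical_mds(D, n_dims):
    """Classical (Torgerson) MDS: double-center the squared distance matrix,
    eigendecompose it, and keep the top positive eigenvalues."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)            # ascending order
    order = np.argsort(eigvals)[::-1]               # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    coords = eigvecs[:, :n_dims] * np.sqrt(np.clip(eigvals[:n_dims], 0, None))
    normalized = eigvals / eigvals[0]               # for an eigenvalue plot like Fig. 5.6
    return coords, normalized

# Toy sensation-facet tag sets for three vibrations (illustrative only):
sensation_tags = [{"smooth", "continuous", "simple"},
                  {"rough", "short", "discontinuous"},
                  {"smooth", "simple", "short"}]
coords, rel_eig = classical_mds(distance_matrix(sensation_tags), n_dims=2)
```

Note that the tag distance of Equations 5.2–5.3 reduces to the same formula applied to the sets of vibrations carrying each tag, so the tag-level MDS of Step 4 can reuse the same routine.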
Together, these analyses reinforced our interpretation of the semantics of the dimensions and revealed the distribution of vibrations and tags in each facet. Below, we separately describe the analysis steps in detail, then present results for each facet.

[Step 1] Deriving dimensions from vibration distances: We calculated quantitative values for vibration distances, in each facet, based on the number of shared and differing tags in the validated tag lists for each pair of vibrations in the library (Table 5.2). Then, we ran an MDS analysis on these vibration distance values for each facet. From this data, we determined the number of MDS dimensions using the eigenvalue plots as well as dimension interpretability. In Figure 5.6, eigenvalue contributions are normalized to that of the first eigenvalue. Since these plots do not have an obvious “knee” (vertical gap), for each we first chose an initial set of dimensions based on the highest-contributing eigenvalues; then, considered dimension interpretability before arriving at a final number [52]. We thereby found between one and four dimensions for each facet (Table 5.4).

Figure 5.6: Eigenvalue plots for the four facets: (a) sensation facet, (b) emotion facet, (c) metaphor facet, (d) usage facet. In each, the horizontal axis represents the number of dimensions and the vertical axis indicates a dimension’s contribution to reconstructing the vibration distances. If there is a large vertical gap between the nth and (n+1)th dimensions, the first n dimensions have much larger contributions than the following ones and describe most of the variation in a facet; thus, we use those first n dimensions in our analysis. The red dotted line highlights the number of dimensions we select for each facet. The eigenvalue contributions are normalized based on the first (largest) eigenvalue.

[Step 2] Determine semantic descriptors for each MDS-produced dimension: We listed the validated tags and their rates of occurrence for the 10 farthest vibrations at each end of an MDS dimension. The most frequent, yet still contrasting, tags for the two ends of a dimension provided us with an initial interpretation of dimension semantics. We found one to several such high-frequency tags (descriptive terms) bounding each end of each dimension found in Step 1 (Table 5.4).

[Step 3] Verifying dimension semantics by visualizing vibration distributions: We visualized the spatial distribution of vibrations along the identified MDS dimensions from Step 1 and color-coded them based on the existence (red, green) or lack (gray) of the high-frequency tags from Step 2 (Figures 5.7, 5.8, 5.9, 5.10). As explained more fully in the first caption, vertical bars encode the MDS positions of the vibrations along each dimension, while bar color denotes whether a vibration’s validated tag list has one of that dimension’s high-frequency tags. Red and green bars that are grouped at the opposite ends of the dimension, with gray mostly in the middle, signify that the identified tags adequately represent the semantics of the dimension; substantial mixing of colors does not.

[Step 4] Investigating tag distances: We ran a second MDS analysis on our derived tag distances (Table 5.2) and examined word positions in the resulting MDS map as a measure of convergent and discriminant validity of our interpretations [52], as follows.
Convergent validity is supported when the words that have sim-ilar meanings in relation to a dimension are spatially close in the MDS solution.Discriminant validity is supported if the words with contrasting meanings are lo-cated far from each other in the MDS solution. Thus, we examined whether thecontrasting tags for each dimension are far away from each other while the rele-vant tags for the dimensions are in the same area. Results from this step mainlysupport findings of the above steps and thus are reported in Appendix A.4.In Table 5.3, we step through this process to interpret the dimensionality of each ofour facets specifically.100Table 5.3: Facet dimension analysisSensation FacetDimensions from vibration distances: Figure 5.6a’s eigenvalue plot suggests that after 4 primary dimensions, additionaldimensions contribute little more (<0.1 apart). The identities of the most frequent tags at dimension extremes suggest thatthese four dimensions could be defined by their endpoints as: 1) simple/flat to complex/dynamic, 2) continuous to discontinuous, 3)smooth to rough, and 4) short to long (Table 5.4).Color-coded vibration distributions: Figure 5.7 shows spatial distribution of the vibrations along the above four dimensions.All four have similar ranges (-0.5 to +0.7), indicating comparable variations along the dimensions. For the first three, theassociated tags explain the dimension semantics well: green and red bars are well-separated at the two ends of the dimensionsand the gray bars are around the central, neutral position. For the fourth dimension, the colored bars are less well separated,suggesting that these tags can at least partially explain this variation. We include it as the last interpretable dimension for thesensation f facet. These dimensions were further confirmed by our MDS analysis on tag distances (Appendix A.4).Final dimensions (also in Table 5.4): 1) simple—complexd , 2) discontinuous—continuousd , 3) smooth—roughd , and 4) short—longd .The overlap in the frequent tags for different dimensions (Table 5.4) and their spatial configuration (Figure A.1) suggest theabove dimension properties are not completely orthogonal.Emotion FacetDimensions from vibration distances: Figure 5.6b’s eigenvalues suggest 3-4 underlying dimensions; we opt for three due tohigher interpretability. The most frequent tags in Table 5.4 suggest 1) comfortable and calm vs. annoying and urgent, 2) boringand predictable vs. lively and interesting, 3) strange and surprising vs. rhythmic and mechanical.Color-coded vibration distributions: Figure 5.8 shows the distribution of the vibrations along each emotion f dimension. Forthe first and second, color distribution follows our interpretation. For the last, green bars are mostly grouped at the right(strange and suprising) but red and gray bars are randomly dispersed on the left, suggesting the need for a better description forthis end of the dimension.Final dimensions: 1) comfortable—urgent, agitatingd , 2) boring—lively, interestingd , 3) creepy, strange—rhythmic, predictabled .Metaphor FacetDimensions from vibration distances: We removed 13 of 45 metaphor f tags that were applied with low frequency (to <2vibrations) to avoid unrepresentative distortions in the MDS result. Metaphor f ’s eigenvalue plot then has a large number ofdimensions with similar contributions; however, the first two slightly more so than others. Tag frequencies suggest that thesetwo are differentiated in 1) tapping vs. engine, and 2) tapping and heartbeat vs. game or alarm. 
Further analysis of the tags,reported in A.4, indicated that along dimension 1, tags are divided into ongoing and repetitive or pulse-like and nuanced.For dimension 2, tags tend to be natural and calm; or mechanical, synthetic and annoying (See Appendix A.4 for the spatialconfiguration of tags).Color-coded vibration distribution: Tag distributions for both dimensions show a separation of green and red bars at the twoends of the dimensions with gray bars lying mostly in the middle (Figure 5.9).Final dimensions: 1) on-off, nuanced—ongoing and repetitived metaphors, and 2) natural, calm (mostly pulsing)—mechanical andannoyingd metaphors.Usage FacetDimensions from vibration distances: Eigenvalues suggest that the first dimension has a dominant contribution (Figure 5.6d).According to the most frequent tags, this dimension represents urgency and attention-demand of notifications. On one end,usage f tags suggest time urgency while on the other, notifications require little attention and are mostly for users’ awareness(Table 5.4).Color-coded vibration distribution: In Figure 5.10, red, gray, and green bars are fairly well separated and gradually changefrom the left to the right of the dimension, supporting our one-dimension interpretation for the usage f facet.Final dimension: 1) Low-demand awareness—urgent and attention-demandingd notifications.101Table 5.4: Final facet dimensions (derived in Table 5.3) and their most frequent tags: number ofdimensions identified from MDS analysis on the vibration distances and our interpretationof their semantics (left column), most frequent tags and their rates of occurrence for the 10vibrations at two ends of the dimensions (middle, right columns)Dimension Semantics Negative End of Scale Positive End of ScaleSensation f FacetSensationD1:complexityd simple (8), regular (7), soft (7) dynamic (10), irregular (9), com-plex (7)SensationD2: continuityd discontinuous (10), regular (9) continuous (10), simple (7)SensationD3: roughnessd smooth (10), soft (7), regular (7) rough (8), short (6), discontinu-ous (6)SensationD4: durationd discontinuous (7), simple (7),short (6)grainy (8), regular (7), long (6),rough (6), ramping up (6)Emotion f FacetEmotionD1: agitationd comfortable (10), calm (10),pleasant (8)annoying (10), mechanical (9),agitating (9), urgent (9), angry (8)EmotionD2: livelinessd predictable (10), boring (9), me-chanical (9)lively (10), unique (9), interesting(8), rhythmic (8)EmotionD3: strangenessd rhythmic (10), lively (9), mechan-ical (8)strange (10)Metaphor f FacetMetaphorD1: on-off, nu-anced/ongoing, repetitivedtapping (10) engine (10)MetaphorD2: natural/mechanicaldtapping (9), heartbeat (5) alarm (10), game (7)Usage f FacetUsageD1: urgency, attention-demanddpause (10), battery low (9), getready (8), resume (7)alarm (10), overtime (9), runningout of time (9), above threshold(8)Our five rating scalesTo determine if our rating scales are orthogonal, we ran a Pearson correlation onthe ratings for the five Likert-scale parameters across the 120 vibrations.Results show significant medium to high correlation for all five parameters, ex-cept for pleasantnessd and tempod (low correlation, r=-0.22). Energyd , arousald androughnessd have the highest correlations (r=0.74 - 0.92), followed by pleasantnessdand roughnessd (r=-0.61), tempod with arousald (r=0.56), and roughnessd(r=0.52),and pleasantnessd with arousald (r=-0.53) (full correlation table in A.3).102Figure 5.7: Distribution of vibrations across the four MDS dimensions identified for the sensation ffacet. 
All vibrations are shown. Position coding: thin vertical bars project each vibration's MDS-derived location onto this dimension. Color coding: bar color indicates whether the validated tag list for the vibration contains one of the frequent tags identified in Step 2 (red or green, with red indicating the left end of the dimension and green the right end) or not (gray). For SensationD1, a red bar denotes that a vibration has a simple or a flat tag, a green bar represents a vibration with a complex or dynamic tag, and gray bars show vibrations with no related tag. SensationD2: (red: discontinuous; green: continuous). SensationD3: (red: smooth or soft; green: rough). SensationD4: (red: short; green: long).

Figure 5.8: Distribution of all the vibrations across the three MDS dimensions for the emotion facet. EmotionD1: (red: calm, comfortable, or pleasant; green: urgent, annoying). EmotionD2: (red: boring; green: interesting, lively). EmotionD3: (red: predictable, familiar; green: strange, creepy, surprising).

Figure 5.9: Vibration distribution across the two MDS dimensions for the metaphor facet. Tags for MetaphorD1: (red: tapping; green: engine). MetaphorD2: (red: heartbeat; green: alarm or game).

Figure 5.10: Vibration distribution for the usage facet. We color vibrations with high urgency and attention tags (alarm, running out of time, overtime, or above intended threshold) green; vibrations with awareness notifications (pause, battery low, resume, or get ready) red; and vibrations with none or a mix of both types gray.

5.6.2 [Q2] Between-Facet Linkages: How Are Attributes and Dimensions Linked Across Facets?

We address this question by examining linkages among our identified dimensions as well as linkages among the tags of the various facets.

Dimension level: Are there linkages or correlations among the identified dimensions of the various facets? What factors can describe these correlations?

To address this question, we use factor analysis. Here, we include both the ratings and the facet dimensions in our analysis, to link our derived facet structures to one another as well as to the rating metrics frequently used in the literature. Thus, our variables are the five rating scales and the 10 dimensions identified for all the facets (a total of 15 variables). We use the values of the 120 vibrations on those 15 variables as our samples. This results in an 8:1 ratio of samples to variables for our analysis, satisfying the minimum ratio suggested in the literature (5:1) [183].

According to our results, four perceptual factors can describe the correlations among the dimensions of the various facets (the four right-most columns of Table 5.5). Table 5.5 shows the vibration properties (ratings and facet dimensions) with loadings > 0.3 for each factor and highlights the high loadings (≥ 0.45).

Factor 1 (Urgency): UsageD1 and EmotionD1 are highly connected to the same underlying factor as the energy, arousal, roughness, and pleasantness ratings. SensationD1 (complexity) and MetaphorD2 (natural/mechanical) are also connected to this factor, but with lower loadings.

Factor 2 (Liveliness/interestingness): EmotionD2 (boring/lively) is connected with SensationD4 (duration) and tempo on the second factor. SensationD2 (continuity) is also partially loaded onto this factor.

Factor 3 (Roughness): This factor presents the link between SensationD3 (roughness) and the roughness and pleasantness ratings.

Table 5.5: Factor analysis outcome.
The left column shows the initial rating scales (marked †) and the new facet dimensions after MDS analysis. The next four columns present the factors upon which we found some degree of cross-facet correlation, in terms of facet ratings and dimensions. For each factor column, values above 0.45 mark the facet variables with the highest contributions to that factor; empty cells indicate very low contributions (< 0.3). Facet properties that have high values on the same factor column (e.g., energy and UsageD1 in the Urgency factor) are correlated: the columns/factors are a point of linkage between the facets.

Revised facet properties               Urgency (F1)   Liveliness (F2)   Roughness (F3)   Novelty (F4)
1. Sensation facet:
   energy†                             0.89
   tempo/speed†                        0.43           0.45              0.34
   roughness†                          0.75                             0.48
   SensationD1 (complexity)            0.45                                               0.55
   SensationD2 (continuity)                           -0.38             0.31
   SensationD3 (roughness)                                              0.89
   SensationD4 (duration)              0.36           -0.48
2. Emotion facet:
   pleasantness†                       -0.64          0.33              -0.34             -0.31
   arousal†                            0.95
   EmotionD1 (agitation)               0.82
   EmotionD2 (liveliness)                             0.89
   EmotionD3 (strangeness)                                                                0.60
3. Metaphor facet:
   MetaphorD1 (on/off vs. ongoing)                    -0.32                               0.44
   MetaphorD2 (natural vs. mechanical) 0.45
4. Usage facet:
   UsageD1 (attention-demand)          0.80

Factor 4 (Novelty): SensationD1 (complexity) and EmotionD3 (strangeness) are connected on the fourth factor. MetaphorD1 also partially loads onto this factor.

Tag level: How do tags in the different facets correlate?

We used our tag co-occurrence metric (Table 5.2) as a measure of correlation between tags in the various facets. We report co-occurrence of the sensation facet's tags with the emotion, metaphor, and usage tags, since sensation tags relate most directly to engineering parameters (Figure 5.11) yet are hardware independent. Figure 5.11 presents links among the emotion and sensation tags (see A.6 for the tag co-occurrence tables of the metaphor and usage facets).

Figure 5.11: Co-occurrence of the sensation tags with the emotion tags in our vibration library. For each emotion tag (rows), we see the most (and least) associated sensation tags (encoded as darker and lighter cells respectively). For example, highlighted with red boxes, to design a surprising vibration, one should make an irregular, dynamic, ramping up, and rough sensation (design scenario in Figure 5.2a). Similarly, looking down each column, one can see how a particular sensation tag is perceived emotionally. Bumpy vibrations mostly invoke positive emotional responses such as comfortable, energetic, happy, lively, etc. (evaluation scenario in Figure 5.2b).

5.6.3 [Q3] Individual Differences: To What Extent Do People Coincide or Differ in Their Assessment of Vibration Attributes?

We examined variation in the participants' ratings and tags as a measure of individual differences in their perceptions and opinions. Here, we report these individual differences on several levels: the extent of variation (disagreement) across the facets, ratings, and tags, as well as the amount of disagreement per vibration.

Per facet

We measured overall individual differences in the facets based on the percentage of facet tags that were approved by everyone (100% of the annotators), as well as the percentage of tags that caused a split between the participants (defined as when half of the participants removed a tag and the other half kept it as an appropriate tag for a vibration).
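As an illustration of how such per-facet measures can be computed from the validation data, the sketch below assumes each (vibration, tag) pair carries a count of how many annotators kept it; this layout and the as-even-as-possible "split" rule are illustrative assumptions, not the study's exact bookkeeping.

```python
def facet_agreement(keep_counts, n_annotators=9):
    """Per-facet individual-difference summary from tag-validation keep counts.

    keep_counts: one entry per (vibration, tag) pair in a facet, giving how many
    of the n_annotators kept that tag. Both this layout and the 'split' rule
    (an as-even-as-possible division, since the panel size is odd) are
    illustrative assumptions.
    """
    n = len(keep_counts)
    unanimous = sum(1 for k in keep_counts if k == n_annotators)           # kept by everyone
    split = sum(1 for k in keep_counts if abs(2 * k - n_annotators) <= 1)  # ~half kept, ~half removed
    return {"unanimous_pct": 100.0 * unanimous / n, "split_pct": 100.0 * split / n}

# For the sensation facet, such a summary would be expected to come out near
# {'unanimous_pct': 21, 'split_pct': 18}, matching the values reported below.
```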
The sensation facet had the lowest individual differences, with the highest percentage of tags kept by everyone (21%, compared to 7-12% for the other facets) and the lowest percentage of tags that caused a split (18%, compared to 32-37%). Usage elicited slightly more individuated responses than emotion and metaphor, with 7% of tags approved by everyone and 37% of tags resulting in a split in the participants' opinions.

Figure 5.12: A stacked bar chart showing tag disagreement scores in each facet. The height of each bar indicates the total number of tags in a facet. More saturated parts of a bar indicate tags with higher disagreement scores.

Per rating

For each of the five rating scales, we used the standard deviation of the values provided by all the annotators for a vibration as a measure of individual differences in that rating. Averaged across all vibrations and on a 7-point scale, these are 1.0, 0.8, 0.7, 0.7, and 0.7 for pleasantness, roughness, energy, tempo, and arousal respectively.

Per tag

Stage 2 participants approved or removed some tags in consistent ways (e.g., short, irregular, agitating), whereas they showed differing opinions about the appropriateness of some others (e.g., ticklish, fear, start). The tag disagreement score represents the amount of controversy among the participants in keeping or removing a tag (Section 5.5.4). The highest possible score is 0.5, denoting a full split in participant opinions.

Figure 5.12 shows a bar chart of the number of tags in each facet, color-coded by disagreement score (higher color saturation denotes a higher disagreement score). The figure also lists examples of tags with low and high disagreement scores: e.g., in the sensation facet, the short and smooth transition tags had the lowest disagreement while ticklish had the highest. Overall, usage tags had higher disagreement than the other facets, with no tag showing very low (< 0.2) disagreement.

Per vibration

We computed disagreement among the ratings and tags assigned to each vibration (the vibration disagreement score is defined in Section 5.5.4). Figure 5.13 presents a heatmap of a subset of vibrations and their disagreement scores for the ratings and tags (see disagreement values for all the vibrations in A.6). Interestingly, the vibrations were not always consistently agreed or disagreed upon. For example, vibration "v-09-10-3-56" had low disagreement on sensation tags but higher disagreement on emotion, metaphor, and usage tags. The vibrations also differed in the facet(s) with the lowest controversy: "v-09-10-6-46" was mostly agreed upon in the emotion facet but had high disagreement in the metaphor facet. This pattern was reversed for another vibration (e.g., "v-09-10-4-25").

Figure 5.13: Disagreement scores for the ratings and facets for a subset of the vibrations, calculated based on Table 5.2. Disagreement scores are within [1-7] (ratings) and [0-1] (facets). A vibration can have a low disagreement score on one rating or tag set but a high disagreement score on another. High saturation denotes high disagreement.

Table 5.6: Summary of our annotation dataset after the two stages of expert annotation and lay-user validation (i.e., pruning). The top section shows the average difference in values provided on the five rating scales originally used to define the facets; the middle section shows overlap in the tag sets for each of the facets; and the bottom section shows the overall tag count for these facets.
Values in the expert and lay-user columns of Table 5.6 cannot be directly compared due to differences in the tasks in these collection stages: experts applied annotations (each vibration was annotated by two of three experts), while lay users were asked to confirm them, and largely removed rather than added tags.

Rating difference (7-point scale) — Experts (average difference among experts) / Lay users (average deviation from experts):
  Energy: 1.15 / 0.45
  Tempo: 1.26 / 0.54
  Roughness: 1.6 / 0.64
  Pleasantness: 1.64 / 0.84
  Arousal: 1.5 / 0.5
Tag overlap — Experts (tags applied by both experts) / Lay users (tags approved by ≥ 4 lay users):
  Sensation: 25% / 86%
  Emotion: 17% / 72%
  Metaphor: 14.5% / 76%
  Usage: 12.5% / 69%
Dataset tag count — Following expert annotation / following lay-user validation:
  Sensation: 744 / 635
  Emotion: 988 / 716
  Metaphor: 584 / 442
  Usage: 1234 / 857

5.6.4 Methodology: How Does Staged Data Collection Impact Annotation Quality?

The goal of our two-stage data collection was to reduce noise from outliers and improve dataset convergence and reliability by easing the annotation task for the lay users, at the cost of an additional round of data collection. Below, we summarize how well this new method achieves these goals by examining dataset characteristics after the two rounds of annotation and the reliability of the final dataset.

Expert and lay-user annotations: Table 5.6 summarizes characteristics of our dataset after the expert and lay-user annotation stages.

Reliability of the final annotation set: To assess reliability, we measured the absolute rating difference and the percentage of tag difference between a vibration and its replica (Section 5.5.4), for each individual participant as well as for the final aggregated dataset. On average, the ratings were ~0.7 apart (on a 7-point scale) for individual participants, but this difference was reduced to ~0.2 for the final aggregated dataset. Further, ~33% of the tags removed by an individual differed between a vibration and its replica, which was reduced to a ~7% difference in the final aggregated set.

5.7 Discussion

We start by looking at how these results apply to the three design, evaluation, and personalization scenarios we proposed in the introduction (Figure 5.2): have we indeed found evidence for perceptually continuous dimensions within individual facets, along which users would presumably find it logical to "move" individual haptic elements as an act of design? Do we have a mapping among the facets that enables translation of design requirements, or evaluation of aesthetic properties of haptic elements? We then compare our facet dimensions with the perceptual vibrotactile properties in the literature and draw insights into our findings on individual differences and annotation reliability. We finish by reviewing the validity and effectiveness of our methodological choices.

5.7.1 Within-Facet Perceptual Continuity: Scenarios

Scenario 1 – Design Guidelines and Manipulations (Figure 5.2a): In making haptic sensations, designers commonly have a set of requirements in the usage, metaphor, or emotion facets (e.g., surprise, or a racing car engine) and require guidelines prescribing the important sensation or engineering parameters for meeting those requirements. The linkages between the facets can provide such guidelines: the designer can look along the rows of Figure 5.11 and find the highly correlated sensation tags.
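A minimal sketch of this row lookup, assuming the co-occurrence counts behind Figure 5.11 are available as a nested mapping (the function name and data layout are illustrative, not a released API):

```python
def suggest_sensation_tags(target_tag, cooccurrence, top_k=4):
    """Return the sensation-facet tags most strongly associated with a target
    emotion/metaphor/usage tag, i.e., read one row of a co-occurrence table
    like Figure 5.11 and sort it.

    cooccurrence: dict mapping a target tag to {sensation_tag: co-occurrence score}.
    """
    scores = cooccurrence.get(target_tag, {})
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# With data like Figure 5.11, suggest_sensation_tags("surprising", cooccurrence)
# would yield tags such as ['irregular', 'dynamic', 'ramping up', 'rough'].
```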
For example, using Figure 5.11, the task of designing a surprise vibration is broken into designing a sensation that is irregular, complex, ramping up, and rough (sensation tags with high co-occurrence with surprise).

On the dimensional level, between-facet linkages provide a more continuous mapping for design. For example, a designer might want to create a palette of sensations that vary in liveliness. Using the correlation between the boring—lively dimension and the dimensions from the sensation facet, the designer can vary the continuity and tempo of the vibrotactile rhythm in sketching alternative palettes for further investigation. Determining the relevant engineering parameters and their values depends on the actuator type (e.g., voice coil vs. eccentric rotating mass actuators) and its hardware configuration (e.g., form factor, weight), and is straightforward given the body of psychophysical and sensory studies in haptics. For example, the designer can add discontinuity by including silence or pauses in a vibration, while ensuring that the duration of the silence is perceptible to people [166].

Scenario 2 – Evaluation (Figure 5.2b): Alternatively, when a designer has a set of vibrations and wants to know their emotional connotations, or appropriate metaphors or usage examples, they can look them up along the columns of Figure 5.11. For example, a bumpy sensation usually has positive emotional connotations such as happy, interesting, lively, and rhythmic, while ramping up sensations are usually annoying, mechanical, and uncomfortable.

Scenario 3 – Personalization (Figure 5.2c): Facet dimensions and their linkages provide the theoretical grounding for designers to build tuning and stylization tools for end-users who may wish to personalize their vibration notifications. First, the dimensions we found in this chapter are good candidates for the basis of tuning sliders, as they capture the dominant spectrums along which a vibration can vary within a facet. For example, one can imagine a tuning slider that moves a vibration along the emotion dimension of boring—lively. Then, even more practically, the linkages identified in our results between a dimension in the emotion, metaphor, or usage facets and the sensation dimensions inform the mechanics of building these sliders. For example, the boring—lively dimension is correlated with the signal's tempo, duration (SensationD4), and continuity (SensationD2). Thus, a designer can use these three sensation attributes in developing an algorithm for a liveliness slider, which is ultimately controlled by end-users to modify a vibration's liveliness to their personal taste. In Chapter 7, we use these results to build a set of tuning sliders for vibrations.

5.7.2 Facet Dimensions and Linkages

Here, we discuss the unique insights and challenges for the facet dimensions and present implications for future research and design where applicable.

The sensation facet provides designers with a practical translation platform between the facet space and engineering parameters like frequency and waveform. The sensation dimensions reflect important perceptual and engineering parameters identified in past studies. Specifically, rhythm and envelope, two parameters found to be influential and manipulable in expressive vibrotactile design [103, 166], are directly linked to continuity and complexity (SensationD2 and D1 respectively). Roughness and duration are also known to impact users' perception [62, 63, 103].
Thus, translating the emotion, metaphor, and usage dimensions and tags to the sensation facet offers a practical and hardware-independent means for design.

Emotional perceptions of vibrations do not follow the theoretical dimensions of pleasantness and arousal. The correlation of the pleasantness and arousal ratings (Section 5.6.1), as well as our MDS results on the emotion tags, suggest that these two dimensions are not orthogonal for our vibrotactile collection. As a result, not all four quadrants of the pleasantness (valence)-arousal grid are covered by the vibrotactile sensations in our library. Specifically, none were marked as either very pleasant and alarming (positive valence, positive arousal) or very calm but unpleasant (negative valence, negative arousal).

While it is possible that such examples exist but our library does not contain them, we note that two recent studies found a similar correlation, and the same gap, for different vibrotactile actuators and vibration sets. Yoo et al. examined several sets of vibrations (24-36 items each) on a voice coil actuator (Haptuator [162]) and none covered the negative valence-negative arousal or very high valence-high arousal quadrants [184]. Our own previous study in Chapter 2 reports a similar correlation for a small subset of 14 vibrations on an Electro-Active Polymer (EAP) actuator.

We propose that, for vibrations, the theoretical dimensions of pleasantness and arousal in the literature are not good representatives of the 2-D affect grid. There, sad and boring have negative valence and negative arousal, while vibrations with sad and boring tags do not fall in that area; they are not necessarily unpleasant and quiet, and this difference is reflected in our dataset. Instead, our MDS analysis of the emotion tags suggests that people perceive and rate vibrations according to three other dimensions: 1) agitation, 2) liveliness, and 3) strangeness.

This result impacts future research and design in at least three ways. First, further studies are needed to confirm or reject this pattern using other vibration sets, and to compare emotion dimensions for vibrations with those for other haptic stimuli (such as natural textures, force feedback, and variable friction) and other modalities such as vision and audition. Each of these stimulus categories has distinct similarities to and differences from vibrotactile sensations that impact users' emotional experience (e.g., variable friction stimuli are primarily sensed through the skin but require active user movement). Thus, future research is required to examine their emotional space(s) and contrast them with our proposed emotion space for vibrotactile sensations. Second, the three dimensions provide new directions for vibration design. Agitation, liveliness, and strangeness explain large variations in the emotion facet, have low correlation, and provide a more accessible design space for current vibrotactile technology. They may be promising targets for affective design. Finally, once further validated, these dimensions offer good candidates for devising a standard evaluation instrument for vibrations.

The metaphor dimensions are the most difficult to interpret. Our results suggest two dimensions for metaphor tags that vary on continuity, novelty, and urgency. However, the spatial configuration of tags in Appendix A.3 does not completely follow this definition (see the report of outlier metaphor tags in Appendix A.3). Also, these two dimensions are partially linked to the other facets in our factor analysis.
One reason could be that our metaphor tag set is larger but also sparser: there are fewer common metaphor tags among the vibrations (Table 5.4) compared to sensation, emotion, and usage tags. While this trend can reflect an inherent characteristic of metaphors for describing vibrations, future studies are needed to validate and expand on the above dimensions and to further develop the metaphor vocabulary for vibrotactile effects.

Users' interpretation of vibration meaning in usage contexts is mainly dictated by energy (or urgency). According to our MDS results, vibration energy or urgency is the most important dimension for usage tags. While energy is an important design parameter, we are not aware of previous work that empirically connects a vibration's energy to its application. Our vibration library is designed to include a wide range of sensations, but our tag list for usage was developed for a specific context: applications where time tracking is an important component (e.g., giving presentations and exercising). We anticipate this finding to extend to other application contexts, but future studies are needed to confirm or reject the importance of energy for other types of applications.

Emotional connotations of vibrations play an important role in users' perception of vibrations, regardless of facet. The three dimensions found for emotion have substantially high loadings on three of the four factors in Table 5.5: urgency, liveliness, and novelty. This suggests that the underlying constructs describing the variations and linkages between the facets are mainly emotional. In the absence of other strong criteria, the emotion facet can serve as the best default for end-user tools and interfaces.

5.7.3 Individuals' Annotation Reliability and Variation

Reliability of individuals' tagging is surprisingly low. In our Stage 2 study component, we placed a duplicate vibration in each vibration set; i.e., two out of the 11 were identical (Section 5.5.2). However, about 33% of individuals' removed tags differed for these duplicates (Section 5.6.4). This number is unexpectedly high: participants had access to all the vibrations and their tags via the experiment interface. Although the variation may be partially due to varying commitment and focus, it also suggests that people's memory of vibrations quickly fades. In contrast to auditory and visual icons, sensations in this unfamiliar modality are not always immediately memorable, and users commonly play a vibration several times to form an opinion about it or to compare it with another vibrotactile sensation. This negatively impacts reliability, but in some cases can simplify study design when one stimulus is presented in multiple experimental conditions.

Data on individual differences in ratings and tags inform haptic evaluation. Disagreement scores for the tags and ratings suggest that a notable portion of annotation variation is due to differences among users' definitions of the language terms and their manifestation in a tactile signal. This is evidenced by the lower individual-difference values for sensation tags and the five rating scales. To mitigate this in the long run, we need to devise and consistently use a set of standard rating scales; the facet dimensions are promising candidates for such an endeavor.
In the meantime, our tag disagreement scores can inform haptic researchers in selecting less controversial tags or estimating the number of participants required for their evaluations.

5.7.4 Review of Our Methodology

We contribute a data collection and analysis methodology, based on existing practices in the music annotation domain, that allows for comprehensive evaluation of a large vibration collection. Here, we discuss the validity and effectiveness of our methodological choices according to our results, to support future uses and adaptations of our approach.

Method Validity

Bias in validation stage: Seeing existing annotations did not override participant perceptions. Participants made large adjustments (~4.3 on a 7-point scale) to the intentional energy rating errors applied in the validation stage to identify outliers (Section 5.5.2). Also, a notable percentage of the tags (~14-31%) were removed by 4 or more (out of 9) participants, demonstrating some degree of inter-participant consistency as well as willingness to respond with initiative. We also guarded against bias by describing the existing annotations to the participants as "noisy data from other users," and by eliminating participants with few annotation adjustments as outliers, on the presumption that this indicated low engagement with the task. Finally, our validation task resembles practical scenarios where users start from a proposed set of notifications and their intended perception and usage (e.g., a list of alarm tones on a phone, game sounds, etc.) and adopt or reject notifications depending on their perceptual match. Thus, although we expect some degree of conformity among the participants to the existing tags and ratings, which were their (nonzero) starting point, it appears this did not override their choices, and our validated dataset reflects their accepted annotations among the proposed ones.

Annotation instrument: The quality of our tag lists is reflected in the resulting facet dimensions. While developing the tag sets, our goal was to include as many relevant tags as possible, yet avoid redundant tags. For sensation and emotion, our tag lists were built on existing adjective lists in the literature, were inclusive, and were independent of the context. Thus, for these facets we could identify several dimensions with stronger linkages in the factor analysis. In contrast, the metaphor and usage tag lists were use-case dependent and could not be inclusive in nature. Further, it was more difficult to identify tag redundancy and conflicts for them. Thus, they resulted in fewer dominant dimensions, which were harder to interpret (metaphor) and more use-case dependent (usage). Future work can further refine and validate the attributes and dimensions of these facets by studying other use cases, metaphors, and participant groups.

Analysis methods: We triangulate our analysis to guard against subjectivity in our interpretations. For both MDS and factor analysis, the researchers determine the number and semantics of dimensions and factors. Although this interpretation is based on evidence in the data, the resulting semantics are subject to the researchers' bias and preconceptions.
To guard against this, we use three different analyses on the tags to interpret the semantics of the facet dimensions, and we provide data on between-facet linkages at both the dimension and tag levels.

Analysis methods: Factors with low loadings must be interpreted with caution. Our factor analysis has an 8:1 ratio of data points (120 vibration ratings and MDS positions) to variables (15 ratings and facet dimensions; Section 5.6.2). While this meets the minimum ratio proposed in the literature (5:1), higher ratios (10:1 or more) are recommended for more stable results [183]. With our data, the variables with low factor loadings may not be stable if more data is added, so they must be regarded with caution. This is especially true for the two metaphor dimensions and for continuity (SensationD2).

Method Effectiveness

Recruitment benefits: The staged approach increases the efficiency of data collection and improves convergence. Practically speaking, we found that validating existing ratings and tags can be done more quickly than annotating a vibration. In our study, validation sessions included about three times more vibrations than our pilot and expert annotation sessions (33 vibrations compared to 12). This means the same amount of data can be collected with fewer participants. Further, we found that the between-subject variation in the validation stage was reduced to values equal to the within-subject variation (reliability) in the ratings, leading to better convergence. In Sections 5.6.3 and 5.6.4, all values are ≤ 1 on a 7-point Likert scale. Finally, having expert ratings on the vibrations allowed for quick detection of outliers in the data and adjustment of the recruitment plan accordingly.

Value added by end-user validation: The second stage is crucial for validating expert tags. On average, the lay-user-validated ratings are about 0.5 (on a 7-point scale) different from the expert ratings, and the lay-user-validated set of tags includes 14-31% fewer tags than the expert tag set. These results suggest that, in this study, experts' ratings provide a fairly accurate estimate of users' ratings, while for the tags, experts' and lay participants' opinions deviate more, justifying the need for the validation stage. If further studies confirm this pattern, then this approach can provide a discount evaluation method for vibrotactile design, similar to heuristic evaluation in user interface design [116].

5.8 Conclusion

Our work investigates four vibration facets, their underlying dimensions, and their linkages and mappings based on ratings and tags collected for a library of 120 vibrations; Figure 5.3 illustrates the emergent landscape we have exposed and described with tags, facets, dimensions, and facet-linking factors. Our data and analysis confirm definite cross-facet linkages between certain facet dimensions. We describe these linkages on a discrete level between tags (descriptive words applied to specific vibrations, which we have empirically located within facet dimensional space) and on a continuous level between dimensions (wherein dimensions provide a perceptual delineation of the facets). For the latter, the linkages can be described according to four factors (perceptual constructs underlying facet linkages): a vibration's urgency, liveliness, roughness, and novelty.

The linkages between the sensation facet and the other facets (on both the tag and dimension levels) offer guidelines for vibration design, evaluation, and personalization.
However, we still lack a continuous mapping between most facet parameters (users' cognitive schemas) and the engineering parameters by which these sensations are constructed. Applying machine learning techniques to the vibratory signals and their associated disposition within the facet space (such as the ratings, tags, and MDS positions on the facet dimensions) is one approach towards identifying such a mapping. To this end, we have released our vibration dataset (vibration .wav files, their annotations, and MDS characterization) for use by other researchers [145]. Further, our lab continues to examine this mapping in the use case of developing a set of tuning sliders that can move a vibration along the semantic facet dimensions – that is, Scenario 3.

Will the underlying facet dimensions and linkages apply to sensations produced with other haptic technologies? We anticipate that to a large extent they will, although specific labels and properties for the facets might vary. The literature includes evidence that people use sensation, emotion, and metaphor descriptions for many kinds of haptic sensations, ranging from ultrahaptics effects (non-contact stimuli produced with acoustic waves [119]) to movements of a furry touch-based social robot [181, 182]. Confirming this requires future studies that examine the facet dimensions for other types of haptic sensations, such as force feedback, texture displays, variable friction, and ultrahaptics, and that compare their findings with our results. Such an endeavor can lead to a more holistic and technology-independent model of users' haptic perception.

We close by noting that rarely have the many challenges inherent in haptic evaluation [104] been approached through the development of new, haptic-specific methodologies and evaluation instruments. Here, we offer a novel, scalable data collection approach to mapping users' comprehension of large sets of haptic signals, and report between- and within-subject data variation that can inform future instrument development.

Acknowledgments

This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the University of British Columbia's 4YF fellowship program. The study was done under UBC Ethics certificate #H13-01646.

Chapter 6

Crowdsourcing Haptic Data Collection

Figure 6.1: Conceptual sketch of crowdsourcing data collection for high-fidelity vibrations

Preface:1 Our large-scale evaluation methodology in Chapter 5 was still limited by being in-lab. We could collect more data from a wider audience in a fraction of the time and cost if we could utilize online platforms such as Amazon's Mechanical Turk. Thus, here we investigated the feasibility of crowdsourcing haptic evaluation, using proxy modalities in lieu of specialized haptic hardware. Results of a local and an online study with visual and low-fidelity vibration proxies showed that using proxies is a viable approach, and highlighted promising directions for developing better proxies.

1 The content of this chapter was published as: Schneider, Seifi, Kashani, Chun, and MacLean. (2016) HapTurk: Crowdsourcing Affective Ratings for Vibrotactile Icons. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '16).

6.1 Overview

Vibrotactile display is becoming a standard component of informative user experience, where notifications and feedback must convey information eyes-free. However, effective design is hindered by incomplete understanding of the relevant perceptual qualities.
To access the evaluation streamlining now common in visual design, we introduce proxy modalities as a way to crowdsource vibrotactile sensations by reliably communicating high-level features through a crowd-accessible channel. We investigate two proxy modalities to represent a high-fidelity tactor: a new vibrotactile visualization, and low-fidelity vibratory translations playable on commodity smartphones. We translated 10 high-fidelity vibrations into both modalities, and in two user studies found that both proxy modalities can communicate affective features and are rated consistently when deployed remotely over Mechanical Turk. We analyze the fit of features to modalities, and suggest future improvements.

6.2 Introduction

In modern handheld and wearable devices, vibrotactile feedback can provide unintrusive, potentially meaningful cues through wearables in on-the-go contexts [16]. With consumer wearables like the Pebble and the Apple Watch featuring high-fidelity actuators, vibrotactile feedback is becoming standard in more user tools. Today, vibrotactile designers seek to provide sensations with various perceptual and emotional connotations to support the growing use cases for vibrotactile feedback (everyday apps, games, etc.). Although low-level design guidelines exist and are helpful for addressing perceptual requirements [11, 14, 74, 102, 165], higher-level concerns and design approaches to increase usability and information capacity (e.g., a user's desired affective response, or affective or metaphorical interpretation) have only recently received study and are far from solved [7, 77, 80, 119, 121]. Tactile design thus relies heavily on iteration and user feedback [139]. Despite its importance, collecting user feedback on perceptual and emotional (i.e., affective) properties of tactile sensations in small-scale lab studies is undermined by noise due to individual differences.

In other design domains, crowdsourcing enables collecting feedback at scale. Researchers and designers use platforms like Amazon's Mechanical Turk2 to deploy user studies with large samples, receiving extremely rapid feedback in, e.g., creative text production [152], graphic design [179], and sonic imitations [18].

2 www.mturk.com

The problem with crowdsourcing tactile feedback is that the "crowd" can't feel the stimuli. Even when consumer devices have tactors, output quality and intensity are unpredictable and uncontrollable. Sending each user a device is impractical.

What we need are crowd-friendly proxies for test stimuli. Here, we define a proxy vibration as a sensation that communicates key characteristics of a source stimulus within a bounded error; a proxy modality is the perceptual channel and representation employed. In the new evaluation process thus enabled, the designer translates a sensation of interest into a proxy modality, receives rapid feedback from a crowdsourcing platform, then interprets that feedback using known error bounds. In this way, designers can receive high-volume, rapid feedback to use in tandem with costly in-lab studies, for example, to guide initial designs or to generalize findings from smaller studies with a larger sample.

To this end, we must first establish the feasibility of this approach, with specific goals: (G1) Do proxy modalities work? Can they effectively communicate both physical vibrotactile properties (e.g., duration) and high-level affective properties (roughness, pleasantness)? (G2) Can proxies be deployed remotely?
(G3) What modalities work, and (G4) what obstacles must be overcome to make this approach practical?

This chapter describes a proof-of-concept of proxy modalities for tactile crowdsourcing, and identifies challenges throughout the workflow pipeline. We describe and assess the two modalities' development, translation process, validation with a test-set translation, and MTurk deployment. Our two modalities are a new technique to graphically visualize high-level traits, and the low-fidelity actuators on users' own commodity smartphones. Our test material is a set of 10 vibrotactile stimuli designed for a high-fidelity tactile display suitable for wearables (referred to as "high-fidelity vibrations"), and perceptually well understood as presented by that type of display (Figure 6.6). We conducted two coupled studies, first validating proxy expressiveness in the lab, then establishing correspondence of results in remote deployment. Our contributions are:

• A way to crowdsource tactile sensations (vibration proxies), with a technical proof-of-concept.
• A visualization method that communicates high-level affective features more effectively than the current tactile visualization standard (vibration waveforms).
• Evidence that both proxy modalities can represent high-level affective features, with lessons about which features work best with which modalities.
• Evidence that our proxy modalities are consistently rated in-lab and remotely, with initial lessons for compliance.

6.3 Related Work

We cover work related to vibrotactile icons and evaluation methods for vibrotactile effects, the current understanding of affective haptics, and work with Mechanical Turk in other modalities.

6.3.1 Existing Evaluation Methods for Vibrotactile Effects

The haptic community has appropriated or developed many types of user studies to evaluate vibrotactile effects and support vibrotactile design. These target a variety of objectives:

1) Perceptibility: Determine the perceptual threshold or Just Noticeable Difference (JND) of vibrotactile parameters. Researchers vary the values of a vibrotactile parameter (e.g., frequency) to determine the minimum perceptible change [103, 129].

2) Illusions: Studies investigate effects like masking or apparent motion of vibrotactile sensations, useful for expanding a haptic designer's palette [56, 75, 151].

3) Perceptual organization: Reveal the underlying dimensionality of how humans perceive vibrotactile effects (which is generally different from the machine parameters used to generate the stimuli). Multidimensional Scaling (MDS) studies are common, inviting participants to compare or group vibrations based on perceived similarity [20, 67, 126, 165, 174].

4) Encoding abstract information: Researchers examine salient and memorable vibrotactile parameters (e.g., energy, rhythm) as well as the number of vibrotactile icons that people can remember and attribute to a piece of information [3, 14, 20, 165].

5) Assign affect: Studies investigate the link between affective characteristics of vibrations (e.g., pleasantness, urgency) and their engineering parameters (e.g., frequency, waveform) [91, 132, 165, 184].
To achieve this, vibrotactile researchers commonly design or collect a set of vibrations and ask participants to rate them on a set of qualitative metrics.

6) Identify language: Participants describe or annotate tactile stimuli in natural language [20, 52, 70, 119, 165].

7) Use case support: Case studies focus on conveying information with vibrotactile icons, for collaboration [20], public transit [16], direction [7, 16], or timing of a presentation [164]. In other cases, vibrotactile effects are designed for user engagement, for example in games and movies, multimodal storytelling, or art installations [77, 185]. Here, the designers use iterative design and user feedback (qualitative, and quantitative via user ratings) to refine and ensure effective design.

All of the above studies would benefit from the large number of participants and fast data collection on MTurk. In this chapter, we chose our methodology so that the results are informative for a broad range of these studies.

6.3.2 Affective Haptics

Vibrotactile designers have the challenge of creating perceptually salient icon sets that convey meaningful content. A full range of expressiveness means manipulating not only a vibration's physical characteristics but also its perceptual and emotional properties, and collecting feedback on these. Here, we refer to all these properties as affective characteristics.

Some foundations for affective vibrotactile design are in place. Studies on tactile language and affect are establishing a set of perceptual metrics [52, 119]. Guest et al. collated a large list of emotion and sensation words describing tactile stimuli; then, based on multidimensional scaling of similarity ratings, proposed comfort or pleasantness and arousal as key dimensions for tactile emotion words, and rough/smooth, cold/warm, and wet/dry for sensation [52]. Even so, there is not yet agreement on an affective tactile design language [80].

In Chapter 4, we compiled research on tactile language into five taxonomies for describing vibrations: 1) physical properties that can be measured, e.g., duration, energy, tempo or speed, rhythm structure; 2) sensory properties: roughness, and sensory words from Guest et al.'s touch dictionary [52]; 3) emotional interpretations: pleasantness, arousal (urgency), dictionary emotion words [52]; 4) metaphors, which provide familiar examples resembling the vibration's feel: heartbeat, insects; 5) usage examples, which describe events that a vibration fits: an incoming message or alarm.

To evaluate our vibration proxies, we derived six metrics from these taxonomies to capture vibrations' physical, sensory, and emotional aspects: 1) duration, 2) energy, 3) speed, 4) roughness, 5) pleasantness, and 6) urgency.

6.3.3 Mechanical Turk (MTurk)

MTurk is a platform for receiving feedback from a large number of users, in a short time at a low cost [59, 89]. These large, fast, cheap samples have proved useful for many purposes, including running perceptual studies [59], developing taxonomies [22], and gathering feedback on text [152], graphic design [179], and sonic imitations [18].

Crowdsourced studies have drawbacks. The remote, asynchronous study environment is not controlled; compared to a quiet lab, participants may be subjected to unknown interruptions, and may spend less time on task with more response variability [89]. MTurk is not suitable for collecting rich, qualitative feedback or following up on performance or strategy [106].
Best practices – e.g., simplifying tasks to be confined to a single activity, or using instructions complemented with example responses – are used to reduce task ambiguity and improve response quality [5]. Some participants try to exploit the service for personal profit, exhibiting low task engagement [29], and must be pre- or post-screened.

Studies have examined the validity of MTurk results in other domains. Most relevantly, Heer et al. [59] validated MTurk data for graphical perception experiments (spatial encoding and luminance contrast) by replicating previous perceptual studies on MTurk. Similarly, we compare results of our local user study with an MTurk study to assess the viability of running vibrotactile studies on MTurk, and collect and examine phone properties in our MTurk deployment.

Need for HapTurk: Our present goal is to give the haptic design community access to crowdsourced evaluation so we can establish modality-specific methodological tradeoffs. There is ample need for huge-sample haptic evaluation. User experience of transmitted sensations must be robust to receiving-device diversity. Techniques to broadcast haptic effects with video [88, 111], e.g., via YouTube [1] or MPEG-7 [31, 32], currently require known high-fidelity devices because of remote device uncertainty; the same applies to social protocols developed for remote use of high-quality vibrations, e.g., in collaborative turn taking [20]. Elsewhere, studies of vibrotactile use in consumer devices need larger samples: e.g., perceivability [84], encoding of caller parameters [12], including caller emotion and physical presence collected from pressure on another handset [66], and usability of expressive, customizable vibrotactile icons in social messaging [78]. To our knowledge, this is the first attempt to run a haptic study on a crowdsourcing site and characterize its feasibility and challenges for haptics.

Figure 6.2: Source of high-fidelity vibrations and perceptual rating scales. (a) VibViz interface from Chapter 4; (b) C2 tactor.

Figure 6.3: Visdir visualization, based on VibViz.

6.4 Sourcing Reference Vibrations and Qualities

We required a set of exemplar source vibrations on which to base our proxy modalities. This set needed to 1) vary in physical, perceptual, and emotional characteristics, 2) represent the variation in a larger source library, and 3) be small enough for experimental feasibility.

6.4.1 High-Fidelity Reference Library

We chose 10 vibrations from a large, freely available library of 120 vibrations (VibViz, Chapter 4), browsable through five descriptive facets3, with ratings of facet properties. The vibrations were designed for an Engineering Acoustics C2 tactor, a high-fidelity, wearable-suitable voice coil commonly used in haptic research. We employed VibViz's filtering tools to sample the library, ensuring variety and coverage by selecting vibrations at the high and low ends of the energy and duration dimensions, and filtering by ratings of temporal structure/rhythm, roughness, pleasantness, and urgency. To reduce bias, two researchers independently and iteratively selected a set of 10 items each, which were then merged.

Because VibViz was designed for a C2 tactor, we used a handheld C2 in the present study (Figure 6.2b).

6.4.2 Affective Properties and Rating Scales

To evaluate our proxies, we adapted six rating scales from the tactile literature and new studies.
In Chapter 4, we proposed five facets for describing vibrations: physical, sensory, emotional, metaphors, and usage examples. Three facets comprise quantitative metrics and adjectives; two use descriptive words.

3 Called taxonomy in the original conference publication.

We chose six quantitative metrics from Chapter 4 that capture important affective (physical, perceptual, and emotional) vibrotactile qualities: 1) duration [low-high], 2) energy [low-high], 3) speed [slow-fast], 4) roughness [smooth-rough], 5) urgency [relaxed-alarming], and 6) pleasantness [unpleasant-pleasant]. A large scale (0-100) allowed us to treat the ratings as continuous variables. To keep trials quick and MTurk-suitable, we did not request open-ended responses or tagging.

6.5 Proxy Choice and Design

The proxies' purpose was to capture high-level traits of the source signals. We investigated two proxy channels and approaches, to efficiently establish viability and obtain triangulated perspectives on what will work. The most obvious starting points are to 1) visually augment the current standard of a direct trace of amplitude = f(time), and 2) reconstruct vibrations for common-denominator, low-fidelity actuators.

We considered other possibilities (e.g., auditory stimuli, for which MTurk has been used [18], or animations). However, our selected modalities balance a) directness of translation (low fidelity could not be excluded); b) signal control (it is hard to ensure consistent audio quality, volume, and ambient masking); and c) development progression (visualization underlies animation, and is simpler to design, implement, and display). We avoided multisensory combinations at this early stage for clarity of results. Once the key modalities are tested, combinations can be investigated in future work.

"Ref" denotes high-fidelity source renderings (C2 tactor).

1) Visual proxies: Norms in published works (e.g., [20]) and our finding in Chapter 4 that users rely on graphical f(time) plots to skim and choose from large libraries led us to test the direct plot, Visdir, as the status-quo representation.

However, these unmodified time-series plots emphasize or mask traits differently than felt vibrations do, in particular for higher-level or "meta" responses. We considered many other means of visualizing vibration characteristics, pruned candidates, and refined the design via piloting to produce a new scheme that explicitly emphasizes affective features, Visemph.

Figure 6.4: Visualization design process. Iterative development and piloting result in the Visemph visualization pattern.

Figure 6.5: Final Visemph visualization guide, used by researchers to create Visemph proxy vibrations and provided to participants during Visemph study conditions.

2) Low-fidelity vibration proxy: Commodity device (e.g., smartphone) actuators usually have low output capability compared to the C2, in terms of frequency response, loudness range, distortion, and parameter independence. Encouraged by expressive rendering of vibrotactile sensations with commodity actuation (from early constraints [20] to deliberate design-for-lofi [78]), we altered stimuli to convey high-level parameters under these conditions, hereafter referred to as LofiVib.

Translation: Below, we detail first-pass proxy development. In this feasibility stage, we translated proxy vibrations manually and iteratively, as we sought generalizable mappings from the parametric vibration definition to the perceptual quality we wished to highlight in the proxy.
We frequently relied on a cycle of user feedback, e.g., to establish the perceived roughness of the original stimuli and proxy candidates.

Automatic translation is an exciting goal. Without it, HapTurk is still useful for gathering large samples, but automation would enable a very rapid create-test cycle. It should be attainable, bootstrapped by the up-scaling of crowdsourcing itself. With a basic process in place, we can use MTurk studies to identify these mappings relatively quickly.

Figure 6.6: Vibrations visualized as both Visdir (left of each pair) and Visemph.

6.5.1 Visualization Design (Visdir and Visemph)

Visdir was based on the original waveform visualization used in VibViz (Figure 6.3). In Matlab, vibration frequency and envelope were encoded to highlight the pattern over time. Since Visdir patterns were detailed, technical, and often inscrutable for users without an engineering background, we also developed a more interpretive visual representation, Visemph, and included Visdir as a status-quo baseline.

We took many approaches to depicting high-level vibration properties, with visual elements such as line thickness, shape, texture, and colour (Figure 6.4). We first focused on line sharpness, colour intensity, length, and texture: graphical waveform smoothness and roughness were mapped to perceived roughness, and colour intensity highlighted perceived energy. Duration mapped to the length of the graphic, while colour and texture encoded the original's invoked emotion.

Four participants were informally interviewed and asked to feel Ref vibrations, describe their reactions, and compare them to several visualization candidates. Participants differed in their responses, and had difficulty understanding vibrotactile emotional characteristics from the graphic (i.e., pleasantness, urgency) and reading the circular patterns. We simplified the designs, eliminating the representation of emotional characteristics (colour, texture) while retaining more objective mappings for physical and sensory characteristics.

Visemph won an informal evaluation of final proxy candidates (n=7), and was captured in a translation guideline (Figure 6.5).

6.5.2 Low Fidelity Vibration Design

For our second proxy modality, we translated Ref vibrations into LofiVib vibrations. We used a smartphone platform for its built-in commodity-level vibrotactile display, its ubiquity among users, and the low security concerns of importing vibrations to personal devices [41]. To distribute vibrations remotely, we used the HTML5 Vibration API, implemented on Android phones running compatible web browsers (Google Chrome or Mozilla Firefox).

As with Visemph, we focused on physical properties when developing LofiVib (our single low-fi proxy exemplar). We emphasized rhythm structure, an important design parameter [165] and the only direct control parameter of the HTML5 API, which issues vibrations as a series of on/off durations. Simultaneously, we manipulated perceived energy level by adjusting the actuator pulse-train on/off ratio, up to the point where the rhythm presentation was compromised. Shorter durations represented a weak-feeling hi-fi signal, while longer durations conveyed intensity in the original. This was most challenging for dynamic intensities or frequencies, such as increasing or decreasing ramps, and for long, low-intensity sensations. Here we used a duty-cycle inspired technique, similar to [78], illustrated in Figure 6.7 and sketched in code below.

To mitigate the effect of the different actuators found in smartphones, we limited our investigation to Android OS.
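The sketch below illustrates one way such a duty-cycle mapping could be automated, producing the on/off millisecond list that the HTML5 Vibration API expects; the frame length, minimum pulse, and linear mapping are illustrative assumptions, not the hand-tuned translation actually used for the study stimuli.

```python
def lofi_pattern(envelope, frame_ms=100, min_on_ms=10):
    """Translate a sampled intensity envelope (values in 0..1, one per frame)
    into an [on, off, on, off, ...] millisecond pattern suitable for
    navigator.vibrate() on an Android browser.

    Stronger frames vibrate for a larger share of the frame and weaker frames
    for a smaller share, approximating ramps and other dynamics on actuators
    that only support on/off control (a sketch in the spirit of Figure 6.7).
    """
    pattern = []
    for level in envelope:
        on = int(round(max(0.0, min(1.0, level)) * frame_ms))
        if 0 < on < min_on_ms:   # very short pulses tend not to register on coin/pager motors
            on = min_on_ms
        pattern.extend([on, frame_ms - on])
    return pattern

# e.g., a two-second ramp from weak to strong:
# lofi_pattern([i / 19 for i in range(20)])
```

On the phone, such a list would simply be passed to navigator.vibrate(pattern) from the study web page in an Android browser.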
While this restricted our participant pool, there was nevertheless no difficulty in quickly collecting data for either study. We designed for two phones representing the largest classes of smartphone actuators: a Samsung Galaxy Nexus, which contains a coin-style actuator, and a Sony Xperia Z3 Compact, which uses a pager motor that produces more subdued, smooth sensations. Though perceptually different, control of both actuator styles is limited to on/off durations. As with Visemph, we developed the LofiVib vibrations iteratively, first with team feedback, then with informal interviews (n=6).

Figure 6.7: Example of LofiVib proxy design. Pulse duration was hand-tuned to represent length and intensity, using duty cycle to express dynamics such as ramps and oscillations.

6.6 Study 1: In-lab Proxy Vibration Validation (G1)

We obtained user ratings for the hi-fi source vibrations (Ref) and the three proxies (Visdir, Visemph, and LofiVib). An in-lab format avoided the confounds and unknowns of remote MTurk deployment, which are addressed in Study 2. Study 1 had two versions: in one, participants rated the visual proxies Visdir and Visemph next to Ref; in the other, LofiVib next to Ref. RefVis and RefLofiVib denote these two references, each compared with its respective proxy(ies) and thus with its own data. In each substudy, participants rated each Ref vibration on 6 scales [0-100] in a computer survey, and did the same for the proxies. Participants in the visual substudy did this for both Visdir and Visemph, then indicated a preference for one. Participants in the lo-fi substudy completed the LofiVib survey on a phone, which also played the vibrations using Javascript and HTML5; other survey elements employed a laptop. 40 participants aged 18-50 were recruited via university undergraduate mailing lists. 20 (8F) participated in the visual substudy, and a different 20 (10F) in the low-fi vibration substudy.

Reference and proxies were presented in different random orders. Pilots confirmed that participants did not notice proxy/target linkages, and thus were unlikely to consciously match their ratings between pair elements. Ref/proxy presentation order was counterbalanced, as was Visdir/Visemph.

6.6.1 Comparison Metric: Equivalence Threshold

To assess whether the proxy modalities were rated similarly to their targets, we employed equivalence testing, which tests the hypothesis that sample means are within a threshold δ of each other, against the null hypothesis of being outside it [143]. This tests whether two samples are equivalent within a known error bound; it corresponds to creating confidence intervals of means and examining whether they lie entirely within the range (−δ, δ).

We first computed least-squares means for the 6 rating scales for each proxy modality and vibration. 95% confidence intervals (CI) for the Ref rating means ranged from 14.23 points (Duration ratings) to 20.33 (Speed). Because estimates of the Ref "gold standard" mean could not be more precise than these bounds, we set the equivalence threshold for each rating equal to its CI width. For example, given the Duration CI width of 14.23, we considered proxy Duration ratings equivalent if the CI for the difference fell completely within the range (−14.23, 14.23). With pooled standard error, this corresponds to the case where the two CIs overlap by more than 50%. We also report when a difference was detected, through typical hypothesis testing (i.e., where the CIs do not overlap).

Thus, each rating-set pair could be equivalent, uncertain, or different.
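A minimal sketch of this three-way classification, using plain two-sample confidence intervals in place of the least-squares-means model (the simplification and the helper names are ours):

```python
import numpy as np
from scipy import stats

def mean_ci(x, alpha=0.05):
    """Two-sided (1 - alpha) confidence interval for a sample mean."""
    m, half = np.mean(x), stats.t.ppf(1 - alpha / 2, len(x) - 1) * stats.sem(x)
    return m - half, m + half

def classify_pair(ref, proxy, delta, alpha=0.05):
    """'equivalent' if the CI of the mean difference lies entirely inside
    (-delta, +delta); 'different' if the two group CIs do not overlap;
    otherwise 'uncertain'.
    """
    diff = np.mean(proxy) - np.mean(ref)
    se = np.sqrt(stats.sem(ref) ** 2 + stats.sem(proxy) ** 2)   # standard error of the difference
    z = stats.norm.ppf(1 - alpha / 2)
    d_lo, d_hi = diff - z * se, diff + z * se
    if -delta < d_lo and d_hi < delta:
        return "equivalent"
    r_lo, r_hi = mean_ci(ref, alpha)
    p_lo, p_hi = mean_ci(proxy, alpha)
    if r_hi < p_lo or p_hi < r_lo:      # non-overlapping 95% CIs
        return "different"
    return "uncertain"

# e.g., with the Duration threshold reported above:
# classify_pair(ref_duration_ratings, lofivib_duration_ratings, delta=14.23)
```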
Figure 6.9 offers insight into how these levels are reflected in the data given the high rating variance. This approach gives a useful error bound, quantifying the precision tradeoff in using vibration proxies to crowdsource feedback.

6.6.2 Proxy Validation (Study 1) Results and Discussion

Overview of Results

Study 1 results appear graphically in Figure 6.8. To interpret this plot, look for (1) equivalence indicated by bar color, and CI size by bar height (dark green/small are good); (2) rating richness: how much spread, vibration to vibration, within a cell indicates how well that parameter captures the differences users perceived; (3) modality consistency: the degree to which the bars' up/down pattern translates vertically across rows. When similar (and not flat), the proxy translations are being interpreted by users in the same way, providing another level of validation. We structure our discussion around how the three modalities represent the different rating scales. We refer to the number of equivalents and differents in a given cell as [x:z], with y = number of uncertains, and x + y + z = 10.

Figure 6.8: 95% confidence intervals and equivalence test results for Study 1 - Proxy Validation. Grey represents Ref ratings. Dark green maps equivalence within our defined threshold, and red a statistical difference indicating an introduced bias; light green results are inconclusive. Within each cell, variation of Ref ratings means vibrations were rated differently compared to each other, suggesting they have different perceptual features and represent a varied set of source stimuli. (Columns: Speed, Duration, Energy, Roughness, Urgency, Pleasantness rating scales; rows: Vis:Dir, Vis:Emph, LofiVib; x-axis: vibrations 1-10; y-axis: rating 0-100.)

Duration and Pleasantness were translatable

Duration was comparably translatable for LofiVib [5:1] and Visemph [6:1]; Visdir was less consistent [7:3] (two differences very large). Between the three modalities, 9/10 vibrations achieved equivalence with at least one modality. For Duration, this is unsurprising. It is a physical property that is controllable through the Android vibration API, and both visualization methods explicitly present Duration as their x-axis. This information was apparently not lost in translation.

More surprisingly, Pleasantness fared only slightly worse for LofiVib [4:2] and Visemph [4:1]; 8/10 vibrations had at least one modality that provided equivalence. Pleasantness is a higher-level affective feature than Duration. Although not an absolute victory, this result gives evidence that, with improvement, crowdsourcing may be a viable method of feedback for at least one affective parameter.

Figure 6.9: Rating distributions from Study 1, using V6 Energy as an example. These violin plots illustrate 1) the large variance in participant ratings, and 2) how equivalence thresholds reflect the data. When equivalent, proxy ratings are visibly similar to Ref. When uncertain, ratings follow a distribution with unclear differences. When different, there is a clear shift.

Speed and Urgency translated better with LofiVib

LofiVib was effective at representing Urgency [6:2]; Visemph attained only [4:5], and Visdir [3:5]. Speed was less translatable.
LofiVib did best at [4:2]; Visdir reached only [1:6], and Visemph [3:5]. However, the modalities again complemented each other. Of the three, 9/10 vibrations were equivalent at least once for Urgency (V8 was not). Speed had less coverage: 6/10 had equivalencies (V3, V4, V6, V10 did not).

Roughness had mixed results; best with Visemph

Roughness ratings varied heavily by vibration. 7 vibrations had at least one equivalence (V2, V4, V10 did not). All modalities had 4 equivalencies each: Visemph [4:3], Visdir [4:4], and LofiVib [4:5].

Energy was most challenging

Like Roughness, 7 vibrations had at least one equivalence between modalities (V1, V4, V10 did not). LofiVib [4:5] did best with Energy; Visemph and Visdir struggled at [1:8].

Emphasized visualization outperformed direct plot

Though it depended on the vibration, Visemph outperformed Visdir for most metrics, having the same or better equivalencies/differences for Speed, Energy, Roughness, Urgency, and Pleasantness. Duration was the only mixed result, as Visdir had both more equivalencies and more differences [7:3] versus [6:1]. In addition, 16/20 participants (80%) preferred Visemph to Visdir. Although not always clear-cut, these comparisons overall indicate that our Visemph visualization method communicated these affective qualities more effectively than the status quo. This supports our approach to emphasized visualization, and motivates the future pursuit of other visualizations.

V4, V10 difficult, V9 easy to translate

While most vibrations had at least one equivalency for 5 rating scales, V4 and V10 only had 3. V4 and V10 had no equivalences at all for Speed, Roughness, and Energy, making them some of the most difficult vibrations to translate. V4's visualization had very straight lines, perhaps downplaying its texture. V10 was by far the longest vibration, at 13.5s (the next longest was V8 at 4.4s). Its length may have similarly masked textural features.

V8 was not found to be equivalent for Urgency and Pleasantness. V8 is an extremely irregular vibration, with a varied rhythm and amplitude, and the second longest. This may have made it difficult to glean more intentional qualities like Urgency and Pleasantness. However, it was only found to be different for Visdir/Urgency, so we cannot conclude that significant biases exist.

By contrast, V9 was the only vibration that had an equivalency for every rating scale, and in fact could be represented across all ratings with LofiVib. V9 was a set of distinct pulses, with no dynamic ramps; it thus may have been well suited to translation to LofiVib.

Summary

In general, these results indicate promise, but also a need to improve and combine proxy modalities. Unsurprisingly, participant ratings varied, reducing confidence and increasing the width of confidence intervals (indeed, this is partial motivation to access larger samples). Even so, both differences and equivalencies were found in every rating/proxy modality pairing. Most vibrations were equivalent with at least one modality, suggesting that we might pick an appropriate proxy modality depending on the vibration; we discuss the idea of triangulation in more detail later. Duration and Pleasantness were fairly well represented, Urgency and Speed were captured best by LofiVib, and Roughness was mixed. Energy was particularly difficult to represent with these modalities.
We also find that results varied depending on the vibration, meaning that more analysis into what makes vibrations easier or more difficult to represent could be helpful.

Though we were able to represent several features using proxy modalities within a bounded error rate, this alone does not mean they are crowdsource-friendly. All results from Study 1 were gathered in-lab, a more controlled environment than MTurk. We thus ran a second study to validate our proxy modality ratings when deployed remotely.

6.7 Study 2: Deployment Validation with MTurk (G2)

To determine whether rating of a proxy is similar when gathered locally or remotely, we deployed the same computer-run proxy modality surveys on MTurk. We wanted to discover the challenges throughout the pipeline of running a vibrotactile study on MTurk, including larger variations in phone actuators and experimental conditions (G4). We purposefully did not iterate on our proxy vibrations or survey, despite identifying many ways to improve them, to avoid creating a confound in comparing results of the two studies.

The visualization proxies were run as a single MTurk Human Intelligence Task (HIT), counterbalanced for order; the LofiVib survey was deployed as its own HIT. Each HIT was estimated at 30m, for which participants received $2.25 USD. In comparison, Study 1 participants were estimated to take 1 hour and received $10 CAD. We anticipated a discrepancy in average task time due to a lack of direct supervision for the MTurk participants, and expected this to lead to less accurate participant responses, prompting the lower pay rate. On average, MTurk participants took 7m to complete the HIT, while local study participants took 30m.

We initially accepted participants of any HIT approval rate to maximize recruitment in a short timeframe. Participants were post-screened to prevent participation in both studies. 49 participants were recruited. No post-screening was used for the visual sub-study. For the LofiVib proxy survey, we post-screened to verify the device used [106]: we (a) asked participants to confirm, via a survey question, that they had completed the study on an Android device; (b) detected the actual device via FluidSurveys' OS-check feature; and (c) rejected inconsistent samples (e.g., 9 used non-Android platforms for LofiVib). Of the included data, 20 participants (6F) were in the visual proxy condition and 20 (9F) in the LofiVib condition.

For both studies, Study 1's data was used as a "gold standard" that served as a baseline comparison with the more reliable local participant ratings [5]. We compared the remote proxy results (from MTurk) to the Ref results gathered in Study 1, using the same analysis methods.

6.7.1 Results

Study 2 results appear in Figure 6.10, which compares remotely collected ratings with locally collected ratings for the respective reference (the same reference as for Figure 6.8). It can be read the same way, but adds information. Based on an analysis of a different comparison, a red star indicates a statistically significant difference between remote proxy ratings and the corresponding local proxy ratings. This analysis revealed that ratings for the same proxy gathered remotely and locally disagreed 21 times (stars) out of 180 rating/modality/vibration combinations; i.e., relatively infrequently.

Overall, we found similar results and patterns in Study 2 as for Study 1. The two figures show similar up/down rating patterns; the occasional exceptions correspond to red-starred items.
Specific results varied, possibly due to statistical noise and rating variance. We draw similar conclusions: that proxy modalities can still be viable when deployed on MTurk, but require further development to be reliable in some cases.

Figure 6.10: 95% confidence intervals and equivalence test results for Study 2 - MTurk Deployment Validation. Equivalence is indicated with dark green, difference with red, and uncertainty with light green. A red star indicates a statistically significant difference between remote and local proxy ratings. (Columns: Speed, Duration, Energy, Roughness, Urgency, Pleasantness rating scales; rows: Vis:Dir, Vis:Emph, LofiVib; x-axis: vibrations 1-10; y-axis: rating 0-100.)

6.8 Discussion

Here, we discuss high-level implications from our findings and relate them to our study goals (G1-G4 in the Introduction).

6.8.1 Proxy Modalities Are Viable for Crowdsourcing (G1, G2: Feasibility)

Our studies showed that proxy modalities can represent affective qualities of vibrations within reasonably chosen error bounds, depending on the vibration. These results largely translate to deployment on MTurk. Together, these two steps indicate that proxy modalities are a viable approach to crowdsourcing vibrotactile sensations, and can reach a usable state with a bounded design iteration (as outlined in the following sections). This evidence also suggests that we may be able to deploy directly to MTurk for future validation. Our two-step validation was important as a first look at whether ratings shift dramatically; we saw no indications of bias or overall shift between running proxy modalities locally and deploying them remotely.

6.8.2 Triangulation (G3: Promising Directions/Proxies)

Most vibrations received equivalent ratings for most scales in at least one proxy modality. Using proxy modalities in tandem might help improve response accuracy. For example, V6 could be rendered with LofiVib for a pleasantness rating, then as Visemph for Urgency. Alternatively, we might develop an improved proxy vibration by combining modalities - a visualization with an accompanying low-fidelity vibration.

6.8.3 Animate Visualizations (G3: Promising Directions)

Speed and Urgency were not as effectively transmitted with our visualizations as with our vibration. Nor was Duration well portrayed with Visdir, which had a shorter time axis than the exaggerated Visemph. It may be more difficult for visual representations to portray time effectively: perhaps it is hard for users to distinguish Speed/Urgency, or the time axis is not at an effective granularity. Animations (e.g., adding a moving line to help indicate speed and urgency) might help to decouple these features. As with triangulation, this might also be accomplished through multimodal proxies which augment a visualization with a time-varying sense using sounds or vibration. Note, however, that Duration was more accurately portrayed by Visemph, suggesting that direct representation of physical features can be translated.

6.8.4 Sound Could Represent Energy (G3: Promising Directions)

Our high-fidelity reference is a voice-coil actuator, also used in audio applications. Indeed, in initial pilots we played vibration sound files through speakers.
Sound is the closest modality to vibration in the literature, and a vibration signal's sound output is correlated with the vibration's energy and sensation.

However, in our pilots, sometimes the vibration sound did not match the sensation, was not audible (low-frequency vibrations), or the C2 could only play part of the sound (i.e., the sound was louder than the sensation).

Thus, while the raw sound files are not directly translatable, a sound proxy definitely has potential. It could, for example, supplement where the Visdir waveform failed to perform well on any metric (aside from Duration) but a more expressive visual proxy (Visemph) performed better.

6.8.5 Device Dependency and Need for Energy Model for Vibrations (G4: Challenges)

Energy did not translate well. This could be a linguistic confusion, but also a failure to translate this feature. For the visualization proxies, it may be a matter of finding the right representation, which we continue to work on.

However, with LofiVib, this represents a more fundamental tradeoff due to characteristics of phone actuators, which have less control over energy output than we do with a dedicated and more powerful C2 tactor. The highest vibration energy available in phones is lower than for the C2; this additional power obviously extends expressive range. Furthermore, vibration energy and time are coupled in phone actuators: the less time the actuator is on, the lower the vibration energy. As a result, it is difficult to have very short pulses with very high energy (V1, V3, V8). The C2's voice coil technology does not have this duty-cycle derived coupling. Finally, the granularity of the energy dimension is coarser for phone actuators. This results in a tradeoff for designing (for example) a ramp sensation: if you aim for accurate timing, the resulting vibration will have a lower energy (V10); if you match the energy, the vibration will be longer.

Knowing these tradeoffs, designers and researchers can adjust their designs to obtain more accurate results on their intended metric. Perhaps multiple LofiVib translations can be developed which maintain different qualities (one optimized on timing and rhythm, the other on energy). In both these cases, accurate models for rendering these features will be essential.

6.8.6 Vibrotactile Affective Ratings Are Generally Noisy (G4: Challenges)

Taken as a group, participants were not highly consistent with one another when providing these affective ratings, whether local or remote. This is in line with our previous work (Chapter 4), and highlights a need to further develop rating scales for affective touch. Larger sample sizes, perhaps gathered through crowdsourcing, may help reduce or characterize this error. Alternatively, it gives support to the need to develop mechanisms for individual customization. If there are "types" of users who do share preferences and interpretations, crowdsourcing can help with this as well.

6.8.7 Response & Data Quality for MTurk LofiVib Vibrations (G4: Challenges)

When deploying vibrations over MTurk, 8/29 participants (approximately 31%) completed the survey using non-Android OSes (Mac OS X; Windows 7, 8.1, NT) despite these requirements being listed in the HIT and the survey. One participant reported not being able to feel the vibrations despite using an Android phone. This suggests that enforcing a remote survey to be taken on the phone is challenging, and that additional screens are needed to identify participants not on a particular platform; a simple screening pass of this kind is sketched below.
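The screening pass itself is straightforward to script. The sketch below drops LofiVib responses whose self-reported device and detected OS disagree; the field names and response format are hypothetical rather than FluidSurveys' or MTurk's actual schema.

```python
# Sketch: post-screen remote LofiVib responses for platform consistency.
def screen_lofivib_responses(responses):
    kept, rejected = [], []
    for r in responses:
        claims_android = r["self_report"].strip().lower() == "android"
        detected_android = "android" in r["detected_os"].lower()
        (kept if claims_android and detected_android else rejected).append(r)
    return kept, rejected

kept, rejected = screen_lofivib_responses([
    {"id": "A1", "self_report": "Android", "detected_os": "Android 5.0"},
    {"id": "A2", "self_report": "Android", "detected_os": "Windows NT"},  # rejected
])
```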
Future work might investigate additional diagnostic tools to ensurethat vibrations are being generated, through programmatic screening of platforms,well-worded questions and instructions, and (possibly) ways of detecting vibra-tions actually being played, perhaps through the microphone or accelerometer).6.8.8 Automatic Translation (G4: Challenges)Our proxy vibrations were developed by hand, to focus on the feasibility of crowd-sourcing. However, this additional effort poses a barrier for designers that mightnegate the benefits of using a platform of MTurk. As this approach becomes betterdefined, we anticipate automatic translation heuristics for proxy vibrations usingvalidated algorithms. Although these might be challenging to develop for emo-tional features, physical properties like amplitude, frequency, or measures of en-ergy and roughness would be a suitable first step. Indeed, crowdsourcing itself142could be used to create these algorithms, as several candidates could be developed,their proxy vibrations deployed on MTurk, and the most promising algorithms latervalidated in lab.6.8.9 LimitationsA potential confound was introduced by Visemph having a longer time axis thanVisdir: some of Visemph’s improvements could be due to seeing temporal featuresin higher resolution. This is exacerbated by V10 being notably longer than thenext longest vibration, V8 (13.5s vs. 4.4s), further reducing temporal resolutionvibrations other than V10.We presented ratings to participants by-vibration rather than by-rating. Be-cause participants generated all ratings for a single vibration at the same time, itis possible there are correlations between the different metrics. We chose this ar-rangement because piloting suggested it was less cognitively demanding than pre-senting metrics separately for each vibration. Future work can help decide whethercorrelations exist between metrics, and whether these are an artifact of stimuluspresentation or an underlying aspect of the touch aesthetic.Despite MTurk’s ability to recruit more participants, we used the same samplesize of 40 across both studies. While our proxies seemed viable for remote deploy-ment, there were many unknown factors in MTurk user behaviour at the time ofdeployment. We could not justify more effort without experiencing these factorsfirsthand. Thus, we decided to use a minimal sample size for the MTurk study thatwas statistically comparable to the local studies. In order to justify a larger remotesample size in the future, we believe it is best to iterate the rating scales and to testdifferent sets of candidate modalities.As discussed, we investigated two proxy modalities in this first examination butlook forward to examining others (sound, text, or video) alone or in combination.6.9 ConclusionIn this chapter, we crowdsourced high-level parameter feedback on vibrotactilesensations using a new method of proxy vibrations. We translated our initial setof high-fidelity vibrations, suitable for wearables or other haptic interactions, into143two proxy modalities: a new vibrotactile visualization method, and low-fidelityvibrations on phones.We established the most high-risk aspects of vibrotactile proxies, namely fea-sibility in conveying affective properties, and consistent local and remote deploy-ment with two user studies. 
Finally, we highlighted promising directions and chal-lenges of vibrotactile proxies, to guide future tactile crowdsourcing developments,targeted to empower vibrotactile designers with the benefits crowdsourcing brings.6.10 AcknowledgmentsWe are grateful for participant feedback, reviewer suggestions, and extra tactorsshared by Hong Tan. Research was funded by NSERC and conducted under UBCBREB H13-01646.144Chapter 7Tuning Vibrations with EmotionControlsFigure 7.1: Conceptual sketch of an emotion tuning control and its mapping to engineering attributesof vibrationsPreface: In our study of haptic personalization mechanisms in Chapter 3, userspreferred the tuning mechanism the most. Thus, for the last component, we focusedon developing this mechanism where users can quickly adjust overall characteris-tics of a sensation by “turning a knob” or “moving a slider”. In contrast to thechoosing approach, where vibrations were mainly described and located in a facetspace (Chapter 4), here our goal was to move them in that space. Since our pre-145vious results suggested emotion to be the most salient facet in users’ perceptionof vibrations (Chapters 4, 5), we devised three emotion controls and investigatedcontinuous mappings between these controls and engineering parameters of vibra-tions. We discuss how these mappings can inform tool design.7.1 OverviewWhen refining or personalizing a design, we count on being able to modify ormove an element by changing its parameters rather than creating it anew in a dif-ferent form or location – a standard utility in graphic and auditory authoring tools.Similarly, we need to tune vibrotactile sensations to fit new use cases, distinguishicon set members and personalize items. For tactile vibration display, however, welack knowledge of the human perceptual mappings which must underlie such tools.Based on evidence that affective dimensions are a natural way to tune vibrationsfor practical purposes, we attempted to manipulate perception along three emotiondimensions (agitation, liveliness, and strangeness) using engineering parametersof hypothesized relevance. Results from two user studies show that an automatablealgorithm can increase a vibration’s perceived agitation and liveliness to differentdegrees via signal energy, while increasing its discontinuity or randomness makesit more strange. These continuous mappings apply across diverse base vibrations,but the extent of achievable emotion change varies. These results illustrate the po-tential for developing vibrotactile emotion controls as efficient refinement tools fordesigners and end-users.7.2 IntroductionFrom cell phones to sensate suits, haptic technology has recently proliferated;studies routinely predict high utility for vibrotactile notifications in everyday life[16, 19, 77, 134]. Adoption, however, has been slow. Advances in hardware the-oretically allow vibrotactile sensations beyond undifferentiated buzzes, but evenprofessional designers struggle to express memorable, aesthetically pleasing per-cepts by twiddling available engineering parameters. It can take years to developa good intuition, and this knowledge is then hard to articulate or transfer. Personalor shared libraries of examples are currently the best mechanism; new expressive146effects are therefore often the result of modifying existing repertoires [140]. Thisis potentially a slow process, with most time spent laboriously exploring alterna-tives – a barrier to creative design, and the antithesis of improvisation. 
Perceptualcontrols that allow quick, direct modifications to sensations will be highly valuablein this process.For end-users, personalizing haptic signals is an important factor in improvingtheir utility and adoption [164, 185]. Consumers want to manipulate personal con-tent more than ever [92, 99, 135]. The status quo is an immutable library, whichprovides users with a limited pre-designed set of effects to choose from. Giveneffective navigation, this helps; but given a choice, users prefer high-level controlsto tune those predesigned effects to optimally express a personal representation(Chapters 3 and 4).In more mature domains, tools support varying levels of control and expertise.With Adobe Photoshop, one can manipulate pixel-level image features (crop, se-lect a region, color fill), and overall perceptual attributes (brightness control, artis-tic filters) [2]. Adobe Lightroom provides photography enthusiasts with perceptualsliders to manipulate clarity, vibrance, saturation and highlights, which would oth-erwise require manipulating individual RGB pixel values in photo regions [38].Instagram lets any smartphone user quickly choose perceptually-salient filters formore polished or customized images [39].Manipulating vibrations brings similar needs. With existing tools, we mod-ify engineering parameters: cropping part of a signal or changing its amplitude,waveform, or frequency at specific points along its timeline. With sensory con-trols, we can change perceptual attributes like roughness, speed, or discontinuity.Finally, emotion controls can address the mix of cognitive percepts that the vibra-tion engenders. Here, an important question is what haptic controls would be mostmeaningful and useful to designers and end-users.Past haptic studies suggest affective (emotion) dimensions to be an answer.While all three will be valuable for a professional designer, amateurs (whether a de-signer or an end-user) especially need the directness of emotion controls. Further,researchers have argued for the inherent neural link between touch and emotions,and the memorability of affective tactile signals [61, 108, 127]. Other findingspoint to the effectiveness of emotions as a framework for describing and accessing147Figure 7.2: Users mentally align vibrotactile sensations along several primary emotion attributes(left column). To exert direct control over these with design tools, we require a direct,automatable mapping from manipulable engineering parameters (solid line). To find thismapping, we used sensory attributes as a middle step – first establishing a link from emotionto sensory attributes, then from sensory to engineering parameters (dashed line).tactile sensations. In navigating an extensive vibration library, organized by a set ofschemes that included emotional as well as other descriptive perspectives (such asmetaphoric associations or its potential uses), users preferred and used the emotionscheme the most for finding vibrations (Chapter 4). Together, these illustrate theimportance of emotional traits as a target for meaningful vibrotactile design. Forthe rest of the chapter, we use the term “engineering parameter” to signify its ma-nipulable nature and refer to emotion and sensory properties as “attributes” sincethey are not parametrized for manipulation and control yet.7.2.1 Research Questions, Approach and ContributionsIn this chapter, we investigate the possibility of emotion controls for vibra-tions. 
We began from data indicating which emotion attributes users are mostsensitive to: a previous analysis of user perception of a 120-item vibration library(VibViz) indicated primary alignments with agitation, liveliness and strangeness(Chapter 5). These became our candidate controls. To be useful, such controlsmust further be automatable for inclusion in design tools. This requires establish-ing a continuous mapping between the emotion attributes of interest and the ma-nipulable engineering parameters of a display hardware (e.g., a C2 actuator [34]).The mapping must be consistent (or characterized) for a wide set of starting vi-148brations. Further, although not required for automatability, users can benefit fromknowing the degree of emotion change, given a vibration’s initial characteristicsand the effect of adjusting one emotion control (e.g., agitation) on other emotionattributes (such as liveliness and strangeness).We addressed the primary goal of automatable emotion controls through four sub-sidiary questions.RQ1: What vibrotactile engineering parameters influence primary emotionattributes?Previous work showed the influence of engineering parameters on more ba-sic emotion dimensions of pleasantness and arousal. Here, we needed similardata for more nuanced emotion attributes. We selected a manageable set ofstarting-point “base” vibrations that represent the diversity in possible sensa-tions; then determined influential engineering parameters from literature andexperimentally, using sensory attributes (e.g., roughness) as a middle step(Figure 7.2). We derived a set of vibrations from the base examples by mod-ifying those influential engineering parameters, and tested their impact in auser study where participants rated agitation, liveliness, and strangeness ofthe vibration derivatives relative to the bases. This experiment (Study 1) ver-ified our hypothesized link between the emotion attributes and engineeringparameters. Towards this question, we contribute three sensory attributes(and corresponding engineering parameters) that significantly impact per-ception of a vibration’s primary emotion attributes of agitation, liveliness,and strangeness.RQ2: Can we alter a primary emotion attribute of a vibration (e.g., its live-liness) on a continuum by manipulating influential engineering param-eters?Following our approach for RQ1, we created derivatives of the base vibra-tions using three successively more extreme applications of the influentialengineering parameters found in the previous step. Then, in Study 2, wefurther identified continuous mappings between the emotion attributes andengineering parameters.RQ3: How do characteristics of a base vibration impact a perceived change?149We examined how control effectiveness is amplified or minimized by proper-ties present in a vibration starting point. We analyzed variations in the ratingsprovided in our two user studies for ten base vibrations that varied in their en-gineering characteristics, and showed that the mappings found for RQ2 holdfor various vibration characteristics. We present qualitative descriptions ofhow these characteristics influence the extent of emotion change.RQ4: How does change along one emotion dimension influence other emotiondimensions?We analyzed correlation of ratings for the three dimensions, and tested forunforeseen significant effects of engineering parameters on multiple emotiondimensions. We show that our proposed emotion-engineering mappings arenon-orthogonal. 
That is, a change in an engineering attribute can impactperception of all three emotion dimensions.In the rest of this chapter, we first review related work (Section 7.3), then describehow perceptual controls can be used by designers and end-users (Section 7.4.1)and detail our process for identifying base vibrations and relevant vibrotactile en-gineering parameters (Sections 7.4.2 and 7.4.3). We detail the two user studies(Section 7.5) and their results (7.6), discuss findings and three example interfacesthat can benefit from them (7.7), then finish by outlining future avenues for researchand tool design (7.7.4).7.3 Related Work7.3.1 Haptic Design, and Inspirations from Other DomainsHaptic designers commonly build on design theories and guidelines, or tool inspi-rations from other, more mature domains of design.Design and personalization process: Built on existing theories of design thinking,MacLean et al. identified a set of major design activities and verified and charac-terized them for haptic experience design as follows: browse, sketch, refine, andshare [104]. Design often starts by browsing existing collections to get inspira-tion, characterize the problem, and gather a starting set of examples. In sketching,150designers quickly explore the design space by creating incomplete and rough sen-sations, making rapid changes to try alternative designs. Throughout the process,designers continuously refine a shrinking set of sensations to achieve a few final de-signs. Tweaking and precise aesthetic adjustments are the hallmarks of the refineactivity. Finally, the sensations are shared with others to get feedback, reach targetend-users, or disseminate design knowledge and contributions. In this framework,tuning controls facilitate the refinement process by expediting generation of salientalternatives for a given sensation.Software and game personalization literature informs us about user motiva-tions and desires. According to these, personalization increases enjoyment, self-expression, sense of control, performance, and time spent on the interface [10, 92,109]. Ease-of-use and ease-of-comprehension in personalization tools engendertake-up, while modifications are discouraged by difficulty of personalization pro-cesses [10, 101, 105, 118, 120].Building on these, we anticipate that an efficient tuning mechanism would en-hance users’ control and enjoyment of haptic notifications and improve their adop-tion rates among the crowds.Intuitive authoring and personalization tools: Similarly, haptic authoring toolsfrequently incorporate successful paradigms from other domains. For example,Mango, a novel authoring tool for spatial haptic vibrations like a haptic seatpad,is modelled after existing animation tools. Exploiting music analogies, interfacessuch as the Vibrotactile Score represent vibration patterns as musical notes [96].Our inspiration for perceptual and emotion tuning controls comes from the visualand auditory domains. In music streaming platforms such as GooglePlay music,Musicovery, and MoodFuse, users can choose to search for songs based on keyterms relating to mood or scenarios such as “keeping calm and mellow” or “boost-ing your energy” in addition to standard music genre categories [50, 112, 114].Similarly, photo editing software such as Adobe Lightroom or Snapseed applica-tion utilize controls named to evoke emotion attributes such as “clarity” or “drama,”which adjusts several pertinent features of the image (contrast, highlights) to createan effect [38, 117]. 
Among audio design tools, Propellerhead’s “Figure” applica-tion provides audio presets such as “80’s Bass” and “Urban” as well as controls151such as “weirdness” for creating and remixing music pieces [131].These examples show the prevalence of perceptual controls for accessing andmodifying stimuli in other modalities and further highlight the gap in the hapticdomain.Stimuli design: Past research has drawn analogies between haptic and audio sig-nals to develop design guidelines and even hardware for haptics [15, 34, 63, 139,174]. Rhythm and pitch are important attributes of both audio and vibrotactile sig-nals [63, 174]. Van Erp et al. designed 59 vibrations using short pieces of musicwhile others developed crossmodal tactile and auditory icons based on commondesign rules [15, 63, 174]. In hardware design, voice coil actuators can take audiofiles as direct input and are commonly used in research for their high expressiverange.In this work, we benefit from these commonalities: we use an audio editingsoftware, called Audacity, and a voice coil actuator (C2 tactor) to modify and dis-play the vibration files [34, 107]. Further, we use the definition of tempo for audiofiles and report its fit for users’ perception of vibration’s speed [154].7.3.2 Affective Vibration DesignRQ1 builds on previous research in this area. Our own past work links the threeabove-mentioned emotion dimensions to sensory attributes of vibrations; otherstudies provide guidelines in linking sensory attributes to engineering parameters(Figure 7.2).VibViz library and five vibrotactile facets: In Chapter 4, we compiled five cate-gories or facets of vibration attributes: 1) physical or engineering parameters ofvibrations that can be objectively measured (e.g., duration, rhythm, frequency); 2)sensory properties (e.g., roughness); 3) emotional connotations (e.g., exciting);4) metaphors that relate feel to familiar examples (e.g., heartbeat); and 5) usageexamples or events where a vibration fits (e.g., incoming message). We designeda library of 120 vibrations for voice coil actuators (i.e., .wav files) and released aweb-based interactive visualization interface (a.k.a. VibViz) that allows quick ac-cess to the vibrations through the five categories.Here, we used the VibViz interface to choose a diverse set of basis vibrations152from this library for our user studies.Mapping engineering parameters to emotion and sensory attributes: In Chap-ter 5, we collected users’ perception of the 120-item VibViz library according to thefour perceptual facets of sensory, emotion, metaphor, and usage example attributesand analyzed the ratings and tags provided, to identify the underlying semantic di-mensions for these four facets. Further, results from factor analysis and correlationof tags, situated in different facets, linked sensory attributes of the vibrations to theother three facets. Table 7.1 summarizes the results we use from that analysis: 1)three emotion dimensions: agitation, liveliness, strangeness; and their correlationwith 2) six sensory attributes: energy, roughness, tempo, discontinuity, irregular-ity, and dynamism.Others linked vibration’s engineering parameters to sensory attributes as wellas to pleasantness and urgency [91, 139, 184, 186]. Some general trends haveemerged despite hardware dependence of specific engineering parameters and theirreported threshold values: a vibration’s energy depends on its frequency, ampli-tude, duration and waveform and sine waveform is perceived as smoother than asquare wave [123, 139]. 
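These engineering parameters map directly onto signal synthesis. As a simple illustration (the function and values are ours, not drawn from the cited studies), a sine versus square carrier of a given frequency, amplitude, and duration can be generated as follows:

```python
# Sketch: synthesize a basic vibration sample array from engineering parameters.
import numpy as np

def synth_vibration(freq_hz=250, duration_s=1.0, amplitude=0.8,
                    waveform="sine", sample_rate=44100):
    t = np.linspace(0, duration_s, int(sample_rate * duration_s), endpoint=False)
    carrier = np.sin(2 * np.pi * freq_hz * t)
    if waveform == "square":          # typically perceived as rougher than a sine
        carrier = np.sign(carrier)
    return amplitude * carrier

smooth = synth_vibration(waveform="sine")
rough = synth_vibration(waveform="square")
```

Such an array can be written to a .wav file and rendered on a voice coil actuator like the C2.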
No definition exists for changing a vibration’s tempo,discontinuity, irregularity, and dynamism. Also, past studies show that vibrationswith higher energy, duration, roughness, and envelope frequency are less pleasantand more urgent [139, 184]. However, to our knowledge these studies do not gobeyond pleasantness and urgency (a.k.a. arousal) to link more nuanced emotionattributes to engineering parameters.In this chapter, our objective is to develop emotion-engineering mappings forour three emotion attributes, thereby creating a path through which we can controlthese cognitive dimensions – which up to this point we have been able to perceiveand analyze with (Chapters 4 and 5), but not produce at will.7.4 Starting Points: Use Cases, Initial Vibrations andLinkagesTo address our research questions, we carried out three initial steps. First, weestablished a set of guiding use cases in which to frame our studies, as places wheretools of the sort we envision will have value. Then, as a starting point for tuning,153(a) Haptic design inevitably involves several rounds of evalu-ating sensations (left) and refining them (right). With emotioncontrols, designers could efficiently explore the affective designspace around an example or starting point.(b) Personalization: End-usersuntrained in haptics could effi-ciently personalize vibration no-tifications in situ, during or afteruse, by applying emotion filtersto preset vibrations.Figure 7.3: Use cases for tuning vibrations’ characteristics, using parameters aligned with users’cognition and design objectives: for both cases, controls based on emotion attributes enable“direct manipulation” from the user perspective.we chose a vibration subset from the VibViz library with relevant diversity. Finally,we estimated initial linkages of their emotion attributes to engineering parametersusing past literature and our own pilot studies.7.4.1 Design and Personalization Use CasesWe describe two exemplar use cases where emotion controls facilitate otherwisecumbersome tasks for designers and end-users. In Section 7.7.3 we will describethree example tuning interfaces that use our study results to support them.Tuning a vibration set for a game (Figure 7.3a): Alex, a haptic designer, worksat a game company that is developing a new multimodal game. He is responsiblefor developing a set of vibration effects for different scenes and interactions in thegame. While talking to the stakeholders, he refines some of the sensations to bemore “alien-like”, “fun”, or “agitating”, trying for a distinct, yet coherent sensationexperience. He iteratively adjusts emotion attributes and engineering parameters ofseveral vibrations in the game set, testing each alternative quickly and comparingthe feel with the rest of the vibrations in the set.154Personalizing daily notifications (Figure 7.3b): Sarah does interval running ev-eryday. Recently she has installed an application to get vibration notifications onher smartwatch. The application allows her to select the events that trigger a notifi-cation and the associated vibrations from a list. Further, she can preview and applyalternative feels for a vibration (e.g., a more lively version) by quickly tapping onavailable emotion filters.User groups will have different needs: In using perceptual controls, both design-ers and end-users may wish to tweak a single or set of sensations. We anticipatethat when the latter wish to customize sensations for their own use they will pre-fer simple and quick adjustments with intuitive controls. 
Conversely, the formeralready often tweak sensations for an anticipated application and user group, willneed fine-tuned control over emotion as well as engineering controls, and are will-ing to spend more time to achieve polished results.7.4.2 Choosing Basis VibrationsTo develop an emotion control that can tune any given vibration, one needs to eitherstudy a large set of vibrations with many attributes, or examine a smaller set in asystematic way. The first approach requires extensive data collection and large-scale (e.g., crowdsourced) experimental methods that are currently difficult withhaptics. We chose the second approach, using rhythm to structure our investigationas past research report it to be the most salient perceptual parameter for determiningvibration similarity [165, 166].Two authors independently chose a representative subset of VibViz vibrationswhich varied in rhythmic features, and compared and consolidated them into a17-item set. We further narrowed these to five vibration pairs, with each pair repre-senting a rhythm family (Figure 7.4). Our goal was to examine consistency of thetuning results for the paired members as well as between pairs.7.4.3 Identifying Influential Engineering ParametersIn a two-step process, we first identified an emotion to sensory (emotion-sensory),then a sensory-engineering mapping.155Figure 7.4: Ten basis vibrations (five pairs) from the VibViz library, selected for our studies astuning starting points. Each row represents a vibration pair that shares unique rhythm andenvelope attributes not found in other pairs. As an example, V9 and V10 both have severalconnected pulses with various envelopes (constant, rampup, or rampdown).Emotion-sensory mapping: Table 7.1 summarizes the sensory attributes correlatedwith each emotion attribute from Chapter 5. We selected six attributes (marked inthe table) for further investigation: energy, roughness, tempo, discontinuity, irreg-ularity, and dynamism.Sensory-engineering mapping: We derived relevant engineering parameters forenergy and roughness from the literature but did not find prior work defining tempo,discontinuity, irregularity, and dynamism. For these, we manually and iterativelyaltered our initial 17 vibration .wav files using the Audacity audio editing tool,informally testing candidates in small pilots. In each case, we tested various appli-cations of these sensory attributes on the vibrations until this process converged atthe definitions in Table 7.2.This process resulted in six potentially influential engineering parameters (fre-quency, waveform, tempo (audio), discontinuity, irregularity, and amplitude vari-ation) for further investigation in user evaluations.7.5 User StudiesHaving identified a set of potentially influential engineering parameters, we soughtcontinuous mappings from them to emotion attributes for a given base vibration(RQ1-4). We ran a pilot and two user studies in which participants rated modifiedversions of a base vibration on agitation, liveliness, and strangeness when com-156Table 7.1: Three emotion attributes (rows) and their linkages to sensory attributes and tags of vibra-tions, summarized from previous work (Section 7.3.2). The second column, extracted from afactor analysis in that work, presents the sensory attributes that contribute to the same seman-tic constructs (a.k.a factors) as the emotion attributes. 
The last two columns show the mostand least correlated tags with each emotion attribute (e.g., “rough” and “smooth” tags have,respectively, high and low correlation with the “agitating” tag in the VibViz dataset.). In thisproject, we used six sensory attributes and tags (marked with †): energy, roughness, tempo,discontinuity, irregularity, and dynamism.EmotionAttributeSensory Attribute(factor loading value)Tags with High Correlation(correlation coefficient)Tags with Low Correlation(correlation coefficient)Agitation Energy (.9)† Rough (.7) Soft (.0)Roughness (.8)† Discontinuous (.5) Smooth (.0)Tempo (.4)† Dynamic (.4)† Flat (.0)Complexity (.5) Complex (.4) Simple (.0)Liveliness Tempo (.5)† Discontinuous (.6) Continuous(.0)Continuity (-.4)† Bumpy (.5) Pointy (.0)Duration (-.5) Dynamic (.4)† Flat (.0)Ramp down (.0)Strangeness Complexity (.6) Irregular (.5)† Regular (.1)Continuity (.3)† Dynamic (.4)† Flat (.0)Complex (.4) Simple (.2)pared to the base. Based on the pilot results, we developed a battery of hypothesesabout such a mapping; with our two studies, we collected data to test for both theseand other unforeseen mappings. Study 1 verified that a mapping exists. In Study2, we investigated the mappings’ continuous nature.7.5.1 Pilot StudyWe established our study protocol and hypotheses in a pilot study with 10 par-ticipants on a subset of our stimuli. For five of our ten base vibrations (V1, V3,V5, V7, V9), we designed six derivatives (a-f) for each by modifying one of the6 engineering parameters identified in Section 7.4.3, with the objective of produc-ing variations in emotion attribute space (Figure 7.5 has implementation details).We then tested the effectiveness of this variation by having the participants ratetheir emotion attributes (agitation, liveliness, strangeness) in relation to the corre-sponding base vibrations, on a scale of -50 (less agitating) to +50 (more agitating).Apparatus and study procedure were the same as for Studies 1 and 2 (details fol-low).157Table 7.2: Influential sensory attributes from Section 7.4.3 (left column), and their definition andimplementation with engineering parameters (middle and right columns). Attribute imple-mentation varied slightly across the two user studies (Figure 7.5).SensoryAttributeDefinition ImplementationEnergy &RoughnessA vibration’s frequency andwaveformIncreased frequency and switched to asquare waveform.Tempo Based on audio definition and al-gorithm for tempoShortened duration of pulses and si-lences without impacting its pitch (fre-quency) [154].Discontinuity Number of silences For discontinuous vibrations, we re-placed part of each pulse with silence.For continuous vibrations, we dividedthe vibration to equal sections and re-placed part of each section with silence.Irregularity Duration of silences and theirsymmetryAdded silence with a random duration toexisting silences in discontinuous vibra-tions or to random positions.Dynamism Amplitude variation Periodically decreased amplitude ofpulses.Results: Rating averages indicated two top-performing engineering parameters foreach emotion dimension: for agitation: waveform and frequency; liveliness: wave-form and tempo; strangeness: discontinuity and irregularity. However, their stan-dard deviations indicated high individual variation (e.g., V1-a received strangenessratings of both +31 and -50).To reduce noise, we switched to an ordinal rating scale (-3 to +3). 
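As one concrete reading of the Table 7.2 definitions, the sketch below applies discontinuity and irregularity to a waveform held as a NumPy array. The section count, removal fraction, and silence bounds are illustrative assumptions; the actual derivatives were edited by hand in Audacity.

```python
# Sketch: discontinuity and irregularity manipulations on a vibration waveform.
import numpy as np

def add_discontinuity(signal, n_sections=4, removed_fraction=0.3):
    """Divide the signal into equal sections and silence the tail of each one."""
    out = signal.copy()
    sec = len(out) // n_sections
    for i in range(n_sections):
        start = i * sec + int(sec * (1 - removed_fraction))
        out[start:(i + 1) * sec] = 0.0
    return out

def add_irregularity(signal, sample_rate, max_silence_s=0.2, n_gaps=3, seed=0):
    """Insert silences of random duration at random positions."""
    rng = np.random.default_rng(seed)
    out = signal.copy()
    # Insert from the latest position first so earlier indices stay valid.
    for pos in sorted(rng.integers(0, len(out), size=n_gaps), reverse=True):
        gap = np.zeros(int(rng.uniform(0, max_silence_s) * sample_rate))
        out = np.concatenate([out[:pos], gap, out[pos:]])
    return out
```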
To achievethe most pronounced emotion effect possible, we created derivatives by applyingchanges to both top performing parameters for each emotion dimension. This ledto a set of hypotheses for Study 1.7.5.2 Study 1 and 2 ObjectivesStudy 1 – Verifying hypothesized influence of engineering parameters on emo-tion attributes: The top half of Table 7.3 presents our four main hypotheses forthis study: The first three describe the anticipated impact of one or both top-performing engineering parameters on an emotion attribute (e.g., waveform andfrequency+waveform increase agitation). The last hypothesis predicts that apply-ing both top-performing parameters (e.g., frequency+waveform) has a larger emo-158Figure 7.5: Overview of the engineering parameters and evolution of their functional implementa-tion to achieve control over the three emotion attributes in the pilot and Studies 1 and 2.Red, green, and yellow highlight agitation, liveliness, and strangeness respectively. “Freq’,“wave”, “discnt”, and “irg” denote frequency, waveform, discontinuity, and irregularity re-spectively. “?” represents a hypothesized link between a functional implementation of anengineering parameter(s) and an emotion attribute.tion impact than one parameter (e.g., waveform).Figure 7.5 further illustrates our implementation for the engineering parametersto achieve the hypothesized control over the emotion attributes.Study 2 – Evaluating continuity of engineering-emotion mapping: The next stepwas to establish continuity in a mapping from engineering parameters to emotionattributes (RQ2), by examining the impact of successively more extreme applica-tions of the engineering parameter combinations that were found to be influentialin Study 1, namely: frequency+waveform, and irregularity+discontinuity. We hy-pothesized that an increase in frequency+waveform leads to a monotonic increasein both agitation and liveliness. We kept both of these emotion dimensions, de-spite their shared engineering parameters to investigate any differences in theircontinuous mappings. In addition, we hypothesized that strangeness monotoni-cally increases with irregularity+discontinuity (see the bottom half of Table 7.3and Figure 7.5 for details).159Table 7.3: Our hypotheses for Study 1 and 2. For each study, the left column shows one mainhypothesis for each emotion attribute (bold font) which is further divided into a set of sub-hypotheses for that dimension (middle column). The right column indicates results for themain hypothesis (bold font), and lists the posthoc test statistics for each sub-hypothesis. Weran a full factorial analysis for each emotion attribute to test for these hypotheses, and alsoto investigate unhypothesized influence of engineering parameter on emotion attribute (e.g.,of tempo on strangeness). Thresholds of 0.05 and 0.1 were used for statistical significanceand borderline significance respectively. 
Cells marked with p > 0.1 indicate non-significantresults.7.5.3 MethodsStudies 1 and 2 shared apparatus and procedure but differed in stimuli set and size.StimuliStudy 1: We utilized all 10 base vibrations (5 pairs), creating eight deriva-tives for each as follows: a) the base vibration itself, as a statistical control; b)six derivatives per base, representing change in waveform, tempo, discontinuity,frequency+waveform, waveform+tempo, and irregularity+discontinuity (see Fig-160ure 7.5 for details of these parameters in Study 1 and Figure 7.6 for an example),and c) a randomly chosen duplicate of one of these seven, to assess rating reliabil-ity. This resulted in a total of 90 vibrations (10 base and 80 derivatives) rated incomparison to the base vibrations by each participant – i.e., 80 comparisons.Study 2: We included eight derivatives for each of the 10 base vibrations: a)the base vibration itself, b) three levels of frequency+waveform, c) three levelsof irregularity+discontinuity, and d) a randomly chosen duplicate of one of theseseven. For the frequency+waveform derivatives, the frequency increase at eachlevel was based on the Weber’s JND law ( f2 = f1 +f15+ 5). Waveform did notchange across the three levels. For the irregularity+discontinuity derivatives, wefirst applied discontinuity by removing 30%, 50%, and 70% from the middle ofeach pulse in the vibration. To systematically vary irregularity, we then randomlyadded or removed silence from the first 30%, 50%, or 70% of the resulting gaps,with the amount of silence proportional to the duration of the gap (30%, 50%, and70% of the gap duration respectively, which translated to values between 0 and 0.4msec). (Figure 7.5).As for Study 1, this resulted in a total of 90 vibrations (10 base and 80 deriva-tives) rated by each participant – 80 comparisons.Participants: We recruited 20 (12 females, 18 native English speakers), and 22(15 female, 19 native English speakers) participants for Study 1 and 2 respectivelyby advertising on a North American university campus. The participants had nobackground or exposure to haptic signals other than vibration buzzes on their cell-phones. They received $15 compensation for a 1-hour session in each study.Apparatus: To display the vibrations, we used a C2 tactor, connected to an ampli-fier and a laptop. Each base vibration and its derivatives were placed in a separatedesktop folder visible on the laptop screen. The rating interface was a FluidSurveysquestionnaire with each page representing all the derivatives and required ratingsfor one of the base vibrations. Each question on a page displayed the name ofthe derivative and three Likert item ratings (-3 to +3) for agitation, liveliness, andstrangeness (Figure 7.7b). A rating of -3 indicated that a derivative had consid-erably less of an emotion attribute compared to the base (i.e., less agitating ornegative influence of the engineering parameter), while +3 indicated having more161Figure 7.6: An example of vibration derivatives in Study 1 and 2 (designed for base vibration V5).Increasing frequency is represented through increased image color saturation. Increasingrhythmic rate (i.e., manipulating the tempo engineering parameter) also resulted in shortersignals as a side effect. Discontinuity and irregularity+discontinuity are implemented byadding silent periods (represented as zero amplitude), and by varying the duration of thesesilent periods.(a) Apparatus for user studies. 
(b) Rating interface showing one vibrationderivative and three Likert item ratings represent-ing the three emotion attributes.Figure 7.7: Experimental setup for the pilot and Studies 1 and 2. The rating interface shown insubfigure (b) appears on the computer screen in (a).of the emotion attribute (i.e., more agitating or positive influence). For both stud-ies, the order of the base vibrations and vibration derivatives were randomized onthe questionnaire interface. The participants played the vibration files on the lap-top and provided their ratings on the FluidSurveys interface. They listened to pinknoise through headphones to mask any sound from the actuator.Procedure: Study sessions were held in a private, closed-door room and startedwith a short interview. After asking for the participants’ demographics, the experi-menter asked them to imagine and define an agitating, lively, or strange vibrationusing their own free-form words and typed their responses on a computer. To cal-162ibrate on common definitions, the experimenter then provided a verbal definitionof the three emotion terms with short lists of adjectives drawn from emotion syn-onyms in Chapter 5 and asked them to use these definitions in the remainder of thestudy:• lively: happy, energetic, interesting• agitating: annoying, urgent, angry, uncomfortable• strange: odd, unfamiliar, unexpectedThe rating task consisted of feeling all the derivatives for a base vibration first, andthen providing three ratings for each derivative to indicate whether it was more/lessagitating, lively, and strange than the base vibration or to mark a rating with “donot know”. The participants rated each derivative once (randomly ordered) whilehaving access to its base vibration as well as all other derivatives in that set. Theparticipant held the C2 actuator between the tip of the fingers (Figure 7.7), andcould play the base and its derivatives as many times and in any order.The session ended with another short interview, where the experimenter askedfor and recorded the participants’ definition of agitation, liveliness, and strangenessfor a vibration, what order they followed for the ratings, and any additional com-ments. The goal was to identify any changes in the emotion definitions, as a resultof feeling the vibrations, and to note any other interesting patterns in the partici-pants’ experience of the rating task.7.5.4 AnalysisReplaced Values: Out of over 10,000 ratings collected, we received a small number(five in Study 1, six in Study 2) of “do not know” responses. We replaced thesewith the median of the other ratings for the corresponding derivative.Duplicate Trials: The median of rating differences between a derivative and itsduplicate (inserted to estimate reliability- see Section 7.5.3) was 0 and 0.5 in Study1 and 2 respectively (on a 7-point scale). We therefore removed ratings for theduplicate derivatives for the rest of our analysis.Nonparametric Factorial Analysis (ART): To test our hypotheses and more broadlyidentify any unhypothesized effects of the engineering parameters, we then per-formed a full factorial analysis rather than testing just the hypothesized compar-163isons. Because this involved multiple nonparametric factors, we used the AlignedRank Transform (ART) for nonparametric factorial analyses [177]. ART was de-signed for and has been used by many as a multifactor nonparametric alternativefor ANOVA, since other nonparameteric tests such as Kruskal-Wallis and Friedmantests can handle only one factor of N levels. 
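To make the alignment step concrete, here is a minimal single-main-effect sketch, assuming a long-format pandas DataFrame with hypothetical column names. It simplifies the full procedure (which fits the complete factorial model on the ranks and respects the repeated-measures structure), so a dedicated implementation such as the ARTool R package is the safer choice for real analyses.

```python
# Sketch: Aligned Rank Transform for one main effect in a two-factor design.
import pandas as pd
from scipy import stats

def art_main_effect(df, factor, other_factor, dv):
    grand = df[dv].mean()
    cell_mean = df.groupby([factor, other_factor])[dv].transform("mean")
    factor_effect = df.groupby(factor)[dv].transform("mean") - grand
    aligned = (df[dv] - cell_mean) + factor_effect   # strip all effects, restore one
    ranks = aligned.rank()                           # rank the aligned responses
    groups = [ranks[df[factor] == lvl] for lvl in df[factor].unique()]
    return stats.f_oneway(*groups)                   # ANOVA on ranks for this factor

# e.g., art_main_effect(ratings, factor="parameter", other_factor="vibration",
#                       dv="agitation")
```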
ART applies a rank transformation to the rating data [24], then runs an ANOVA test on the ranks. Thus, results from ART are interpreted similarly to ANOVA results. For each study, we ran the test on the ratings for agitation, liveliness, and strangeness separately, using two factors: engineering parameter (7 levels) and base vibration (10 levels). Since ART is an omnibus test, we used Tukey's posthoc analysis with corrected p-values for multiple comparisons, with an alpha level of .05. Table 7.3 shows our hypotheses and the results of our statistical analysis for Studies 1 and 2.

7.6 Results

We first present qualitative descriptions collected in the pre- and post-interviews, then show minimally processed rating data, and present our ART analysis with respect to our research questions (RQ1-4).

7.6.1 Verbal Descriptions for Emotion Attributes

We aggregated the emotion descriptions collected from the participants in the semi-structured pre- and post-session interviews for Studies 1 and 2 as follows. We extracted adjectives (e.g., irritating) and noun phrases (e.g., short pulses), consolidated synonyms (e.g., fast and agile), and counted total usage instances for each adjective across the participants. For example, we coded the Study 2 definition of a lively vibration by P18 ("more intense and faster vibrations") as strong (+1) and fast (+1), then summed these with similarly named and/or defined adjectives from other participants. Results are shown in Table 7.4.

For all three emotion attributes, in the pre-interview participants mostly used descriptive emotion words when we asked them to define these concepts as they might be expressed as vibrations, in their own words. In the post-interview, where we again requested definitions for our three emotion terms, they generally chose to describe vibration structure and feel, i.e., they drew upon sensory definitions.

Table 7.4: Participant emotion attribute definitions, aggregated for Studies 1 and 2. We extracted adjectives and noun phrases and counted participant references to them or their apparent synonyms. The resulting lists are ordered by the most frequent phrases, with the total count presented in parentheses.
Frequently used phrases (n ≥ 4) for more than one emotion attribute are bold faced.

Agitating
Pre-interview: irritating (12), nervous (10), shaking (5), angry (4), uncomfortable (4), unpleasant (3), negative (3), fast (3), random (2), strong (2), constant (2), unbalance (1), provoking (1), attention-getting (1), painful (1), moves up and down (1)
Post-interview: strong (25)†, long (6), irritating (5), fast (5)†, non-rhythmic (5)†, irregular (4)†, constant (3), discontinuous (3), aggressive (2), unexpected (2), urgent (2), shaking (2), unpleasant (2), alarming (2), random (2), continuous (2), high frequency (2), frequent pulses (2), different from base (1)

Lively
Pre-interview: energetic (11), happy (10), pleasant (7), strong (6)†, exciting (5), holidays or party (3), full of life (3), rhythmic (3), upbeat (2), musical (2), alert (2), colorful (1), noisy (1), young (1), confident (1), tickling (1), bright (1), buzzy (1)
Post-interview: strong (14)†, fast (13)†, rhythmic (10), short pulses (6), discontinuous (3), regular (3), happy (2), upbeat (2), light (1), smooth (1), increase in strength over time (1)

Strange
Pre-interview: weird (16), unfamiliar (13), unexpected (6), unpleasant (3), unnatural (3), uncomfortable (2), scary (2), inconsistent (1), disturbing (1), creepy (1), different (1), cautious (1), non-rhythmic (1), patterned vibration (1)
Post-interview: off-rhythm (14)†, different from base (8), random pattern (8), unfamiliar (7), irregular (6)†, unexpected (4), weird (3), unnatural (2), negative (1), extreme (1), uncomfortable (1), nonsensical (1), long (1), shorter pulses (1), fast (1)

The pre-interview produced several patterns. Both agitating and strange vibrations (considered in the abstract) were labelled with adjectives typically considered unpleasant and negative. For example, strange vibrations were identified as unexpected and unfamiliar, and agitating ones as irritating and nervous. In contrast, liveliness was associated with positive attributes such as energetic, happy, and pleasant.

In the post-interview, the sensory definitions which participants supplied for agitation overlapped in content with both liveliness and strangeness, but the latter two did not share any descriptions (per participant or when aggregated). According to the post-questionnaire, agitating and lively vibrations were both described as strong and fast, but they differed in other ways: liveliness was linked to short pulses and a rhythmic pattern, while long, non-rhythmic, and irregular vibrations were considered agitating. Strange vibrations shared part of the agitation space, being likewise described as irregular and off-rhythm.

Figure 7.8: Boxplot of agitation, liveliness, and strangeness ratings for the base vibration and vibrotactile derivatives representing changes in the engineering parameters in Studies 1 and 2 ((a) Study 1, (b) Study 2). Starred lines mark significantly different pairs of conditions, with *** and * indicating significant results at p < 0.0001 and p < 0.05 respectively. "Freq+Wave", "Wave+Tempo", "Discnt", and "Irg+Discnt" denote frequency+waveform, waveform+tempo, discontinuity, and irregularity+discontinuity respectively. For example, in subfigure (a), agitation ratings for the base vibration (Base) are significantly different from the ratings for the frequency+waveform derivative (Freq+Wave) at p < 0.0001.
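The aggregation behind Table 7.4 (Section 7.6.1) is simple enough to sketch in a few lines of Python. The snippet below is a hypothetical illustration of the counting step only; the synonym map and coded responses are invented examples, not the thesis's coding scheme or data.

```python
# Count canonical phrases across participants' coded descriptions.
from collections import Counter

SYNONYMS = {'agile': 'fast', 'quick': 'fast', 'intense': 'strong'}  # illustrative

def aggregate(coded_responses):
    """coded_responses: one list of coded phrases per participant."""
    counts = Counter()
    for phrases in coded_responses:
        for phrase in phrases:
            counts[SYNONYMS.get(phrase, phrase)] += 1
    return counts.most_common()           # ordered by frequency, as in Table 7.4

# e.g., P18's "more intense and faster vibrations" coded as ['intense', 'fast']:
print(aggregate([['intense', 'fast'], ['strong', 'agile']]))
# -> [('strong', 2), ('fast', 2)]
```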
7.6.2 Ratings

We collected a total of 10,080 emotion attribute ratings for Study 1 and 2 vibration derivatives. Figure 7.8 shows these as boxplots for agitation, liveliness, and strangeness.

To denote patterns of the ratings pertaining to all 10 base vibrations and 7 engineering parameters, we then visualized average ratings for each vibration derivative in Figure 7.9. Average ratings of -3, 0, and +3 respectively indicate negative, zero, and positive influence of an emotion attribute on the derivative compared to the base vibration.

Figure 7.9: Average ratings of the emotion attributes in response to variation of engineering parameter combinations (subfigure columns) for the 10 base vibrations (subfigure rows) in Studies 1 and 2. Influence of the engineering parameters on the base vibrations for that emotion attribute is denoted by color: blue is negative (bounded by an average rating of -3.0, intense blue), gray is neutral (0), and red shows a positive influence (bounded at +3.0). Column saturation thus indicates strong influence (positive or negative) of an engineering parameter, whereas row saturation indicates susceptibility of that vibration to being influenced. Consistent color and saturation in a column indicates a consistent perception regardless of the base vibration; color variation suggests dependency on the base vibration.

7.6.3 RQ1: Impact of Engineering Parameters on Emotion Attributes (Study 1)

The first research question's objective was simply to establish which engineering parameters (which we are able to manipulate) can influence perception of vibration emotion attributes. ART analysis (Section 7.5.4) of our Study 1 data showed a significant main effect of engineering parameter and a main effect of base vibration on the ratings for all three emotion attributes. A posthoc Tukey's test determined which engineering parameters were significantly different from the base. This allowed us to confirm three hypotheses, partially accept one, and reject the other three.

Specifically, the frequency+waveform hypothesis for agitation and both sub-hypotheses for strangeness were accepted; the waveform hypothesis for agitation and both liveliness hypotheses were rejected. Table 7.3 gives details.

Figure 7.9 illustrates these outcomes. Specifically, the columns representing the rejected hypotheses (waveform, tempo, and waveform+tempo) show either low emotion change (grey or low-saturation cells) or inconsistent change for different base vibrations (color variations). In contrast, the majority of cells for the accepted hypotheses (the frequency+waveform, discontinuity, and irregularity+discontinuity columns) show high emotion influence (highly saturated cells). Further, frequency+waveform resulted in the most consistent perception for agitation and liveliness, while irregularity+discontinuity led to consistent results for strangeness.

In summary, Study 1 succeeded in highlighting possible control paths towards all three emotion attributes, by employing different combinations of four of the six engineering parameters we investigated. Notably, the agitation and liveliness attributes shared the same engineering parameters in these results. We further investigate an overlap in their continuous mappings in Study 2.

7.6.4 RQ2: Evidence of Continuity of the Engineering-Emotion Mappings (Study 2)

In Study 2, we investigated mapping continuity, using three successively more extreme applications of the influential engineering parameter combinations to create the vibration derivatives.

ART analysis of Study 2 data (ratings of these derivatives relative to their bases) failed to confirm our hypothesis for all three emotion attributes.
More specifically, while ART results showed significant main effects of the engineering parameter and base vibration for all three attributes, Tukey's posthoc test only showed significant differences between the derivatives and the base. It showed no difference between the three successive derivatives of an engineering parameter combination.

Further investigation revealed a potential failure of perceptual monotonicity in our application of engineering parameters, which we verified and rectified as follows.

Agitation and Liveliness

To understand this unexpected perceptual result, we more closely examined tactor output for the three derivatives. We noted that the three increasing levels of frequency+waveform resulted in very different output energy depending on the actuator's frequency response curve (peak at f = 275 Hz) and a base vibration's frequency.

We addressed this with two steps. First, we redefined the engineering parameter for these emotion attributes in terms of energy rather than frequency+waveform, and used the frequency of the tactor's peak response to dictate the most extreme energy value. Based on this reasoning, we re-ordered the vibration derivatives used to collect Study 2 ratings. That is, the new energy sequence of [base, energy1, energy2, energy3] simply swapped the order of the original 2nd and 3rd derivatives: [f0 = 200 Hz, f1 = 245 Hz, f3 = 352 Hz, f2 = 289 Hz].

Re-running ART and Tukey's posthoc on this energy-ordered rating data confirmed the agitation hypothesis and partially confirmed the liveliness one. Specifically, for agitation ratings, this resulted in significant differences between all three energy levels (borderline significance for the revised energy1 and energy2, p = 0.1). For liveliness, Tukey's posthoc showed a borderline significant difference between energy1 and energy3, p = 0.08.

Figure 7.9's visualization is consistent with these results: agitation and liveliness cells show an increase in emotion change (increase in saturation) for higher energy levels. We note, however, that liveliness cells are less saturated than the agitation ones for the same vibration derivative, suggesting that these energy changes impacted liveliness less than agitation.

Strangeness

In our initial ART analysis, the three levels of irregularity+discontinuity did not yield significantly different strangeness perceptions; i.e., they were different from the base but not different from each other. To explain the variations in the strangeness ratings, we considered alternative orderings of the levels based on other relevant parameters (e.g., rhythm, or number of pulses in the derivatives) but did not find a plausible explanation.

In Figure 7.9, the irregularity+discontinuity (irg+discnt) columns for strangeness show positive but low influence of these parameters (red but low-saturated cells), with median values around 1 (on the [-3:+3] scale); maximum values are 1.2 in Irg+Discont-3, and 1 in Enrgy-3 for strangeness.

7.6.5 RQ3: Impact of Base Vibrations on Emotion Attribute Ratings

Our ART analysis suggested that the base vibrations varied in their emotion change after applying the engineering parameters – a significant main effect of base vibration in both studies. Figure 7.9 depicts differences in the emotion ratings for the 10 base vibrations.

Agitation and liveliness: For all the base vibrations in both studies, applying some level of frequency+waveform (or energy) tended to increase their perceived agitation and liveliness (grey to red colors).
However, the extent of increase varied for different base vibrations. These differences are more pronounced in Study 1 but are resolved after the energy re-ordering in Study 2.

To see if a vibration's base rhythm contributed to the ratings, we examined consistency of the ratings for the paired vibrations (Section 7.4.3) but only found one notable instance. V1 and V2, paired for being continuous and flat, received lower liveliness ratings than the other vibrations, even with the energy re-ordering in Study 2.

Strangeness: All the base vibrations became more strange after applying irregularity+discontinuity and discontinuity. However, in some cases, the boost was minimal (low-saturation cells). In Study 2, some base vibrations showed a consistent albeit gradual increase in strangeness (V4 and V7 are most pronounced), but the majority did not. This is consistent with the statistical results (significant main effect but no pairwise significance) in Section 7.6.4. Examining the paired vibrations did not yield any apparent link between the strangeness ratings and rhythm patterns of the base vibrations.

7.6.6 RQ4: Orthogonality of Emotion Dimensions

Correlation among the three emotion dimensions: A Spearman's rank correlation test was positive for agitation and liveliness ratings (strong for Study 2 (r = 0.67), and weak in Study 1 (r = 0.39)). For Study 1, Spearman's test also revealed a moderate correlation between agitation and strangeness (r = 0.44).

Unhypothesized crosstalk: We designed these vibration series with the intent of influencing each emotion attribute with one engineering parameter combination. We also checked for "crosstalk" – i.e., a parameter intended to influence one emotion attribute having unintended impact on a different one. We did find some crosstalk, but the effect was either inconsistent or less than for the intended influence.

Agitation and liveliness were influenced by irregularity+discontinuity and discontinuity in both studies. However, the effect was not consistent: these parameters tended to significantly increase agitation and liveliness ratings in Study 1 but significantly decreased them in Study 2. Figure 7.9 suggests the same results; in Study 1, the Discnt and Irg+Discnt columns have red or grey cells (positive to neutral influence), while in Study 2, there is an apparent increase in the saturation of the blue cells (negative influence) with increasing Irg+Discnt.

In both studies, strangeness ratings were increased by frequency+waveform. In Figure 7.9, there is a very mild (and non-designed) positive influence of the energy parameters on strangeness, with the impact of Irg+Discnt being only a little stronger.

7.7 Discussion

After an overview of our findings, we discuss automatability of the emotion controls given these results, reflect on our study approach, and finally present three example interfaces that can benefit from our results.

7.7.1 Findings

Evidence of mapping from engineering parameters to emotion attributes (RQ1 and RQ2): We found a set of engineering parameters that can increase perception of agitation, liveliness, and strangeness for a given vibration. Specifically, our results suggest a linear relationship between agitation, liveliness, and the actuator's output energy. Adding irregularity and discontinuity to a vibration increases its strangeness, but the effect does not increase with the degree of discontinuity and irregularity.

Differences observed for the base vibrations (RQ3): The extent of emotion boost depends on the characteristics of the base vibration.
We found that differences inagitation and liveliness boosting were best described by the actuator’s output en-ergy, as evinced by the improved monotonicity of relationship in Study 2 versusStudy 1. Rhythm and envelope played a secondary role for liveliness, where con-tinuous and flat base vibrations (V1, V2) received a lower boost than did the otherbases for a similar increase in energy. V7, with a symmetric rhythm of short andlong pulses, was among the most lively vibrations for different energy levels.Strangeness ratings were mixed. This may have been due to using random171values in our irregularity derivatives: sometimes this produced a regular rhythmicpattern (e.g., irg+discnt-3 for V1), and elsewhere, noticeably irregular beats.Orthogonality of the emotion controls (RQ4): In our study, agitation and live-liness were controlled by the same engineering parameter combination, albeit atdifferent rates, while strangeness was mapped to a different engineering parametercombination. This suggests one can design two emotion controls for vibrations:one that modifies agitation and liveliness, and a second one for strangeness. Al-though limited, our results provide evidence for a subtle distinction between ag-itation and liveliness (e.g., impact of base rhythm in the ratings and qualitativedescriptions), which need to be further examined in future studies. Finally, in ourstudy a change in one emotion attribute influenced perception of the others. Below,we discuss automatability of emotion controls given these results.7.7.2 Automatable Emotion Controls and Study ApproachOur studies show that at least one automatable solution exists. They confirmedthe viability of the mapping we proposed between engineering parameters andemotion dimensions, for a diverse set of base vibrations. The mapping, however, isneither orthogonal or uniform. The extent of change along the emotion dimensioncan vary for different vibrations, and moving a vibration along one emotion dimen-sion can impact its other emotion attributes. These qualities are not surprising; theyexist in other domains and do not undermine the effectiveness of the controls. Asan example, in Adobe Lightroom, increasing the “shadows” does not change everyphoto to the same degree. Further, the effect of adjusting “shadows” on a photo’s“vibrance” is not always predictable.We used a top-down approach in designing the emotion controls. We started witha set of emotion attributes, then devised a mapping to the engineering parame-ters. A bottom-up approach would have required developing a set of engineeringand physical controls; then building higher level controls based on emerging us-age trends over time. This would have necessitated long-term usage or access tocrowds to aggregate usage patterns. Also, the resulting controls may require back-ground knowledge (e.g., “highlights” vs. “whites” sliders in Adobe Lightroom)which makes them mainly accessible to designers and power users. Given a lack of172access to the crowds (Chapter 6) or a large established haptic design community,we chose the top-down approach to find an existence proof as opposed to an opti-mized solution. This process is not the only possible way, nor are these mappingsthe only possible paths. Over time, we anticipate that triangulation of differentapproaches will lead to the best results.Sensory attributes of vibrations provide a hardware-independent layer for emo-tion controls. 
To narrow down to a set of promising engineering parameters, weused a two-step process: 1) finding relevant sensory attributes for the three emotiondimensions, and 2) linking those sensory attributes to engineering parameters ofour actuator. The first phase was hardware-independent while the second step wasnot. While we used this as a detour to incorporate existing literature guidelines,emotion and perceptual controls can be built with a similar structure using twosoftware layers: a hardware-independent sensory layer, and a hardware-specificmiddleware. This would promote modularity and flexibility to work with diverseactuators and expressive parameter ranges, and opens the possibility of translationand cross-mappings for other haptic technologies and modalities.Exposure to vibrations led to more concrete descriptions for the emotion termswhich in turn highlighted next steps for the engineering-emotion mappings. Forhypothesis testing, we relied on quantitative Likert ratings for efficiency, comple-mented with qualitative descriptions for a perspective on practical significance.Interestingly, participants became more articulate after a relatively short exposureto the vibrations. At the start of the study sessions, the participants described thethree emotion dimensions mostly with other emotion words but in the post-studyinterview, they commonly referred to the sensory and engineering parameters ofthe vibrations (e.g., strong, frequent pulses). In most cases, the participants’ defi-nitions for the emotion attributes were consistent with their ratings and our hypoth-esized engineering parameters. For example, agitating and lively vibrations werefrequently described as being “strong”. Strange vibrations were also described tobe “very different from the base”, “irregular”, and “offbeat”. In a few cases, the rat-ings and qualitative comments were not aligned, raising questions for future work.Specifically, lively vibrations were commonly described as “fast” but tempo andwaveform+tempo were not effective. We increased tempo based on the definition173Figure 7.10: Instagram for vibrations. Users can sketch a new vibration true a simple interface(e.g., by tapping on the phone screen, recording their voice, drawing a rhythm, etc. (left))and then stylize it by applying emotion filters (right).available for audio tracks since previous work show perceptual and design com-monalities between the haptic and auditory modalities. But our results suggest thatthis definition is not aligned with users’ perception and calls for a more effectivemodel for the perception of speed in vibrations.7.7.3 What Do These Results Enable? Revisiting Our Use CasesOur motivation for this research was to empower haptic design and personalizationtools. Here, we discuss three interface concepts, informed by our results, that cansupport the design and personalization scenarios we laid out in Section 7.4.1. Wechose this set of examples – drawn from different points in a large potential designspace – as a vehicle to relate our findings back to those use cases, and reflect ontheir underlying parameters.Vibration Instagram: Our personalization use case in Section 7.4.1 calls for asimple interface where ordinary users, untrained in haptics, can apply a set of pre-defined effects to any given vibration. The base vibration can be chosen from aset of example vibrations on the users’ device, or designed by them with a simplesketching tool (e.g., by tapping, recording, or drawing a pattern). 
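To make this use case more tangible, the following is a minimal, hypothetical Python sketch of what one such pre-defined filter could look like: a fixed transform over a vibration's engineering parameters, grounded in the mappings found above (output energy for agitating/lively effects; irregularity+discontinuity for strange effects). The parameter names, value ranges, and multipliers are assumptions for illustration, not a specification from this thesis.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VibParams:
    frequency_hz: float      # carrier frequency
    amplitude: float         # 0..1; with frequency, drives output energy
    gap_fraction: float      # portion removed from the middle of each pulse
    jitter_fraction: float   # portion of gaps whose duration is randomized

def lively_filter(v: VibParams) -> VibParams:
    # Boost output energy, the quantity our studies tied to both
    # agitation and liveliness, leaving the rhythm untouched.
    return replace(v, amplitude=min(1.0, v.amplitude * 1.3))

def strange_filter(v: VibParams) -> VibParams:
    # Add discontinuity and irregularity, the combination that raised
    # strangeness ratings relative to the base vibration.
    return replace(v, gap_fraction=0.5, jitter_fraction=0.5)

base = VibParams(frequency_hz=200, amplitude=0.6, gap_fraction=0.0, jitter_fraction=0.0)
print(strange_filter(lively_filter(base)))
```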
Similarly tothe existing Instagram application, the same binary control is enough for all thepre-designed effects/filters (i.e., the filter is applied or not) and users do not needaccess to or any preview of low-level sensory or engineering parameters. The filters174Figure 7.11: Emotion toolbox. Designers can start from a vibration in their library (left panel),use high-level emotion controls (third panel), and override default engineering presets asneeded (right panel). Promising candidate can be saved to the bottom alternatives panel.must be perceptually salient and distinct but may not rely on pre-defined emotionalconnotations, although meaningful labels will be helpful.Grounding in our results: The present findings show how to create at least twoemotionally meaningful filters (representing agitating/lively, and strange effects).Several filters can be added to represent alternative applications of an engineeringparameter. For example, the three levels of irregularity+discontinuity, in our re-sults, lead to distinct strange sensations, thus can represent separate strange filters.Further, different levels of energy can be used to create lively and agitating versionsof a vibration.Emotion Toolbox: Alex, in our design use case (Section 7.4.1), requires access tolow-level authoring support as well as emotion controls for quick exploration andsensation refinement. An add-on toolbox to a designers’ existing authoring tool(s)could support this by providing haptic functionality similar to Adobe Lightroomfilters. The interface must expose full capability of all the available emotion con-trols by representing them as a switch, or a discrete or continuous slider depend-ing on the binary, discrete, or continuous nature of the possible emotion change.Further, the designer must be able to access and modify the preset values for theunderlying engineering parameters contributing to an emotion control. Ideally, theinterface will allow designers to define new proprietary controls and map them tothe engineering parameters.Grounding in our results: We can now create a toolbox with controls for agita-175Figure 7.12: Haptic palette generator. Users can select a base vibration (from a library), deter-mine the emotion dimension for derivatives, as well as their similarity, and number ofderivatives. The system automatically creates the derivatives on demand based on a prede-fined algorithm(s) (e.g., similar to our procedure, in Study 2, for creating multiple levelsof strangeness).tion, liveliness, and strangeness, with the first two as sliders and the last a switch.In a “details” layer, designers can see default engineering settings underlying eachcontrol and note that the frequency slider changes linearly with modifications to theagitation or liveliness sliders but at different rates. Since our results suggest sub-tle differences between these two emotion attributes, here we present them withtwo separate controls to support further characterization of their difference by thedesigners if needed. Similarly for strangeness, designers can change the preset val-ues of irregularity and discontinuity. The details layer could also expose additionalengineering controls not used in current attribute definitions.Haptic Palette Generator: In designing vibration derivatives for our studies, wenoticed several cases where seeing a palette of vibrations, each of which are per-ceptually distinct from the others, but sharing a common theme, would be usefulfor personalization and design. 
For example, an end-user may wish to create a ho-mogeneous set of wakeup alarms with increasing agitation for each snooze round.Application developers and designers may need to generate a set of derivativesbased on any given example before deciding on a final set of effects.These cases benefit from an interface that can automatically generate a set ofderivatives based on an input vibration along a predetermined emotion dimension(Figure 7.12). The interface provides one type of multi-level discrete control for allthe emotion dimensions, and requires effects to be perceptually distinct. In contrast176to the Vibration Instagram, emotion labels must guide derivative generation, andusers can have more control over semantic parameters of the underlying algorithm.For example, the user can determine the number of derivatives, and the desirableextent of similarity between them.Grounding in our results: This interface concept would exploit our findings, inthe form of a palette generator along the agitation/liveliness dimension and anotheralong the strangeness) dimension and vary energy and irregularity+discontinuityto derive perceptually distinct sensations with predictable emotion impact. Al-though applicable to both dimensions, this interface is mostly appropriate for emo-tion attributes such as strangeness where we know the contributing engineeringparameter(s) but there is no known linear effect of the parameter (in contrast toagitation/liveliness). While the system can use a predefined step size to generatederivatives (similar to the application of discontinuity and irregularity in Study 2),defining a perceptual function (e.g., based on JND, as used for frequency levels inStudy 2) would be ideal and is left for future work.Reflections on what is needed to support tuning scenarios: Our use casesin Section 7.4.1 vary in two parameters: 1) target users - designers vs. naive users,and 2) tuning task - tweaking a single vs. a set of sensations. We can now use theseinterface concepts as a basis for discussing how tuning scenarios, varying on thesetwo parameters, may be supported through interface design choices.Target Users: Ease-of-use and design control are typical interface propertiesthat may be valued or suitable for the two demographics of interest here – design-ers, or naive (or simply less committed)) end-users, respectively (see Chapter 3).Our results indicate feasibility of achieving both of these balances, and suggestways they might be embodied. Specifically, Vibration Instagram limits vibrationalternatives and control (fixed binary presets) to achieve simplicity and efficiencyof use, thus is mainly appropriate for naive users. In contrast, Emotion Toolkitprovides high flexibility and design control (continuous slides, ability to modifyengineering parameters and presets), and thus is suitable for designers. Finally,Haptic Palette Generator provides a middle ground, suitable for both designers ortech-savvy end-users.Tuning Task: In modifying sensations, users may wish to tweak a single item177or a set. The above interface concepts vary on how these may be achieved basedon our results. While Vibration Instagram only allows tuning of a single sensationat a time, Haptic Palette Generator mainly facilitates creation of a set of relatedsensations. 
Emotion Toolkit supports both tasks; the designer can tweak a singlevibration with the sliders but can also access and/or save to the vibration set usingthe library and alternatives panels.7.7.4 Future WorkImmediate avenues for extending our work include modelling the mapping be-tween engineering parameters and the three emotion dimensions using regressionand other statistical techniques, and automating the process. i.e., developing algo-rithms that can automatically detect the engineering parameters of an input vibra-tion (e.g., frequency, rhythm, etc.) and create derivatives of the signal based onour results. To complement tool development efforts, future studies can examinecontrols in situ and define appropriate metrics.We close by pointing to other conceptual approaches for moving a vibration inthe emotion space which can complement and/or extend our work. Conceptually,our controls enable “extrapolation”; they start from an existing sensation and gen-erate a new one based on a set of rules. Alternative frameworks include a systemwhich:- Navigation: recommends an alternative (but existing) vibration with the desiredemotion attribute(s) (e.g., is more lively) that shares similar structure andengineering parameters with the base vibration.- Interpolation: creates a new vibration in between a starting base vibration andanother with the desired emotion attributes. e.g., to make a vibration morelively, it interpolates between the base and a lively vibration. The interpola-tion ends are specified by the user or automatically selected from a libraryaccording to user-specified attributes.These approaches pose different challenges and opportunities. Once applied in adesign tool, they can complement one other to provide a rich toolset for designers,or a seamless personalization mechanism for end-users.1787.8 ConclusionInspired by existing authoring tools in visual and auditory domains, our work callsfor designing emotion and perceptual controls for haptics and takes a first step to-wards this goal. We investigate the feasibility of designing such controls: in thiscase for modifying a vibration’s agitation, liveliness, and strangeness. We show,based on the results of our user studies, that such controls are automatable and pro-pose a mapping between these controls and engineering parameters of vibrations.Our results enable new interfaces for haptic design and personalization which inturn pave the way towards more expressive haptic sensations and improved adop-tion and engagement by end-users.179Chapter 8ConclusionWe envision, for haptic personalization, a suite of tools that are unified by oneunderlying conceptual model and can be effectively incorporated into users’ work-flows with various applications.Our vision is analogous to what we have for color personalization: A simple,yet powerful, suite of tools (color swatches, color gamut, sliders) built on the colortheory (e.g., Munsell color system), and seamlessly integrated in a variety of appli-cations (e.g., Microsoft Office, Adobe Creative Suite) for a wide range of users.Below, we first discuss our contributions towards this vision, by the three mainthemes of this thesis that were presented in Chapter 1, then outline the next stepsfor which the research described here has exposed a need.8.1 Personalization MechanismsIdentifying a suite of tools - Our first study on personalization mechanisms sug-gests the need for a set of personalization mechanisms (i.e., tools), rather than asingle one. 
In our lab-based study, users varied in the personalization mechanismthey preferred, weighing “design effort”, “sense of control”, and “fun” differently.To inform tool design, we outlined the design space for personalization mecha-nisms and proposed three promising candidate mechanisms (choosing, tuning, andchaining) for the personalization tool suite. This study, however, mainly examinedthe concepts of these mechanisms, without providing any guidelines for realizing180them as tools.Developing the mechanisms - Can these mechanisms be developed into tools?What would those tools look like? With VibViz (Chapter 4), we built on the prin-ciples from information visualization and library sciences to devise a set of guide-lines for a choosing interface. In Chapter 7, we verified the feasibility of develop-ing automatic emotion controls for tuning vibrations and presented three exampleinterfaces for this mechanism.Figure 8.1: Summary of our contributions to personalization mechanisms: three tool concepts forpersonalization (Chapter 3) and development of choosing (Chapter 4) and tuning (Chapter 7)conceptsReflecting on alternative designs - While our prototypes are developed for a par-ticular platform, their underlying mechanisms are platform independent. In thisthesis, we focused on developing the mechanisms and chose the prototyping plat-forms (e.g., device, programming technology) based on an anticipated personaliza-tion workflow given a tool. For example, VibViz was designed for devices with arelatively large screen size (e.g., a desktop or laptop computer) where users wouldwant to explore a wide range of pre-designed vibrations before choosing one. Sim-ilarly, the Haptic Palette Generator and Emotion Toolbox are designed for station-181ary use cases. In contrast, the Vibration Instagram prototype was designed for aphone interface where users can apply quick fixes on the go.However, this association is not rigid. Our designs can be revised and adaptedto alternative platforms to accommodate diverse use cases and user preferences.An increasing portion of the users spend most of their time on mobile devices(e.g., smartphones and smartwatches). Therefore, desktop applications, designedfor everyday use, typically have an accompanying mobile application. Our pro-totypes can be redesigned to accommodate smaller screen sizes. For example, amobile version of VibViz can present one facet view at a time (e.g., Sensory andEmotional View) while allowing users to switch to the other views (e.g., tabs inan interface) when needed. Search filters can be presented with common interfacewidgets such as a navigation drawer. The new design would have reduced function-ality compared to the desktop version (e.g., users cannot easily crosslink vibrationson different views) but could enable quick selections. Alternatively, a small screensize can lead to a very different design. For example, a choosing interface canpresent a subset of the vibrations in the library based on the users’ preference andinteraction history. Such interfaces would benefit from future research on adaptiveinterfaces and recommender systems for haptic sensations.8.2 Facets as an Underlying Model for PersonalizationToolsSupporting users’ mental model - Facets offer a unifying conceptual model forthe haptic personalization tools. In this respect, their primary advantage is theirmatch with users’ cognitive structure(s). 
Our five proposed facets are derived frompeople’s descriptions for haptic sensations and encapsulate their multiple and over-lapping sense-making schemas for haptics. VibViz showed that these facets, evenin their primary form as a flat list of attributes, can enable design of powerful toolsfor end-users. In VibViz, several linked views of the vibration library supportedan individual’s varying criteria in different usage contexts as well as differencesamong the users in cognition and preference.Informing design practices - Informing design beyond a tool’s interface requiresa concise picture of the facets as well as a path from the facets to sensation and182engineering parameters available in haptic authoring tools and display hardware.In Chapter 5, we derived a set of semantic dimensions for the facets, thereby struc-tured their large list of attributes into a succinct set. Further, we linked a path fromthe emotion, metaphor, and usage facets to the sensation facet.Linking the facets to engineering parameters - In Chapter 7, we verified that thetuning mechanism can be built on our evolved understanding of the facets, namelytheir semantic dimensions and interlinkages. Focusing on emotion controls, wepresent a path from the three emotion dimensions to engineering parameters of aspecific actuator (C2 tactor), using the linkages between the emotion and sensationfacets as a middle step.Reporting individual differences - In developing end-user tools and/or rich sen-sations, designers must note variations around an aggregated model. Thus, wealso present an in-depth analysis of individual differences in the facets and theirattributes.Figure 8.2: Summary of our contributions to affective haptics: five haptic facets (Chapter 4), theirsemantic dimensions and linkages (Chapter 5), and quantification of individual differencesin affect (Chapters 2, 5, and 6)Reflecting on the facets - Facets and their interlinkages are an evolving conceptand our work is a first pragmatic characterization of them. Notably, our proposedfacets can overlap, with some attributes being applicable to more than one facet.183For example, “alarm” can belong to both the usage example and metaphor facets.Similarly, “energetic” can be included in the sensation or emotion facets. Theseinstances question the idea of rigid boundaries for the facets, and suggest an evolv-ing and flexible characterization that can be revised, shifted, or combined as ourunderstanding of the domain evolves and depending on the use case.Examples of these revisions and shifts can be seen in this thesis. Initially,we defined the physical facet to encapsulate all measurable properties of vibra-tions including energy and tempo attributes. As our understanding of the facetsevolved, we moved these two attributes to the sensation facet (since they cannotbe measured objectively, at least not yet) and revised the physical facet to includethe engineering parameters (e.g., duration, frequency, waveform). As another ex-ample, in designing the VibViz interface, we combined the sensation and emotionfacets in one view for a more effective access to the dataset. Finally, our evolv-ing understanding of the facets is reflected in our naming; we started with callingthe schemas “taxonomies” and later switched to “facets” as it denotes a flexiblestructure that can combine a mix of attributes with different characteristics (e.g.,numerical ratings, words) in a flat and/or hierarchical structure depending on theknown semantic linkages between the attributes. 
Future work can further refinethese facets and/or add new unexplored ones.8.3 Large Scale Evaluation for Theory and ToolDevelopmentDevising methodologies for haptic studies - In developing theories and tools foraffective haptic design, the haptic community needs to study large and diversegroups of users, yet haptic methodologies rarely scale. We faced the need forscaling our studies during this research and devised new evaluation methodologiesthat work around existing practical limitations in the field. Our two methodologicalcontributions address the same problem but have unique elements which makesthem suitable for different contexts. The two-stage methodology enables lab-basedstudies and integrates experts’ evaluation with data from lay users. In contrast,with the crowdsourcing methodology, researchers can collect fast and inexpensivedata from a large group but within an error threshold.184Figure 8.3: Summary of our contributions to large scale evaluation: two methodologies for evaluat-ing haptics in lab (Chapter 5) and online (Chapter 6)Reflecting on the long-term value of our methodologies - Fast technologicalprogress and proliferation of the technology can resolve some of the existing prob-lems in accessing the crowds and expedite crowdsourcing in the haptic community.An important question is: would widespread access to haptic hardware eliminatethe need for haptic proxies or in-lab studies?We believe the answer is no. Haptics still has a long journey ahead to achievethe full range of natural sensations. New technologies are being developed ev-eryday, adding to the expressive range of existing hardware and/or enabling newlyprogrammable sensations. These new technologies usually go under several roundsof research, design, and evaluation to mature and pass cost-value trade-offs of busi-ness units. Thus, there will always be a gap between the new technologies testedin research labs and the ones available to everyday users, necessitating the use ofproxies or in-lab evaluations.Further, the haptic industry plays an important role in the progress of the field.Often, companies are not willing to expose their design(s) on online platforms, yetare interested in efficient evaluation methods. Thus, using experts for evaluationis highly desirable as an alternative discount evaluation method when access tothe crowds is limited. We hope our contributions facilitate a range of haptic per-ception studies and inspire new methodologies that further expand designers’ andresearchers’ evaluation toolkit.1858.4 Future Work8.4.1 Incorporating Personalization in Users’ WorkflowA key aspect of our vision was effective integration of the personalization tools inusers’ natural workflow with an application. Otherwise, personalization tools willeither be abandoned or at best adopted by a niche group of power users. This re-quirement further highlights open questions about the users’ personalization prac-tices and workflows. Specifically:What workflows and scenarios can best support the users? Where do our toolapproaches lie in the personalization process? What are the other requirementsbesides effective tools? Given effective tools and personalization workflows, dopeople personalize haptic signals for their everyday devices?These questions cannot be effectively addressed until a user base has built up abody of experience and needs around this new technology. 
Currently, a majority ofusers have little exposure to the range of possible vibrotactile signals beyond thedull notification buzz on their cellphones and are unsure of application possibilitiesof haptics. Thus, they cannot reliably judge their interest in personalization, nor canthey reflect on their personalization workflows. Our work focuses on developingeffective personalization mechanism, the groundwork needed to tackle the abovequestions. Further, our lab-based studies of these mechanisms and past haptic fieldstudies provide evidence for the importance of personalization, thereby motivatefurther research on the above questions.Rigorous answer(s) to these questions require a series of studies triangulatingvarious research methods; including in-situ studies of haptic applications and per-sonalization practices, small-scale longitudinal qualitative studies, and large scaledeployment of personalization tools. Results can inform future haptic tools and fa-cilitate integration of haptic personalization in end-user devices and applications.1868.4.2 Expanding the Mechanisms and the Underlying ModelWhat are other effective mechanisms in the personalization design space? Canthe facets inform those mechanisms? What are alternative underlying models forpersonalization tools?Emerging paradigms for haptic sketching may inform design of new personal-ization mechanisms. With recently developed haptic authoring tools, people canrapidly create a sensation by demonstrating its properties in a more accessiblemodality (e.g., by drawing, vocalizing, tapping). Apple’s iOS has a simple in-terface where users can tap a pattern to create a custom vibration. With mHive,users can create a vibrotactile sensation by drawing a path on a tablet touchscreen[139]. Voodle is an example system, developed in our lab, where users can controlmovements of a 1-DoF robot with their voice in real time [138].While these interfaces are effective for design [138, 139], they need to be re-vised and adapted for personalization. In their current form, these interfaces are tooopen-ended for most users, as evidenced by the negative comments on the iOS’stapping interface (Chapter 3). One possibility is using a mixed-initiative approachwhere the users sketch a pattern (e.g., by drawing, vocalizing, or tapping) and thesystem renders and refines it into a plausible sequence based on a set of perceptualrules. Alternatively, the systems can recommend a set of patterns, from a largerepository, based on input sequences sketched by the users. Future studies can in-vestigate these and other plausible mechanisms for personalization to complementthe suite of tools available to the users.In developing such new mechanisms, researchers can determine the utility ofthe facets as an underlying conceptual model and/or propose alternative models forpersonalization.8.5 Final RemarksThe study of haptic design and in particular, the affective aspect of designed hap-tic experiences, was largely ignored until very recently. Premier haptic confer-ences, namely Haptic Symposium and World Haptics, were mainly focused onhardware development and users’ tactile and kinesthetic abilities, while main HCIconferences such as CHI sometimes published studies that were not considered187novel among haptic experts. Further, for the first decade of consumer-level hapticdevices, the quality of the haptic experience offered to end-users was very low.Phones included low-fidelity actuators, resulting in the users’ low opinion of hap-tics.But the situation is rapidly changing. 
Open haptics communities are beingformed, where the goal is to share design contributions widely, discuss avenues forfurther progress, and eliminate several cases of “reinventing the wheel” in hard-ware development or perceptual studies. The haptic community is increasinglyrecognizing the importance of HCI and design. VibViz (Chapter 4) and Macaron[140] were nominated for best demo awards, in World Haptics 2015 and HapticSymposium 2016 resepctively, for their contribution to affect and design. Bothtools also received great attention from the haptic industry and academia. In par-ticular, Immersion Inc., the world’s largest haptic company, contacted our group toutilize the design ideas from VibViz in their internal tools. They were interested inproviding their designers with a unified interface for accessing their several hapticlibraries efficiently. Apple has recently integrated a high-fidelity voice coil hapticengine in their smartwatch, pioneering the change in future devices and suggestingexciting possibilities for engaging the crowds.Our lab has played a pioneering role in the above changes and specifically inthe areas of affective haptics and design. This thesis is an effort to further con-tribute to these areas. Specifically, here we tackled an unexplored area of haptics:end-user personalization. We provided a theoretical grounding for personalizationtools (facets and personalization mechanisms) and prototyped example interfaces(VibViz, and three tuning interfaces) to showcase tool design possibilities. Further,we pushed the boundaries of haptic evaluation, investigating crowdsourcing anduse of haptic experts. We hope our work sparks future research in haptic design,aesthetics, and personalization and ultimately contributes to fun, informative, andsatisfying haptic experiences for all individuals.188Bibliography[1] M. Abdur Rahman, A. Alkhaldi, J. Cha, and A. El Saddik. Adding hapticfeature to YouTube. In Proceedings of the International Conference onMultimedia (MM ’10), pages 1643–1646, New York, New York, USA, Oct.2010. ACM Press. ISBN 9781605589336. doi:10.1145/1873951.1874310.URL http://dl.acm.org/citation.cfm?id=1873951.1874310. → pages 126[2] Adobe Systems, Inc. Adobe photoshop. URLhttps://www.adobe.com/ca/products/photoshop.html. Accessed:2016-10-23. → pages 5, 147[3] M. Allen, J. Gluck, K. E. MacLean, and E. Tang. An initial usabilityassessment for symbolic haptic rendering of music parameters. InProceedings of 7th International Conference on Multimodal Interfaces(ICMI ’05), pages 244–251, Trento, Italy, 2005. → pages 124[4] A. L. Alter and D. M. Oppenheimer. Uniting the tribes of fluency to form ametacognitive nation. Personality and Social Psychology Review, 13(3):219–235, 2009. → pages 8, 26[5] Amazon.com Inc. Amazon Mechanical Turk Requester Best PracticesGuide, 2015. → pages 9, 125, 138[6] S. Andrews, J. Mora, J. Lang, and W. S. Lee. Hapticast: a physically-based3d game with haptic feedback. Proceedings of FuturePlay, 2006. → pages2[7] F. Arab, S. Paneels, M. Anastassova, S. Coeugnet, F. Le Morellec,A. Dommes, and A. Chevalier. Haptic patterns and older adults: To repeator not to repeat? In Proceedings of IEEE World Haptics Conference (WHC’15), pages 248–253. IEEE, June 2015. ISBN 978-1-4799-6624-0.doi:10.1109/WHC.2015.7177721. URLhttp://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7177721.→ pages 121, 124189[8] Artificial Muscles Inc. ViviTouch - ”Feel the game.”. http://vivitouch.com/.URL http://vivitouch.com/. Accessed: 2012-09-28. → pages 21, 25[9] S. J. 
Biggs and R. N. Hitchcock. Artificial muscle actuators for hapticdisplays: system design to match the dynamics and tactile sensitivity of thehuman fingerpad. In Proceedings of SPIE, volume 7642, 2010. URLhttp://spie.org/x648.html?product id=847741. → pages 25[10] J. O. Blom and A. F. Monk. Theory of personalization of appearance: Whyusers personalize their pcs and mobile phones. Journal ofHuman-Computer Interaction, 18(3):193–228, 2003. URLhttp://www.tandfonline.com/doi/abs/10.1207/S15327051HCI1803 1. →pages 2, 151[11] S. Brewster and L. M. Brown. Tactons: structured tactile messages fornon-visual information display. In Proceedings of the fifth conference onAustralasian user interface-Volume 28, pages 15–23. Australian ComputerSociety, Inc., 2004. → pages 59, 121[12] L. M. Brown and T. Kaaresoja. Feel who’s talking: using tactons formobile phone alerts. In CHI ’06 extended abstracts on Human Factors inComputing Systems (CHI EA ’06), page 604, New York, New York, USA,Apr. 2006. ACM Press. ISBN 1595932984.doi:10.1145/1125451.1125577. URLhttp://dl.acm.org/citation.cfm?id=1125451.1125577. → pages 126[13] L. M. Brown, S. A. Brewster, and H. C. Purchase. A first investigation intothe effectiveness of tactons. In Proceedings of World Haptics Conference(WHC’05), pages 167–176, 2005. URLhttp://ieeexplore.ieee.org/xpls/abs all.jsp?arnumber=1406930. → pages42[14] L. M. Brown, S. A. Brewster, and H. C. Purchase. Multidimensionaltactons for non-visual information presentation in mobile devices. InProceedings of the 8th Conference on Human-Computer Interaction withMobile Devices and Services (MobileHCI ’06), pages 231–238, New York,USA, Sept. 2006. ACM Press. ISBN 1595933905.doi:10.1145/1152215.1152265. URLhttp://dl.acm.org/citation.cfm?id=1152215.1152265. → pages 121, 124[15] L. M. Brown, S. A. Brewster, and H. C. Purchase. Tactile crescendos andsforzandos: Applying musical techniques to tactile icon design. In CHI’06190Extended Abstracts on Human factors in Computing Systems (CHI EA ’06),pages 610–615. ACM, 2006. → pages 6, 86, 152[16] L. Brunet, C. Megard, S. Paneels, G. Changeon, J. Lozada, M. P. Daniel,and F. Darses. Invitation to the voyage: The design of tactile metaphors tofulfill occasional travelers’ needs in transportation networks. In IEEEWorld Haptics Conference (WHC ’13),, pages 259–264, April 2013.doi:10.1109/WHC.2013.6548418. → pages 1, 85, 121, 124, 146[17] T. A. Busey, J. Tunnicliff, G. R. Loftus, and E. F. Loftus. Accounts of theconfidence-accuracy relation in recognition memory. Psychonomic Bulletinand Review, 7(1):26–48, 2000. URLhttp://www.springerlink.com/index/V247323112X74X2G.pdf. → pages 27[18] M. Cartwright and B. Pardo. VocalSketch: Vocally Imitating AudioConcepts. In Proceedings of the ACM SIGCHI Conference on HumanFactors in Computing Systems (CHI ’15), pages 43–46, New York, NewYork, USA, Apr. 2015. ACM Press. ISBN 9781450331456.doi:10.1145/2702123.2702387. URLhttp://dl.acm.org/citation.cfm?id=2702123.2702387. → pages 122, 125,128[19] A. Chan, K. MacLean, and J. McGrenere. Designing haptic icons tosupport collaborative turn-taking. International Journal ofHuman-Computer Studies (IJHCS), 66(5):333–355, 2008. → pages 78, 85,146[20] A. Chan, K. E. MacLean, and J. McGrenere. Designing haptic icons tosupport collaborative turn-taking. International Journal Human ComputerStudies (IJHCS), 66:333–355, 2008. → pages 1, 2, 124, 126, 128, 129[21] G. Changeon, D. Graeff, M. Anastassova, and J. Lozada. 
Appendix A

Supplemental Facet Analysis

Here, we present additional data and analysis on the four haptic facets presented in Chapter 5. These were included in the appendix section of the corresponding publication.¹

¹ To appear as: Seifi and MacLean (2017). Exploiting Haptic Facets: Users' Sensemaking Schemas as a Path to Design and Personalization of Experience. To appear in the International Journal of Human Computer Studies (IJHCS), special issue on Multisensory HCI.

A.1 List of Tags and Their Disagreement Values

In this section, we present the full list of tags collected for the four vibration facets along with their disagreement scores.

Table A.1: Sensation_f tags and disagreement scores

  Index  Tag                Disagreement score
  1      short              0.08
  2      smooth transition  0.09
  3      irregular          0.11
  4      pointy             0.11
  5      ramping up         0.12
  6      grainy             0.12
  7      long               0.13
  8      simple             0.17
  9      firm               0.17
  10     rough              0.17
  11     wavy               0.17
  12     continuous         0.17
  13     discontinuous      0.17
  14     bumpy              0.17
  15     dynamic            0.2
  16     regular            0.2
  17     spiky              0.21
  18     soft               0.22
  19     springy            0.22
  20     smooth             0.22
  21     ramping down       0.24
  22     complex            0.28
  23     flat               0.28
  24     ticklish           0.31

Table A.2: Emotion_f tags and disagreement scores

  Index  Tag                Disagreement score
  1      rhythmic           0.14
  2      attention-getting  0.16
  3      agitating          0.18
  4      unique             0.18
  5      energetic          0.18
  6      mechanical         0.19
  7      familiar           0.2
  8      surprising         0.21
  9      urgent             0.22
  10     natural            0.22
  11     strange            0.23
  12     predictable        0.24
  13     uncomfortable      0.25
  14     lively             0.25
  15     calm               0.26
  16     interesting        0.26
  17     annoying           0.27
  18     comfortable        0.27
  19     pleasant           0.31
  20     happy              0.31
  21     angry              0.32
  22     boring             0.32
  23     creepy             0.32
  24     sad                0.34
  25     fear               0.36
  26     funny              0.36

Table A.3: Metaphor_f tags and disagreement scores

  Index  Tag                   Disagreement score
  1      dancing               0.11
  2      pulsing               0.11
  3      getting close         0.11
  4      cymbal                0.11
  5      alarm                 0.15
  6      phone                 0.15
  7      morse code            0.16
  8      heart beat            0.17
  9      SOS                   0.18
  10     buzz                  0.18
  11     engine                0.19
  12     sliding               0.2
  13     tapping               0.21
  14     game                  0.22
  15     going away            0.22
  16     shaking               0.22
  17     a door closing        0.22
  18     stopping              0.22
  19     growl                 0.22
  20     frogs                 0.22
  21     poking                0.23
  22     coming or going       0.23
  23     beep                  0.24
  24     horn                  0.25
  25     jumping               0.25
  26     snoring               0.27
  27     riding                0.28
  28     clock                 0.28
  29     drums                 0.28
  30     breathing             0.3
  31     electric shock        0.3
  32     musical instruments   0.3
  33     nature                0.31
  34     bell                  0.31
  35     gun                   0.31
  36     pawing                0.31
  37     celebration           0.31
  38     walking               0.33
  39     echo                  0.33
  40     explosion             0.33
  41     chainsaw              0.33
  42     animal                0.34
  43     a spring              0.44
  44     footsteps             0.44
  45     a story               0.44

Table A.4: Usage_f tags and disagreement scores

  Index  Tag                        Disagreement score
  1      alarm                      0.21
  2      halfway                    0.21
  3      reminder                   0.22
  4      warning                    0.22
  5      running out of time        0.23
  6      confirmation               0.23
  7      speed up                   0.24
  8      overtime                   0.24
  9      slow down                  0.25
  10     interval/rep               0.25
  11     above intended threshold   0.26
  12     resume                     0.26
  13     one minute left            0.27
  14     finish                     0.27
  15     incoming message           0.28
  16     congratulations            0.28
  17     get ready                  0.3
  18     milestone                  0.3
  19     encouragement              0.3
  20     battery low                0.3
  21     pause                      0.3
  22     warm up                    0.31
  23     cool down                  0.31
  24     below intended threshold   0.33
  25     start                      0.36
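The disagreement scores in Tables A.1-A.4 are defined and analyzed in Chapter 5. Purely as an illustration of the kind of per-tag summary involved, the short Python sketch below computes one plausible disagreement value, the average chance that two randomly chosen annotators disagree on whether a tag applies to a vibration, from a hypothetical binary annotation matrix. The data layout, function name, and measure are assumptions for this sketch, not necessarily the formulation used in the thesis.

# Illustrative sketch only: one way to summarize per-tag disagreement from
# binary annotation data. The matrix layout and the pairwise-disagreement
# measure are assumptions for this example, not the thesis's definition.
import numpy as np

def tag_disagreement(annotations):
    # annotations: (n_participants, n_vibrations) array of 0/1 values, where
    # annotations[p, v] = 1 if participant p applied the tag to vibration v.
    # For each vibration, 2*p*(1-p) is the chance that two randomly drawn
    # participants disagree about the tag; we average that over vibrations.
    p = annotations.mean(axis=0)
    return float(np.mean(2.0 * p * (1.0 - p)))

# Hypothetical example: 10 participants annotating 5 vibrations with one tag.
rng = np.random.default_rng(0)
example = (rng.random((10, 5)) > 0.5).astype(int)
print(round(tag_disagreement(example), 2))

Under this toy measure, 0 would mean every participant agreed on every vibration, while 0.5 would mean annotators were split evenly.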
A.2 Tag Removal Summary

Table A.5 summarizes the percentage of tags removed by the lay users in the validation study.

Table A.5: Percentage of tags removed by normal users. Each row represents the percentage of tags that are removed by at least x people (x = 1 for the "≥ 1" label) in each facet (columns).

  Number of participants   Sensation_f (%)   Emotion_f (%)   Metaphor_f (%)   Usage_f (%)
  removing a tag
  ≥ 1                      79                88              87               92
  ≥ 2                      51                69              67               74
  ≥ 3                      27                46              43               53
  ≥ 4                      14                28              24               31
  ≥ 5                       8                14              10               15
  ≥ 6                       4                 6               3                7
  ≥ 7                       2                 2               1                2
  ≥ 8                       1                 0               0                0
  ≥ 9                       1                 0               0                0

A.3 Rating Correlations

The following table summarizes results of the Pearson correlation on the five rating scales.

Table A.6: Results of Pearson correlation on the five rating scales. The correlation is applied on all valid participants' ratings for the 120 vibrations.

                    Energy_d   Tempo_d   Roughness_d   Pleasantness_d   Arousal_d
  Energy_d           1.00       0.48      0.74         -0.46             0.92
  Tempo_d            0.48       1.00      0.52         -0.22             0.56
  Roughness_d        0.74       0.52      1.00         -0.61             0.79
  Pleasantness_d    -0.46      -0.22     -0.61          1.00            -0.53
  Arousal_d          0.92       0.56      0.79         -0.53             1.00
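A correlation matrix like Table A.6 can be reproduced, in principle, from any table of per-observation ratings on the five scales. The minimal sketch below assumes a simple (observations x scales) array layout and uses hypothetical random data rather than the study's actual ratings.

# Illustrative sketch: Pearson correlations among five rating scales.
# The array layout, scale order, and random data are assumptions for
# illustration; the reported analysis used all valid participants'
# ratings of the 120 vibrations.
import numpy as np

scales = ["energy", "tempo", "roughness", "pleasantness", "arousal"]

def rating_correlations(ratings):
    # ratings: (n_observations, 5) array; each row holds one participant's
    # ratings of one vibration on the five scales, in the order above.
    return np.corrcoef(ratings, rowvar=False)   # 5 x 5 Pearson matrix

rng = np.random.default_rng(1)
fake_ratings = rng.normal(size=(200, 5))        # hypothetical data only
print(np.round(rating_correlations(fake_ratings), 2))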
A.4 Multidimensional Scaling Graphs on Tag Distances

Figures A.1-A.4 depict results of our MDS analysis on tag distances in the four vibration facets.

Figure A.1: Spatial configuration of the tags for the sensation_f facet confirms the four identified dimensions in Chapter 5. Specifically, contrasting tags according to each dimension are well-separated, and semantically related tags are close together along each dimension. (a) Dimension 1 (simple/complex) vs. Dimension 2 (discontinuous/continuous); (b) Dimension 3 (short/long) vs. Dimension 4 (rough/smooth).

Figure A.2: Spatial configuration of the tags for the emotion_f facet confirms the three identified dimensions in Chapter 5 and supports convergent and discriminant validity. (a) Dimension 1 (comfortable/agitating) vs. Dimension 2 (boring/lively); (b) Dimension 1 (comfortable/agitating) vs. Dimension 3 (strange/predictable).

Figure A.3: Spatial configuration of tags for the metaphor_f facet, Dimension 1 (on-off/ongoing_d) vs. Dimension 2 (natural/mechanical_d). Semantically related tags are close along a dimension (e.g., drums, celebration, alarm) and contrasting tags are far from each other (e.g., heartbeat vs. engine or alarm). This definition partially explains a few tags, such as clock (among the natural, calm sensations) and snoring (with mechanical, annoying, and ongoing tags).

Figure A.4: Spatial configuration of tags for the usage_f facet, Dimension 1 (urgent/awareness notifications); Dimension 2 is not used in our analysis. Along Dimension 1, tags have increasing urgency and attention demand from left to right, supporting convergent and discriminant validity for the semantics of the dimension.

A.5 Individual Differences in Vibrations

The following figures present disagreement scores calculated for the 120 vibrations in the VibViz library.

Figure A.5: Vibration disagreement scores for the five rating scales and the four facets. High color saturation denotes high disagreement scores (part A: vibrations 1-60 in the VibViz library).

Figure A.6: Vibration disagreement scores for the five rating scales and the four facets (part B: vibrations 60-120).

A.6 Between-Facet Tag Linkages

In this section, we present tag co-occurrence values between the sensation_f facet and the emotion_f, metaphor_f, or usage_f facets.

Figure A.7: Co-occurrence of sensation_f and emotion_f tags.

Figure A.8: Co-occurrence of sensation_f and metaphor_f tags.

Figure A.9: Co-occurrence of sensation_f and usage_f tags.
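The co-occurrence values visualized in Figures A.7-A.9 count, for each pair of tags drawn from two facets, how often both tags were assigned to the same vibration. Below is a minimal sketch of that counting step, assuming a simple dictionary layout for per-vibration tag sets; the variable names and the three-item toy library are hypothetical, and the thesis's full procedure is described in Chapter 5.

# Illustrative sketch: counting between-facet tag co-occurrences.
# The data layout and the toy library are assumptions for this example.
from collections import defaultdict

def cooccurrence_counts(vibration_tags, facet_a, facet_b):
    # vibration_tags maps a vibration id to {facet name: set of tags}.
    # Returns counts[(tag_a, tag_b)] = number of vibrations labeled with both.
    counts = defaultdict(int)
    for tags in vibration_tags.values():
        for a in tags.get(facet_a, set()):
            for b in tags.get(facet_b, set()):
                counts[(a, b)] += 1
    return dict(counts)

# Hypothetical three-vibration library with sensation and emotion tags.
library = {
    "vib01": {"sensation": {"short", "pointy"}, "emotion": {"urgent"}},
    "vib02": {"sensation": {"short"},           "emotion": {"calm", "pleasant"}},
    "vib03": {"sensation": {"long", "smooth"},  "emotion": {"calm"}},
}
print(cooccurrence_counts(library, "sensation", "emotion"))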
Appendix B

Consent Forms

The following consent forms were approved by UBC's ethics board for our user studies.

PARTICIPANT'S COPY
CONSENT FORM (Version 1.0 / August 04, 2015 / Page 1 of 2)

Department of Computer Science
2366 Main Mall
Vancouver, B.C. Canada V6T 1Z4
tel:   fax:

Project Title: Designing Affective Vibrotactile Stimuli

Principal Investigator: Karon MacLean, Professor, Dept. of Computer Science
Co-Investigators: Hasti Seifi, Graduate Student, Dept. of Computer Science; Oliver Schneider, Ph.D. Student, Dept. of Computer Science; Salma Kashani, MSc., Dept. of Electrical and Computer Engineering; Matthew Chun, BSc., Dept. of Computer Science

The purpose of this project is to investigate how people design and describe vibration patterns with affective or aesthetic attributes for a handheld or wristband device. In this study, you will be invited to interact with one or more haptic devices, such as the vibrations found in smartphones or a wristband, or to attend to a set of visual or auditory notifications, and to perform tasks such as grouping or describing them based on some criteria. We may also ask you to interact with a tool for controlling these haptic devices, to create or modify vibrations using the tool(s), to describe your process to us, and to discuss your preferences and likings for the patterns you created as well as for the design tools you used. You will also be asked to provide general demographic information (e.g., your age), previous design activities, and familiarity with tactile feedback. You may be asked to wear headphones to mask external noises. Please tell the experimenter if you find the auditory level in the headphones uncomfortable, and it will be adjusted. If you are not sure about any instructions, do not hesitate to ask. Your responses will be audio recorded.

REIMBURSEMENT: $10
TIME COMMITMENT: 1 × 60 minute session
CONFIDENTIALITY: You will not be identified by name in any study reports. Data gathered from this experiment will be stored in a secure Computer Science account accessible only to the experimenters.

You understand that the experimenters will ANSWER ANY QUESTIONS you have about the instructions or the procedures of this study. After participating, the experimenter will answer any other questions you have about this study. Your participation in this study is entirely voluntary and you may refuse to participate or withdraw from the study at any time without jeopardy. Your signature below indicates that you have received a copy of this consent form for your own records, and consent to participate in this study. If you have any concerns or complaints about your rights as a research participant and/or your experiences while participating in this study, contact the Research Participant Complaint Line in the UBC Office of Research Ethics at 604-822-8598 or, if long distance, e-mail RSIL@ors.ubc.ca or call toll free 1-877-822-8598.

RESEARCHER'S COPY
CONSENT FORM (Version 1.0 / August 04, 2015 / Page 2 of 2)

Department of Computer Science
2366 Main Mall
Vancouver, B.C. Canada V6T 1Z4
tel:   fax:

Project Title: Designing Affective Vibrotactile Stimuli

Principal Investigator: Karon MacLean, Professor, Dept. of Computer Science
Co-Investigators: Hasti Seifi, Graduate Student, Dept. of Computer Science; Oliver Schneider, Ph.D. Student, Dept. of Computer Science; Salma Kashani, MSc., Dept. of Electrical and Computer Engineering; Matthew Chun, BSc., Dept. of Computer Science

The purpose of this project is to investigate how people design vibration patterns with affective or aesthetic attributes for a handheld or a wristband device. In this study, you will be invited to interact with one or more haptic devices, such as the vibrations found in smartphones or a wristband, and to perform tasks such as grouping or describing haptic sensations. We may also ask you to interact with a tool for controlling these haptic devices, to create or modify vibrations using the tool(s), to describe your process to us, and to discuss your preferences and likings for the patterns you created as well as for the design tools you used. You will also be asked to provide general demographic information (e.g., your age), previous design activities, and familiarity with tactile feedback. You may be asked to wear headphones to mask external noises. Please tell the experimenter if you find the auditory level in the headphones uncomfortable, and it will be adjusted. If you are not sure about any instructions, do not hesitate to ask. Your responses will be audio recorded.

REIMBURSEMENT: $10
TIME COMMITMENT: 1 × 60 minute session
CONFIDENTIALITY: You will not be identified by name in any study reports. Data gathered from this experiment will be stored in a secure Computer Science account accessible only to the experimenters.

You understand that the experimenters will ANSWER ANY QUESTIONS you have about the instructions or the procedures of this study. After participating, the experimenter will answer any other questions you have about this study. Your participation in this study is entirely voluntary and you may refuse to participate or withdraw from the study at any time without jeopardy. Your signature below indicates that you have received a copy of this consent form for your own records, and consent to participate in this study. If you have any concerns or complaints about your rights as a research participant and/or your experiences while participating in this study, contact the Research Participant Complaint Line in the UBC Office of Research Ethics at 604-822-8598 or, if long distance, e-mail RSIL@ors.ubc.ca or call toll free 1-877-822-8598.

You hereby CONSENT to participate and acknowledge RECEIPT of a copy of the consent form:
PRINTED NAME ________________________________   DATE ____________________________
SIGNATURE ____________________________________
STUDY CONSENT FORM (Version 1.0 / August 04, 2015 / Page 1 of 1)

Department of Computer Science
2366 Main Mall
Vancouver, B.C. Canada V6T 1Z4
tel:   fax:

Project Title: Crowdsourcing haptic design and evaluation (UBC Ethics #H13-01646)

Principal Investigator: Karon MacLean, Professor, Dept. of Computer Science
Co-Investigators: Hasti Seifi, Ph.D. Student, Dept. of Computer Science; Oliver Schneider, Ph.D. Student, Dept. of Computer Science; Salma Kashani, MSc., Dept. of Electrical and Computer Engineering; Matthew Chun, BSc. Student, Dept. of Computer Science

The purpose of this study is to understand the context and usage scenarios for everyday applications such as tracking a workout or timing a public talk. Further, the study seeks to investigate characteristics of desirable software notifications in those scenarios. During the experiment, we will provide you with an imaginary everyday application or usage scenario and ask you to indicate the kinds of notifications you would like to receive from a software tool (e.g., a cellphone or smartwatch application). We may ask you to structure or describe the notifications in a specific way (e.g., using metaphors, drawing). We may also ask you to attend to a set of visual, auditory, or tactile (e.g., vibration) notifications and structure, modify, or describe the notifications based on some given criteria.

REIMBURSEMENT: $2.25 ($4.5/hour)
TIME COMMITMENT: 30 minutes
CONFIDENTIALITY: You will not be identified by name in any study reports. Any identifiable data gathered from this experiment will be stored in a secure Computer Science account accessible only to the experimenters.

If you have ANY QUESTIONS about the instructions or the procedures of this study, feel free to contact or . Your participation in this study is entirely voluntary and you may refuse to participate or withdraw from the study at any time without jeopardy. Checking the box below indicates that you are more than 19 years old and that you consent to participate in this study.

If you have any concerns about your treatment or rights as a research participant, you may contact the Research Subject Info Line in the UBC Office of Research Services at 604-822-8598.
