Haptic experience design : tools, techniques, and process Schneider, Oliver Stirling 2016


Haptic Experience Design: Tools, Techniques, and Process

by

Oliver Stirling Schneider

B.Sc. Honours, University of Saskatchewan, 2010
M.Sc., The University of British Columbia, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Doctor of Philosophy
in
THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Computer Science)

The University of British Columbia (Vancouver)

December 2016

© Oliver Stirling Schneider, 2016

Abstract

Haptic technology, which engages the sense of touch, offers promising benefits for a variety of interactions including low-attention displays, emotionally-aware interfaces, and augmented media experiences. Despite an increasing presence of physical devices in commercial and research applications, there is still little support for the design of engaging haptic sensations. Previous literature has focused on the significant challenges of technological capabilities or physical realism rather than on supporting experience design.

In this dissertation, we study how to design, build, and evaluate interactive software to support haptic experience design (HaXD). We define HaXD and iteratively design three vibrotactile effect authoring tools, each a case study covering a different user population, vibrotactile device, and design challenge, and use them to observe specific aspects of HaXD with their target users. We make these in-depth findings more robust in two ways: generalizing results to a breadth of use cases with focused design projects, and grounding them with expert haptic designers through interviews and a workshop. Our findings 1) describe HaXD, including processes, strategies, and challenges; and 2) present guidelines on designing, building, and evaluating interactive software that facilitates HaXD.

When characterizing HaXD processes, strategies, and challenges, we show that experience design is already practiced with haptic technology, but faces unique considerations compared to other modalities. We identify four design activities that must be explicitly supported: sketching, refining, browsing, and sharing. We find and develop strategies to accommodate the wide variety of haptic devices. We articulate approaches for designing meaning with haptic experiences, and finally, highlight a need for supporting adaptable interfaces.

When informing the design, implementation, and evaluation of HaXD tools, we discover critical features, including a need for improved online deployment and community support. We present steps to develop both existing and future research software into a mature suite of HaXD tools, and reflect upon evaluation methods. By characterizing HaXD and informing supportive tools, we make a first step towards establishing HaXD as its own field, akin to graphic and sound design.

Preface

No creative work owes to a lone individual; this dissertation is no exception. All of the projects described in this work are collaborative efforts in at least some capacity. Even where the author contributed all work, there was often informal feedback from friends, family, and colleagues. As such, this dissertation will use the first-person plural, “we”, throughout. In this preface, we clarify the author’s contribution to the work, much of which has been published.

In Chapters 1, 2, and 9, Oliver contributed writing and framing, with feedback provided by the supervisor (Dr. Karon MacLean) and supervisory committee (Drs. Ronald Garcia and Michiel van de Panne) throughout his PhD program.
Someof this thinking (Figure 1.1, the sketch/refine/browse/share design activities, andsome of Chapter 9) is combined with a handbook chapter currently in press, writ-ten with Dr. MacLean as lead author, and Oliver and PhD candidate Hasti Seifi asco-authors. This chapter is aimed as an advanced (i.e., graduate or senior under-graduate) educational resource incorporating Oliver and Hasti’s research.In Chapter 3, Oliver contributed all work and ideas, with feedback and guid-ance from Dr. MacLean. The software has been released as an open-source projectat https://github.com/ubcspin/mHIVE. This work has been published as full confer-ence paper with an associated demo at HAPTICS’14, and at a workshop at CHI’14:Schneider and MacLean. (2014) Improvising Design with a HapticInstrument. Proceedings of Haptics Symposium – HAPTICS ’14.Schneider and MacLean. (2014) mHIVE: A WYFIWIF design tool.Proceedings of Haptics Symposium – HAPTICS ’14.ivSchneider and MacLean. (2014) Reflections on a WYFIWIF DesignTool. Proceedings of the SIGCHI Conference on Human Factors inComputing Systems – CHI ’14.In Chapter 4, Oliver contributed most work and ideas, with initial interviews withdesigners and haptic experts conducted by Disney Research. This work was con-ducted while on internship at Disney Research Pittsburgh, with some supplemen-tary work done at UBC. Dr. Ali Israr supervised Oliver’s internship; Oliver ledwriting with feedback and guidance from Drs. Israr and MacLean. This work waspresented by Oliver at UIST’15 with an associated demo:Schneider, Israr, and MacLean. (2015) Tactile Animation by DirectManipulation of Grid Displays. Proceedings of the Annual Sympo-sium on User Interface Software and Technology – UIST ’15.Schneider, Israr, and MacLean. (2015) Tactile Animation by DirectManipulation of Grid Displays. Proceedings of the Annual Sympo-sium on User Interface Software and Technology – UIST ’15 Demos.In Chapter 5, Oliver contributed all work and ideas, with feedback and guidancefrom Dr. MacLean. Macaron has been released as an open-source project athttps://github.com/ubcspin/Macaron and is available online at http://hapticdesign.github.io/macaron. Subsequent development of the core Macaron tool and exten-sion MacaronMix includes work by Matthew Chun, Benson Li, Ben Clark, andPaul Bucci. The study reported in Chapter 5 was presented by Oliver at HAP-TICS’16 with an associated demo:Schneider and MacLean. (2016) Studying Design Process and Exam-ple Use with Macaron, a Web-based Vibrotactile Effect Editor. Pro-ceedings of Haptics Symposium – HAPTICS ’16.Schneider and MacLean. (2016) Macaron: An Online, Open-Source,Haptic Editor. Proceedings of Haptics Symposium – HAPTICS ’16.In Chapter 6, Oliver was part of a collaborative team together with Hasti Seifi, un-dergraduate summer student Matthew Chun, and master’s student Salma Kashani,vall supervised by Dr. MacLean. Oliver and Hasti planned and managed the project,with Matthew and Salma doing proxy design, study design, and data collection forlow-fidelity proxies and visual proxies respectively. Oliver led paper writing andquantitative analysis, working closely with the other authors, and presented thework at CHI’16:Schneider, Seifi, Kashani, Chun, and MacLean. (2016) HapTurk:Crowdsourcing Affective Ratings for Vibrotactile Icons. Proceedingsof the SIGCHI Conference on Human Factors in Computing Systems– CHI ’16.Chapter 7 describes several focused projects to give this dissertation improvedbreadth. 
Oliver played different roles depending on the project.Section 7.1, FeelCraft Oliver worked closely with Siyan Zhao, supervised by Dr.Israr at Disney Research Pittsburgh. Oliver implemented the rendering sys-tem (co-developed with the engine described in Chapter 4), developed theMineCraft plugin and connection architecture, and wrote the AsiaHapticspaper (archived in LNEE 277) with feedback from Ali Israr. Artistic contri-butions to the video were made by Kyna McIntosh and Madeleine Varner.Oliver and Siyan together designed the implemented feel effects (Oliver ledimplementation), planned, shot, and edited the video submissions (Siyan ledediting); each presented the demo once (Oliver at AsiaHaptics 2014, Siyanat UIST 2014):Schneider, Zhao, and Israr. (2015) FeelCraft: User-Crafted Tac-tile Content. Lecture Notes in Electrical Engineering 277: HapticInteraction.Zhao, Schneider, Klatzky, Lehman, and Israr. (2014) FeelCraft:Crafting Tactile Experiences for Media using a Feel Effect Li-brary. Proceedings of the Annual Symposium on User InterfaceSoftware and Technology – UIST ’14 Demos.Section 7.2, Feel Messenger Oliver worked closely with Siyan Zhao and Dr. Is-rar. All three developed the concept. Siyan led poster design and assistedviwith figures. Dr. Israr led writing assisted by Oliver and Siyan, and pre-sented this work at CHI’15. Oliver designed and implemented the Feel Mes-senger application, conducted part of the preliminary study, and led the demosubmission and presentation at World Haptics 2015:Israr, Zhao, and Schneider. (2015) Exploring Embedded Hap-tics for Social Networking and Interactions. Proceedings of theSIGCHI Conference on Human Factors in Computing Systems –CHI EA ’15.Schneider, Zhao, and Israr. (2015) Feel Messenger: EmbeddedHaptics for Social Networking. World Haptics ’15 Demos.Section 7.3, RoughSketch Oliver was the senior graduate student on a four-personstudent team including Paul Bucci, Gordon Minaker, and Brenna Li. All fourcontributed ideas and haptic designs and iteratively developed the final sub-mission. Oliver provided mentorship and leadership. Paul led graphic de-sign efforts; Gordon and Brenna presented the work at World Haptics 2015.RoughSketch won first place among 10 finalists.Section 7.4, HandsOn Oliver helped supervise Gordon Minaker during a summerNSERC placement and directed studies, with Dr. MacLean supervising andStanford Education PhD student Richard Davis collaborating. This workwas part of a larger collaborative effort including Melisa Orta Martinez, Dr.Allison Okamura, and Dr. Paulo Blikstein from Stanford University. Gor-don led the system design and implementation, study design, facilitation, andanalysis, and paper writing and submission. Oliver helped supervise Gordonthroughout this process, assisted and supervised by Dr. MacLean. Richardhelped plan the study, implement software, write the paper, and provide in-sights for study implementation. All three assisted with poster design. Dr.MacLean presented and demonstrated the work at EuroHaptics 2016, whereit won Best Poster award; the system was also included in a demo presentedby Melisa at HAPTICS’16:viiMinaker, Schneider, Davis, and MacLean. (2016) HandsOn: En-abling Embodied, Creative STEM e-learning with Programming-Free Force Feedback. EuroHaptics ’16.Orta Martinez, Minaker Gordon, Davis, Schneider, Morimoto,Taylor, Barron, MacLean, Blikstein, and Okamura. 
(2016) Hand-sOn with Hapkit 3.0: a creative STEM e-learning framework.Proceedings of Haptics Symposium – HAPTICS ’16.Section 7.5, CuddleBit Design Tools Oliver collaborated closely with undergrad-uate David Marino and master’s student Paul Bucci, supervised by Dr. MacLeanand with support from Hasti Seifi. Oliver supervised David through his di-rected studies project, and helped worked with Paul and David in developingand designing the Voodle system. Oliver worked with Paul Bucci to extendMacaron into MacaronBit and contributed writing to a demo presented atEuroHaptics 2016 by Dr. MacLean:Bucci, Cang, Chun, Marino, Schneider, Seifi, and MacLean. (2016)CuddleBits: an iterative prototyping platform for complex hapticdisplay. EuroHaptics ’16 Demos.In Chapter 8, UBC alumnus Dr. Colin Swindells conducted interviews and devel-oped interview notes and initial analysis ideas in 2012, supervised by Dr. MacLeanand Dr. Kellogg Booth. In 2015-2016, Oliver transcribed and analyzed the col-lected interviews, organized and analyzed the HaXD’15 workshop (haptics2015.org/program/index.html#WorkshopsAndTutorials) with guidance from Dr. MacLean,and led writing of a manuscript. Drs. MacLean and Booth contributed to writing;Dr. Swindells provided feedback.Schneider and Maclean. (2015) HaXD’15: Workshop on Haptic Ex-perience Design. Proceedings of World Haptics Conference – WHC’15.Because much of this work has been peer-reviewed, we reproduce published papersas chapters in this dissertation. Chapters 3-6 and 8 each include a newly-writtenviiipreface to introduce the work, then includes the corresponding paper with only mi-nor formatting modifications. In this way, we preserve the original argumentationof each published work while connecting it to this dissertation’s overall goals andfindings.ixTable of ContentsAbstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iiPreface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ivTable of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xList of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviiList of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xixAcknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvDedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.1 Haptic Experience Design (HaXD) . . . . . . . . . . . . . . . . . 21.2 Why is HaXD Hard? . . . . . . . . . . . . . . . . . . . . . . . . 31.3 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41.3.1 Depth: Vibrotactile Design Tool Case Studies (Chapters 3-6) 41.3.2 Breadth: Focused Haptic Design Projects (Chapter 7) . . . 51.3.3 Ground: Data from Haptic Experience Designers (Chapter8) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51.4 Contributions and Outline . . . . . . . . . . . . . . . . . . . . . . 52 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82.1 An Overview of Haptics . . . . . . . . . . . . . . . . . . . . . . 8x2.1.1 Tactile Perception and Technology . . . . . . . . . . . . . 92.1.2 Proprioceptive Perception and Technology . . . . . . . . 112.1.3 Haptic Illusions . . . . . . . . . . . . . . . . . . . . . . . 122.2 The Value of Haptic Experiences . . . . . . . . . . . . . . . . . . 132.2.1 Why Touch? . . . . . . . . . . . . . . . . . . . . . . . . 132.2.2 Applications . . . . . . . . . . . . 
. . . . . . . . . . . . 142.3 Non-Haptic Design and Creativity Support . . . . . . . . . . . . . 182.3.1 Problem Preparation . . . . . . . . . . . . . . . . . . . . 192.3.2 Hands-On Design . . . . . . . . . . . . . . . . . . . . . . 192.3.3 Collaboration . . . . . . . . . . . . . . . . . . . . . . . . 202.3.4 Design Research . . . . . . . . . . . . . . . . . . . . . . 202.4 Previous Efforts for Haptic Experience Design . . . . . . . . . . . 222.4.1 Interactive Software Tools . . . . . . . . . . . . . . . . . 222.4.2 Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . 232.4.3 Conceptual Tools . . . . . . . . . . . . . . . . . . . . . . 242.5 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 Sketch: The Haptic Instrument . . . . . . . . . . . . . . . . . . . . . 293.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303.3 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323.3.1 Musical Metaphors in Haptic Design . . . . . . . . . . . 323.3.2 Other Haptic Design Approaches . . . . . . . . . . . . . 333.3.3 Haptic Language . . . . . . . . . . . . . . . . . . . . . . 333.4 Defining the Haptic Instrument . . . . . . . . . . . . . . . . . . . 343.4.1 Design Dimensions . . . . . . . . . . . . . . . . . . . . . 343.5 mHIVE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363.6 Preliminary Study Methodology . . . . . . . . . . . . . . . . . . 383.6.1 Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 393.7 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393.7.1 mHIVE Succeeds as a Haptic Instrument . . . . . . . . . 403.7.2 Tweaking through Visualization and Modification . . . . . 413.7.3 A Difficult Language . . . . . . . . . . . . . . . . . . . . 42xi3.8 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433.8.1 Design Tools . . . . . . . . . . . . . . . . . . . . . . . . 433.8.2 Language . . . . . . . . . . . . . . . . . . . . . . . . . . 443.8.3 Methodology and Limitations . . . . . . . . . . . . . . . 453.9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454 Refine: Tactile Animation . . . . . . . . . . . . . . . . . . . . . . . . 464.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474.3 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484.3.1 Haptic Entertainment Technologies . . . . . . . . . . . . 484.3.2 Haptic Authoring Tools . . . . . . . . . . . . . . . . . . . 494.4 Tactile Animation Authoring Tool . . . . . . . . . . . . . . . . . 524.4.1 Gathering Design Requirements . . . . . . . . . . . . . . 524.4.2 Framework for Tactile Animation . . . . . . . . . . . . . 544.4.3 Authoring Interface . . . . . . . . . . . . . . . . . . . . . 564.5 Rendering Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 584.5.1 Perceptual Selection of Interpolation Models . . . . . . . 594.5.2 Pairwise Comparison Study . . . . . . . . . . . . . . . . 604.6 Design Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 624.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654.7.1 Design Evaluation Summary . . . . . . . . . . . . . . . . 654.7.2 Possible Extension to Other Device Classes . . . . . . . . 664.7.3 Interactive Applications . . . . . . . . . . . . . . . . . . 674.7.4 Limitations . . . . . . . . . . . . . . . . . . 
. . . . . . . 684.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685 Browse: Macaron . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715.3 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735.3.1 Salient Factors in VT Effect Perception and Control . . . 735.3.2 Past Approaches to VT Design . . . . . . . . . . . . . . . 73xii5.3.3 Examples in Non-Haptic Design . . . . . . . . . . . . . . 745.4 Apparatus Design . . . . . . . . . . . . . . . . . . . . . . . . . . 745.5 Study Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 765.6 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775.6.1 Archetypal Design Process . . . . . . . . . . . . . . . . . 785.6.2 Micro Interaction Patterns Enabled by Tool . . . . . . . . 785.6.3 Example Use . . . . . . . . . . . . . . . . . . . . . . . . 825.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845.7.1 Implications for Design . . . . . . . . . . . . . . . . . . . 845.7.2 Limitations & Future work . . . . . . . . . . . . . . . . . 865.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886 Share: HapTurk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 896.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906.3 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 926.3.1 Existing Evaluation Methods for VT Effects . . . . . . . . 926.3.2 Affective Haptics . . . . . . . . . . . . . . . . . . . . . . 936.3.3 Mechanical Turk (MTurk) . . . . . . . . . . . . . . . . . 946.4 Sourcing Reference Vibrations and Qualities . . . . . . . . . . . . 956.4.1 High-Fidelity Reference Library . . . . . . . . . . . . . . 966.4.2 Affective Properties and Rating Scales . . . . . . . . . . . 966.5 Proxy Choice and Design . . . . . . . . . . . . . . . . . . . . . . 976.5.1 Visualization Design (VISDIR and VISEMPH) . . . . . . . 996.5.2 Low Fidelity Vibration Design . . . . . . . . . . . . . . . 996.6 Study 1: In-lab Proxy Vibration Validation (G1) . . . . . . . . . . 1016.6.1 Comparison Metric: Equivalence Threshold . . . . . . . . 1026.6.2 Proxy Validation (Study 1) Results and Discussion . . . . 1026.7 Study 2: Deployment Validation with MTurk (G2) . . . . . . . . . 1076.7.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086.8 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086.8.1 Proxy Modalities are Viable for Crowdsourcing (G1,G2:Feasibility) . . . . . . . . . . . . . . . . . . . . . . . . . 108xiii6.8.2 Triangulation (G3: Promising Directions/Proxies) . . . . . 1096.8.3 Animate Visualizations (G3: Promising Directions) . . . . 1096.8.4 Sound Could Represent Energy (G3: Promising Directions) 1106.8.5 Device Dependency and Need for Energy Model for Vi-brations (G4: Challenges) . . . . . . . . . . . . . . . . . 1106.8.6 VT Affective Ratings are Generally Noisy (G4: Challenges) 1116.8.7 Response & Data Quality for MTurk LOFIVIB Vibrations(G4: Challenges) . . . . . . . . . . . . . . . . . . . . . . 1116.8.8 Automatic Translation (G4: Challenges) . . . . . . . . . . 1116.8.9 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . 1126.9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1137 Breadth: Focused Design Projects . . . . . . . . . . . . . . . . . . . 1147.1 FeelCraft: Sharing Customized Effects for Games . . . . . . . . . 1167.1.1 FeelCraft Plugin and Architecture . . . . . . . . . . . . . 1167.1.2 Application Ecosystem . . . . . . . . . . . . . . . . . . . 1197.2 Feel Messenger: Expressive Effects with Commodity Systems . . 1207.2.1 Feel Messenger Application . . . . . . . . . . . . . . . . 1217.2.2 Haptic Vocabulary . . . . . . . . . . . . . . . . . . . . . 1237.2.3 Demo . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1257.3 RoughSketch: Designing for an Alternative Modality . . . . . . . 1267.4 HandsOn: Designing Force-Feedback for Education . . . . . . . . 1277.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 1307.4.2 Tool Development: Conceptual Model and Interface . . . 1327.4.3 Study 1: Perceptual Transparency . . . . . . . . . . . . . 1347.4.4 Study 2: Tool Usability and Educational Insights . . . . . 1367.4.5 Study 2 Discussion . . . . . . . . . . . . . . . . . . . . . 1407.4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 1427.5 CuddleBit Design Tools: Sketching and Refining Affective RobotBehaviours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427.6 Takeaways from Focused Design Projects . . . . . . . . . . . . . 145xiv8 Haptic Experience Design . . . . . . . . . . . . . . . . . . . . . . . . 1468.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478.2.1 Haptic Experience Design (HaXD) . . . . . . . . . . . . 1488.2.2 Obstacles to Design . . . . . . . . . . . . . . . . . . . . . 1498.2.3 Target Audience . . . . . . . . . . . . . . . . . . . . . . 1508.2.4 Roadmap for the Reader . . . . . . . . . . . . . . . . . . 1518.3 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1518.3.1 Design Thinking as a Unifying Framework . . . . . . . . 1528.3.2 Haptic Perception and Technology . . . . . . . . . . . . . 1538.3.3 Efforts to Establish HaXD as a Distinct Field of Design . . 1558.4 Part I: Interviews with Hapticians about HaXD in the Wild . . . . 1578.4.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 1578.4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 1608.5 Part II: Validating the Findings in a Follow-Up Workshop . . . . . 1788.5.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 1798.5.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 1808.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1838.6.1 Activities of Haptic Design . . . . . . . . . . . . . . . . . 1848.6.2 Challenges for Haptic Experience Design . . . . . . . . . 1858.6.3 Recommendations for Haptic Experience Design . . . . . 1878.6.4 Future of Haptic Design . . . . . . . . . . . . . . . . . . 1908.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1919 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1939.1 Summary of Research Findings . . . . . . . . . . . . . . . . . . . 1939.1.1 Depth: Vibrotactile Design Tool Case Studies . . . . . . . 1939.1.2 Breadth: Focused Haptic Design Projects . . . . . . . . . 1969.1.3 Ground: Data from Hapticians . . . . . . . . . . . . . . . 1979.2 HaXD Process: Requirements for Tools . . . . . . . . . . . . . . 1979.2.1 Contextual Activities of Design: Sketch, Refine, Browse,Share . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
1989.2.2 Generalizing Devices and Sensations . . . . . . . . . . . 202xv9.2.3 Framing and Meaning . . . . . . . . . . . . . . . . . . . 2049.3 HaXD Tools: Designing and Implementing . . . . . . . . . . . . 2059.3.1 Communities and Online Deployment . . . . . . . . . . . 2059.3.2 Towards a Mature HaXD Tool Suite . . . . . . . . . . . . 2079.3.3 Notes on Implementing HaXD Tools . . . . . . . . . . . 2079.3.4 Evaluating HaXD Tools . . . . . . . . . . . . . . . . . . 2099.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2109.5 In Closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212A Supporting Materials . . . . . . . . . . . . . . . . . . . . . . . . . . 251A.1 Study Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251A.1.1 Primary Study Forms . . . . . . . . . . . . . . . . . . . . 251A.1.2 Tactile Animation Forms . . . . . . . . . . . . . . . . . . 258A.1.3 HapTurk Forms . . . . . . . . . . . . . . . . . . . . . . . 262A.1.4 HandsOn Forms . . . . . . . . . . . . . . . . . . . . . . 268A.1.5 Haptician Interview Form . . . . . . . . . . . . . . . . . 274A.2 Examples of Qualitative Analysis Methods . . . . . . . . . . . . . 279xviList of TablesTable 4.1 Literature Requirements (LRs) for a tactile animation authoring. 50Table 5.1 Macaron tool alternatives, varied on dimensions of internal vis-ibility and element incorporability. . . . . . . . . . . . . . . . 76Table 5.2 Steps in observed archetypal design process. . . . . . . . . . . 79Table 5.3 Strategies used by participants to directly use examples as astarting point. Ignore and Inspire did not start with copy/paste;Template, Adjust, and Select did, with varying amounts of edit-ing afterwards. When copy/paste was not available, manual re-creation was used as a stand-in. . . . . . . . . . . . . . . . . . 83Table 7.1 Learning tasks used with SpringSim in Study 2. Bloom level isa measure of learning goal sophistication [15] . . . . . . . . . 137Table 8.1 Sub-theme summaries for the Holistic Haptic Experiences (Ex1)theme. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162Table 8.2 Sub-theme summaries for the Collaboration (Co) theme. . . . . 167Table 8.3 Internal roles, the various descriptors used to label them, anddescriptions. Roles were grouped and named by the authorsbased on participant-provided descriptors. . . . . . . . . . . . 167Table 8.4 External roles, the various descriptors used to label them, anddescriptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 170Table 8.5 Sub-theme summaries for the Cultural Context (CC) theme. . . 173xviiTable 9.1 Four design activities that are supported in other fields of de-sign, but need explicit support in HaXD. . . . . . . . . . . . . 198xviiiList of FiguresFigure 1.1 The classic design funnel, adapted from Buxton [30]. Multipleinitial ideas are iteratively developed into final concepts. Weadd four design activities that occur across design fields, butneed to be explicitly supported for HaXD: sketching, refining,browsing, and sharing. . . . . . . . . . . . . . . . . . . . . . 3Figure 1.2 Approach overview. We investigate VT design tools (Chapters3-5) and techniques (Chapter 6) in-depth. These findings aresynthesized with multiple, smaller focused projects (Chapter7) and grounded data from hapticians (Chapter 8) into a pre-liminary understanding of HaXD. . . . . . . . . . . . . . . . 4Figure 3.1 Concept sketch of a haptic instrument. 
Both users experiencethe same sensation, controlled in real-time. . . . . . . . . . . 29Figure 3.2 The haptic instrument concept. One or more people controlthe instrument, and receive real-time feedback from the device.Any number of audience members can feel the output in realtime as well. Control methods can vary, from traditional mu-sical control devices (such as the M-Audio Axiom 25, used inpreliminary prototypes) to touchscreen tablets (used in mHIVE).Output devices vary as well. . . . . . . . . . . . . . . . . . . 31Figure 3.3 mHIVE interface. Primary interaction is through the amplitude-frequency view, where visual feedback is provided through acircle (current finger position) and a trail (interaction history). 37xixFigure 3.4 Study setup. Both the participant (left) and the interviewer(right) feel the same sensation as the participant controls mHIVE. 40Figure 4.1 Concept sketch for tactile animation. An artist draws an ani-mated sequence in the user interface and the user experiencesphantom 2D sensations in-between discrete actuator grids. . . 46Figure 4.2 Comparison between related systems. . . . . . . . . . . . . . 51Figure 4.3 Tactile animation rendering pipeline. Users can: (a) create tac-tile animation objects; (b) render objects to actuator parameterprofiles (such as amplitude) with our rendering algorithm; (c)rasterize vector sensations into frames; (d) play the sensationon the device. . . . . . . . . . . . . . . . . . . . . . . . . . . 53Figure 4.4 Mango graphical user interface. Key components are labeledand linked to corresponding design requirements. . . . . . . . 56Figure 4.5 Interpolation models to determine physical actuator output (A1−3)from virtual actuator intensity (Av) and barycentric coordinates(a1−3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59Figure 4.6 Rendering study setup and user interface. . . . . . . . . . . . 60Figure 4.7 Example of P2’s animation for matching a sound. . . . . . . . 63Figure 4.8 Tactile animation could define motion with (a) 1D actuator ar-rays, (b) dense and sparse VT grids, (c) handhelds, (d) 3D sur-faces, (e) multi-device contexts, and (f) non-VT devices likemid-air ultrasound. . . . . . . . . . . . . . . . . . . . . . . . 66Figure 5.1 Concept sketch for a Macaron, an online, open-source VT ed-itor features incorporable examples and remote analytics. . . . 70Figure 5.2 Macaron interface, “hi” version featuring both composabil-ity (copy and paste), and visibility of underlying parameters.The user edits her sensation on the left, while examples are se-lected and shown on the right. Macaron is publicly availableat hapticdesign.github.io/macaron. . . . . . . . . . . . . . . 72xxFigure 5.3 Design space for Macaron versions. hi and sample bothallow for selection and copying of example keyframes. visand hi both show the underlying profiles. lo represents thecurrent status quo; only a waveform is shown. . . . . . . . . 75Figure 5.4 Animations used as design tasks, in presentation order. Heart-beat expands in two beats; the cat’s back expands as breathingand purring; lightning has two arhythmic bolts; the car oscil-lates up and down, and makes two turns: left then right; snowhas three snowflakes float down. . . . . . . . . . . . . . . . . 77Figure 5.5 Log visualizations showing archetypal design process. Top:P10’s heartbeat/vis condition (an “ideal” version). Bottom:P3’s car/hi condition (variations: a return to example brows-ing after editing, repeated refinement, muted editing). . . . . . 
78Figure 5.6 Participants created their designs using different progressionpaths, suggesting flexibility. . . . . . . . . . . . . . . . . . . 81Figure 5.7 Participants used real-time feedback to explore, both (a) intime by scrubbing back and forth (P3 lightning/lo), and (b)by moving keyframes (P10 heartbeat/vis). . . . . . . . . . . 82Figure 6.1 In HapTurk, we access large-scale feedback on informationaleffectiveness of high-fidelity vibrations after translating theminto proxies of various modalities, rendering important charac-teristics in a crowdsource-friendly way. . . . . . . . . . . . . 89Figure 6.2 Source of high-fidelity vibrations and perceptual rating scales. 95Figure 6.3 VISDIR visualization, based on VibViz . . . . . . . . . . . . 96Figure 6.4 Visualization design process. Iterative development and pilot-ing results in the VISEMPH visualization pattern. . . . . . . . 97Figure 6.5 Final VISEMPH visualization guide, used by researchers tocreate VISEMPH proxy vibrations and provided to participantsduring VISEMPH study conditions. . . . . . . . . . . . . . . . 98Figure 6.6 Example of LOFIVIB proxy design. Pulse duration was hand-tuned to represent length and intensity, using duty cycle to ex-press dynamics such as ramps and oscillations. . . . . . . . . 100xxiFigure 6.7 Vibrations visualized as both VISDIR (left) and VISEMPH. . . 101Figure 6.8 95% confidence intervals and equivalence test results for Study1 - Proxy Validation. Grey represents REF ratings. Dark greenmaps equivalence within our defined threshold, and red a sta-tistical difference indicating an introduced bias; light green re-sults are inconclusive. Within each cell, variation of REF rat-ings means vibrations were rated differently compared to eachother, suggesting they have different perceptual features andrepresent a varied set of source stimuli. . . . . . . . . . . . . 103Figure 6.9 Rating distributions from Study 1, using V6 Energy as an ex-ample. These violin plots illustrate 1) the large variance inparticipant ratings, and 2) how equivalence thresholds reflectthe data. When equivalent, proxy ratings are visibly similar toREF. When uncertain, ratings follow a distribution with un-clear differences. When different, there is a clear shift. . . . . 104Figure 6.10 95% Confidence Intervals and Equivalence Test Results forStudy 2 - MTurk Deployment Validation. Equivalence is in-dicated with dark green, difference is indicated with red, anduncertainty with light green. Red star indicates statisticallysignificant difference between remote and local proxy ratings. 106Figure 7.1 FeelCraft architecture. The FeelCraft plugin is highlighted ingreen. The FE library can connect to shared feel effect repos-itories to download or upload new FEs. A screenshot of ourcombined authoring and control interface is on the right. . . . 117Figure 7.2 Mockup for FeelCraft demo system. . . . . . . . . . . . . . . 119Figure 7.3 Application ecosystem for FeelCraft and an FE repository . . 120Figure 7.4 Users exchanging expressive haptic messages on consumer em-bedded devices. . . . . . . . . . . . . . . . . . . . . . . . . . 121Figure 7.5 Graphical representation of haptic vocabularies and icons. . . 124Figure 7.6 Some examples of expressive haptic messages embedded withnormal text messages. . . . . . . . . . . . . . . . . . . . . . 124Figure 7.7 Implemented Feel Messenger demo at World Haptics 2015. . 126xxiiFigure 7.8 RoughSketch handout, illustrating interaction techniques andtextures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. 128Figure 7.9 RoughSketch poster, describing interaction techniques and high-level findings. . . . . . . . . . . . . . . . . . . . . . . . . . . 129Figure 7.10 Students, teachers, and researchers can explore science, tech-nology, engineering, and math (STEM) abstractions throughlow-fidelity haptics, incorporating elements into system designs. 130Figure 7.11 The HandsOn CM enables three kinds of exploration based onrequirements. . . . . . . . . . . . . . . . . . . . . . . . . . . 133Figure 7.12 SpringSim interface, a HandsOn sandbox for a single lessonmodule on springs. . . . . . . . . . . . . . . . . . . . . . . . 134Figure 7.13 In the Hapkit+Dynamic Graphics condition, graphical springsresponded to input (left); static images were rendered in theHapkit+Static Graphics condition (right); in both, HapKit 3.0[198] was used as an input/force-feedback device (far right). . 136Figure 7.14 Two examples of CuddleBits, simple DIY haptic robots. . . . 143Figure 7.15 CuddleBit design tools. Voodle enables initial sketching of af-fective robot behaviours, while MacaronBit enables refining. . 144Figure 8.1 Our three themes, each exploring different levels of scope through5 emergent sub-themes. . . . . . . . . . . . . . . . . . . . . . 148Figure 8.2 Responses for tools used in haptic design (N=16, “check allthat apply”). . . . . . . . . . . . . . . . . . . . . . . . . . . 180Figure 8.3 Responses for evaluation techniques used (N=16, “check allthat apply”). . . . . . . . . . . . . . . . . . . . . . . . . . . 181Figure 8.4 Reported group size for projects (N=16, “check all that apply”). 182Figure 9.1 Vibrotactile design case studies. Each studied an aspect of vi-brotactile design with a varied set of users, devices, platforms,and foci. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194Figure A.1 Picture of whiteboard when developing themes during Mac-aron analysis (Chapter 5). . . . . . . . . . . . . . . . . . . . 280xxiiiFigure A.2 Affinity diagram showing clustering of participant statementsinto themes during Macaron (Chapter 5) analysis. . . . . . . . 281Figure A.3 Screenshot of video coding sheet used to develop and countcodes and calculate task timing during Macaron analysis (Chap-ter 5). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282Figure A.4 Screenshot of transcribed participant comments with code tagsfrom Macaron analysis (Chapter 5). . . . . . . . . . . . . . . 283Figure A.5 Image of all participant timelines developed for Macaron anal-ysis (Chapter 5). . . . . . . . . . . . . . . . . . . . . . . . . 284xxivAcknowledgmentsI first thank my extraordinary supervisor, Dr. Karon MacLean, for all of her advice,direction, and support throughout my graduate studies. I am extremely fortunate toreceive her mentorship.I am very grateful to my supervisory and examination committees. Dr. RonaldGarcia and Dr. Michiel van de Panne always made time for insightful, thought-provoking, and helpful discussions that have thoroughly enriched this work. Dr.Antony Hodgson, Dr. Eric Meyers, Dr. Luanne Freund, and my external exam-iner Dr. Ian Oakley asked challenging, valuable questions that led to several keyimprovements of this dissertation.I have immense appreciation for the faculty, friends, and colleagues that havemade the last four years an amazing experience. Each of you had a lasting impacton me. At UBC, Dr. Kellogg Booth, Dr. Joanna McGrenere, Dr. Tamara Munzner,Dr. 
Idin Karuei, Hasti Seifi, Laura Cang, Paul Bucci, David Marino, Soheil Kian-zad, Matthew Chun, Salma Kashani, Gordon Minaker, Ben Clark, Dilan Ustek,Meghana Venkatswany, Dr. Matthew Brehmer, Jessica Dawson, Anamaria Crisani,Kamyar Ardekani, Antoine Ponsard, Derek Cormier, Francisco Escalona; at Dis-ney Research, Dr. Ali Israr, Siyan Zhao; at Stanford University, Dr. Allison Oka-mura, Melisa Orta Martinez, Richard Davis; at McGill University, Jan Anlauff,Colin Gallacher, Jeff Blum; and beyond this list, there are many more.I also greatly appreciate my funding sources for supporting this work. I thankthe Natural Sciences and Engineering Research Council of Canada (NSERC), theNational Science Foundation (NSF), Disney Research Pittsburgh, and the Univer-sity of British Columbia.Finally, I am forever grateful to my family for their undying love and support.xxvDedicationThis work is dedicated to my family, who always believed in me,and to Dr. Douglas Engelbart, who returned a teenager’s phone call.xxviChapter 1IntroductionTechnology changes. Symbolic, machine-focused communication like punch cards,assembly languages, and terminal interfaces yield to natural, physical, always-connected interactive systems. The emergence of virtual and augmented reality(VR & AR), rapid development of personal fabrication techniques, and explosionof wearable and cyberphysical technologies propel us towards a mixed physical-digital world at an accelerating pace. As computers expand beyond screens andkeyboards, we look to engage the rich senses of touch.Haptic experiences are moving from niche roles to mainstream adoption. Hap-tic technology includes both the tactile (skin-based) and proprioceptive (force- andposition-based) components of touch. Recently, other natural interaction tech-niques like touchscreens and voice control held the limelight, with haptic feedbackrelegated to buzzing alerts or limited to high-stakes expert systems like laparo-scopic surgery. Now, new media seeks deeper immersion, smart environmentslook to connect physically with users, and consumer devices like the Apple Watchand Pebble adopt high-fidelity haptic actuators. The question is how to enabledesigners to craft experiences with these technologies.The diverse field of haptics has seen active engineering of new devices andstudy of human perception, but the design of haptic experiences remains a criticalchallenge. Little is known about this nascent field of design, with many uniquechallenges: real-world haptic experiences are rich, diverse, multimodal entitieswhich necessitate in-person interaction, while synthesized haptics strive to match1these characteristics. How can we support creativity with these experiences, em-powering artists, developers, designers, and scientists to effectively work with thisemerging medium? 
In this dissertation, we study the process of haptic experiencedesign (HaXD) and establish guidelines for building interactive software systemsto support it.1.1 Haptic Experience Design (HaXD)We define HaXD as:The design (planning, development, and evaluation) of user experi-ences deliberately connecting interactive technology to one or moreperceived senses of touch, possibly as part of a multimodal or multi-sensory experience.1We use HaXD instead of “haptic design”, which can also refer to design practicesrelated to haptics but not directly involving the user experience, e.g., mechanicaldesign of a new display mechanism.We refer to our intended supported designer as a haptician:One who is skilled at making haptic sensations, technology, or experi-ences.We use “haptician” to capture the diversity of people who currently make haptics,and the diversity of their goals. Many users with a need to design with haptics maynot have formal design training, and may focus on subsets of the entire experience,e.g., technical demonstrations or creating stimuli for psychological tests.In this dissertation, we take a systems approach to design. Designers do not ex-ist in a vacuum, but rather in a physical, personal, social, and cultural context. Weadopt a framework of design activities which are practiced in general experiencedesign, but need explicit support in HaXD. Our research identified four activities(Figure 1.1): sketching, ad-hoc, suggestive exploration; refining, iteration and fine-tuning; browsing examples and drawing from experience; and sharing designs forfeedback and posterity. Chapters 3-6 explore these activities directly.1We developed these definitions from our interviews with hapticians (Chapter 8).2Figure 1.1: The classic design funnel, adapted from Buxton [30]. Multipleinitial ideas are iteratively developed into final concepts. We add fourdesign activities that occur across design fields, but need to be explicitlysupported for HaXD: sketching, refining, browsing, and sharing.1.2 Why is HaXD Hard?Two major types of challenges facing HaXD are those resulting from its relativeyouth, and those intrinsic to the sense of touch and touch-based technology. Onegoal of this dissertation is to articulate the challenges that have been known infor-mally for years, and capture those that are not as well known.The first conference to explicitly focus on haptics was the Haptics Symposiumin 1992, which focused on engineering concerns. As a result, design for hapticexperiences is not as mature as for vision and audio, which can draw from centuriesof music and graphic design, and decades of sound design, translating well to theirdigital equivalents. We see this immaturity in limited, varied language for touch[132] and the limited infrastructure to support the wide variety of haptic devices[111], to the point where online distribution is current research [1].There are also intrinsic challenges when designing for touch: variabilities inlow-level perception due to, e.g., individual differences [165], device location anduser activity [144], and aging [253, 254]; a strong influence of user preferences[239, 240]; complexity of the haptic senses [48, 142, 153]; and tight technical3constraints cross-cutting software and hardware [111, 162]. 
Throughout this work, we identify and characterize these challenges, and make progress to conquer them.

Figure 1.2: Approach overview. We investigate VT design tools (Chapters 3-5) and techniques (Chapter 6) in-depth. These findings are synthesized with multiple, smaller focused projects (Chapter 7) and grounded data from hapticians (Chapter 8) into a preliminary understanding of HaXD.

1.3 Approach

We approach this problem with three strategies: vibrotactile design tool case studies for depth, a variety of focused design projects for breadth, and data from haptic designers to ground our findings (Figure 1.2).

1.3.1 Depth: Vibrotactile Design Tool Case Studies (Chapters 3-6)

To understand design, we practice design. The larger part of our inquiry follows research through design [292], where we create artifacts to refine our questions while seeking a solution. In each of three case studies, we design, build, and evaluate a tool or technique to support an aspect of HaXD, scoped to vibrotactile (VT) design. VT sensations are simple and potentially high-impact: they can be passively felt by users (i.e., designed to be display-only) and have established design parameters, and recent consumer devices are now employing high-quality VT actuators.

Each of our in-depth case studies resulted in concrete implications for designing tools and a small window onto the larger HaXD process. Contributions include algorithms, data structures, interaction techniques, features, analytic techniques, and working software tools that have been employed by designers. Chapter 3, Chapter 4, and Chapter 5 outline iterative development and evaluation of VT design tools; Chapter 6 covers a VT design technique (proxies).

1.3.2 Breadth: Focused Haptic Design Projects (Chapter 7)

While we investigate VT sensation design in-depth through our case studies, results may not generalize to other devices, and provide limited investigation into application areas like education. To generalize from VT effects, explore other aspects of haptic design, and gain personal experience as haptic experience designers, we participate in several smaller focused design projects, which lend a broader context to our findings. Chapter 7 discusses these projects.

1.3.3 Ground: Data from Haptic Experience Designers (Chapter 8)

Despite the recent growth of the field of haptics, hapticians remain relatively rare and difficult to recruit. To complement our primarily design-based approach and ground it with hapticians in the field, we draw from other data sources: a workshop held at World Haptics 2015 and interviews with professional hapticians.
Chapter 8discusses this characterization of HaXD, and serves as a capstone to this disserta-tion by defining HaXD and articulating a vision for how HaXD might manifest.1.4 Contributions and OutlineThis dissertation makes two primary contributions, the first directed towards thehaptics community and the second towards the design research community. To thehaptics community, we repeatedly find value in applying design thinking to hapticmedia. Hapticians are better supported with a principled process of rapidly and5iteratively generating and evaluating multiple ideas. Haptic design tools can beimproved by considering a wider context than simple editing interfaces: collabora-tors, examples, and designer experience all provide value.However, applying design thinking to haptic media is not always straightfor-ward. To the design research community, we present HaXD as a frontier that chal-lenges underlying assumptions about design. Several basic activities that are takenfor granted in non-haptic fields need explicit support when conducted with hap-tics. Haptic experiences are diverse, difficult to rapidly sketch and refine, andhave limited opportunities to browse and share remotely.This dissertation continues as follows. First, in Chapter 2, we present the nec-essary background with an overview of haptic technology and perception, the valueof haptics and related applications, design theory from non-haptic fields, existinghaptic design tools and techniques, and the methodology underlying our work.Then, we describe each VT case study in Chapters 3-6. Through these fourcase studies, we build practical support tools for hapticians, each of which mani-fests a particular design thinking idea. Increasingly, we found ourselves drawingdirectly from modern non-haptic design tools, but needing to solve new problemsto accommodate haptic technology.In Chapter 3, we present findings from our first vibrotactile design tool, the hap-tic instrument, which supported easy exploration and informal feedback (sketch-ing), but identified a key problem: it did not support refining designs.In Chapter 4, we present findings from our second vibrotactile design tool,Mango, which established a generalized pipeline and was able to support bothsketching and refining for expert visual animators; it highlighted reuse as an im-portant next step.In Chapter 5, we present findings from our third vibrotactile design tool, Mac-aron, which implemented a browsing interface and analytics system; we foundexamples played a large part of the design process, and used interaction logs toprovide a picture of our participants’ design process, including confirmation ofproject preparation and browsing, initial design, sketching, and refining.In Chapter 6, we document findings from HapTurk, a technique for sharingvibrotactile designs for feedback at scale. In this project, we distribute proxy vi-brations over Mechanical Turk to collect feedback from the crowd.6In Chapter 7, we synthesize findings from our side projects, showing generalityby applying our understanding of haptic design explicitly in several domains andgaining practical experience designing haptic experience. 
This allowed us to reflect upon our own design process as practitioners [235].

In Chapter 8, we complement our design-based inquiry through interviews with professional haptic designers and a workshop run to elicit feedback from the community; this captures a description of haptic design, reinforces our findings for important support tools, and identifies more systematic challenges.

Finally, in Chapter 9, we conclude with a synthesis of our final results and directions for future research.

Chapter 2: Background

In this chapter, we provide the relevant background for this dissertation. We begin with an overview of haptic technology and perception. Next, we discuss the application space for haptics and why haptic experiences are increasingly important to design. We then discuss non-haptic creativity support tools and design theory which provided inspiration and guiding principles. After, we discuss the previous work in HaXD and related support tools, identifying why this is an area for improved understanding. Finally, we present the qualitative and quantitative methodologies used in this dissertation. Throughout the chapter, we contextualize this work and HaXD in both the haptics and HCI communities.

2.1 An Overview of Haptics

The term “haptic” was coined in 1892 by German researcher Max Dessoir to refer to the study of touch, similar to “optic” for sight and “acoustic” for sound [98]. Today, it refers to both the study of the psychology and perception of the senses of touch, and the technology that employs touch as a method of feedback. Haptic technology is typically separated into two classes based on the main sense modality: tactile (or cutaneous) sensations, and proprioception, or the sense of body location and force; the latter includes kinaesthetic senses of force and motion. These two types of feedback are useful for different purposes, e.g., people use their fingerpad’s tactile senses to derive texture, but kinaesthetic feedback to infer weight [152]; different senses can be combined for more convincing results [194]. For an overview of the haptic senses, we direct the reader to Lederman and Klatzky [153]; for a practical introduction to haptic technology, we suggest Hayward and MacLean [111]. We focus our coverage on the sensations directly studied within this dissertation (vibrotactile feedback, programmable friction, simple force-feedback, simple haptic robots), while also portraying the diversity of haptic experiences and technology.

2.1.1 Tactile Perception and Technology

Tactile sensations rely on multiple sensory organs in the skin, each of which detects different properties, e.g., Merkel disks detect pressure or fine details, Meissner corpuscles detect fast, light sensations (flutter), Ruffini endings detect stretch, and Pacinian corpuscles detect vibration [48]. Haptic technology has evolved along two parallel paths: stimulating sensory mechanisms and mimicking realistic environments. Tactile technologies are at least as diverse as the senses they stimulate and the environments they simulate.

Vibrotactile (VT) sensations, where vibrations stimulate the skin, target the Pacinian corpuscle. VT actuators can take many forms; the more affordable techniques tend to directly stimulate skin. Eccentric mass motors (sometimes “rumble motors” or “pager motors”) are found in many mobile devices and game controllers, and are affordable but inexpressive. More expressive mechanisms such as voice coils offer independent control of two degrees of freedom, frequency and amplitude.
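As a concrete illustration of these two control dimensions, the short sketch below synthesizes a vibrotactile waveform from a fixed-frequency sine carrier and a keyframed amplitude envelope, roughly the kind of parameterization that keyframe-based editors such as Macaron (Chapter 5) expose to designers. The function name, sample rate, envelope shape, and NumPy-based synthesis are illustrative assumptions for this sketch, not the implementation of any tool described in this dissertation.

```python
# Illustrative sketch only: a vibrotactile effect rendered as a sine carrier
# whose amplitude envelope is specified independently of its frequency.
import numpy as np

def render_vt_effect(duration_s, carrier_hz, amplitude_keyframes, sample_rate=8000):
    """Return a waveform in [-1, 1] for a voice-coil-style VT actuator.

    amplitude_keyframes: (time_s, amplitude) pairs in [0, 1], linearly
    interpolated over time to form the envelope.
    """
    t = np.arange(0.0, duration_s, 1.0 / sample_rate)
    times, amps = zip(*amplitude_keyframes)
    envelope = np.interp(t, times, amps)            # piecewise-linear envelope
    carrier = np.sin(2.0 * np.pi * carrier_hz * t)  # fixed-frequency carrier
    return envelope * carrier

# Example: a 1 s, two-pulse "heartbeat-like" effect near the Pacinian-sensitive
# 250 Hz range.
wave = render_vt_effect(
    duration_s=1.0,
    carrier_hz=250.0,
    amplitude_keyframes=[(0.0, 0.0), (0.10, 1.0), (0.20, 0.0),
                         (0.35, 0.7), (0.45, 0.0), (1.00, 0.0)],
)
```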
Piezo actuation is a very responsive technique that is typically moreexpensive than other vibrotactile technology. While voice coils typically directlystimulate the skin, linear resonant actuators (LRAs) shake a mass back and forthto vibrate a handset in an expressive way; a common research example is the Hap-tuator [280]. Instead of directly stimulating the skin, this actuator typically shakesanother device held by the user, such as a mobile device [287] or pen [56]; thisapproach is amenable to both designed artifical stimulation and physical realism.As of 2016, LRAs are increasingly deployed in consumer products (e.g., Apple’sTaptic Engine).VT sensations are accessible, well-studied, and increasingly widespread, and9can be passively felt, easing implementation. Our in-depth design tool studies thusfocus on VT experiences. Actuators like VT devices can be used alone or puttogether in spatial multiactuator displays like seats [125, 128], belts [199, 204],wristbands [7, 104, 199], vests [136, 206], and gloves [146, 201]. These can bearranged into grids, either dense tactile pixels (“taxels”) [146] or sparse arrays[125, 128], to provide 2D output on a plane. Multiactuator arrays increasingly ex-ploit tactile illusions to create effects of motion or phantom sensations in-betweenactuators.Another emerging tactile feedback mechanism is programmable friction, creat-ing shear forces on the skin to target sensations of skin stretch. Surface friction, forexample on a mobile touch screen, can be manipulated by both mechanical motionor electrical adhesion. The TPad [277] vibrates a plate at ultrasonic frequenciesto create a cushion of air between the surface and the user’s finger. This effect isprogrammable, and can be used to with a number of interactive scenarios [161].Other techniques like electrovibration, deployed in TeslaTouch [13], and electro-static forces [179], can create a similar effect. Strong electroadhesion [246] has thepotential to create very large shear forces, but comes with a high power cost. InRoughSketch (Chapter 7), we design for a mobile version of the TPad deployed onAndroid devices, the TPad phone (www.thetpadphone.com).There are many other types of tactile stimulation used in haptic experiences. 2-dimensional pin-based grids like Optacon [14] and HyperBraille (www.hyperbraille.de) can display Braille and 2D images to the blind and visually impaired, and canoperate as a generic computer display [207]. Similar multi-point displays havebeen deployed on mobile devices. Edge Haptics uses dozens of linearly-actuatedpins on the edge of a mobile device for tactile stimulation, similar to braille pindisplays [131], while laterally moving pins can use skin-stretch as a display mech-anism [166]. Electrocutaneous stimulation, where electrodes directly stimulate theskin, has been deployed for spatial tongue displays [10]. Temperature displays ex-ploit warm and cold receptors in the skin for display, using Peltier junctions [139].Tactile sensations can be created at a distance using ultrasonic transducers [35, 191]and vortex cannons that shoot puffs of air [250].102.1.2 Proprioceptive Perception and TechnologyProprioception, the sense of force and position, is synthesized from multiple sen-sors as well: the muscle spindle (embedded in muscles), golgi-tendon organ (GTO)in tendons, and tactile and visual cues [142]. We distinguish proprioception fromthe related term kinaesthetic by being the general, synthesized sense, where ki-naesthetic sensation is strictly the sense of motion. 
Force displays are common inprecise, specialized applications like robot-assisted surgery [195] or realistic sen-sorimotor training environments [268]. The focus is usually on simulation, creatingforces on the user from a virtual environment.Force-feedback devices differ in their degrees of freedom of feedback (DoF),the number of variables needed to express their kinematic state. These devicesrender a virtual environment, with simulated forces depending on the input fromthe user. Common consumer-facing 3-DoF devices include the Geomagic Touch(previously the Sensable PHANTOM) and Falcon devices, offering force in threedirections. 2-DoF designs like the pantograph [32, 210] can provide displays onscreens, walls, and tables. Research with these displays often requires realisticsimulation and rendering: e.g., making free space feel free, providing stiff virtualobjects and walls, and avoiding saturation [178]. Open-hardware, self-assembledversions of these devices, such as WoodenHaptics [85] for 3-DoF devices and Hap-let [89] for 2-DoF displays, have the potential to make haptics more accessible.Much previous work has been done on handling technical concerns, e.g., display-ing complex polygonal objects with a “God object” [291], coordinating remotelysituated devices or shared environments [29], and improving collision realism withtransient forces [149]. More complex environments are primarily programmed inusing APIs like CHAI3D, OpenHaptics, or Unity.Another approach is to use simple force feedback, for example, for hapticseducation [135]. 1-DoF devices include linear actuators pushing on the user andhaptic knobs, e.g., the UBC Twiddler [76, 172, 243], and paddles, e.g., the HapKit[198]. The UBC SPIN lab has also adopted 1-DoF force feedback in its affectiverobot, the Haptic Creature [283, 284], the CuddleBot [2], and CuddleBits [33]. Weexplore force-feedback design with the HapKit and CuddleBits in Chapter 7.112.1.3 Haptic IllusionsLike the stroboscopic effect transforming a series of images into the perceptionof motion for visual displays, illusions play a valuable role in haptic sensations[110]: they let a designer create convincing sensations without accurately simu-lating physical environments. Some are influenced by other senses. In the clas-sic size-weight illusion [44], when two weights have the same mass but differentsizes, the smaller is perceived to be heavier, whether size is seen or felt [110]. Sim-ilar effects occur with synthesized haptic effects: perceived stiffness of a springchanges with both visual distortion of the spring’s position [252] and the soundthat is played [64]. A striking, recent example is the use of visual dominance touse a single physical block to provide haptic feedback for multiple virtual blocks bydistorting the visual position of the user’s arm [9]. We employ similar techniquesin our FeelCraft and Feel Messenger projects, using visual feedback to prime usersto haptic sensations (Chapter 7).Other illusions are purely tactile and useful for multiactuator displays; in thesecases, they expand the haptic palette. Phantom tactile sensations [4], create illusoryvibrations in between two or more VT actuators, opening up the space in-betweenactuators for display. Continuous motion can be simulated, e.g., Seo and Choi[241] created a perceived motion flow between two VT actuators mounted on theends of a handheld device by controlling their intensity. Similarly, Lee et al. 
[159] created across-the-body and out-of-the-body illusions on a mobile device using up to four LRAs; Gupta et al. [104] used interpolation on a VT wristband for new interaction techniques. The Tactile Brush algorithm [126] combined phantom tactile sensations and apparent tactile motion to render high-resolution and moving haptic patterns on the back using a coarse grid of VT actuators. Other spatio-temporal VT illusions, such as the "cutaneous rabbit" [262], where carefully timed discrete tactile stimuli create perceived motion, and the Tau and Kappa effects [109, 110], where the perceived distance between stimuli depends on their timing, can also be used with VT arrays. Similar illusions are possible using other tactile modalities, including temperature displays [248] and electrocutaneous stimulation [264]. We extend phantom VT sensations to 2D interpolation (e.g., between 3 actuators) to enable Tactile Animation (Chapter 4).

Of course, haptic perception can depend on the user's physical and attentional connection with the device, especially important in wearable contexts. Vibrotactile detection depends on many variables, including location on the user's body, how much the user is moving, whether they are expecting the vibration [144], and social context [37]. These effects can be mitigated through sensing, e.g., detecting movement with accelerometers [16]. The implications of context on HaXD are discussed by professional designers in Chapter 8.

2.2 The Value of Haptic Experiences

Haptic feedback can provide several benefits to interactive experiences. Here, we outline the main benefits haptics provides, and then several application areas that commonly leverage those benefits.

2.2.1 Why Touch?

Haptic technology enables information transfer between humans and computers; this transfer is rich, proximal, and fast. Information flows both ways, through input and output, sometimes simultaneously. We focus on designed haptic display.

One advantage of touch is simply that it is not vision or audio, the primary feedback methods for interactive systems. Haptic technology can reinforce other modalities, enriching feedback for a more complete experience, or provide complementary feedback, for many possible reasons: information saturation, e.g., when visual or audio displays have maximized their output; task context, e.g., when the user is driving and must keep their eyes on the road; impairment or impairing situations, e.g., when a user has limited sight or hearing; ambient displays, e.g., keeping a user aware of a piece of information without interrupting them; or the nature of the information, e.g., communicating emotion. For example, touch can be used as a substitute for other senses [11], and the result can be dramatic, e.g., a blind person using a tongue-based display to see a rolling ball well enough to bat it as it falls [12]. Many other devices have been developed and studied for the visually impaired (e.g., [14, 207]).

Of course, touch is a unique, rich sense in its own right. Touch can be both invisible, like sound, and spatial, like vision. Touch is the first sense to develop, playing an important role in formative experiences [132]; sensorimotor actions can help to scaffold understanding through embodied learning [200]. Feeling an object is especially helpful at discerning material properties [152]. Touch can also be used for artistic expression: Gunther et al.
[103] studied a full-body vibrotactile suit tocreate music-like “cutaneous grooves”, helping to identify the artistic space of VTsensations, including concerts with tactile compositions.While haptic feedback can improve usability and task performance [39, 204],touch is especially connected to visceral, emotional connections. Marketing re-search has studied multiple ways that touch can connect with customers: the way asmartphone feels can influence a purchase over an alternative that might work bet-ter, and customers prefer to shop at stores that let them touch products [132, 251].2.2.2 ApplicationsWhile realistic virtual environments for force-feedback haptic feedback are helpfulin medical or training applications [195, 268], we focus on applications that benefitfrom an explicit experience design step: gathering requirements from end-uers,iteratively exploring many ideas, and evaluating the experience of use.ImmersionTouch can subtly draw a user into an experience. A popular application is aug-mented, immersive media experiences. Actuated tactile feedback has been used asearly as 1959 in the movie The Tingler [123]. 4D theatres and theme park ridesuse bursts of air or water sprays to engage the audience. Companies like D-Box(www.d-box.com) augment films with haptic tracks that feature both low-frequencymovements and high-frequency vibrations, and can be found in theatres across theworld. Buttkicker (www.thebuttkicker.com) also augments 4D theatres, and pro-vides products for home theatre setups. In these experiences, we need to supportdesigners’ artistic control over the sense of touch.Haptic experiences are also increasingly of interest in virtual reality (VR) envi-ronments for, e.g., entertainment, training, and education. Skin stretch techniques,measured by Levesque and Hayward [160], have been explored in mobile displays[101, 166] and are now commercialized by Tactical Haptics (tacticalhaptics.com)14to augment virtual-reality setups by simulating forces and torques using handheldcontrollers, lending stronger immersion for virtual environments and VR games.Haptic Turk [45] and TurkDeck [46] are innovative explorations of high-fidelityhaptic experiences in virtual environments using people as actuators. Impacto useselectrical muscle stimulation and a solenoid actuator to create wearable kinaes-thetic and tactile feedback [164]. Haptic retargeting distorts visual feedback tore-use a single physical block in a virtual block-building game [9].Previous work has also attempted to add greater immersion to broadcast mediaby including haptic sensations. Modhrain and Oakley [183] present an early vi-sion of Touch TV, using active touch with two-DOF actuators embedded in remotecontrollers; Gaw et al. [92] follow up with editable position playback on a force-feedback device, played alongside movies or cartoons. More recently, the prolif-eration of online streaming video has developed opportunities to add haptic sen-sations using novel data formats that can handle diverse and interactive feedback,from forces to temperature. Researchers have looked at how to integrate a haptictrack into Tactile Movies [146], YouTube [90], or haptic-audiovisual (HAV) con-tent [62], complete with compositional guidelines drawing inspiration from filmand animation [100].AffectResearchers are developing design guidelines to express emotions through hapticexperiences. 
Low-level parameters like amplitude, frequency, and duration havebeen linked to emotions, e.g., with VT icons [286] and mid-air ultrasound stimu-lation [192]. Because touch can be bidirectional, affective sensing can accompanyhaptic display. The Haptic Creature project established a touch dictionary of ges-tures that emotionally communicate with robots [284]. Touch-based surfaces candetect these gestures [81] through technologies like conductive fur and fabric [82].To study emotion and technology, researchers commonly draw from two af-fective models: Ekman’s basic emotions and Russell’s affect grid. Ekman’s basicemotions [73, 74] are a discrete set of emotions identified from a cross-culturalstudy of facial expressions; we use this model’s emotions as the design task inChapter 3. Russell’s affect grid [216, 217] separates emotions into dimensions of15arousal (low to high energy) and valence (positive and negative emotions); thiswork informs our perspective on expressivity, especially with the CuddleBit workin Section 7.5.The emotional nature of touch has implications for design; for example, cou-ples are more comfortable than strangers when using a “hand stroke” metaphor fortwo remotely coupled haptic devices [249]. Emotional display through touch hastherapeutic applications. Bonanni et al. [18] created TapTap, a wearable that canrecord and playback VT equivalents of affectionate touch to support users in ther-apy. Tactile displays target mental health [270] and emotional understanding forautism [43]. The Haptic Creature project explores affective touch in human-robotinteraction (HRI) [282–285]; this furry, zoomorphic robot can measurably relaxusers when they feel it breath [237]. Emotional haptic interfaces benefit from anexplicit design process, as designers can draw from guidelines on how to consideremotion in haptic experiences.Expressive Interpersonal CommunicationTouch is extremely important for interpersonal communication, from greeting anew acquaintance with firm handshake, to showing affection to a loved one witha long hug; see Gallace [88] for an overview. Of course, technology can mediatetouch between people, e.g., in remote collaboration or shared virtual environments[105]. Brave and Dahley [19] introduced “inTouch”, mechanically linked rollersthat enabled playful touch interactions at a distance. ComTouch [41] used pressureinput to send vibrations between mobile phones, finding it was used for attention,turn-taking, and emphasis. Hoggan et al. [115] elaborated on these findings: a onemonth-study found users sent “Pressages” (pressure messages) both for greetingsand to emphasize speech or emotional messages. Chan et al. [39] used VT iconsto coordinate turn-taking in an online system, featuring an extensive design pro-cess to create and perceptually verify icons that present system state and requestswith varying urgency. A design step can help build languages for communication,providing the right haptic words for users to express themselves.16NotificationMobile contexts are rife with opportunities for haptic feedback. Ambient tactiledisplays can provide awareness and alerts without distracting the user. BecauseVT feedback is affordable and low-power, it can be added to watches and wrist-bands [7, 27], belts, vests, and other wearables. Haptic icons [167] are analogousto visual icons but transmitted through touch. Tactons [25] are a type of haptic iconwhich provide VT feedback, e.g., in mobile applications. 
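As a concrete, purely illustrative example of how such an icon might be represented in software, the sketch below encodes a rhythmic VT icon as a sequence of pulses and pauses and flattens it into a millisecond-resolution amplitude envelope; the pattern, class names, and timing values are assumptions rather than any published tacton set.

// Illustrative sketch: one simple way to encode a rhythmic vibrotactile
// icon ("tacton") as pulses and pauses, then flatten it to an amplitude
// envelope sampled every millisecond.
import java.util.List;

public class RhythmicTacton {
    // A pulse: vibrate for onMs at the given amplitude (0..1), then rest for offMs.
    record Pulse(int onMs, double amplitude, int offMs) {}

    static double[] toEnvelope(List<Pulse> pattern) {
        int total = pattern.stream().mapToInt(p -> p.onMs() + p.offMs()).sum();
        double[] env = new double[total];
        int t = 0;
        for (Pulse p : pattern) {
            for (int i = 0; i < p.onMs(); i++) env[t++] = p.amplitude();
            t += p.offMs();                      // rests stay at amplitude 0
        }
        return env;
    }

    public static void main(String[] args) {
        // e.g., a hypothetical "message received" icon: two sharp taps, then a longer, softer buzz
        List<Pulse> icon = List.of(new Pulse(60, 1.0, 80),
                                   new Pulse(60, 1.0, 200),
                                   new Pulse(300, 0.5, 0));
        System.out.println("icon length: " + toEnvelope(icon).length + " ms");
    }
}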
Rhythm opens up a large,learnable design space; users can discern between [265] and learn [256] 84 differ-ent rhythmic icons. A 28-day study showed that rhythmic VT icons do not disturbusers in daily activities and can communicate ambient information [37]. Hemmertand Joost [113] explored a life-like metaphor of pulsing and breathing to providealerts, but found that designers needed to take care to not be annoying. Multipleactuators can be combined in mobile handheld devices to provide differentiablespatial information, enriching the VT icon design space [281]. VT icons producedby phones can represent multiple levels of urgency and the source of an alert (e.g.,voice call, text message, or multimedia message) [24]. A haptic designer can helpselect an appropriate notification set for diverse applications in both mobile andnon-mobile contexts.GuidanceGuidance, providing directional or timing information, is a typical application forVT feedback, which can be invisible, mobile, and accessed without using visionor sound. Spatial guidance through haptic wearable display can improve naviga-tion with multiple actuators across several form factors, including belts [163, 204],wrist-bands [7], and vests [206]; in each case, the vibrations inform the user whereto go with spatial vibrations or metaphorical spatial icons. Periodic vibrations canguide a user’s walking speed without large attentional demands [143]. Tactile illu-sions like saltation can provide directional information for guidance [263]; largerback-based displays are effective for guiding both attention and direction, e.g., inautomobiles [261]. Brewster and Constantin [21] used VT icons to provide aware-ness of nearby friends and colleagues. Design helps find the right way to guide theuser.17EducationHaptic technology has the potential to improve educational resources, especiallyto those lacking resources. Montessori methods have long espoused the value ofphysical learning aids, especially using physical manipulatives [184]. There is ev-idence to support these techniques: in a meta-analysis of 55 studies, Carbonneauet al. [34] found that physical manipulations improve several learning outcomes,with influence from other instructional variables. Studies of gestures have alsofound value in students “being the graph” by physically acting out mathemati-cal shapes, grounding abstract knowledge in embodied experience [94]. Thesetechniques have roots in constructivist learning, where learners use existing under-standings as a transitionary object to understand new concepts [200].Haptic technology is well-positioned to support embodied learning, and thereis early evidence for its efficacy. Haptic feedback has been shown to improvetemporal aspects when training motor skills [80]. In a study for molecular chem-istry education, Sato et al. [221] found students had higher test scores when theyinteracted with their haptic learning interface; students reported engagement. InChapter 7 we describe results from an early learning interface for low-cost hapticdisplays [198], showing that haptic technology can improve engagement and makelasting impressions. Students can act as designers when learning through creativeproblem-solving; lessons require carefully designed feedback to engage studentswithout distracting them.2.3 Non-Haptic Design and Creativity SupportDesign thinking is a process and approach to solving problems. 
Although oftenused without a definition [292], design thinking always involves rapidly generat-ing and evaluating multiple ideas, and iteratively exploring a problem space byarticulating and manifesting a solution space; ideas then converge to a final devel-oped concept (or set of concepts) [30]. Design is increasingly studied as a field innon-haptic contexts, providing both guidelines and inspiration for HaXD. In thissection, we present related work on general design thinking organized into threemajor elements: problem preparation, hands-on design, and collaboration. Wethen briefly discuss the relationship between design and research.182.3.1 Problem PreparationCreative tasks, like design, are often defined as the recombination of existing ideas,with a twist of novelty or spark of innovation by the individual creator [272]. Alsoknown as the “problem setting” [235], “analysis of problem” [272], or “collect”[244] step, problem preparation involves getting a handle on the problem, drawinginspiration from previous work. Scho¨n demonstrated that designers initially frametheir problems before developing a solution [235]; he also describes the designer’srepertoire, their collected experience, which aids in design. External examples areespecially useful for inspiration and aiding initial design [30, 114]. In our work,the design activity of browse overlaps significantly with problem preparation.2.3.2 Hands-On DesignThinking is not relegated to the head, but situated in the physical world [119]. Thedesigner must iteratively generate a varied set of initial ideas (ideation) and thenprune them (evaluation), repeating this step many times to settle on a single de-sign [30]. Working with multiple ideas simultaneously is a boon to good design.Developing interfaces in parallel can facilitate generation and evaluation, delay-ing commitment to a single design [107, 211], while in groups, sharing multipledesigns improves variety and quality of designs [67].Sketching supports ideation, evaluation, and multiple ideas, allowing the de-signer to explicitly make moves in a game-like conversation with the problem[235]. It is so important that some researchers declare it to be the fundamentallanguage of design, like mathematics is said to be the language of scientific think-ing [51]. The power of sketching, according to Cross, is contained in its abilityto describe a partial model of a proposed design or problem. Detail can be sub-ordinated, allowing a designer to zoom-in, solve a problem, and then abstract itaway when returning to a high-level view. This has implications for software tools:designers must easily navigate the design space with undo, copy and paste, and ahistory of progress, creating tools with a “high ceiling” and “wide walls” [211].We use the term “sketching” in a broad sense, including both pencil and paper andsoftware or hardware sketches [30, 185]. Our design activity of sketching refers toexploration and demonstration as distinguished from iterative refinement. In other19works, sketching can encompass iteration, annotation, and some level of refine-ment.2.3.3 CollaborationDesign is a collaborative process. Working in groups has the potential for gen-erating more varied ideas [272] than working individually, and is important forcreativity support tools [211, 244]. Although sometimes group dynamics influencethe design process negatively, proper group management and sharing of multipleideas results in more creativity and better designs [114]. 
Shneiderman in particular has championed collaboration in design [244], and suggests two different types of collaboration to be supported by creativity tools: relating, informal discussions with colleagues, and donating, disseminating information to the public and the annals of time. Orthogonal to these intended purposes (relating and donating) is the collaboration context. Computer-supported collaborative work often separates interactions into four contexts ordered along two dimensions: collocated (same location) or distributed (different locations), and synchronous (simultaneous) or asynchronous (at different times) [75]. This distinction is useful for scoping challenges facing HaXD: touch is proximal and context-dependent, inhibiting asynchronous and remote collaboration. We explore informal collaboration briefly in Chapter 3, and explore aspects of sharing in more detail in Chapters 6 and 7. Chapter 8 characterizes how professional haptic designers collaborate.

2.3.4 Design Research

The HCI community is actively exploring the relationships between design and research, which are intertwined for interactive systems. Current discourse revolves around how HCI research can draw generalizable knowledge from design methods, and how knowledge gained from research can inform design practice.

Frayling [86] challenges popular stereotypes about design, art, and scientific research. He suggests three different relationships: research into design (and art), research through design, and research for design. Research into design is common, including historical research, aesthetic or perceptual research, and perspectives or theories about design. Research for design is "small 'r'" research, and includes the gathering of reference materials, examples, and previous artifacts to support the design of a product or experience, not for knowledge creation. Research through design (RtD) involves practicing design with the explicit goal of creating research, including materials research by creating artifacts, customizing or developing technology, or action research. This final approach – RtD – is the kind we use most in this dissertation.

Zimmerman et al. [292] suggest RtD as a model to use design skills to conduct HCI research. Designers can be involved in knowledge-making by attempting to "make the right thing." Knowledge is embedded into artifacts and the process of their creation, integrating both "true" and "how" knowledge. To develop RtD into a more formal methodology, Zimmerman et al. [293] conducted interviews with leading HCI design researchers and found that design projects in a research context can serve as a common vocabulary between researchers and inspiration for future projects. Gaver [91] points out that RtD fits into scientific paradigms by supporting theory development, but cautions against too much formalization of this methodology. Instead, Gaver argues that diversity of design approaches is a virtue, and that annotated portfolios may uphold design's advantages while providing reusable knowledge.

Consistently, design is proposed as a means of making progress on "wicked problems" [214]. The core value of design in research is its ability to simultaneously resolve ill-posed problems and potential solution spaces into a question with a solution.
Sketching and prototyping remain core techniques to supporting interactivedialogue with a problem [235], but are valid means of inquiry when design is em-bedded within a research context, e.g., “design-oriented research” [78].In this dissertation, we use RtD to explore the problem space of creating HaXDsupport tools (Chapters 3-6), and in practicing HaXD as a designer (Chapter 7).We complement RtD with qualitative inquiry, using methods from grounded theoryand phenomenology, and quantitative analysis, using statistics and visualization.We discuss our larger methodological framework in Section 2.5.212.4 Previous Efforts for Haptic Experience DesignWe are not the first to look into haptic media production or to apply design think-ing to haptic design. While we present a cohesive look on HaXD process linkedto general guidelines for HaXD support tools, previous work has developed inter-active software tools, supportive software and hardware platforms, and conceptualframeworks to facilitate HaXD.2.4.1 Interactive Software ToolsAs long as designers have considered haptic effects for entertainment media, theyhave needed compositional tools [103]. Here, we report on interactive softwaretools for editing haptic sensations.Track-based editorsMany user-friendly interfaces help designers create haptic effects. The Hapticoneditor [76], Haptic Icon Prototyper [258], and posVibEditor [219] use graphicalmathematical representations to edit either waveforms or profiles of dynamic pa-rameters (torque, frequency) over time. H-Studio [60] enables editing of multi-DOF force feedback for video. A less orthodox approach is the Vibrotactile Score[158], which uses musical notation. The Vibrotactile Score was shown to be gener-ally preferable to programming in C and XML, but required familiarity with musi-cal notation [156]. In industry, editors like the D-Box Motion Code Editor overlayvisual and audio content with haptics, and allow designers to generate, tune andsave frame-by-frame haptic content. Vivitouch Studio allows for haptic prototyp-ing of different effects alongside video (screen captures from video games) andaudio, and supports features like A/B testing [259]. While most existing editors fo-cus on vibration effects or simple forces, other types of actuation are now receivingattention, e.g., the Tactile Paintbrush creates friction profiles for the TPad [180].Mobile Design ToolsMobile tools make haptic design more accessible. The Demonstration-Based Edi-tor [118] allows control of frequency and intensity by moving graphical objects ona touchscreen. Similar to the SPIN lab’s Haptic Instrument (mHIVE, Chapter 3),22this mobile tool was shown to be intuitive and easy to use for exploration or com-munication, but faltered when refining more elaborate sensations. Commercially,Apple’s end-user vibration editor has been present in iOS since 2011 (iOS 5) butonly produces binary on/off timing information; Immersion Touch Effects Studiolets users enhance a video from a library of tactile icons on a mobile platform.Automatic GenerationOne approach is to use camera motion sourced from accelerometers [58] to ac-tuate audience members’ hands and head in a HapSeat [59, 61]. Several studieshave looked into automatic conversion from audio streams [40, 120, 157] or videostreams [145] to VT or force-feedback output. 
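A simple version of such a conversion is an envelope follower: rectify the audio signal and low-pass filter it, then use the result to drive the intensity of a vibrotactile carrier. The minimal sketch below illustrates only that idea; the cited systems are considerably more sophisticated, and the cutoff frequency, sample rate, and names here are assumptions.

// Illustrative sketch: a minimal audio-to-vibration conversion. The audio
// signal is rectified and low-pass filtered (an "envelope follower"); the
// result can drive the amplitude of a fixed-frequency vibrotactile carrier.
public class AudioToVibration {
    // audio: samples in [-1, 1]; sampleRate and cutoffHz are assumed values.
    static double[] envelopeFollower(double[] audio, double sampleRate, double cutoffHz) {
        double alpha = 1.0 - Math.exp(-2.0 * Math.PI * cutoffHz / sampleRate); // one-pole smoothing
        double[] env = new double[audio.length];
        double state = 0.0;
        for (int i = 0; i < audio.length; i++) {
            state += alpha * (Math.abs(audio[i]) - state);   // rectify, then smooth
            env[i] = state;                                   // 0..1 vibration intensity
        }
        return env;
    }

    public static void main(String[] args) {
        double[] click = new double[4410];
        for (int i = 0; i < 100; i++) click[i] = 1.0;         // a short transient
        double[] env = envelopeFollower(click, 44100, 20.0);
        System.out.printf("peak intensity: %.2f%n", env[99]);
    }
}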
Automatic techniques can be combined with later editing to populate a design before a designer works with it.

Multi-actuator Tools

The control of multi-actuator outputs has been explored by TactiPEd [199], Cuartielles' proposed editor [55], and the tactile movie editor [146]; the latter combined spatial and temporal control using a tactile video metaphor for dense, regular arrays of tactile pixels ("taxels"), including a feature of sketching a path on video frames. However, these approaches embrace the separate control of different actuators, rather than a single perceived sensation produced by the multi-actuator device, which we address with tactile illusions in Chapter 4.

2.4.2 Platforms

Prototyping platforms help users rapidly build haptic effects. We first report software platforms, then hardware.

Software Libraries

There are many software libraries to support developers. Some are collections of effects available to designers. The UPenn Texture Toolkit contains 100 texture models created from recorded data, rendered through VT actuators and impedance-type force feedback devices [56]. The HapticTouch Toolkit [154] and Feel Effect library [129] control sensations using semantic parameters, like "softness" or "heartbeat intensity" respectively. VibViz is an online tool organizing 120 vibrations into perceptual facets [240]. Vibrotactile libraries like Immersion's Haptic SDK (immersion.com) connect to mobile applications, augmenting Android's native vibration library. Collections of effects can help a designer start their design, but few are open and modifiable, which we establish as a valuable feature in Chapter 5.

Other software libraries facilitate programming. Force feedback devices have mature APIs like CHAI3D (chai3d.org), H3D (h3dapi.org), and OpenHaptics (geomagic.com). Immersion's TouchSense Engage is a software solution for vibration feedback for developers. The Haptic Application Meta Language (HAML) [70] is an XML-based data format for adding haptics to MPEG-7 video, eventually augmented with the HAML Authoring Tool (HAMLAT) [71]. Abdur Rahman et al. [1] adapted an XML approach to YouTube, and Gao et al. [90] developed related online MPEG-V haptic editing. Programming is extremely flexible but abstract; designers are usually not able to feel what they design.

Hardware Prototyping Platforms

Hardware prototyping platforms speed physical development. Arduino (arduino.cc) is a popular open-source microcontroller and development platform with a dedicated editor. Phidgets (phidgets.com) facilitate rapid hardware prototyping with over 20 programming languages [97]. More recently, WoodenHaptics gives open-source access to fast laser cutting techniques for force feedback development [85]. These platforms, especially Arduino, have made significant improvements to enable rapid iteration and hardware sketching. We can do better: these platforms require programming, hardware, and haptics expertise, and include inherent time costs like compilation, uploading, and debugging.

2.4.3 Conceptual Tools

Design can be supported by having a good metaphor to frame it. Here, we report on different ways to approach a haptic design: higher-level perspectives, metaphors for designers, and the language of touch.

Perspectives

Some higher-level perspectives offer outcome targets or design attitudes to guide haptic practitioners. "DIY Haptics" categorizes feedback styles and design principles [111, 168]. "Ambience" is proposed as one target for a haptic experience [171].
Haptic illusions can serve as concise ways to explore the sense of touch,explain concepts to novices and inspire interfaces [109]. “Simple Haptics”, epit-omized by haptic sketching, emphasizes rapid, hands-on exploration of a creativespace [185, 186]. Haptic Cinematography [63] uses a film-making lens, discussingphysical effects using cinematographic concepts. Haptics courses are taught with avariety of pedagogical viewpoints, including perception, control, and design; theyprovide students with an initial repertoire of skills [135, 196].Metaphors for designHaptics has often made use of metaphors from other fields. Haptic icons [167],tactons [20], and haptic phonemes [77] are small, compositional, iconic represen-tations of haptic ideas. Touch TV [183], tactile movies [146], haptic broadcasting[38], and Feel Effects [129] attempt to add haptics to existing media types, espe-cially video. Haptic Cinemotography [62, 63] uses a film-making lens for haptic-augmented multimedia, including basic principles of composition when combinedwith video [100].Musical analogies have frequently been used to inspire haptic design tools,especially VT sensations. The Vibrotactile Score, a graphical editing tool repre-senting vibration patterns as musical notes, is a major example [156, 158]. Othermusical metaphors include the use of rhythm, often represented by musical notesand rests [23, 26, 39, 265]. Earcons and tactons are represented with musical notes[20, 22], complete with tactile analogues of crescendos and sforzandos [25]. Theconcept of a VT concert found relevant tactile analogues to musical pitch, rhythm,and timbre for artistic purposes [103]. Correspondingly, tactile dimensions havebeen also been used to describe musical ideas [72]. Though music lends its vocab-ulary to touch, non-temporal properties remain difficult-to-describe.25Language of TouchThe language of tactile perception, especially affective (emotional) terms, is an-other way of framing haptic design. Many psychophysical studies have been con-ducted to determine the main tactile dimensions with both synthetic haptics andreal-world materials [76, 193]. Language is a promising way of capturing userexperience [191], and can reveal useful parameters, e.g., how pressure influencesaffect [290]. Tools for customization by end-users, rather than expert designers,are another way to understand perceptual dimensions [239, 240]. However, thiswork is far from complete; touch is difficult to describe, and some even questionthe existence of a tactile language [132]. In this dissertation, we encounter the lan-guage of touch in some chapters (Chapter 3, Chapter 6). However, we approach ourdesign tools without an imposed language, giving designers fast, powerful controlover low-level parameters like frequency and amplitude.2.5 MethodologyWe used a mixed-method approach to the work presented in this dissertation. Wecombined research through design with qualitative and quantitative analysis togather complementary information. The particular blend of methods dependedon the project. We began with design and qualitative techniques to gather rich,generative data from design tools. We then iteratively refined our qualitative meth-ods, and built towards large-scale data collection for our generated theories withquantitative analysis and deployed tools.We began our research process using research through design [292]. Beginningwith design allowed us to iteratively frame our questions [293] while generatinginitial theories [91]. 
In addition, by building usable support tools, we gained directknowledge of how to work haptic devices. A design approach is constructive;working tools exactly specify design ideas, can be shared with haptic designersas a practical contribution, and provide an extendable platform to facillitate futureresearch. For example, implementing Tactile Animation informed how to architectMacaron, which we could then directly extend as MacaronBit (Section 7.5). Wediscuss the relationship between design and research in Section 2.3.4.Qualitative research methods let us study complex phenomena in flexible, rig-26orous way. They are generative, supporting building a hypothesis from observa-tions with few theoretical assumptions. Qualitative frameworks support diverse,rich data, including observational findings, interview, and screen-captured video.This data helped us capture the experience of HaXD and develop guidelines foriteration on our tools’ early implementations. We drew from two methodologies inour qualitative studies: phenomenology and grounded theory.Phenomenology is both a philosophical tradition and a social science method-ology based upon that tradition that involves the study of subjective experience.We used Moustakas [187] as our primary guide through both, as it focuses onpractical methodological concerns but provides a strong philosophical background;Creswell [50] provided an overview of various methodologies and resources forphenomenology. Phenomenology as a methodology has been used in psychologyto investigate topics ranging from visual illusions to tactile experience [50, 191,212]. In this work, we specifically use the Stevick-Colaizzi-Keen methods as de-scribed by Moustakas [187]. This technique handles textual data: transcripts aredivided into non-overlapping, non-redundant statements about the phenomenonknown as Meaning Units (MUs). MUs are then clustered into emergent themesthrough affinity diagrams, writing and re-writing of thematic descriptions, and re-flection guided by phenomenological philosophy. Because HaXD is not a com-monly experienced phenomenon, we drew from phenomenological methods to ex-amine the limited experiences of in-lab stimuli. This approach is similar to previ-ous work on sensory perception, e.g., geometric illusions [212], or mid-air tactiledisplays [191]. In our case, we studied the experience of design haptic sensationsusing our provided design tools.Grounded theory is another well-known methodology first described by Glaserand Strauss [95]. We adopted the more flexible methodology described by Corbinand Strauss [49], as it allowed us to integrate with our phenomenological meth-ods. We principally adapted the methods used in grounded theory, specifically,memoing (writing about each focused quotation), constant comparison (comparingeach new memo and codes to previous ones), and open and axial coding (creatingcodes, or concepts, linking them together, then categorizing statements or obser-vations based on codes). These techniques facillitated video analysis in Chapter 4and Chapter 5, and allowed for quantitative count-based data and simple statistics27to complement our interview-based findings. Section A.2 provides examples ofintermediate products from analysis in Chapter 5.While some design scholars employ qualitative techniques [52, 53, 235], othershave developed quantitative techniques. 
Statistical analysis and other quantitativemethods can analyze large data sets and complement qualitative research questions.When studying graphic design for ads, Dow et al. [67] used ratings by experts aswell as click-through rates and other online analytics for actual deployed ads fromtheir study. Kulkarni et al. [150] used MTurk to generate sketches of aliens inseveral conditions (of exposure to examples), then deployed another MTurk taskto label each drawing with features like antennae or feet. Lee et al. [155] hadend-users rate graphic designs in both an in-lab study and over Mechanical Turk,and recorded time participant designers spent on each component. Large-scaleonline studies are currently not available for haptic technology, which requiresdata collection infrastructure. As we started to use quantitative data, we also beganto build this infrastructure using an online editor with analytic logs in Macaron(Chapter 5) and examining the potential for crowdsourced feedback with HapTurk(Chapter 6). While a valuable goal, large-scale quantitative feedback on HaXDremains outside the scope of this dissertation.28Chapter 3Sketch: The Haptic InstrumentFigure 3.1: Concept sketch of a haptic instrument. Both users experience thesame sensation, controlled in real-time.Preface – The haptic instrument case study1 was the first of our three vibrotactiledesign tools. We studied the role of real-time feedback and informal, synchronouscollaboration on HaXD using musical instruments as inspiration and participantswith haptics experience as proxies for hapticians. We built a haptic instrument,mHIVE, in a tablet-based interface, and used phenomenology to begin developingour evaluation methods. This study was a small but important first step in ourthinking: we found mHIVE was effective for exploration but not refinement, which1Schneider and MacLean. (2014) Improvising Design with a Haptic Instrument. Proceedings ofHaptics Symposium – HAPTICS ’14.29led to distinguishing the design activities of sketch and refine.3.1 OverviewAs the need to deploy informative, expressive haptic phenomena in consumer de-vices gains momentum, the inadequacy of current design tools is becoming morecritically obstructive. Current tools do not support collaboration or serendipitousexploration. Collaboration is critical, but direct means of sharing haptic sensationsare limited, and the absence of unifying conceptual models for working with hap-tic sensations further restricts communication between designers and stakeholders.This is especially troublesome for pleasurable, affectively targeted interactions thatrely on subjective user experience. In this paper, we introduce an alternative designapproach inspired by musical instruments – a new tool for real-time, collaborativemanipulation of haptic sensations; and describe a first example, mHIVE, a mo-bile Haptic Instrument for Vibrotactile Exploration. Our qualitative study showsthat mHIVE supports exploration and communication but requires additional vi-sualization and recording capabilities for tweaking designs, and expands previouswork on haptic language.3.2 IntroductionHaptic feedback has hit the mainstream, present in smartphones, gaming and auto-mobile design, but our knowledge of how to design haptic phenomena remains lim-ited. There are still no agreed-upon vocabularies or conceptual models for hapticphenomena [76, 154, 158, 191], in contrast to other modalities (e.g., using theoryof minor chords to evoke a sad emotion in music). 
For subjective qualities, such aspleasant alerts or frightening game environments, prospects are even more limited.Design is still based on trial and error with programming languages, limiting ex-ploration. The lack of established conceptual models or design frameworks furtherchallenges communication between designers and stakeholders.Using a music composition metaphor (as in [158]), we are writing music with-out ever playing a note. Instead, we compose a work in its entirety, then listen to theresult before making changes. In contrast, musicians often use their instruments asa tool for serendipitous exploration when designing music and can draw upon mu-30Output DeviceControl DeviceReal-time Feedback Shared OutputDistributedUser(s)AsynchronousFigure 3.2: The haptic instrument concept. One or more people control theinstrument, and receive real-time feedback from the device. Any num-ber of audience members can feel the output in real time as well. Con-trol methods can vary, from traditional musical control devices (such asthe M-Audio Axiom 25, used in preliminary prototypes) to touchscreentablets (used in mHIVE). Output devices vary as well.sical theory. Furthermore, music is collaborative, with communication facilitatedby a reference point of a sound. Touch, however, is a personal, local sense, makingit difficult to discuss stimuli.Facilitated exploration and collaboration should streamline the haptic designprocess and inform a guiding theory, analogous to those for musical composition.Designers will attain fluency with new devices and control parameters, while col-laborative elements will get people designing in groups. A usable haptic languagemay emerge from their dialogue.Our approach is to directly address these shortcomings with the development ofa haptic instrument, inspired by musical instruments but producing (for example)vibrotactile sensations rather than sound (Figure 3.1). Haptic instruments have twomain criteria: they provide real-time feedback to the user to facilitate improvisationand exploration, and produce haptic output to multiple users as a what-you-feel-is-what-I-feel (WYFIWIF) interface. This allows for a dialogue that includes a hapticmodality: haptic instruments create a shared experience of touch, allowing for acommon reference point. We developed a vibrotactile instance, mHIVE (mobileHaptic Instrument for Vibrotactile Exploration), as a platform to investigate thisconcept. Our main contributions are:• A definition of the haptic instrument concept & design space.• A fully-working haptic instrument (mHIVE).31• The novel application of an established psychological methodology, phe-nomenology, to investigate mHIVE’s interface and subjective tactile experi-ences.• Preliminary results from a qualitative study that show mHIVE supports ex-ploration and collaboration, and implications for the design of future hapticdesign tools.In this paper, we first cover the related work of haptic design tools and hapticlanguage, then define the haptic instrument, its requirements, features, and designspace. We report the design of mHIVE, our methodology, and preliminary results,and conclude with future directions for haptic tool design and research into a hapticlanguage.3.3 Related WorkWe cover previous work related to musical metaphors for haptic design, other toolsfor haptic design, and the language of haptics.3.3.1 Musical Metaphors in Haptic DesignMusical analogies have frequently been used to inspire haptic design tools. 
The vi-brotactile score, a graphical editing tool representing vibration patterns as musicalnotes, is an example of controlling vibrotactile (VT) sensations [156, 158]. The vi-brotactile score provides an abstraction beyond low-level parameters and can drawfrom a musician’s familiarity with the notation, but we can take this idea further:when writing a song, a musician might improvise with a piano to try out ideas. Weare inspired by the vibrotactile score and musical instruments, but define hapticinstruments as a more general concept than literal musical instruments for touch.Other musical metaphors include the use of rhythm, often represented by musi-cal notes and rests [23, 26, 39, 265]. Tactile analogues of crescendos and sforzan-dos have proven valuable to designing changes in amplitude [25]. Indeed, Brew-ster’s original earcons and tactons were represented with musical notes [20, 22].The concept of a vibrotactile concert or performance was explored to identify rel-evant tactile analogues to musical pitch, rhythm, and timbre for artistic purposes32[103]. As well, tactile dimensions have been used to describe or map to musicalideas [72]. Musical concepts have been widely used in the design of vibrotactilesensations, which we draw upon when designing mHIVE.3.3.2 Other Haptic Design ApproachesMany tools have been developed to make it easier to work with the physical param-eters of a haptic device. The Hapticon Editor is a graphical software tool that allowsdirect manipulation of the waveform for vibrations [76], and in another approach,piecing together of smaller iconic idioms [258]. This idea is best encapsulated by“haptic phonemes”, the smallest unit of meaningful haptic sensations that can becombined [77]. A similar approach was used with TactiPEd, a graphical metaphorfor control of wrist-based actuators, by controlling the low-level parameters of fre-quency, amplitude, and duration [199]. Haptic instrument parameters can be low-or high-level, but we use similar parameters with mHIVE.Non-graphical approaches have also contributed to haptic design. Program-ming has benefitted from the use of toolkits such as HapticTouch, which useshigher-level descriptors (“Softness”, “Breakiness”) to control tangibles [154]. Thougha promising direction, the vocabulary is not empirically grounded, and develop-ers still have to deal with physical parameters. Hardware sketches and designingthrough making are also important approaches, since the immediate feedback ofbeing able to feel haptics is crucial [186].3.3.3 Haptic LanguageInvestigation into the language of tactile stimuli has a long history in psychologicalstudies [193]. Many psychophysical studies have been conducted using factor anal-ysis or similar approaches to determine the main tactile dimensions [193], but thesehave looked at materials rather than synthesized vibrotactile sensations, and haveprimarily been deductive (evaluating a pre-determined set of terms) rather than in-ductive (asking participants to describe sensations without prompting). Other workhas shown little consensus on constant meanings for difference tactile dimensions,or whether a tactile language even exists [132]. There is a clear need to empir-ically investigate the subjective experience of touch-based interfaces, for which33phenomenology is ideal [50, 187].Our study is perhaps most closely related to Obrist, Seah, and Subramanian’swork on the perception of ultrasound transducers [191]. 
Their study examined thelanguage used to describe two different sensations, one oscillating at 16 Hz and theother at 250 Hz. Though they also used phenomenology, our study differs in twoimportant ways: we explore vibrotactile sensations rather than ultrasound, and giveour participants a way of controlling the phenomenon directly, allowing for morecoverage of the stimulus design parameters. A more deductive approach by Zhengand Morell also looked at how pressure and vibration actuators influenced affect,noting that affect influences attention, and documented qualitative descriptions ofthe sensations [290].3.4 Defining the Haptic InstrumentWe define a haptic instrument as a tool for general manipulation of one or morehaptic (tactile, force-feedback, or both) devices that provides real-time feedback toanyone controlling the device, and can produce identical shared (WYFIWIF) out-put to all users to facilitate discussion and collaboration. Manipulation can includeideation, exploration, communication, recording, refinement, and articulation. Ma-nipulation can be for utilitarian purposes (e.g., designing haptic notifications) orartistic expression (e.g., a haptic performance). Output devices can be purely out-put, or interactive. Furthermore, although haptic devices must be involved, multi-modal experiences could easily be created by combining a haptic instrument withauditory or visual output.23.4.1 Design DimensionsThere are several main design dimensions that can be considered in a haptic instru-ment (outlined in Figure 3.2). A haptic instrument can occupy multiple positionson these dimensions.Asychronous/synchronous. Though a haptic instrument must provide real-time feedback, its collaborative (shared-output) aspect could be either synchronous2One could even imagine a multimodal instrument such as Asimov’s Visi-Sonor [8] or its parody,Futurama’s Holophonor [87].34(by having multiple people experience the real-time output) or asynchronous (byallowing for recording and playback, important for design).Collocated/distributed. A haptic instrument’s output could be present onlyfor users in the same room, or be broadcast over a network to people around theworld. For example, multiple mobile devices could all display identical output in adistributed manner.Private/shared control. A haptic instrument’s control could be private (op-erated by a one person at a time) or shared (multiple users control the display).Shared control could be collocated or distributed (e.g., a web interface and sharedobject model).Output mechanism. Each haptic instrument will control a haptic device,which has its own mechanism for providing a haptic sensation (e.g., vibrotactilesensations). Because haptic devices can be complex and combine multiple mech-anisms, this is a large space in its own right. Characterizing the different displaymechanisms is something that we must leave to future work. Suffice it to say, ahaptic instrument will be different depending on its output device.Number of haptic instruments or output devices. One consideration iswhether a haptic instrument is intended to operate alone, or with other haptic/-multimodal instruments. One can imagine haptic jam sessions for inspiration andideation, or even form haptic bands for artistic expression. This is highly related toprivate/shared control – there is a fine line between several identical haptic instru-ments with private control, and a single haptic instrument with shared control andseveral output devices. 
Note that a haptic instrument may involve several devicesto produce shared-output.Control mechanism. Similarly, a haptic instrument could be controlled in avariety of ways. From musically-inspired MIDI controllers to smartphone applica-tions, we envision a wide variety of control methods. Even a real-time program-ming environment might be appropriate for complex interactive sensations, so longas the control mechanism works with the output device’s paradigm.We expect that haptic instruments could provide both immediate and long-termvalue. We hope haptic instruments will improve the design process immediately,by supporting exploration and collaboration. Over time, their use could lead to anatural, emergent design language valuable in its own right. One can also imagine35a general tool composed of several virtual haptic instruments, much like digitalmusical synthesizers.3.5 mHIVEWe developed mHIVE to begin to explore how a haptic instrument should workand what it should do (Figure 3.3). mHIVE is collocated, synchronous, and mostlyprivate in control; it accommodates shared display via dual Haptuators [280] andis operated with a single-touch tablet-based interface (Figure 3.4). We began withvibrotactile design because VT sensations are common, do not require interactiveprogramming, are controlled through waveforms (analogous to music), and theirlow-level control parameters are well understood. A touchscreen allowed directmanual control.mHIVE offers real-time control of frequency, amplitude, waveform, envelope,duration, and rhythm, identified as the most important parameters for vibrotactilesensations [20, 25, 26, 103, 215].The main view controls amplitude (0 to 1) on the vertical axis, and frequency(0-180Hz, determined by piloting) horizontally. Amplitude and frequency werecombined because we modeled them both as continuous controls: dynamics ofcontinuous amplitude have been shown to be a salient design dimension [25, 103],and we did not want to choose discrete bins for frequency at this early stage. Fur-ther, single-handed control was essential – the other hand is required to feel theoutput. These axes were labeled to help users understand what they were and togive general sense of the values. A two-dimensional visual trace shows the pre-vious two seconds of interaction history with the main view, intended to providefeedback and aid memory about drawings that were used.VT duration and rhythm are directly mapped to screen-touch duration andrhythm. In analogy to musical timbre [20, 103], we provided four waveforms: sine,square, rising sawtooth and triangle. Sine and square are distinguishable [103], butwe added sawtooth and triangle waveforms to expand the palette.The attack-decay-sustain-release (ADSR) envelope controls amplitude auto-matically as duration of the note continues, as a 0-to-1 multiplier of the amplitudedisplayed on the main amplitude-frequency input. Attack determines the amount of36Waveform ADSR EnvelopeRecord & ReplayAmplitudeFrequencyFigure 3.3: mHIVE interface. Primary interaction is through the amplitude-frequency view, where visual feedback is provided through a circle (cur-rent finger position) and a trail (interaction history).time (in milliseconds) to ramp the amplitude from 0 (none) to 1 (full). Decay deter-mines the amount of time (in milliseconds) to ramp the amplitude from 1 (full) tothe sustain level. Sustain determines the amplitude level (from 0 to 1) held as longas the user keeps a finger on the display, playing a haptic note. 
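To make this mapping concrete, the sketch below shows one way the normalized touch position could be converted to mHIVE-style frequency and amplitude values, and how an ADSR multiplier might be computed over time (the release stage, described next, is included for completeness). This is an illustrative reconstruction, not mHIVE's actual source code; all names, signatures, and example parameter values are assumptions, apart from the 0-180 Hz frequency range mentioned above.

// Illustrative sketch (not mHIVE's source code): mapping a normalized touch
// position to frequency and amplitude, and computing an ADSR amplitude
// multiplier over time, including the release ramp described below.
public class HapticNoteControl {
    static final double MAX_FREQ_HZ = 180.0;   // horizontal axis range from the text

    // x, y in [0,1]: horizontal position -> frequency, vertical position -> amplitude.
    static double touchToFrequency(double x) { return x * MAX_FREQ_HZ; }
    static double touchToAmplitude(double y) { return y; }

    // ADSR multiplier (0..1). tMs is time since touch-down; tSinceReleaseMs is
    // negative while the finger is still down, >= 0 once it has been lifted.
    static double adsr(double tMs, double attackMs, double decayMs, double sustain,
                       double releaseMs, double tSinceReleaseMs) {
        double held;
        if (tMs < attackMs)                held = tMs / attackMs;                                      // ramp 0 -> 1
        else if (tMs < attackMs + decayMs) held = 1.0 - (1.0 - sustain) * (tMs - attackMs) / decayMs;  // 1 -> sustain
        else                               held = sustain;                                             // hold while touching
        if (tSinceReleaseMs < 0) return held;                                                          // finger still down
        return Math.max(0.0, held * (1.0 - tSinceReleaseMs / releaseMs));                              // ramp down to 0
    }

    public static void main(String[] args) {
        double freq = touchToFrequency(0.5);                  // 90 Hz
        double amp  = touchToAmplitude(0.8) * adsr(120, 50, 100, 0.6, 200, -1);
        System.out.printf("freq=%.0f Hz, amp=%.2f%n", freq, amp);
    }
}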
Release determinesthe amount of time (in milliseconds) to ramp the amplitude from the sustain levelto 0 (none). This envelope is a common feature of synthesized or digital musicalinstruments, and was noted as particularly useful in the Cutaneous Grooves project[103].During piloting, we noticed that the ADSR concept was difficult to explain. Wethus developed a novel interactive visualization, where the user could change theenvelope parameters by dragging circles around. A red line operates as a cursor orplayhead, showing the current progress through the envelope, looping around thedotted line when the sustain level is held.Recording functionality was added to support more advanced rhythms and rep-etitions, and to allow users to save their sensations for later comparison. The recordfeature captures changes in frequency, amplitude, waveform, ADSR, and replayedrecordings, allowing for compound haptic icons to be created. During playback,all changes are represented in the interface as if the user had manipulated themin real-time. At this time mHIVE only produces a single output sensation (with a37single waveform, ADSR setting, frequency, and amplitude). Multitouch, layering,and sequencing (automatically playing multiple notes with a single touch) are notsupported, as the semantics were too complex for a first design.mHIVE is implemented in Java using the Android SDK [6], and the FMODsound synthesis library [84] to produce sounds, sent to two or more Haptuatorsthrough an audio jack. We deployed mHIVE on an Android Nexus 10 tablet run-ning Android 4.2.1.3.6 Preliminary Study MethodologyWe conducted a preliminary qualitative study to investigate two questions. First,is mHIVE an effective tool for the expression, exploration, and communication ofaffective phenomena? Second, what language, mental models, and metaphors dopeople use to describe vibrotactile sensations, and how do they relate to mHIVE’slow-level control parameters?We collected and analyzed our data using the methodology of phenomenology,an established variant of qualitative inquiry used in psychology to investigate topicsranging from visual illusions to tactile experience [50, 191, 212]. Phenomenologyexplores subjective experience, appropriate for an investigation into the more in-tangible qualities of pleasantness and affect. At this point, the rich, inductive dataof qualitative analysis is more valuable than a controlled experiment with statisticalanalysis.In particular, we use the Stevick-Colaizzi-Keen method as described by Mous-takas [187]. In-depth interviews are conducted with a small number of partici-pants. The interviewer, Researcher 1 (R1), also documents his experience, as if hewas interviewing himself. Then, R1 transcribes each interview, including his own.Transcripts are divided into non-overlapping, non-redundant statements about thephenomena known as Meaning Units (MUs). This considers every statement thatthe participants make, and does not discount any due to bias or selective searching.Then, MUs are clustered into emergent themes. We interpret our themes in theDiscussion.383.6.1 ProcedureOur 1-hour open-ended interviews used the following protocol:1. Ask the participant for their background: occupation, experience with touch-screens, haptics, music, and video games.2. Demonstrate mHIVE to the user, and invite them to explore while thinkingaloud to describe the sensations they feel.3. 
Probe the design space by asking participants to explore different controlparameters, and to explore their metaphors (e.g., if the participant describesa sensation as “smooth”, R1 would ask them to try to produce a “rough”sensation).4. Ask the participants to produce sensations for the six basic cross-culturalemotions documented by Ekman [74], and rank how well they think theirsensation represents the emotion on a 4-point semantic differential scale(Very Poorly, Somewhat Poorly, Somewhat Well, Well). This was done bothas an elicitation device to gather a wider range of interactions with mHIVE,and to directly investigate a design task.5. Set the Haptuators down, and ask the participants to describe their experienceof working with mHIVE in as complete detail as possible to evaluate thedevice itself.R1 conducted the interviews and analysis, which required specialized knowledgeof mHIVE. Scores of inter-rater reliability common with other qualitative analy-ses (e.g., grounded theory [49]) are inappropriate and unavailable, as we did notconduct deductive, low-level coding. To improve reliability, R1’s documented ex-perience was analyzed first, and then consulted during analysis to remove bias (e.g.,to not use terms only used by the experimenter).3.7 ResultsWe sought participants with experience designing haptics as a proxy for expert de-signers for our initial study. Four participants were recruited through email lists39Figure 3.4: Study setup. Both the participant (left) and the interviewer (right)feel the same sensation as the participant controls mHIVE.and word-of-mouth (P1-4, three male), and were all in the age range of 26-35 withself-reported occupations including graduate students or post-docs in informationvisualization, HCI, and human-robot interaction). All had experience working withhaptic technology, and (because of this requirement) all knew the main researcherin a professional capacity, although only P2 had seen earlier prototypes of thehaptic instrument. The small sample size, typical for phenomenological studies[50], was appropriate for the rich data we wanted. Data collection ended when weachieved saturation of new results, and had a clear direction for our next iteration.Here we report the three major themes that emerged during analysis: mHIVE’ssuccess as a haptic instrument, mHIVE’s limitations that reveal more detail aboutthe haptic design process, and the use of language in the study.3.7.1 mHIVE Succeeds as a Haptic InstrumentOur results suggest that mHIVE can be effective for exploration of a design space,and communication in the haptic domain. Overall, mHIVE was well received, seenas a novel and promising tool. “I definitely liked it” (P1), “I think there should bemore devices like this for designing haptic icons” (P2).Serendipitous exploration. Participants reported that mHIVE was best servedto explore the design space, generate a number of ideas, and try things out. Serendip-itous discoveries and exclamations of surprise were common. Participants wereable to “accidentally stumble upon something” (P2) as they explored the device.“I felt I could get a large variety”, “I could easily play around with the high-level40to find out what was neat” (P3).Communication. mHIVE established an additional modality for dialogue.The dual outputs created a shared context, demonstrated by deictic phrases: theadditional context of the vibrotactile sensation was required to make sense of thestatement. 
The use of “that” and “there”, reminiscent of the classic “Put ThatThere” multimodal interaction demo [17] indicate a shared reference point was es-tablished from the haptic instrument. “So there’d be like, (creates a sensation onthe device), which is pretty mellow” (P3).In particular, P4 successfully communicated the sensation of sleepiness to theR1, by asking whether R1 could guess the sensation. “Can you guess it?” (P4)“Sleepy?” (R1) “Yeah. Pretty good” (P4). The dialogue worked as a two waychannel, as R1 was able to phrase questions using the device. “It was different”(P2) “How was it different?” (R1) “You delayed the first part, it felt new” (P2).Certain sensations, like a feeling of randomness, could only be felt when an-other person controlled mHIVE. “When someone else does it, I feel better, it’s like,you cannot tickle yourself” (P2).3.7.2 Tweaking through Visualization and ModificationDuring analysis, some key directions for future design emerged around visualiza-tion and control capabilities.Inability to tweak. Though mHIVE supported exploration and collaboration,we found it was inadequate as a standalone design tool. Few created sensationswere considered to be final. Many descriptions were hedged and in the designtask, few sensations captured the emotional content well.“I dunno, maybe that’safraid?” (P1), “Still felt that you can make them better” (P2), “To me that’s morefuming (laughing) than it is angry” (P3). On some occasions, participants were cer-tain about their descriptions. “Sad, definitely down on the amplitude with sad. . . ohthat’s totally sad. Yeah.” (P1). This was uncommon, and usually tied to discoveringan ideal sensation during the design task.More visualization and recording. Part of mHIVE’s inability to supporttweaking was due to cognitive limitations for both memory and attention. Partici-pants found it difficult to remember what they had tried before, and to pay attention41to the output while simultaneously controlling it. “There’s a lot of variables which,when I’m trying to compare between two configurations. . . it was hard sometimesto remember what I had tried” (P3), “I definitely liked being able to feel a stimuluswithout having to implement it, you know, it allows me to focus more on what itfeels like” (P1).Participants suggested that although visualization and recording features helpedsomewhat to overcome these limitations, more was needed. All requested greateremphasis on recording through repetition or looping, both to aid memory and allowfor focus on the sensation independent of device control.Allowing persistent, modifiable sensations and alternative visualizations couldalso help participants overcome these limitations. “The recording records what Ido, but it’d be nice to have it repeat stuff” (P3), “It might conceivably be nice to beable to, you know, draw a curve, draw a pattern, draw like you would in paint, andthen be able to manipulate it, replay it, move the points, see what happens” (P1).3.7.3 A Difficult LanguageOur study was too small to analyze language patterns in detail, but exposes emerg-ing trends.Pleasantness, ADSR, and frequency. Participants often started with a state-ment of like or dislike rather than a description. Pleasant sensations often involvedthe ramp-in and ramp-out (“echo” or “ringing”) of the ADSR envelope, or lower-frequency sensations. Longer, higher frequency without ramp-in and ramp-outwere less pleasant. 
“I don’t know how else to describe it, I kinda like it” (P1),“Yeah, this [ADSR] seems natural, somehow”, “It feels unnatural to kill the echoright away” (P2), “I like this [low-frequency] sensation cuz to me it feels a lot likepurring” (P3).Waveform. Participants all noticed differences between waveforms, but wereoften challenged in expressing them (P4 used the musical term “timbre”). Squarewaves in particular were distinct, with a greater range and stronger affinity to me-chanical sensations. “It’s interesting, they feel more different than I thought theywould” (P1), “If you want to make something feel like a motorcycle, you woulddefinitely need square wave” (P2).42Aural/haptic metaphors drawn from previous experience. For the mostpart, participants used concrete examples and direct analogies to describe sensa-tions, often drawn from their previous experiences. One stand-out strategy em-ployed by all participants was onomatopoeias: “beeooo” (P1&4), “vroom” (P1),“bsheeeooo”, “boom”, “neeeaa”, “mmmMMMmmm” (P2), “pa pa pa pa”, “tumtum tum tum”, “tumba tumba tumba tumba” (P3); “upward arpeggio, like, (singingwith hand gestures) na na na naaa” (P4). Other sound-based metaphors were verycommon, including hum, buzz, whistle, rumble (P1); bell (P1, P2); squeaky, creak(P2); or thumpy (P3). Still other descriptors were directly haptic in nature: rough,flat (P1); sharp, round, ticklish (P2); sharp, smooth, cat pawing (P3); impatientfoot tapping (P4).3.8 DiscussionHere we interpret these themes to draw implications for haptic design tools, andcompare to research on the language of haptics. We then reflect upon our method-ology and limitations.3.8.1 Design ToolsmHIVE was able to achieve the two main goals of a haptic instrument, facilitatingboth exploration and collaboration. Participants were clearly able to explore thedifferent low-level parameters, and encountered serendipitous or unexpected sen-sations through improvisation. mHIVE created a shared experience that facilitatedcommunication between R1 and the participants. We can thus conclude that hapticinstruments are a promising new tool in a haptic designer’s arsenal, with a first,successful implementation in mHIVE.However, the second theme shows that serendipity and communication are onlypart of the equation. mHIVE does not serve as a general editor of haptic sensations.In particular, participants found their attention split when controlling the deviceand feeling the sensation; perhaps the real-time control should allow for a rapid, butnot instantaneous, switch in focus between control and perception. More generally,participants were unable to tweak sensations because there was insufficient supportfor comparing ideas or evolving an existing idea.43In hindsight, this general difficulty is understandable given the broader contextof the musical instrument analogy we used for inspiration. Musical instrumentsare not used to write songs on their own, but combined with notation or record-ing media. A similar combination of a haptic instrument and recording might bedescribed more succinctly as a haptic sketchpad. Sketching is critical in designbecause it allows for the evolution of an idea through multiple sketches, as wellas criticisms, comparisons, and modifications [53]. Emphasizing a history featurethat supports multiple versions of sketches, the user could develop an idea as ifwith a multiple pages in a sketchbook. Haptic sketching in hardware has alreadybeen shown to be effective [186]. 
As well, a visual metaphor resonates with thedesire for more effective visualization.Ultimately, haptic instruments may be most useful as one element in a suite, orcomponent of a more general tool. A haptic instrument could complement a graphi-cal editing tool that does support tweaking, such as the vibrotactile score [156, 158]or the hapticon editor [76]. As part of a more comprehensive tool, mHIVE couldbe improved to reduce cognitive barriers to memory and attention. Alternatively,we could add functionality to mHIVE to support looping, visualization, and directmanipulation of the sensations within the tool. We will explore these options as weiterate on mHIVE’s design in future work.3.8.2 LanguageOur preliminary results for language are compatible with the literature, supportingprevious work. Participants’ readiness to say whether a sensation was pleasant ornot supports the view that touch is affective in nature, and that knowing what onelikes or doesn’t like is a primary function of touch [132]. ADSR pleasantness andhigh-frequency unpleasantness are both consistent with the literature: Zheng andMorell note that ramped signals influenced affect more positively than step signals,and 3s high-frequency sensations were annoying or agitating [290]. The heavy useof onomatopoeias is reminiscent of Watanabe et al.’s work with static materials[273]. However, in our study, onomatopoeias were often used to express dynamicsensations (beeeooo being a gradual decrease in amplitude and frequency), whichmight be a useful direction for future work.443.8.3 Methodology and LimitationsAlthough phenomenology is uncommon in the haptics community (excluding [191]),we found it to be an effective way to empirically examine the subjective experienceof using mHIVE. Because the community is still developing processes and tasksfor haptic design, qualitative studies seem to be an especially appropriate way totackle these problems. Once we have further defined haptic design, we can thenmove to more task-based, experimental methods.Our study was a first round of feedback to inform our next iteration, and haslimitations. First, our participant pool is (intentionally) small, and participantswere all collected through our professional network, as people with haptic designexperience are rare. As we continue to tackle the problem of haptic design, wehope to seek out a larger and more diverse pool of participants, and explore morerealistic design tasks.3.9 ConclusionIn this paper, we have introduced the concept of the haptic instrument, a new toolfor haptic designers that supports serendipitous exploration and collaboration. Wedescribed the implementation of mHIVE, a mobile Haptic Instrument for Vibro-tactile Exploration, with design decisions drawn from the literature. Our findingssuggest that haptic instruments are effective tools for improvised exploration andcollaboration, but only support part of the design process. Additional tools or fea-tures are required to support tweaking. Finally, we reported the use of languagewhen interacting with mHIVE, expanding upon several conclusions in the litera-ture.We believe this to be a step towards a greater goal, the establishment of hapticdesign as its own discipline, with processes, tools, and best practices. Future workwill build on this base as we continue to examine the haptic design process. We willconsider a haptic sketchpad concept as one way to overcome the cognitive barriers,and allow users to tweak their designs. 
We also hope to apply haptic instrumentsand other tools in more realistic design scenarios. By supporting designers at thiscritical point, we can continue to make haptics more valuable than ever.45Chapter 4Refine: Tactile Animation(a) Animate (b) Render (c) DisplayObject 1Object 2Figure 4.1: Concept sketch for tactile animation. An artist draws an animatedsequence in the user interface and the user experiences phantom 2Dsensations in-between discrete actuator grids.Preface – In this second case study1, we iterated on our findings from the hapticinstrument to build a full authoring tool that supported both sketching and refine-ment. This work expanded to spatial vibrotactile designs with professional non-haptic media designers. We surveyed critical haptic authoring tool features anddeveloped a full rendering pipeline for the tactile animation object, an abstractionable to handle diverse spatial vibrotactile arrays. We evaluated the implementedtool, Mango, with both phenomenology and methods from grounded theory anditerated on our study tasks. Professional animators transferred their non-haptic de-1Schneider, Israr, and MacLean. (2015) Tactile Animation by Direct Manipulation of Grid Dis-plays. Proceedings of the Annual Symposium on User Interface Software and Technology – UIST’15.46sign skills to both explore (sketch) and iterate (refine), but missed features to reusedesign elements and gather inspiration from examples. This theme, also glimpsedin Chapter 3, led to the third design activity: browse, which we cover in Chapter 5.4.1 OverviewChairs, wearables, and handhelds have become popular sites for spatial tactile dis-play. Visual animators, already expert in using time and space to portray motion,could readily transfer their skills to produce rich haptic sensations if given the righttools. We introduce the tactile animation object, a directly manipulated phantomtactile sensation. This abstraction has two key benefits: 1) efficient, creative, iter-ative control of spatiotemporal sensations, and 2) the potential to support a varietyof tactile grids, including sparse displays. We present Mango, an editing tool foranimators, including its rendering pipeline and perceptually-optimized interpola-tion algorithm for sparse vibrotactile grids. In our evaluation, professional anima-tors found it easy to create a variety of vibrotactile patterns, with both experts andnovices preferring the tactile animation object over controlling actuators individu-ally.4.2 IntroductionHaptic feedback is viewed today as a key ingredient of immersive media experi-ences. Body-moving devices in theatre seats, ride vehicles, and gaming platformscan tilt, translate, and shake the user for increased engagement. Recently, arrays ofmultiple actuators have been developed to display expressive, spatial sensations onthe skin [59, 127, 146, 250, 276].Vibrotactile (VT) arrays, which stimulate the skin through vibration, are com-mon in diverse applications from immersive gaming chairs [127] to wearable vestsfor mobile awareness [136]. These displays typically employ sparse actuator ar-rangements to reduce cost and power requirements, using perceptual illusions tocreate continuous sensations [4, 126, 242]. Unfortunately, adoption of VT arraysis limited by a lack of authoring tools. 
Most only support a single actuator [76];those that accommodate multiple actuators control each separately [146, 199, 259],cumbersome for non-adjacent actuators.47To remedy this, we propose the tactile animation object, an abstract, directlymanipulable representation of a phantom sensation perceived in-between physicalactuators. With this approach, designers can efficiently and creatively explore ideasand iterate without worrying about underlying actuator arrangements. As long as arendering algorithm can be developed, this abstraction not only facilitates design,but is compatible with a variety of form factors and technologies.In this paper, we describe the tactile animation object and implement it inMango, a tactile animation tool and pipeline (Figure 4.1). Our contributions are:1) A tactile animation interface grounded in user interviews and prior literature.2) A rendering pipeline translating tactile animation objects to phantom sensationson sparse, generalized VT arrays, optimized with a perceptual study. 3) An eval-uation with professional animators showing accessibility and expressivity. 4) Anexploration of potential applications for tactile animation.4.3 Background4.3.1 Haptic Entertainment TechnologiesHaptic feedback was used in cinema as early as Percepto, a 1959 multisensoryexperience for the movie “The Tingler” [123] with theater seats that buzzed theaudience at strategic moments. Current 4D theaters, rides, shows, and gamingarcades are equipped with sophisticated motion platforms (e.g., D-Box, www.d-box.com) that supplement visual scenes. Large tactile transducers (such as Butt-kickers, www.thebuttkicker.com) that shake the entire seat using the sound streamare also common with gaming and music content. Custom editors (such as D-BoxMotion Code Editor) and software plugins overlay visual and audio content withhaptics, and allow designers to generate, tune and save frame-by-frame haptics inan allocated track.In contrast to displacing the entire body, multichannel haptic devices createpercepts of dynamic and localized haptic sensations on the user’s skin [127] and inmid-air [276]. Similar devices have been developed for online social interactionsusing custom multi-actuator displays [146, 199, 267]. All of these technologies re-quire extensive programming experience, knowledge of hardware and background48in haptic sciences to generate expressive and meaningful haptic content. With-out guiding principles or haptic libraries, content generation schemes are complex,device-specific, and time consuming.Another class of haptic technology renders high-resolution spatio-temporalpatterns on the skin using a sparse array of VT actuators. These technologies useparametric models of sensory illusions in touch, such as phantom tactile sensations[4], and create illusory vibrations in between two or more VT actuators. This ideahas been used to create a perceived motion flow between two vibrators mountedon the ends of a handheld device [242] and to create across-the-body and out-of-the-body illusions on a mobile device using up to four actuators [159]. The TactileBrush algorithm [126] combined phantom tactile sensations and apparent tactilemotion to render high-resolution and moving haptic patterns on the back using acoarse grid of VT actuators, but paths must be pre-determined (Figure 4.2a). 
Otherspatio-temporal VT illusions such as the “cutaneous rabbit” [262] and Tau andKappa effects [109] can be also used with VT arrays.4.3.2 Haptic Authoring ToolsAs long as designers have considered haptic effects for entertainment media, theyhave needed compositional tools [103]. Requirements drawn from previous workon how to prototype, sketch, or control haptic phenomena using non-programmingmethods are summarized in Table 4.1.The Hapticon editor [76], Haptic Icon Prototyper [258], posVibEditor [219],and Immersion’s Haptic Studio (www.immersion.com) use graphical representa-tions to edit either waveforms or profiles of dynamic parameters (such as frequencyor torque) over time. Another approach is predefining a library of haptic patternsto augment media content. Immersion Corporation’s Touch Effects Studio letsusers enhance a video from a library of tactile icons supplied on a mobile platform.Vivitouch Studio [259] allows for haptic prototyping of different effects alongsidevideo (screen captures from video games) and audio. These tools focus on low-level control of device features rather than a semantic space, and control deviceswith either a spatial or temporal component, but not both simultaneously.Several tools have allowed users to author haptic content using accessible touch-49LR DescriptionLR1 Real-Time Playback [186, 228] Rapid prototyping is essential for workingwith VT sensations, especially in absence of objective metrics. Feeling asensation at design time allows iteration to converge faster to better results.However, too real-time can cause split attention.LR2 Load, save, manipulate [133, 211, 228] A persistent object model is es-sential for sensation editing over longer projects and sharing with other de-signers or across devices. Well-defined actions upon a data structure alsofacilitates features like undo that support experimentation.LR3 Library of effects [76, 114, 199, 258, 259] A library of saved sensationsis an important feature used in previous haptic authoring tools, providinginspiration and preventing designers from re-inventing the wheel.LR4 Device configuration [146, 156, 157, 199] Because of the many typesof haptic devices, a general tool must be able to understand different de-vices. Lightweight configuration files are common in the literature, allowingusers to select specific hardware, specify location and type of actuators, andchoose a rendering algorithm.LR5 Multiple channels & combination of effects [76, 199, 219, 258, 259] Be-ing able to display multiple effects simultaneously, or combine effects viasuperposition or concatenation, is essential for expanding the design space.This is typically represented in a timeline, which represents the temporalbehaviour of any objects.LR6 Visual/direct control metaphor [55, 146, 199] Most previous tools con-sider each actuator separately. When thinking semantically about a spatialsystem, a direct view of the device and actuator layout is critical for directmanipulation.LR7 Audio/visual context [146, 186, 259] Haptic perception depends greatlyon additional senses [109]. 
By providing audio and visual feedback, theseeffects can be mitigated and the designer can experience haptic sensationsin context.LR8 User Feedback [228, 259] Receiving feedback from users, either by demon-stration or A/B testing, is extremely valuable.Table 4.1: Literature Requirements (LRs) for a tactile animation authoring.50(a) Tactile Brush[126]: precomputed paths(b) Tactile Video[146]: frames of tactilepixels(c) Tactile Animation:direct manipulationFigure 4.2: Comparison between related systems.screen interactions. A demonstration-based editor [118] allowed control of fre-quency and intensity by moving graphical objects on a screen. mHIVE [228] con-trols frequency, intensity, waveform and envelope of two tactors with touchscreengestures. Both systems were shown to be intuitive and easy to use for explorationor communication, but faltered when refining more elaborate sensations. Commer-cially, Apple’s vibration editor (since iOS 5, 2011) allows users to create person-alized vibratory patterns by touching the screen, but only produces binary on/offtiming information.Other aids to creating haptic phenomena include haptic sketching [186] forhands-on exploration of haptic ideas in early design, and end-user customizationof tactile sensations [239]. Both emphasize exploration and broad manipulationrather than finely controlled end results. HAMLAT [71] supports authoring of forcefeedback in static 3D scenes. Lee and colleagues [156] used a musical metaphorfor vibrotactile authoring. Schneider et al. introduced “FeelCraft” for end usercustomization of a library of feel effects [225].Kim and colleagues offered combined spatial and temporal control using a tac-tile video metaphor for dense, regular arrays of tactile pixels (“taxels”), includinga feature of sketching a path on video frames [146] (Figure 4.2b). While a promis-ing approach, this tool relies on editing of discrete actuators and frames, with itssketching feature used for input, not as a manipulation method. As well, it doesnot generalize to sparse or irregular displays, and was not evaluated with designers.We suggest that an animation metaphor could provide an easier interaction model,51facilitating key creative activities such as rapid exploration and iteration, especiallythrough a continuous timeline (Figure 4.2c). The control of multi-actuator outputshas also been explored by TactiPEd [199] and Cuartielles’ proposed editor [55].However, these approaches still require the separate control of different actuators,rather than a single perceived sensation produced by the multi-actuator device.4.4 Tactile Animation Authoring ToolOur objective is to provide media designers with a familiar and efficient frameworkfor creating dynamic haptic content. Mango’s design is based on two sets of re-quirements: Literature (“LRs”, Table 4.1), from prior research on haptic authoringtools, and Industry (“IRs”) from interviews with five industry experts in haptic me-dia creation and animation, which confirm and expand upon design decisions forother VT tools.4.4.1 Gathering Design RequirementsWe interviewed two industry experts with haptics experience from a media com-pany (E1-2). E1 uses Max/MSP, OpenFrameworks, Processing, and Visual Studioto create haptic media. E2 is a professional media designer and an expert userof Pro Tools (an industry standard for authoring sound media). 
Together, E1 andE2 previously undertook a six-month training that included generation of dynamichaptic experiences on seats and supporting platforms using audio and video tools.Our interviews included meetings, recordings, and sketches of their experienceduring training.In addition, we conducted contextual interviews of three industry animators(A1-3) interacting with non-tactile animation tools using a think-aloud protocol.A1 and A3 used Adobe After Effects, while A2 used Maya. A1 and A2 weretasked with creating an animation of two balls moving; A3 created an animationbased on a sound file. These interviews yielded rich detail that we compiled intocategories, then compared with our LRs (Table 4.1). LRs 2-7 also emerged inde-pendently from this stage. We extend the LRs with additional expert-drawn indus-try requirements (IRs):IR1 - Animation window allows users to draw tactile animation objects, control52Raster(b) Vector SensationActuator 1Actuator 2Actuator N… …DurationRenderSave SavePlayback(c) Raster SensationActuator 1Actuator 2Actuator N… …DurationFrame(a) Tactile Animation ObjectsNo PathWith PathDevice Configuration(d) DeviceSaveFigure 4.3: Tactile animation rendering pipeline. Users can: (a) create tac-tile animation objects; (b) render objects to actuator parameter profiles(such as amplitude) with our rendering algorithm; (c) rasterize vectorsensations into frames; (d) play the sensation on the device.them in space, and define their motion paths. The window is overlaid with locationand type of haptic actuators, providing visual feedback (LR8).IR2 - Timeline is a time track for a tactile animation object. During playback,the animation is played on IR1 showing the movement of the animation relative tothe tactile object. Object behaviours are linked to time track to visualize temporalvariations. Time tracks are editable by inserting key frames.IR3 - Object tools extend LR2, supporting direct manipulation operations ontactile objects such as “new”, “scale”, “translate”, analogous to object creation andmanipulation in After Effects and Maya.IR4 - Path tools define motion paths of tactile objects (straight lines, curves,input-device traces), and store them in a path library (LR3).IR5 - Haptic rendering schemes compute output waveforms for each actuatorchannel, animated visually in the animation window. Users select the scheme froma list for connected hardware, defined in a hardware configuration file (LR4).IR6 - Global parameter tools allow the user to control the overall feel of thetactile animation object. Analogous to filters and effects applied on the object, thisincludes parameter setting for frequency, intensity and modulation.We developed a tool design from these two sets of requirements. Our Mangoprototype uses Python 2.7 and Tkinter for the rendering pipeline (Figure 4.3) andUI (Figure 4.4), which communicates with haptic devices via USB.534.4.2 Framework for Tactile AnimationIn this section, we present an animation metaphor that allows users to generate tac-tile content in the same way as they would create visual animations and play themreal-time on a VT array. Figure 4.3 shows the workflow of this authoring mech-anism. Designers create tactile animations on a typical animation tool as shownin Figure 4.3a. The animation object is placed in space, and the designer adjustsits size on the visual outline of the VT array. 
The designer then adds movementsand special effects to the object using Mango’s toolset, and plays it to observe itsframe-by-frame sequence.Mango’s rendering engine translates visual animations to tactile animations onthe VT array. Knowing the location of vibrating points on the sparse array ofVT actuators, the rendering engine resolves the animated sequence into individ-ual actuators using the phenomena of phantom tactile sensations [4, 126]. Thephantom sensation is a sensory illusion elicited by stimulating two or more vibra-tory elements on the skin. Instead of feeling the individual vibration points, theuser feels a single sensation in between, whose perceived intensity is defined bythe weighted sum of the intensities of the vibrating elements. Therefore, in eachframe, the animated tactile object is resolved into intensity of actuators on the VTarray (Figure 4.3b). The rendering engine then calculates raw waveforms for eachVT channel (Figure 4.3c) that can either be sent to the VT device to play the ani-mated sequence or exported as a multichannel datafile for later use. Previous workhas interpolated between only two actuators [159, 242]; however, a more general-ized 3-actuator interpolation algorithm allows for arbitrary real-time manipulationof the tactile animation object on grid displays.To accommodate the animation framework, we define three datatype mod-els, for use in the current implementation and future expansion of the Mango tool:Tactile animation objects, high-level hardware-independent data types for tactileanimation; vector formats, high-level hardware-specific control common in previ-ous work; and raster formats, low-level hardware-specific formats for renderingand playback.Tactile animation objects are high-level specifications of virtual sensationsmoving on a 2D VT array (Figure 4.3a). High-level parameters, such as location,54size, and other semantic qualities, can either be constant or variable. Each tac-tile object has a start time and a duration. Object type is also defined for tactileanimations that sets pre-defined parameters and features to animated objects. Forexample, a moving virtual point can have a position, size, and frequency param-eter, while a “rain” effect can have a position and more semantic parameters likeraindrop frequency or size.Tactile animation objects are device-independent. Mango uses a device con-figuration file (LR4) and the rendering engine to create animated VT patterns onhardware. Animation objects can be combined in novel ways, organized in groups,or generate other tactile animations like a particle generator as in a graphical anima-tion tool, and can have paths that constrain motion to a pre-determined trajectory.We prototyped an early version of the tactile animation object in Mango; however,the data type is extensible.Vector formats are similar to those in previous work (e.g., [76]). Instead ofobjected-based definitions, as in tactile animation objects, parameters are definedfor individual actuation. (Figure 4.3b). Parameters include duration, amplitudeenvelopes (e.g., fade-ins and fade-outs), frequency, and start times. Being device-specific, vector formats offer finer sensation control than tactile animation objects(analogous to pixel-level editing of sprites). However, creating a single perceptfrom independent controls can be challenging. 
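As a concrete sketch, the first two datatypes might be skeletonized as follows (class and field names are illustrative only, not Mango's actual implementation):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Keyframe:
    time_s: float   # when this value takes effect
    value: float    # values between keyframes are linearly interpolated

@dataclass
class TactileAnimationObject:
    """Device-independent percept: a virtual point animated in 2D."""
    start_s: float
    duration_s: float
    x: List[Keyframe]                                   # position over the array outline
    y: List[Keyframe]
    r: List[Keyframe]                                   # intensity ("radius"), 0..1
    path: Optional[List[Tuple[float, float]]] = None    # optional motion path

@dataclass
class VectorSensation:
    """Device-specific control: one parameter profile per actuator."""
    start_s: float
    duration_s: float
    frequency_hz: float
    amplitude: Dict[int, List[Keyframe]] = field(default_factory=dict)
    # amplitude[actuator_id] is that actuator's 0..1 envelope over time
```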
This data type is useful when rendering methods for the hardware are not defined or the user wants to control a specific actuator sequence to animate tactile content, such as using the Tactile Brush [126].
Raster format, analogous to a raster-graphics image or WAV file, is suitable for playback operations or exporting to a device-specific format (Figure 4.3c). A raster format contains a matrix of actuator intensities; each row defines the intensities of an actuator and each column contains the intensities at one time instance. Each format also contains a timestamp row defined by the rendering engine's framerate. The playback system parses the raster data, finds the current column, and pushes these actuator settings to the device. This data type is also used for real-time feedback during authoring.
Figure 4.4: Mango graphical user interface. Key components are labeled and linked to corresponding design requirements.
4.4.3 Authoring Interface
The authoring interface allows designers to efficiently create moving tactile content in a familiar environment. Here we describe user interactions, most of which are through the animation window (1) and timeline (2) (Figure 4.4).
Animation Window: A user creates a tactile animation object (3) with a "new object" button (6), then manipulates it in the animation window (1). The window is overlaid with a faint trace of the VT hardware (13) for context. Here, we used an array of 10 VT actuators (Figure 4.6).
Object Paths: The animation object (3A) has (x, y) parameters describing position, and an "r" (radius) parameter corresponding to the VT output voltage from 0 (minimum) to 1 (maximum). An optional path can be added to an object (7), or removed (8), along which the motion of the object (3B) is constrained (12). The path-object (3B) is manipulated in two ways: moving on path (5), which moves the object from the beginning (position=0) to the end of the path (position=1), or moving in space (4), which moves the object and the path together on the animation window (1). The current Mango implementation only supports straight-line paths, however their use can be extended in a later version. Also note that curves can be accomplished through keyframed (x, y) positions.
Timeline: Each animation object (3) is represented in the timeline (2) as a track (17). The red scrubhead (16) (shown as a triangle and line) shows and manipulates the current time. Animation objects can be moved in time by clicking and dragging, and resized to change duration. Individual parameters can be set on the left, by typing values into text fields (19), allowing precision. The entire animation can be played and paused using buttons (14) or the spacebar.
Keyframes: Parameters can be toggled as "keyframeable" with a small clock button (20). When the value is changed, a keyframe (18) is automatically created at the current time. Intermediate values are linearly interpolated.
Vector Sensations: A new vector can be created by selecting an object (3) then clicking on a button (9). These sensations control each actuator directly through the parameter values, controlling that actuator's voltage from 0 to 1 (same as the "r" parameter). The corresponding actuator is highlighted in the animation window (1) when the text field (19) or track (17C) is selected.
Each track is also keyframeable.
Save and Load: Animations can be saved and loaded (10) to/from JSON files. An audio track can be loaded (11) to the timeline (15). This allows the user to design a VT experience for sound files (LR7). Video overlay is left for future work.
Hardware Configuration File: A hardware-specific structure is defined and stored in a JSON configuration file (LR4). The file contains: (a) physical width and height of the grid; (b) a dictionary of actuator types (e.g., voice coils or rumble motors), each with a list of control parameters (e.g., frequency, intensity) and allowable values; (c) location and type of each actuator; (d) supported communication protocols and rendering methods; (e) brand information (e.g., USB vendor id and product id) for device recognition; and (f) default settings. Physical dimensions are defined in SI units, e.g., meters, Hz.
Playback: Once the animation of the object is defined, the user can play and stop the animation. During playback, the animation runs in (1) and the corresponding parameters vary in (2). Simultaneously, VT stimulations are activated on the hardware for user feedback. Multiple animation objects and vector sensations can exist simultaneously. Actuators output the sum of all the values generated by objects (described later in the Rendering Algorithm section) and vector sensations.
4.5 Rendering Algorithm
Mango's rendering algorithm defines how high-resolution haptic feedback is translated to sparse grids of VT actuators. The rendering algorithm translates animations created in the animation window to animated VT patterns on the hardware. Figure 4.3 shows the rendering pipeline that converts animation objects to a raster format, which outputs to the hardware.
The rendering algorithm is derived from psychophysical understanding of VT illusions on the skin and creates percepts of virtual actuators and their motion in between a set of real actuators. The precise perceptual model depends on several factors, such as type of VT actuators (DC vs. voice coil motors), stimulation site (forearm vs. back) and the spacing of actuators in the array (e.g., [126]). To allow for custom framerates and real-time feedback, we generalize from the 1D case (in between two VT actuators along a line) to the 2D case (in between three or more actuators, previously accomplished with non-VT sensations [264]). Thorough investigation of the psychophysical model is beyond our present scope; however, we empirically determine the most effective model among those documented in the literature for the 1D case with a pairwise comparison.
Figure 4.5: Interpolation models to determine physical actuator output (A_1-3) from virtual actuator intensity (A_v) and barycentric coordinates (a_1-3). (a) Barycentric coordinates; (b) candidate interpolation methods: Linear, A_i = a_i · A_v; Log, A_i = (log(a_i + 1) / log(A_max + 1)) · A_v; Power, A_i = √(a_i) · A_v.
4.5.1 Perceptual Selection of Interpolation Models
The rendering algorithm translates virtual percepts to a physical actuator grid. We first construct a Delaunay triangulation for all actuators to automatically define a mesh on the hardware grid. At each instant of rendering, we use barycentric coordinates of the virtual animation objects relative to a triangle defined by three real actuators (Figure 4.5a).
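A minimal sketch of this barycentric step (standard geometry; variable names are ours, not Mango's source, and the triangle is assumed non-degenerate, as the Delaunay mesh guarantees):

```python
def barycentric(p, tri):
    """Barycentric coordinates (a1, a2, a3) of point p within triangle tri.

    p:   (x, y) position of the virtual animation object, in metres
    tri: ((x1, y1), (x2, y2), (x3, y3)) positions of the three actuators
    """
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    a2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    return a1, a2, 1.0 - a1 - a2

# Example: an equilateral actuator triangle with 6.35 cm edges (as on the
# back-mounted array described below):
a1, a2, a3 = barycentric((0.03, 0.02),
                         ((0.0, 0.0), (0.0635, 0.0), (0.03175, 0.055)))
```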
Barycentric coordinates are scaled by an interpolationmethod to determine real actuator intensity.We propose three interpolation models for Mango, derived from prior psy-chophysical understanding of phantom VT sensations: (i) linear, (ii) logarithmic(“log”), and (iii) Pacinian power (“power”) (Figure 4.5b).In the linear interpolation model, barycentric coordinates are linearly related toactuation amplitude. In the log model, these coordinates are scaled logarithmically,as perceived intensity is related to physical vibration amplitude [271]. In the powermodel, coordinates are coupled to the power (square of the amplitude) of vibratingstimulations [271]. Linear and log interpolation models have been used in thepast to express either location or intensity respectively (but not both) of virtualsensations between two vibrators [4, 242]. A Pacinian power model was used in[126] to account for both location and intensity of virtual sensation between twovibrators.59(a) Rendering study interface (b) Output device with highlighted actuatorsFigure 4.6: Rendering study setup and user interface.4.5.2 Pairwise Comparison StudyTo determine the preferred model for this VT hardware in Mango’s renderingpipeline, and to identify relevant factors (e.g., frequency, amplitude), we performeda pairwise comparison of our three candidate interpolation models.Participants and ApparatusEighteen volunteers took part (6 female, between age 20-35). The VT hardwareconsisted of 10 high-quality VT actuators (C2 tactors, Engineering Acoustics, Inc.,USA) arranged in a 3-4-3 layout and mounted on the back of a chair in a pad 21cm high, 29 cm wide, and 2 cm thick; actuators form equilateral triangles withedges of 6.35 cm (Figure 4.6b). The rendering engine updates at 100 Hz. Throughpiloting, we determined that the device’s on-screen visual outline should mirror thesensations rendered on the physical device. That is, if participants see an animationobject on the right side of the screen, they prefer to feel it on the right side of theback. Figure 4.6a shows the experiment interface, in which an arrow represents thesensation direction.60MethodsWe conducted A/B paired comparison tests (two-alternative, forced-choice) to de-termine the preferred model out of the three candidates. In each trial, participantswere presented with two stimuli at a 400 ms interval. Each stimulus is a “straight-line” VT stimulation on the back using one model. Participants were asked to selectthe stimuli that best represented straight-line motion in a variety of directions.Two durations (500 and 1500 ms), eight cardinal directions, and A/B orderwere crossed with each model pair, and presented in a random order. For each trial,frequency was randomly selected from 80, 160, 240, and 300 Hz, and intensityfrom between 10 and 20 dB above detection threshold. Each participant performed96 trials over ∼15min (1728 total).ResultsEach algorithm pair’s data was fit to a logistic regression model with participant,frequency, intensity, direction, and duration as factors; direction was grouped intohorizontal, vertical, and diagonal. We performed stepwise regression (backwardselimination with α = 0.05 and a χ2 test for removing each factor) to iterativelyeliminate factors that were not statistically significant.Logarithmic vs. Linear. Regression eliminated duration, frequency, intensity,and direction (p > 0.1). The resulting model has Nagelkerke R2 = 0.135. UsingBonferroni correction for multiple comparisons, 95% confidence intervals for eachparticipant were computed. 
11 participants were more likely to prefer Log overLinear (p < 0.05) models; none were likely to prefer the Linear model.Logarithmic vs. Pacinian power. All 5 factors were eliminated (p > 0.1).The overall 95% confidence interval of participants selecting Log over Power was37.06% to 87.40%, overlapping 50%. We therefore detected no significant differ-ence of preference between Log and Power models.Pacinian Power vs. Linear. We eliminated intensity, direction and duration(p> 0.1), with the fitted model’s Nagelkerke R2 = 0.0970. The confidence intervalfor each participant-frequency combination, via Bonferroni corrections, yielded 22/ 72 participant-frequency combinations selecting Power model over Linear modelmore than 50% of the time. No one chose the Linear model more than 50% of the61time.Conclusion: Logarithmic interpolation outperformed linear and was equiv-alent to Pacinian power model. We proceeded with the logarithmic model forMango’s implementation, as the power model did not outperform either of theothers.4.6 Design EvaluationTo evaluate Mango’s animation metaphor and expressive capability, we asked me-dia professionals to create a variety of designs. Qualitative evaluation was chosenfor rich, focused, early feedback of the animation metaphor and lessons for itera-tion. A quantitative comparison between tool perspectives is left until more refinedtools are developed. We wanted to establish whether this is an effective approachbefore studying the most effective approach.Six participants (P1-6, 3 females) were introduced to Mango driving the VThardware described previously. P1 had experience with haptics but not animationbeyond video editing; P2-5 had animation experience but little or no experiencewith haptics; P6 had no experience with haptics or animation, but was familiarwith media tools like Adobe Photoshop. P5 was also involved with the requirementgathering interviews presented earlier. Each entire session took 40 to 60 minutes.Each participant was introduced to Mango with a training task: designing analerting sensation using either animation objects or vector sensations (order coun-terbalanced). Then, each participant was given three design tasks. 1) Primarilytemporal: create a heartbeat sensation. 2) Primarily spatial: tell a driver to turnleft. 3) Context-based: create a tactile animation to match a sound file. A 3-secondsound effect of a bomb falling (with a whistle descending in pitch) then explod-ing with a boom was chosen, i.e., complex with two semantic components. Meannon-training task time was 5:59 (med 5:38, sd 2:46, range 1:41-13:48).After each task, participants rated confidence in their design from 1 (Not con-fident) to 5 (Very confident), primarily to stimulate discussion. All designs wererated 3 or higher; P6 wrote “6” for his sound-based design. The animation objecttraining task was always rated the same or higher than the corresponding vectortraining task. While suggestive, these ratings were self-reported and from a small62sample. We thus did not conduct statistical analysis.Figure 4.7: Example of P2’s animation for matching a sound.A semi-structured interview followed the design tasks. Participants were askedto compare animation objects with vector sensations, and to walk through theinterface to elicit feedback. Interviews were conducted and analyzed by a re-searcher with training and experience in qualitative research, and followed estab-lished methodologies: methods of grounded theory [49] informed by phenomeno-logical protocols [187]. 
Analysis resulted in four themes.Theme 1: Animation MetaphorParticipants found the tool easy to use. All six participants were able to accomplishall five tasks (object alert, vector alert, heartbeat, turn left, sound). Participantsdescribed the interface as intuitive (P1-5), agreeing that it was an animation tool:“It’s up to the standards of other animation tools” (P1), “This is totally animation”(P2), “It felt very much like an animation tool” (P4), “I’m not an expert when it63comes to haptics, but this software seems almost as if it can change the gameof designing haptic vibrations” (P5). Negative feedback focused on polish andfeature completeness: “gotta spline [the keyframe interpolation]” (P2), “a couplequirks but there was nothing difficult to overcome” (P4), “being able to designyour own curve [path] would be really nice” (P5).Theme 2: Tactile Animation Object vs. Vector SensationsParticipants relied more on animation objects than vector sensations, which wereonly used twice: P4’s heartbeat task and P5’s sound task (combined with an anima-tion object). P1 switched from vectors to animation objects early in her heartbeattask; no other participants used vector sensations.Animation objects were described as easier to use and more intuitive, especiallyto represent location or for non-animators. “After using the new object I’d probablynever use new vector again” (P2), “easier to find the location of the heart” (P1), “ifI weren’t an animator I think I would only use [animation objects]” (P4). Vectorswere preferred for more fine-tuned control when motion didn’t matter as much,often using many keyframes. “You can control multiple [actuators] at the sametime, so you don’t have to create new objects and then put them everywhere onthe screen” (P1), “[Animation objects] can be more comfortable to use when onedoesn’t work with keyframes” (P3), “If you want precise control over [actuators],then vector is the way to go” (P4).Theme 3: Designing-in-action with direct manipulationParticipants used direct manipulation to feel their designs in real time, dragginganimation objects and scrubbing through the timeline: “I would make the [ani-mation] object and just play around with it before creating the animation, as away to pre-visualize what I was going to do” (P5), “I kind of play around with it,and randomly come up with the ideas” (P6). P2 even noted that YouTube did nothave real-time video scrubbing feedback like Mango’s: “I wish I could scrub backand forth [with YouTube]” (P2). However, continual vibrations were annoying,and participants requested a “mute” feature: “It would be nice if...it doesn’t go offconstantly.” (P3).64More generally, participants used feedback from their experience or externalexamples. P1 stopped to think about her own heartbeat, P2 used a YouTube videoof a heartbeat as a reference, and P3 based her alert on her phone: “It’s typicalto have two beeps for mobile phones” (P3). Correspondingly, participants wereexcited when prompted by an audio sensation: “I was really happy with the bombone, because I could really hear it and imagine me watching a TV and then feel itat the same time” (P1), “The sound part was good, that would be a fun thing todesign for” (P4).Theme 4: Replication through Copy and PasteReplication in both space and time was common while using Mango. Many designshad symmetrical paths to reinforce sensations (Figure 4.7). All but P4 requestedcopy / paste as a feature. 
“I could just copy/paste the exact same thing on the leftside and then move it to the right side” (P1), “I have the timing the way I like it,ideally it’d be cool if I was able to copy and paste these, so it would be able torepeat” (P5).4.7 DiscussionHere we interpret our design evaluation, explore animation with other devices, anddescribe applications and limitations.4.7.1 Design Evaluation SummaryFrom our design evaluation, we conclude that tactile animation is a promising ap-proach for controlling tactile grids. Direct, continuous manipulation of tactile an-imation objects supported embodied design and exploration by animators, whorapidly iterated on designs to try new ideas. Mango facilitated the design of awide variety of animations and received positive responses. We also found recom-mendations for our next iteration: more animation features, video as well as audiocontext, and muting.65Object 1(a)Object 1(b)Object 1(c)Object 1(d)Object 1(e)Object 1(f)Figure 4.8: Tactile animation could define motion with (a) 1D actuator ar-rays, (b) dense and sparse VT grids, (c) handhelds, (d) 3D surfaces, (e)multi-device contexts, and (f) non-VT devices like mid-air ultrasound.4.7.2 Possible Extension to Other Device ClassesThe animation metaphor is not limited to a back-based pads. Part of the advantageof an abstracted animation object is that, as long as a suitable rendering algorithmcan be developed, the metaphor can apply to other devices. In this section, weillustrate possibilities that we plan to explore in future work.1D VT Arrays (Figure 4.8a): 1D VT arrays are common in arm sleeves, wristbands, belts, and similar wearables. These devices provide sensations along thepath of the array. By constraining objects to a linear or circular path, barycentriccoordinates collapse into 1D interpolation.Dense and Sparse VT Grids (Figure 4.8b): 2D VT grids are also common, usedin chairs, gloves, and the backs of vests. While we evaluated Mango with a sparseback-mounted array, tactile animation naturally supports denser arrays, either withour rendering algorithm or by using a nearest-neighbour technique to activate asingle actuator.Handhelds (Figure 4.8c): Actuators embedded in handheld objects, such asmobile devices, game controllers, or steering wheels, shake objects instead of di-66rectly stimulating the skin. Animators might be able to define source locations forvibrations using handheld-based rendering algorithms (e.g., [242]).3D Surfaces (Figure 4.8d): Mango currently only supports a 2D location for itsanimation objects. However, tactile animation can be extended to support surfacesof 3D surfaces, such as vests or jackets that wrap around the user’s body. Morework will need to be done to perfect this interaction style, possibly using multipleviews or a rotatable 3D model with animation objects constrained to the surface.Multi-device contexts (Figure 4.8e): Mango’s rendering algorithm already sup-ports connections to multiple devices simultaneously. The editing interface couldcombine layouts for different devices, enabling animators to animate the entire userexperience (such as a car’s seat and steering wheel).Non-vibrotactile devices (Figure 4.8f): While our rendering algorithm is par-ticular to VT arrays, a tactile animation object can represent manipulable perceptswith other actuation technologies. Ultrasound-based mid-air displays generate asensation as a focal point with a position and size [276]; this sensation could bemanipulated through a tool like Mango. 
Similarly, passive force-feedback sensa-tions (e.g., Hapseat [59]) or height displays (a grid of pins) could be supported.4.7.3 Interactive ApplicationsWhile our goal was to enable animators to create rich content, the tactile animationobject can be linked to alternative input sources for other interactive experiences.User gestures. User gestures and motion can be tracked and mapped to anima-tion objects directly rendered on the haptic hardware. For example, a user createspatterns on a touch sensitive tablet that maps touch locations to a grid. Users couldplay games or create personalized haptic messages on the back of a vest. Similarly,a dancer’s movements could be tracked through accelerometers, drawing animatedhaptic content on the body of her audience through actuated theater seats during alive performance.Camera feed extraction. Motion from video feeds can be automatically ex-tracted with computer vision and rendered on grid displays [145], providing dy-namic patterns associated with actions during sports, movies, and games. Simi-larly, animation parameters could be extracted and mapped to positions on a VT67grid, creating haptic feedback for non-haptic media.Data streams. One main application of haptic grid displays is to provide usersdirectional, assistive, and navigational cues during driving cars, walking down thestreet, or with over- saturated sensory tasks. Users could associate digital datastreams, such as GPS input, to predefined set of directional patterns on the back orpalm of the hand.4.7.4 LimitationsWhile the tactile animation metaphor seems promising and may apply to manycontexts, it is limited by the requirement of a suitable rendering algorithm for tar-get hardware. We have not yet explored other form factors, such as handhelds,multi-device scenarios, or non-vibrotactile sensations. Although we perceptuallyoptimized our algorithm, we did not conduct a full psychophysical investigation.Further work needs to be done to identify the limits, thresholds, and peculiaritiesof this rendering technique. Examples include: curved trajectories of animationobjects (although participants’ use of curved motion was encouraging, e.g., P5’sturn left sensation), spatial frequency control (how to superpose animation objectsof differing frequencies), non-triangular meshes (e.g., quadrilateral interpolationor kernel methods), and mixed actuator types (such as a chair with both voice coiland rumble motors, Figure 4.8e).4.8 ConclusionThis paper introduces the tactile animation object, a new abstraction for creat-ing rich and expressive haptic media on grid displays. This animation metaphorallows designers and media artists to directly manipulate phantom vibrotactile sen-sations continuously in both space and time. Our rendering pipeline, which uses aperceptually-guided phantom sensation algorithm, enables critical real-time feed-back for designing. We incorporated these ideas into a prototype, Mango, witha design grounded in animator requirements and haptic design guidelines. Pro-fessional animators used our tool to create a variety of designs, giving positivefeedback and excitement for future versions. 
This approach has the potential to accommodate a large variety of haptic hardware, ranging from a single shaking element mounted on the seat to an array of actuators stimulating multiple points on the skin, and can export content into formats applicable in the production pipeline. Tactile animation empowers animators with a new set of artistic tools for rich, multimodal feedback.

Chapter 5

Browse: Macaron

Macaron allows us to remotely study design.

Figure 5.1: Concept sketch for Macaron, an online, open-source VT editor featuring incorporable examples and remote analytics.

Preface – In our third vibrotactile design tool, Macaron1, we explored the design activity of browsing external examples. Because we explored HaXD tool implementation in depth in Chapter 4, we knew how to build Macaron; we thus focused on studying the design process. We specifically investigated how different ways of viewing or reproducing elements of a vibrotactile icon affect design. We based this task on the effective sound-based task in Chapter 4: here, participants designed haptic tracks for visual animations. To complement our previous studies, participants were generally naïve to haptics and media design. We used phenomenology and grounded theory methods augmented by logged user actions and visualized timelines to look at our participants’ design process: we directly observed the different stages of design, including browsing, sketching, and iterative refinement.

1 Schneider and MacLean. (2016) Studying Design Process and Example Use with Macaron, a Web-based Vibrotactile Effect Editor. Proceedings of Haptics Symposium – HAPTICS ’16.

5.1 Overview

Examples are a critical part of any design process, but supporting their use for a haptic medium is nontrivial. Current libraries for vibrotactile (VT) effects provide neither insight into examples’ construction nor capability for deconstruction and re-composition. To investigate the special requirements of example use for VT design, we studied designers as they used a web-based effect editor, Macaron, which we created as both an evaluation platform and a practical tool. We qualitatively characterized participants’ design processes and observed two basic example uses: as a starting point or template for a design task, and as a learning method. We discuss how features supporting internal visibility and composition influenced these example uses, and articulate several implications for VT editing tools and libraries of VT examples. We conclude with future work, including plans to deploy Macaron online to examine examples and other aspects of VT design in situ.

5.2 Introduction

Creativity often sparks when an inventor, examining existing ideas, sees a way to combine them with a novel twist [272]. An environment rich with examples is fuel for this fire. In industrial and graphic design [30, 114] their use improves process and final results [67, 155].

Several effect libraries are available to designers of vibrotactile (VT) sensations, e.g., for accessible wayfinding [288] or media experiences [56, 124, 129, 225]. But despite the need for effect customizability [239], VT library elements are generally opaque in construction and immutable. Recent advances include limited parameter adjustability [129, 225] and faceted library search and browsing [240]. Despite this, designers still must either choose a pre-existing sensation or build from scratch: elements cannot be sampled, recombined, built upon or adapted.
In contrast, web designers can access a page’s source; graphic and sound designers can sample and incorporate colours and sounds from other media.

Here, we examine the potential role of examples in VT design, to establish how to best support their use. We designed a web-based editor and interactive design gallery [155, 175] (Figure 5.2) for VT sensations, then asked users to compare versions (Figure 5.3) that vary in example accessibility via visibility and incorporability, as they create VT effects for animations (Figure 5.4).

Figure 5.2: Macaron interface, “hi” version featuring both composability (copy and paste) and visibility of underlying parameters. The user edits her sensation on the left, while examples are selected and shown on the right. Macaron is publicly available at hapticdesign.github.io/macaron.

Analysis of user action logs provides an objective picture of the VT design process. To validate the deployment of this methodology at scale, we also interpret and validate logs with direct observation and interviews. Specifically, we:

• introduce Macaron, a web-based VT effect editor through which examples can be used directly in designs,
• find that visible, incorporable examples make design easier by providing a starting point for design and scaffolding to learn how to work with VT parameters,
• identify implications for future tools and libraries, and
• discuss the opportunities afforded by a web-based editor as a practical tool and platform for studying other aspects of VT design at scale.

5.3 Related Work

5.3.1 Salient Factors in VT Effect Perception and Control

Vibrotactile effects (e.g., haptic icons [167]) are typically manipulated with low-level engineering of signal parameters, beginning with amplitude, frequency and waveform [20, 103, 167, 170]. Rhythm can support large, learnable icon sets [257, 265]; combining waveforms enhances roughness [102]. Time-varying amplitude adds musical expressivity, from tactile crescendos [25] to envelopes [228]. Multidimensional scaling can be used to identify and elaborate these parameters [77, 117, 167, 269].

Affect and metaphor are another way to structure and manipulate sensations at a level more cognitively relevant than engineering parameters. Perceived valence (pleasantness) and arousal can be influenced by frequency/amplitude combination [192, 286]. Metaphors [39, 191, 240] and use cases [39, 240] offer structure, memorability and design language. Spatial displays require additional controls for location and direction, whether body-scale [103, 126], mobile [242], or mid-air [192]. While many parameters are available for VT design, we chose the most established (time-varying frequency and amplitude) for Macaron’s initial implementation.

5.3.2 Past Approaches to VT Design

Past editors – e.g., the Hapticon Editor [76], Haptic Icon Prototyper [258], posVibEditor [219], Vivitouch Studio [259], and Haptic Studio (www.immersion.com) – are track-based, with graphical representations to edit either waveforms or profiles of dynamic parameters. Additional features (e.g., spatial control or mobile interfaces) are surveyed in [233].

A library of effects is critical for haptic design tools [233]. Most existing tools support saving/loading, and some have an internal component library [76, 258, 259]. However, previous implementations were primarily compositional, employing building blocks [77] rather than complete artifacts. Example use was not studied.

Large VT libraries contain complete artifacts, but impose a serious constraint on their use.
In the Immersion Touch Effects Studio library, underlying structure and design parameters are hidden and cannot be incorporated into new designs. VibViz [240] features 120 VT examples with visualizations searchable by several taxonomies, but the selection model is all-or-nothing. FeelCraft [225] proposes a community-driven library of feel effects [129] for simple parametric customization and re-use. While end-user customization-by-selection is important [239], experts need a more open, editable model, just as web designers rely on full access to source code, with recent tools allowing search and easy incorporation [155].

5.3.3 Examples in Non-Haptic Design

Problem preparation – also known as the “problem setting” [235] or “analysis of problem” [272] step of design – involves immersion in the challenge and drawing inspiration from previous work. Both may come from the designer’s experience, repertoire [235] or exposure to a symbolic domain, e.g., mathematical theorems and notation [54].

To this end, external examples are critical in inspiring, guiding and informing design [30, 114]. Industrial designers collect objects and materials; web designers bookmark sites [114]. In graphics and web design, design galleries organize examples to be immediately at hand [155, 175]. Example-based tools often use sophisticated techniques to mix and match styles and content [151]: this requires immediate access to the examples’ underlying structure.

5.4 Apparatus Design

To investigate VT design in the context of examples, we required a platform that would expose users’ natural procedural tendencies. Our Macaron design gallery is simple, flexible, and extensible. In this work, we add multiple types of example access to polished implementations of familiar concepts: tracks, envelopes, and keyframes (Figures 5.2, 5.3).

Tracks are the accepted language of temporal media editors (video, audio, and past haptic efforts [76, 219, 258]). We provide tracks for perceptually important “textural” parameters (amplitude and frequency); the user accesses periodic and time-variant aspects by manipulating their envelopes using keyframes, with linear interpolation in between (see the illustrative sketch later in this section). Users double-click to create a new keyframe, click or drag a box to select, and change or delete a selection by dragging or with the keyboard. A waveform visualization reflects changes.

Figure 5.3: Design space for Macaron versions. hi and sample both allow for selection and copying of example keyframes. vis and hi both show the underlying profiles. lo represents the current status quo; only a waveform is shown.

Macaron’s example access features are inspired by more recent graphics and web design galleries [155, 175, 213], which show examples side-by-side with the editor. Other implemented features, critical for polished creative control [233], include real-time playback, time control (scrubbing), copy-and-paste, undo and redo, and muting (disables real-time VT output). To support its use as an experimental tool, user interactions are logged; start/stop buttons allow the user to indicate when they began and completed their design process.

Macaron was built with HTML5 and JavaScript, using React, Reflux, D3, and Audiolet2. Real-time sound synthesis drove a C2 actuator.
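To make the track/envelope/keyframe model concrete, the sketch below shows how a keyframed envelope can be sampled with linear interpolation. It is a simplified stand-in rather than Macaron's actual source: the object shapes, function names, example keyframe values, and the 10 ms sampling step are assumptions made here for illustration.

// Illustrative sketch of a Macaron-style track: keyframes are {time, value} pairs
// sampled with linear interpolation between neighbouring keyframes.
function sampleEnvelope(keyframes, t) {
  const kfs = [...keyframes].sort((a, b) => a.time - b.time);
  if (t <= kfs[0].time) return kfs[0].value;
  if (t >= kfs[kfs.length - 1].time) return kfs[kfs.length - 1].value;
  for (let i = 0; i < kfs.length - 1; i++) {
    const a = kfs[i], b = kfs[i + 1];
    if (t >= a.time && t <= b.time) {
      const u = (t - a.time) / (b.time - a.time);
      return a.value + u * (b.value - a.value);
    }
  }
}

// Two tracks, as in the editor: amplitude (0-1) and frequency (Hz).
const amplitudeTrack = [{ time: 0, value: 0 }, { time: 0.3, value: 1 }, { time: 1.0, value: 0 }];
const frequencyTrack = [{ time: 0, value: 80 }, { time: 1.0, value: 250 }];

// Sampling at (say) 10 ms steps yields the amplitude/frequency profile that
// real-time synthesis or an exported waveform would follow; e.g. at t = 0.5 s:
console.log(sampleEnvelope(amplitudeTrack, 0.5), sampleEnvelope(frequencyTrack, 0.5));

In the tool itself, the sampled profile drives sound synthesis (and hence the tactor), and keyframe edits are reflected immediately in the waveform visualization.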
To leave hands free for keyboard and mouse, the C2 is attached to a wristband; we simulate the design process for a wrist-worn wearable (as in [240]).

2 facebook.github.io/react, github.com/reflux, d3js.org, github.com/oampo/Audiolet

Evaluation Versions: To study how examples impact design, we made four gallery versions by sampling two theoretical dimensions of example access: element incorporability and internal parameter visibility (Figure 5.3, Table 5.1). We hypothesized these would affect users’ design processes, e.g., incorporable examples would encourage “mixing and matching” of examples, while visibility might provide insight.

hi – Full access to gallery examples, with keyframes visible and selectable for copy and paste. Simulates source visibility, e.g., viewing the source of a web page or having access to a .psd PhotoShop document.
sample – Hides underlying parameters of frequency and amplitude, whereas waveform regions (underlying keyframes) may be copied and pasted into a design, simulating example mixing in absence of visibility into underlying construction. While possible to see the underlying representation by copying the entire example, the steps are indirect and inconvenient.
vis – Reveals underlying parameters, but hides keyframes, parameter scales, selection and copy/paste features. The inverse of sample, it exposes example structure, but does not support incorporating example elements into a design.
lo – Supplies a “black box” outer representation. Playback and visualization of the complete vibration reflect the status quo of non-visible, non-mixable example libraries.
none – No examples present.
Table 5.1: Macaron tool alternatives, varied on dimensions of internal visibility and element incorporability.

We compared these versions with each other and with a non-example version: none. In all versions with examples, the user can play or scrub the example, feeling it and seeing the waveform visualization. We did not allow users to modify the examples, to avoid study workflow confounds. To populate the gallery, we chose or adapted seven examples from [240], piloted them to confirm example variety, then regenerated keyframed versions with Macaron.

5.5 Study Methods

Participants were tasked with creating a sensation to accompany five animations (Figure 5.4) – SVGs (scalable vector graphics) which can be played or scrubbed by the same means as navigating Macaron’s time control. We chose animation variety (concrete to abstract) and complexity to inspire non-obvious solutions without overwhelming.

Participants were first trained on none with no animation, then presented with five animation/version combinations. As the least crucial source of variance, animations were presented in Figure 5.4’s constant order, while interface versions were counterbalanced in two 5x5 Latin square designs. Thus, each participant encountered each animation and each interface version once; over all participants, each animation/version combination appeared twice, with Latin squares balancing 1st-order carry-over effects. This design confounds learning with animation task. We believe this is an acceptable tradeoff at this stage, allowing us to balance interface order with a single participant session of reasonable length (1-1.5h).

(a) Heartbeat. (b) Cat. (c) Lightning. (d) Car. (e) Snow.
Figure 5.4: Animations used as design tasks, in presentation order.
Heartbeat expands in two beats; the cat’s back expands as breathing and purring; lightning has two arrhythmic bolts; the car oscillates up and down, and makes two turns: left then right; snow has three snowflakes floating down.

5.6 Results

We targeted a study size of 10 complete participants for a balanced Latin square design, and a manageable sample size for rich, exploratory, qualitative analysis. 13 untrained participants were recruited: P1-10 (7 female, ages 22-35) completed all five tasks, while I1-3 (2 female, ages 29-45) only completed the first three due to time restrictions. Because I1-3 (and P9) all had the same interface order (lo, none, vis, hi, sample), we suspect that beginning with ‘sparse’ versions gave insufficient insight into how to design quickly enough to finish the study. I1-3 showed no distinct patterns beyond this; we leave their data for future analysis.

Analysis and Data: A team member trained in qualitative methods analyzed screen recordings, interviews, and logs with grounded theory methods (memoing, open & closed coding [49]) and thematic analysis and clustering [187]. We visualized logs using D3 (Figure 5.5). We chose a qualitative analysis because our goal was to capture the design process, not compare Macaron with previous tools. Our analysis exposed three major qualitative findings, discussed below.

Figure 5.5: Log visualizations showing archetypal design process. Top: P10’s heartbeat/vis condition (an “ideal” version). Bottom: P3’s car/hi condition (variations: a return to example browsing after editing, repeated refinement, muted editing).

Tool Usability: Overall, the tool was well received, described as “easy to use” (P1), “well made” (P5), “pretty neat” (P9), “the templates help a lot” (P3).

Completion time: Overall mean task completion time for P1-10 was 5m48s (median 4m48s, sd 3m52s, min 40s, max 18m23s). We conducted two one-way ANOVAs on completion time; neither interface (p = 0.87) nor task (p = 0.64) had a significant effect.

5.6.1 Archetypal Design Process

Log visualizations (Figure 5.5) show that users could and did employ Macaron for all key design stages: preparation, initial design, iteration, and refinement. All participants followed this sequence. Some omitted one or more steps depending on personal style and strategies for using examples (below). We list observations of the basic process in Table 5.2, to document behaviour and frame discussion.

5.6.2 Micro Interaction Patterns Enabled by Tool

Several small-scale patterns further characterize behaviour within the archetypal process.

Prepare – All participants began with a problem preparation step [272]. They played the animation to understand the problem, then typically looked at several (sometimes all) examples. Only P2, P8, and P9 had a task where they did not begin with an example. Otherwise, participants browsed examples, chose a best match to the animation (“I was trying to find the best match with the visual” (P7, heartbeat/hi)), then transferred into initial design. Participants rarely returned to examples for more exploration; only P3 (car/hi) and P5 (car/lo) switched to a different example after beginning their initial design.
Preparation is characterized bya large number of plays and example switches: on average, 47.45% ofall session plays were before the first edit (sd 30.15%), and participantsswitched examples an average of 6.75 times (sd 5.17).InitialDesignParticipants either used their example choice to help create their initialdesign, or ignored it because it wasn’t close enough to what they wantedto do. Participants typically recreated the example in their editor bycopy/paste of the entire design (P1,2,4-8,10) or sometimes a component(P3,10) in incorporable conditions (hi and sample), or by manuallyrecreating the design (P5,6) or a component (7,10) with vis. In the locondition, we only observed P5 somewhat recreating an example. Oc-casionally, participants would create a new design loosely based on theexample rather than recreating it (P3,4,6-8), when using the Inspire ex-ample use strategy (described later).Iterate Participants refined designs with longer periods of editing typicallybook-ended by playing the entire design (discussed as “real-time feed-back” micro interaction pattern). In some cases, especially when theexample was “close enough”, participants skipped iteration (Adjust orSelect example use strategies, described later).Refine Smaller changes forecast design conclusion, e.g., incremental globalchanges: constant frequency (P1,2,5,6,10), alignment (P1,3,6), or pulseheight adjustment (P1,3,8,10). This step is sometimes visible in activ-ity logs, as most participants (P1,3-10) exhibited more frequent plays ofthe entire design, and shorter periods of editing/scrubbing. Occasion-ally, participants repeated larger iterations and refinement (P3 car/hi,Figure 5.5).Table 5.2: Steps in observed archetypal design process.79Different paths through the interface – We saw three design-path strategies.– Time (Figure 5.6a; P1,2,3,4,7,9): proceed through the timeline, creating am-plitude and frequency at the same time.– Component (Figure 5.6b, P1,4,6,8,10): iterate on a design element, then re-peat or copy/paste it later in time.– Track (Figure 5.6c, P2,3,6,7,8-10): proceed through one entire track (typi-cally amplitude), then the other one.Strategies were often combined hierarchically. P6 developed a car/lo componentby track (amplitude, then frequency). Wanting additional flexibility, P1,3,7 re-quested copy/paste between tracks: “The one thing I found missing was copy andpasting between amplitude and frequency” (P7).Further showing diverse workflows, participants requested more powerful con-trols to work with keyframes as a group, such as widen (P5), reverse (P7), shifteverything (P9), move up/down and smooth (P4). Other requested features includelooping (P1), hovering over a point to see the value (P1), more detail through azoomable interface (P4).Alignment and Copy/Paste are Precise, Convenient – Precision was valued;alignment and copy/paste were used to achieve it. Alignment was sought both intime and to keyframe values. A common technique (Figure 5.6ab) was to use thered playhead like a plumb-line to align keyframes with animation features (P1-5,7,9,10) and between the two tracks (amplitude and frequency) (P3-5,7,9,10):“Using that red arrow thing and placing the dots when it makes the heartbeat”(P2). 
Some participants, including those who used the plumb-line, requested more refined alignment features: “I couldn’t keep it straight” (P1).

Copy/paste was used for improved work efficiency (especially helpful during initial layout or when creating long or repeating designs) and precision: “Copy and paste...was also the most precise, because if you feel like it’s a perfect fit, you can use it exactly” (P6). Correspondingly, conditions without copy/paste (i.e., lo and vis) took additional effort: “It’s harder...because there’s no copy and paste” (P5). Precision also depended on context: “For monitoring someone’s health, you would have to be very accurate” (P9).

Figure 5.6: Participants created their designs using different progression paths, suggesting flexibility. (a) P9’s cat/none design progressed sequentially in time (note the red playhead helping alignment). (b) P6’s car/lo design progressed by component, developing the component then repeating it. (c) P10’s heartbeat/vis design progressed by track: amplitude was developed first, then frequency.

Editing and playback – During iteration, participants edited in bursts of primarily scrubbing activity, bookended by full playthroughs. They took time to realize each new version of the design before observing an overview. When editing, participants scrubbed back and forth, varying speed (P1-4,7,9,10), and dragged keyframes to try ideas out (P1,3,4,7,9,10) (Figure 5.7). This feature was valued by those who used it: “The real-time part is pretty important” (P1); some rarely played, showing more frequent or longer periods of scrubbing instead (P2,9,10). Others rarely scrubbed (P5, P8), possibly to have an overall sense of the design: “Trying to get a general sense of how it might feel” (P8). P3, P4, and P7 all exhibited focused editing with mute enabled, unmuting for the bookended play sections; others did not use muting.

Figure 5.7: Participants used real-time feedback to explore, both (a) in time by scrubbing back and forth (P3 lightning/lo), and (b) by moving keyframes (P10 heartbeat/vis).

Encoding and Framing – Some participants encoded parameters using consistent rules, often aligned to events like heartbeats or lightning bolts. Others sought to create moods or metaphors for sensation.

Encoding was most visible in the lightning task, where participants represented lightning bolts in regular ways: “if there was a lightning bolt on the left, I put amplitude and frequency a little longer than a lightning bolt on the right” (P9). When the animation had two simultaneous bolts, several (P2-4,7,9) encoded it by superimposing two bolt representations on top of one another. Participants were forced to reframe their encoding strategy: “...two [lightning bolts]...I divided it into two equal partitions, .6 and 1” (P7).

Encoding failed when participants did not find a direct mapping: “When the three [snow flakes] come together I think my strategy broke down” (P7). Metaphors helped in these cases. Car took extra imagination, either for the experience of driving (P6, P8, and P9 didn’t drive), or because it’s hard to “know what it would feel like on the wrist” (P1). P6 describes her process for both lightning and snow as using mood: “...what I think the mood is...like snow fall, it’s kinda like, very gentle and calm” (P6).

5.6.3 Example Use

As seen, examples played a major role in users’ design processes.
Analysis re-vealed the effect of examples to be more nuanced than a one-to-one mapping of82Ignore Deliberately do not choose an example, through either lack of match:“I didn’t [find] the examples that I wanted” (P1); a desire to challengethemselves or be creative: “I wanted to do my own thing!” (P9); or diffi-culty in using the examples.Inspire Choose an example, but do not explicitly copy/paste or replicate it in theeditor; instead, design based loosely on example parts, sometimes as anadaptation from memory: “I just tried to remember what the keyframeswere like before, and then I modified it” (P6 car/lo).Template Choose an initial example, but alter it considerably. In this case, partici-pants use the example to expedite the process.Adjust Find an initial example, skipped major iteration and went directly to therefine stage, sometimes because the example was a close match. To en-able this, some participants wanted a more powerful manipulation meth-ods, like inverting (P7).Select Copy/paste an example (or manually recreate it), then do not modify;sometimes because the example seemed to match: “...copy and paste,then confirmed it was the same.” (P5)Table 5.3: Strategies used by participants to directly use examples as a start-ing point. Ignore and Inspire did not start with copy/paste; Template, Ad-just, and Select did, with varying amounts of editing afterwards. Whencopy/paste was not available, manual re-creation was used as a stand-in.the theoretical dimensions of incorporability and visibility. Emergent themes wereinstead organized on the role of examples: as a direct starting point for each de-sign; and to indirectly scaffold learning throughout a session. The latter was relatedto additional themes: task difficulty and individual differences.Direct example use – task starting point –When participants prepared for each task by browsing to find a best-matchexample, then using it as a starting point, they did this with a spectrum of strategies.These strategies, elaborated in Table 5.3, range from Ignore (examples not used) toSelect (an example was the final design).Indirect example use – observe how to design – Over the course of the session,participants used underlying structures of examples to understand how to designVT icons. This was most evident in the none or lo condition after participantswere first exposed to examples: “I sort of remembered” (P4 car/none). Someexplicitly described learning: “It gave me a general idea of thinking in big shapes83rather than little dots” (P9 lightning/vis).Most participants commented on the difficulty or ease of their task (P1-5, 7-9).Task difficulty was connected learning (“It’s easy...maybe it’s more experience”(P4 snow/lo)) and individual differences. Some people were motivated to learn,and challenge themselves; others were not.Connections between these factors are complex and difficult to unravel withthis data. We speculate on the utility of flow theory [54] as a useful lens to con-nect these issues, as it considers creativity, education, and the relationship betweenperceived challenge and perceived ability. We plan to use it to frame future explo-ration.5.7 DiscussionWe discuss implications for design, then limitations we hope to progress on withfuture work.5.7.1 Implications for DesignExpose example structures for learning – When exposed to examples’ underly-ing structure, participants are able to build their repertoire and learn VT designconventions like “big shapes” (P9). 
Such scaffolding is particularly crucial in anenvironment where experienced VT designers and training possibilities are rare.Whether through exploratory tool use or structured with online training programs,examples can expand the VT design practices available to novice designers.Examples as templates – Participants typically copied an example first beforeiterating and customizing, suggesting a template model of modifiable source doc-uments as a way to expose structure and reduce effort for designers.Example Recommender – The time participants spent searching for the suit-able examples suggests a recommender system could be very valuable. AI tech-niques might recommend examples similar (or dissimilar) to a source stimuli, aswith previous tools in other sensory modalities [155] and VT visualization toolslike VibViz [240].Clarify example context – Participants often repeated gallery searches for eachnew animation; they needed to compare examples alongside the target graphic.84In addition, though our examples were designed independently of our animationtasks, some participants showed confusion about whether they were supposed tomatch. Clarifying the context for each example, by presenting it either in connec-tion to its original design goal or as a candidate for the participant’s current goal,will help participants choose an example.Hideable examples – Some participants wanted to be individualistic with theirdesigns and actively disliked the most powerful hi condition, saying that the nonecondition was cleaner, or that while examples were helpful to learn, they felt “morecreative” with fewer examples present. A hideable gallery, which can be openedwhen needed but kept hidden otherwise, could accommodate user preference. Anintelligent gallery could even time example appearances or suggestions to occur athelpful design stages, e.g., by recognizing by activity patterns [67, 272].Realtime “prefeel” then render – Macaron’s real-time feedback supported ex-ploration, with full play-throughs providing an overview or evaluation in-betweenediting sessions. In addition, P4, who was familiar with haptics, felt that the scrub-bing synthesis was “muddy” relative to waveforms pre-rendered with audio tools– a common challenge, noted also by the researchers but deemed suitable for thisstudy. While we hope this technology deficit inspires improved realtime renderingalgorithms, it also suggests an explicit workflow compromise. Many video editingand compositing tools show a low-resolution previsualization in design mode; aclip is then fully rendered for playback. For tactile design, coarse, “prefeel” sen-sations would be synthesized for immediate feedback during a rough design stage,and a high-fidelity rendering generated for less frequent play-throughs. This couldhelp computationally demanding, perceptually-based models or multi-actuator se-tups (e.g., tactile animation [233] as a prefeel for tactile brush [126]).Tool flexibility – Macaron was used in very different ways depending on theparticipant. Some progressed by time, by track, by component, or a combinationthereof. Some mirrored frequency and amplitude, using them together, while oth-ers used them to express different ideas. 
This suggests that tools should be flexibleand accommodate different strategies; perhaps offering a choice to group by pa-rameters (e.g., [233]) or work along parameter tracks (e.g., [258, 259]).Alignment tools – Participants frequently used the playhead for alignment,finding locations in the video or aligning points between amplitude and frequency.85Participants requested using modifier keys to align points (as in other editing tools),or a visualization of events in video. This suggests several features, providing abil-ity to:– Align comparison sensations from each modality - visual or audio sensationalongside VT.– Place anchors for attaching a VT sensation (or keyframe within it) to a pointin a target visual or audio sensation. This might be automatically assisted, e.g, withvideo analysis techniques to find scene changes.– Automatically align keyframes to nearby keyframes, or use a modifier key toconstrain or nudge keyframe movement.Reuse – Copy/paste, especially from a template, speeded design and facilitatedotherwise tedious approaches. Several participants made use of element repetition,which had to be re-done upon design re-framing. While copy/paste was helpful,more powerful repetition tools (e.g. looping, and “master templates”, as in Power-Point) would likely find use by many designers.Automated Encoding – Some participants applied consistent rules in translat-ing an animation to a tactile rendering – e.g., representing left/right lightning boltsdifferently in the lightning animation, or directly matching amplitude to up-downmotion in the car animation. Some of these practices might be automated into gen-erative rules. For example, video analysis could detect up/down motion for a visualobject, and translate that automatically to a level for amplitude, similar to how mo-tion trackers can track a moving object and link that to position of an animation;or, a designer might want to specify the mapping. More complex parameteriza-tions could provide a useful tool for expert users, much like how fmod allows forparametrized audio in game design.5.7.2 Limitations & Future workLimitations in our study suggest future lines of inquiry: following up on additionsstudy factor by deploying online.Study factors – Our Latin square design allowed qualitative comparison ofseveral gallery variants, but did not have the power for comparative statistical testsbetween the alternatives. Meanwhile, five design tasks presented in a uniform order86did not permit systematic insights into other factors: learning, or task features suchas abstractness and complexity. Flow was identified after-the-fact as an importantframework for future analysis, but only after our study was designed and data wascollected.Our proposed example-usage dimensions of visibility and incorporability werea useful starting point, but did not line up well with the task processes that peopleactually used with Macaron. 
We did see behaviors that aligned well with learning and design-starting from examples, as well as hints of a richer and more nuanced view of what makes examples useful and in what way.

First, the examples-as-starting-point strategies actually used (Table 5.3) suggest that visibility and incorporability at minimum are not quite right and probably insufficient in dimensionality – there is a concept of editability regardless of starting point; whereas incorporability could entail editing, but certainly requires an example as a start.

Additionally, observations (including details not reported due to space limits) suggest other factors that influence example use: e.g., difficulty, from task, interface, and personal confidence and experience; and task, from task complexity and abstraction, user strategy (e.g., encoding and metaphor), and user confidence and experience. These hints are far from orthogonal, and will require further research, with focus turned to elements like task abstraction and user background, to disentangle and prioritize.

Online deployment – Triangulation will be helpful in studying factors like difficulty, task abstraction, and user background. In this study, Macaron was deployed and studied locally. We were able to validate the editor’s design support and the utility of its logging methods, and expose many interesting insights into natural end-user design practices.

Our next plan to answer these questions is to deploy Macaron at a larger scale: online, as a free-to-use design tool for the haptics community, with an initial study in haptics courses. This will allow research in situ with larger, more quantitative, remote-based methods for data collection, triangulated with the less scalable qualitative methods used in-lab. Interaction logs, use statistics, and A/B tests will help us further develop Macaron as a tool for VT design and more generally as a lens for the haptic design process.

5.8 Conclusion

In this paper, we present initial findings from a vibrotactile (VT) design gallery, Macaron. This tool revealed insights both into how examples are used in VT design and implications for other VT design tools. Macaron was implemented using web tools, offering a unique opportunity to follow up on the design process we observed here, helping designers to create engaging experiences while understanding their craft.

Chapter 6

Share: HapTurk

Figure 6.1: In HapTurk, we access large-scale feedback on informational effectiveness of high-fidelity vibrations after translating them into proxies of various modalities, rendering important characteristics in a crowdsource-friendly way.

1 Schneider, Seifi, Kashani, Chun, and MacLean. (2016) HapTurk: Crowdsourcing Affective Ratings for Vibrotactile Icons. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems – CHI ’16.

Preface – While Chapters 3-5 describe iterative development of vibrotactile tools, with HapTurk1 we study a vibrotactile technique. Here, we look into browse’s inverse: share, disseminating or storing a design concept for others’ use. We focus on one aspect of sharing: disseminating designs over the Internet. In this case, the goal is to collect large-scale feedback. In other design domains, crowdsourcing platforms like Amazon’s MTurk can deploy user studies and rapidly collect large samples. However, high-fidelity haptic sensations require specialized hardware, which most crowdsourced participants will not be able to access. We instead send more easily-shared stimuli: proxies, like visualizations and low-fidelity phone vibrations.
We found these proxies can convey some affective characteristics forsome source stimuli, and identified several guidelines for developing better proxies.6.1 OverviewVibrotactile (VT) display is becoming a standard component of informative userexperience, where notifications and feedback must convey information eyes-free.However, effective design is hindered by incomplete understanding of relevant per-ceptual qualities. To access evaluation streamlining now common in visual design,we introduce proxy modalities as a way to crowdsource VT sensations by reli-ably communicating high-level features through a crowd-accessible channel. Weinvestigate two proxy modalities to represent a high-fidelity tactor: a new VT vi-sualization, and low-fidelity vibratory translations playable on commodity smart-phones. We translated 10 high-fidelity vibrations into both modalities, and in twouser studies found that both proxy modalities can communicate affective features,and are consistent when deployed remotely over Mechanical Turk. We analyze fitof features to modalities, and suggest future improvements.6.2 IntroductionIn modern handheld and wearable devices, vibrotactile (VT) feedback can pro-vide unintrusive, potentially meaningful cues through wearables in on-the-go con-texts [27]. With consumer wearables like Pebble and the Apple Watch featuringhigh-fidelity actuators, VT feedback is becoming standard in more user tools. To-day, VT designers seek to provide sensations with various perceptual and emo-tional connotations to support the growing use cases for VT feedback (everydayapps, games, etc.). Although low-level design guidelines exist and are helpful foraddressing perceptual requirements [20, 26, 122, 167, 265], higher-level concernsand design approaches to increase their usability and information capacity (e.g., auser’s desired affective response, or affective or metaphorical interpretation) haveonly recently received study and are far from solved [7, 129, 132, 191, 193, 239].Tactile design thus relies heavily on iteration and user feedback [228]. Despite itsimportance [239, 240], collecting user feedback on perceptual and emotional (i.e.,90affective) properties of tactile sensations in small-scale lab studies is underminedby noise due to individual differences (IDs).In other design domains, crowdsourcing enables collecting feedback at scale.Researchers and designers use platforms like Amazon’s Mechanical Turk (www.mturk.com) to deploy user studies with large samples, receiving extremely rapidfeedback in, e.g., creative text production [247], graphic design [279] and sonicimitations [36].The problem with crowdsourcing tactile feedback is that the “crowd” can’t feelthe stimuli. Even when consumer devices have tactors, output quality and intensityis unpredictable and uncontrollable. Sending each user a device is impractical.What we need are crowd-friendly proxies for test stimuli. Here, we define aproxy vibration as a sensation that communicates key characteristics of a sourcestimulus within a bounded error; a proxy modality is the perceptual channel andrepresentation employed. In the new evaluation process thus enabled, the designertranslates a sensation of interest into a proxy modality, receives rapid feedbackfrom a crowd-sourcing platform, then interprets that feedback using known errorbounds. 
In this way, designers can receive high-volume, rapid feedback to usein tandem with costly in-lab studies, for example, to guide initial designs or togeneralize findings from smaller studies with a larger sample.To this end, we must first establish feasibility of this approach, with specificgoals: (G1) Do proxy modalities work? Can they effectively communicate bothphysical VT properties (e.g., duration), and high-level affective properties (rough-ness, pleasantness)? (G2) Can proxies be deployed remotely? (G3) What modali-ties work, and (G4) what obstacles must be overcome to make this approach prac-tical?This paper describes a proof-of-concept for proxy modalities for tactile crowd-sourcing, and identifies challenges throughout the workflow pipeline. We describeand assess two modalities’ development, translation process, validation with a testset translation, and MTurk deployment. Our two modalities are a new technique tographically visualize high-level traits, and the low-fidelity actuators on users’ owncommodity smartphones. Our test material is a set of 10 VT stimuli designed fora high-fidelity tactile display suitable for wearables (referred to as “high fidelityvibrations”), and perceptually well understood as presented by that type of display91(Figure 6.7). We conducted two coupled studies, first validating proxy expressive-ness in lab, then establishing correspondence of results in remote deployment. Ourcontributions are:• A way to crowdsource tactile sensations (vibration proxies), with a technicalproof-of-concept.• A visualization method that communicates high-level affective features moreeffectively than the current tactile visualization standard (vibration wave-forms).• Evidence that both proxy modalities can represent high-level affective fea-tures, with lessons about which features work best with which modalities.• Evidence that our proxy modalities are consistently rated in-lab and remotely,with initial lessons for compliance.6.3 Related WorkWe cover work related to VT icons and evaluation methods for VT effects, thecurrent understanding of affective haptics, and work with Mechanical Turk in othermodalities.6.3.1 Existing Evaluation Methods for VT EffectsThe haptic community has appropriated or developed many types of user studiesto evaluate VT effects and support VT design. These target a variety of objectives:1) Perceptibility: Determine the perceptual threshold or Just Noticeable Dif-ference (JND) of VT parameters. Researchers vary the values of a VT parameter(e.g., frequency) to determine the minimum perceptible change [170, 205].2) Illusions: Studies investigate effects like masking or apparent motion of VTsensations, useful to expand a haptic designer’s palette [109, 126, 242].3) Perceptual organization: Reveal the underlying dimensionality of how hu-mans perceive VT effects (which are generally different than the machine param-eters used to generate the stimuli). Multidimensional Scaling (MDS) studies arecommon, inviting participants compare or group vibrations based on perceivedsimilarity [39, 116, 202, 265, 269].924) Encoding abstract information: Researchers examine salient and memo-rable VT parameters (e.g. energy, rhythm) as well as the number of VT icons thatpeople can remember and attribute to an information piece [3, 26, 39, 265].5) Assign affect: Studies investigate the link between affective characteristicsof vibrations (e.g., pleasantness, urgency) to their engineering parameters (e.g.,frequency, waveform) [148, 209, 265, 286]. 
To achieve this, VT researchers com-monly design or collect a set of vibrations and ask participants to rate them on aset of qualitative metrics.6) Identify language: Participants describe or annotate tactile stimuli in naturallanguage [39, 99, 121, 191, 240, 265].7) Use case support: Case studies focus on conveying information with VTicons such as collaboration [39], public transit [27] and direction [7, 27], or tim-ing of a presentation [260]. In other cases, VT effects are designed for user en-gagement, for example in games and movies, multimodal storytelling, or art in-stallations [129, 289]. Here, the designers use iterative design and user feedback(qualitative and quantitative with user rating) to refine and ensure effective design.All of the above studies would benefit from the large number of participantsand fast data collection on MTurk. In this paper, we chose our methodology so thatthe results are informative for a broad range of these studies.6.3.2 Affective HapticsVT designers have the challenge of creating perceptually salient icon sets that con-vey meaningful content. A full range of expressiveness means manipulating notonly a vibration’s physical characteristics but also its perceptual and emotionalproperties, and collecting feedback on this. Here, we refer to all these propertiesas affective characteristics.Some foundations for affective VT design are in place. Studies on tactile lan-guage and affect are establishing a set of perceptual metrics [191, 240]. Guestet al collated a large list of emotion and sensation words describing tactile stim-uli; then, based on multidimensional scaling of similarity ratings, proposed com-fort or pleasantness and arousal as key dimensions for tactile emotion words, andrough/smooth, cold/warm, and wet/dry for sensation [191]. Even so, there is not93yet agreement on an affective tactile design language [132].Recently, Seifi et al compiled research on tactile language into five taxonomiesfor describing vibrations [240]. 1) Physical properties that can be measured: e.g.,duration, energy, tempo or speed, rhythm structure; 2) sensory properties: rough-ness, and sensory words from Guest et al’s touch dictionary [99]; 3) emotional in-terpretations: pleasantness, arousal (urgency), dictionary emotion words [99]; 4)metaphors provide familiar examples resembling the vibration’s feel: heartbeat,insects; 5) usage examples describe events which a vibration fits: an incomingmessage or alarm.To evaluate our vibration proxies, we derived six metrics from these taxonomiesto capture vibrations’ physical, sensory and emotional aspects: 1) duration, 2) en-ergy, 3) speed, 4) roughness, 5) pleasantness, and 6) urgency.6.3.3 Mechanical Turk (MTurk)MTurk is a platform for receiving feedback from a large number of users, in ashort time at a low cost [112, 147]. These large, fast, cheap samples have proveduseful for many cases including running perceptual studies [112], developing tax-onomies [47], feedback on text [247], graphic design [279], and sonic imitations[36].Crowdsourced studies have drawbacks. The remote, asynchronous study envi-ronment is not controlled; compared to a quiet lab, participants may be subjectedto unknown interruptions, and may spend less time on task with more responsevariability [147]. MTurk is not suitable for getting rich, qualitative feedback orfollowing up on performance or strategy [177]. 
Best practices – e.g., simplifying tasks to be confined to a singular activity, or using instructions complemented with example responses – are used to reduce task ambiguity and improve response quality [5]. Some participants try to exploit the service for personal profit, exhibiting low task engagement [69], and must be pre- or post-screened.

Studies have examined MTurk result validity in other domains. Most relevantly, Heer et al [112] validated MTurk data for graphical perception experiments (spatial encoding and luminance contrast) by replicating previous perceptual studies on MTurk. Similarly, we compare results of our local user study with an MTurk study to assess viability of running VT studies on MTurk, and collect and examine phone properties in our MTurk deployment.

Figure 6.2: Source of high-fidelity vibrations and perceptual rating scales. (a) VibViz interface [240]; (b) C2 tactor.

Need for HapTurk: Our present goal is to give the haptic design community access to crowdsourced evaluation so we can establish modality-specific methodological tradeoffs. There is ample need for huge-sample haptic evaluation. User experience of transmitted sensations must be robust to receiving-device diversity. Techniques to broadcast haptic effects to video [146, 183], e.g., with YouTube [1] or MPEG7 [70, 71], now require known high-fidelity devices because of remote device uncertainty; the same applies to social protocols developed for remote use of high-quality vibrations, e.g., in collaborative turn taking [39]. Elsewhere, studies of VT use in consumer devices need larger samples: e.g., perceivability [140], encoding of caller parameters [24], including caller emotion and physical presence collected from pressure on another handset [115], and usability of expressive, customizable VT icons in social messaging [130]. To our knowledge, this is the first attempt to run a haptic study on a crowdsource site and characterize its feasibility and challenges for haptics.

6.4 Sourcing Reference Vibrations and Qualities

We required a set of exemplar source vibrations on which to base our proxy modalities. This set needed to 1) vary in physical, perceptual, and emotional characteristics, 2) represent the variation in a larger source library, and 3) be small enough for experimental feasibility.

6.4.1 High-Fidelity Reference Library

Figure 6.3: VISDIR visualization, based on VibViz.

We chose 10 vibrations from a large, freely available library of 120 vibrations (VibViz, [240]), browsable through five descriptive taxonomies, and ratings of taxonomic properties. Vibrations were designed for an Engineering Acoustics C2 tactor, a high-fidelity, wearable-suitable voice coil, commonly used in haptic research [240]. We employed VibViz’s filtering tools to sample, ensuring variety and coverage by selecting vibrations at high and low ends of energy/duration dimensions, and filtering by ratings of temporal structure/rhythm, roughness, pleasantness, and urgency. To reduce bias, two researchers independently and iteratively selected a set of 10 items each, which were then merged.

Because VibViz was designed for a C2 tactor, we used a handheld C2 in the present study (Figure 6.2b).

6.4.2 Affective Properties and Rating Scales

To evaluate our proxies, we adapted six rating scales from the tactile literature and new studies.
Seifi et al [240] proposed five taxonomies for describing vibrationsincluding physical, sensory, emotional, metaphors, and use examples. Three tax-onomies comprise quantitative metrics and adjectives; two use descriptive words.We chose six quantitative metrics from [240] that capture important affective(physical, perceptual, and emotional) VT qualities: 1) duration [low-high], 2) en-ergy [low-high], 3) speed [slow-fast], 4) roughness [smooth-rough], 5) urgency[relaxed-alarming], and 6) pleasantness [unpleasant-pleasant]. A large scale (0-100) allowed us to treat the ratings as continuous variables. To keep trials quick96Figure 6.4: Visualization design process. Iterative development and pilotingresults in the VISEMPH visualization pattern.and MTurk-suitable, we did not request open-ended responses or tagging.6.5 Proxy Choice and DesignThe proxies’ purpose was to capture high-level traits of source signals. We in-vestigated two proxy channels and approaches, to efficiently establish viabilityand search for triangulated perspectives on what will work. The most obviousstarting points are to 1) visually augment the current standard of a direct traceof amplitude = f (time), and 2) reconstruct vibrations for common-denominator,low-fidelity actuators.We considered other possibilities (e.g., auditory stimuli, for which MTurk hasbeen used [36], or animations). However, our selected modalities balance a) di-rectness of translation (low fidelity could not be excluded); b) signal control (hardto ensure consistent audio quality/volume/ambient masking); and c) developmentprogression (visualization underlies animation, and is simpler to design, imple-ment, display). We avoided multisensory combinations at this early stage for clar-ity of results. Once the key modalities are tested, combinations can be investigatedin future work.“REF” denotes high-fidelity source renderings (C2 tactor).1) Visual proxies: Norms in published works (e.g. [39]) directed [240] toconfirm that users rely on graphical f (time) plots to skim and choose from largelibraries. We tested the direct plot, VISDIR, as the status-quo representation.However, these unmodified time-series emphasize or mask traits differentlythan felt vibrations, in particular for higher-level or “meta” responses. We consid-97Figure 6.5: Final VISEMPH visualization guide, used by researchers to cre-ate VISEMPH proxy vibrations and provided to participants duringVISEMPH study conditions.ered many other means of visualizing vibration characteristics, pruned candidatesand refined design via piloting to produce a new scheme which explicitly empha-sizes affective features, VISEMPH.2) Low-fidelity vibration proxy: Commodity device (e.g. smartphone) actu-ators usually have low output capability compared to the C2, in terms of frequencyresponse, loudness range, distortion and parameter independence. Encouraged byexpressive rendering of VT sensations with commodity actuation (from early con-straints [39] to deliberate design-for-lofi [130]), we altered stimuli to convey high-level parameters under these conditions, hereafter referred to as LOFIVIB.Translation: Below, we detail first-pass proxy development. In this feasibilitystage, we translated proxy vibrations manually and iteratively, as we sought gen-eralizable mappings of the parametric vibration definition to the perceptual qualitywe wished to highlight in the proxy. 
We frequently relied on a cycle of user feedback, e.g., to establish the perceived roughness of the original stimuli and proxy candidate.

Automatic translation is an exciting goal. Without it, HapTurk is still useful for gathering large samples; but automation will enable a very rapid create-test cycle. It should be attainable, bootstrapped by the up-scaling of crowdsourcing itself. With a basic process in place, we can use MTurk studies to identify these mappings relatively quickly.

6.5.1 Visualization Design (VISDIR and VISEMPH)

VISDIR was based on the original waveform visualization used in VibViz (Figure 6.3). In Matlab, vibration frequency and envelope were encoded to highlight its pattern over time. Since VISDIR patterns were detailed, technical and often inscrutable for users without an engineering background, we also developed a more interpretive visual representation, VISEMPH, and included VISDIR as a status-quo baseline.

We took many approaches to depicting vibration high-level properties, with visual elements such as line thickness, shape, texture and colour (Figure 6.4). We first focused on line sharpness, colour intensity, length and texture: graphical waveform smoothness and roughness were mapped to perceived roughness; colour intensity highlighted perceived energy. Duration mapped to length of the graphic, while colour and texture encoded the original’s invoked emotion.

Four participants were informally interviewed and asked to feel REF vibrations, describe their reactions, and compare them to several visualization candidates. Participants differed in their responses, and had difficulties in understanding VT emotional characteristics from the graphic (i.e., pleasantness, urgency), and in reading the circular patterns. We simplified the designs, eliminating representation of emotional characteristics (colour, texture), while retaining more objective mappings for physical and sensory characteristics.

VISEMPH won an informal evaluation of final proxy candidates (n=7), and was captured in a translation guideline (Figure 6.5).

6.5.2 Low Fidelity Vibration Design

For our second proxy modality, we translated REF vibrations into LOFIVIB vibrations. We used a smartphone platform for its built-in commodity-level VT displays, its ubiquity amongst users, and low security concerns for vibration imports to personal devices [79]. To distribute vibrations remotely, we used the HTML5 Vibration API, implemented on Android phones running compatible web browsers (Google Chrome or Mozilla Firefox).

Figure 6.6: Example of LOFIVIB proxy design. Pulse duration was hand-tuned to represent length and intensity, using duty cycle to express dynamics such as ramps and oscillations.

As with VISEMPH, we focused on physical properties when developing LOFIVIB (our single low-fi proxy exemplar). We emphasized rhythm structure, an important design parameter [265] and the only direct control parameter of the HTML5 API, which issues vibrations using a series of on/off durations. Simultaneously, we manipulated perceived energy level by adjusting the actuator pulse train on/off ratio, up to the point where the rhythm presentation was compromised. Shorter durations represented a weak-feeling hi-fi signal, while longer durations conveyed intensity in the original. This was most challenging for dynamic intensities or frequencies, such as increasing or decreasing ramps, and long, low-intensity sensations.
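The sketch below illustrates this style of translation for the HTML5 Vibration API just described, which accepts an array of alternating on/off durations in milliseconds (navigator.vibrate). It is a simplified illustration, not the hand-tuned process we actually used: the 50 ms period, the linear duty-cycle mapping, and the example ramp are assumptions made here.

// Illustrative sketch: approximate a time-varying intensity envelope on a
// commodity phone actuator by modulating the duty cycle of a fixed-period
// pulse train, then play it with the HTML5 Vibration API.
function envelopeToPulseTrain(envelope, periodMs) {
  const pattern = [];
  for (const level of envelope) {               // each level is an intensity in [0, 1]
    const onMs = Math.round(level * periodMs);  // stronger feel = longer "on" time
    pattern.push(onMs, periodMs - onMs);        // remainder of the period is "off"
  }
  return pattern; // alternating vibrate/pause durations in milliseconds
}

// Example: a one-second increasing ramp (20 periods of 50 ms each).
const rampEnvelope = Array.from({ length: 20 }, (_, i) => i / 19);
const pulsePattern = envelopeToPulseTrain(rampEnvelope, 50);
if (typeof navigator !== "undefined" && navigator.vibrate) {
  navigator.vibrate(pulsePattern); // no effect on devices without vibration support
}

In practice, each LOFIVIB proxy was tuned by feel on the target phones rather than generated mechanically, but the duty-cycle idea underlying the translation is the same.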
To mitigate the effect of different actuators found in smartphones, we limited our investigation to Android OS. While this restricted our participant pool, there was nevertheless no difficulty in quickly collecting data for either study. We designed for two phones representing the largest classes of smartphone actuators: a Samsung Galaxy Nexus, which contains a coin-style actuator, and a Sony Xperia Z3 Compact, which uses a pager motor resulting in more subdued, smooth sensations. Though perceptually different, control of both actuator styles is limited to on/off durations. As with VISEMPH, we developed LOFIVIB vibrations iteratively, first with team feedback, then informal interviews (n=6).

Figure 6.7: Vibrations visualized as both VISDIR (left) and VISEMPH.

6.6 Study 1: In-lab Proxy Vibration Validation (G1)

We obtained user ratings for the hi-fi source vibrations REF and three proxies (VISDIR, VISEMPH, and LOFIVIB). An in-lab format avoided confounds and unknowns due to remote MTurk deployment, addressed in Study 2. Study 1 had two versions: in one, participants rated visual proxies VISDIR and VISEMPH next to REF; in the other, LOFIVIB next to REF. REFVIS and REFLOFIVIB denote these two references, each compared with its respective proxy(ies) and thus with its own data. In each substudy, participants rated each REF vibration on 6 scales [0-100] in a computer survey, and again for the proxies. Participants in the visual substudy did this for both VISDIR and VISEMPH, then indicated preference for one. Participants in the lo-fi substudy completed the LOFIVIB survey on a phone, which also played vibrations using Javascript and HTML5; other survey elements employed a laptop. 40 participants aged 18-50 were recruited via university undergraduate mailing lists. 20 (8F) participated in the visual substudy, and a different 20 (10F) in the low-fi vibration substudy.

Reference and proxies were presented in different random orders. Pilots confirmed that participants did not notice proxy/target linkages, and thus were unlikely to consciously match their ratings between pair elements. REF/proxy presentation order was counterbalanced, as was VISDIR/VISEMPH.

6.6.1 Comparison Metric: Equivalence Threshold

To assess whether proxy modalities were rated similarly to their targets, we employed equivalence testing, which tests the hypothesis that sample means are within a threshold δ, against the null of being outside it [236]. This tests whether two samples are equivalent with a known error bound; it corresponds to creating confidence intervals of means, and examining whether they lie entirely within the range (−δ, δ).

We first computed least-squares means for the 6 rating scales for each proxy modality and vibration. 95% confidence intervals (CI) for REF rating means ranged from 14.23 points (Duration ratings) to 20.33 (Speed). Because estimates of the REF "gold standard" mean could not be more precise than these bounds, we set equivalence thresholds for each rating equal to CI width. For example, given the CI for Duration of 14.23, we considered proxy Duration ratings equivalent if the CI for a difference fell completely in the range (−14.23, 14.23). With pooled standard error, this corresponded to the case where two CIs overlap by more than 50%. We also report when a difference was detected, through typical hypothesis testing (i.e., where CIs do not overlap).

Thus, each rating-set pair could be equivalent, uncertain, or different.
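The resulting three-way classification can be summarized in a few lines. The sketch below assumes a simple normal approximation and uses invented means and standard errors; the actual analysis worked from least-squares means of the fitted model, but the decision rule is the same in spirit.

```python
import math

def classify_pair(mean_ref, se_ref, mean_proxy, se_proxy, delta, z=1.96):
    """Classify a REF/proxy rating pair as 'equivalent', 'different', or
    'uncertain': equivalent if the 95% CI of the mean difference lies
    entirely inside (-delta, +delta); different if it excludes zero;
    otherwise uncertain.  (Simplified: TOST-style corner cases ignored.)"""
    diff = mean_proxy - mean_ref
    se_diff = math.sqrt(se_ref**2 + se_proxy**2)   # pooled standard error
    lo, hi = diff - z * se_diff, diff + z * se_diff
    if -delta < lo and hi < delta:
        return "equivalent"
    if lo > 0 or hi < 0:
        return "different"
    return "uncertain"

# e.g., Duration: delta = 14.23 rating points (the REF CI width).
# The means and standard errors below are invented for illustration.
print(classify_pair(mean_ref=62.0, se_ref=3.6,
                    mean_proxy=66.0, se_proxy=3.0, delta=14.23))
```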
Figure 6.9 offers insight into how these levels are reflected in the data given the high rating variance. This approach gives a useful error bound, quantifying the precision tradeoff in using vibration proxies to crowdsource feedback.

6.6.2 Proxy Validation (Study 1) Results and Discussion

Overview of Results

Study 1 results appear graphically in Figure 6.8. To interpret this plot, look for (1) equivalence indicated by bar color, and CI size by bar height (dark green/small are good); (2) rating richness: how much spread, vibration to vibration, within a cell indicates how well that parameter captures the differences users perceived; (3) modality consistency: the degree to which the bars' up/down pattern translates vertically across rows. When similar (and not flat), the proxy translations are being interpreted by users in the same way, providing another level of validation. We structure our discussion around how the three modalities represent the different rating scales. We refer to the number of equivalents and differents in a given cell as [x:z], with y = number of uncertains, and x + y + z = 10.

Figure 6.8: 95% confidence intervals and equivalence test results for Study 1 - Proxy Validation. Grey represents REF ratings. Dark green maps equivalence within our defined threshold, and red a statistical difference indicating an introduced bias; light green results are inconclusive. Within each cell, variation of REF ratings means vibrations were rated differently compared to each other, suggesting they have different perceptual features and represent a varied set of source stimuli.

Duration and Pleasantness were translatable

Duration was comparably translatable for LOFIVIB [5:1] and VISEMPH [6:1]; VISDIR was less consistent [7:3] (two differences very large). Between the three modalities, 9/10 vibrations achieved equivalence with at least one modality. For Duration, this is unsurprising. It is a physical property that is controllable through the Android vibration API, and both visualization methods explicitly present Duration as their x-axis. This information was apparently not lost in translation.

Figure 6.9: Rating distributions from Study 1, using V6 Energy as an example. These violin plots illustrate 1) the large variance in participant ratings, and 2) how equivalence thresholds reflect the data. When equivalent, proxy ratings are visibly similar to REF. When uncertain, ratings follow a distribution with unclear differences. When different, there is a clear shift.

More surprisingly, Pleasantness fared only slightly worse for LOFIVIB [4:2] and VISEMPH [4:1]; 8/10 vibrations had at least one modality that provided equivalence. Pleasantness is a higher-level affective feature than Duration. Although not an absolute victory, this result gives evidence that, with improvement, crowdsourcing may be a viable method of feedback for at least one affective parameter.

Speed and Urgency translated better with LOFIVIB

LOFIVIB was effective at representing Urgency [6:2]; VISEMPH attained only [4:5], and VISDIR [3:5]. Speed was less translatable.
LOFIVIB did best at [4:2]; VISDIR reached only [1:6], and VISEMPH [3:5]. However, the modalities again complemented each other. Of the three, 9/10 vibrations were equivalent at least once for Urgency (V8 was not). Speed had less coverage: 6/10 had equivalencies (V3,4,6,10 did not).

Roughness had mixed results; best with VISEMPH

Roughness ratings varied heavily by vibration. 7 vibrations had at least one equivalence (V2,4,10 did not). All modalities had 4 equivalencies each: VISEMPH [4:3], VISDIR [4:4], and LOFIVIB [4:5].

Energy was most challenging

Like Roughness, 7 vibrations had at least one equivalence between modalities (V1,4,10 did not). LOFIVIB [4:5] did best with Energy; VISEMPH and VISDIR struggled at [1:8].

Emphasized visualization outperformed direct plot

Though it depended on the vibration, VISEMPH outperformed VISDIR for most metrics, having the same or better equivalencies/differences for Speed, Energy, Roughness, Urgency, and Pleasantness. Duration was the only mixed result, as VISDIR had both more equivalencies and more differences ([7:3] versus [6:1]). In addition, 16/20 participants (80%) preferred VISEMPH to VISDIR. Although not always clear-cut, these comparisons overall indicate that our VISEMPH visualization method communicated these affective qualities more effectively than the status quo. This supports our approach to emphasized visualization, and motivates the future pursuit of other visualizations.

V4, V10 difficult, V9 easy to translate

While most vibrations had at least one equivalency for 5 rating scales, V4 and V10 only had 3. V4 and V10 had no equivalences at all for Speed, Roughness, and Energy, making them some of the most difficult vibrations to translate. V4's visualization had very straight lines, perhaps downplaying its texture. V10 was by far the longest vibration, at 13.5s (next longest was V8 with 4.4s). Its length may have similarly masked textural features.

V8 was not found to be equivalent for Urgency and Pleasantness. V8 is an extremely irregular vibration, with a varied rhythm and amplitude, and the second longest. This may have made it difficult to glean more intentional qualities like Urgency and Pleasantness. However, it was only found to be different for VISDIR/Urgency, so we cannot conclude that significant biases exist.

Figure 6.10: 95% confidence intervals and equivalence test results for Study 2 - MTurk Deployment Validation. Equivalence is indicated with dark green, difference is indicated with red, and uncertainty with light green. A red star indicates a statistically significant difference between remote and local proxy ratings.

By contrast, V9 was the only vibration that had an equivalency for every rating scale, and in fact could be represented across all ratings with LOFIVIB. V9 was a set of distinct pulses, with no dynamic ramps; it thus may have been well suited to translation to LOFIVIB.

Summary

In general, these results indicate promise, but also a need for improvement and combination of proxy modalities. Unsurprisingly, participant ratings varied, reducing confidence and increasing the width of confidence intervals (indeed, this is partial motivation to access larger samples).
Even so, both differences and equivalencies were found in every rating/proxy modality pairing. Most vibrations were equivalent with at least one modality, suggesting that we might pick an appropriate proxy modality depending on the vibration; we discuss the idea of triangulation in more detail later. Duration and Pleasantness were fairly well represented, Urgency and Speed were captured best by LOFIVIB, and Roughness was mixed. Energy was particularly difficult to represent with these modalities. We also find that results varied depending on vibration, meaning that more analysis into what makes vibrations easier or more difficult to represent could be helpful.

Though we were able to represent several features using proxy modalities within a bounded error rate, this alone does not mean they are crowdsource-friendly. All results from Study 1 were gathered in-lab, a more controlled environment than over MTurk. We thus ran a second study to validate our proxy modality ratings when deployed remotely.

6.7 Study 2: Deployment Validation with MTurk (G2)

To determine whether ratings of a proxy are similar when gathered locally or remotely, we deployed the same computer-run proxy modality surveys on MTurk. We wanted to discover the challenges all through the pipeline for running a VT study on MTurk, including larger variations in phone actuators and experimental conditions (G4). We purposefully did not iterate on our proxy vibrations or survey, despite identifying many ways to improve them, to avoid creating a confound in comparing results of the two studies.

The visualization proxies were run as a single MTurk Human Intelligence Task (HIT), counterbalanced for order; the LOFIVIB survey was deployed as its own HIT. Each HIT was estimated at 30 minutes, for which participants received $2.25 USD. In comparison, Study 1 participants were estimated to take 1 hour and received $10 CAD. We anticipated a discrepancy in average task time due to a lack of direct supervision for the MTurk participants, and expected this to lead to less accurate participant responses, prompting the lower pay rate. On average, MTurk participants took 7 minutes to complete the HIT, while local study participants took 30 minutes.

We initially accepted participants of any HIT approval rate to maximize recruitment in a short timeframe. Participants were post-screened to prevent participation in both studies. 49 participants were recruited. No post-screening was used for the visual substudy. For the LOFIVIB proxy survey, we post-screened to verify the device used [177]: we (a) asked participants to confirm via a survey question that they had completed the study with an Android device, (b) detected the actual device via FluidSurvey's OS-check feature, and (c) rejected inconsistent samples (e.g., 9 used non-Android platforms for LOFIVIB). Of the included data, 20 participants each participated in the visual proxy condition (6F) and the LOFIVIB condition (9F).

For both studies, Study 1's data was used as a "gold standard" that served as a baseline comparison with the more reliable local participant ratings [5]. We compared the remote proxy results (from MTurk) to the REF results gathered in Study 1, using the same analysis methods.

6.7.1 Results

Study 2 results appear in Figure 6.10, which compares remotely collected ratings with locally collected ratings for the respective reference (the same reference as for Figure 6.8). It can be read the same way, but adds information.
Based on an analysis of a different comparison, a red star indicates a statistically significant difference between remote proxy ratings and corresponding local proxy ratings. This analysis revealed that ratings for the same proxy gathered remotely and locally disagreed 21 times (stars) out of 180 rating/modality/vibration combinations; i.e., relatively infrequently.

Overall, we found similar results and patterns in Study 2 as for Study 1. The two figures show similar up/down rating patterns; the occasional exceptions correspond to red-starred items. Specific results varied, possibly due to statistical noise and rating variance. We draw similar conclusions: that proxy modalities can still be viable when deployed on MTurk, but require further development to be reliable in some cases.

6.8 Discussion

Here we discuss high-level implications from our findings and relate them to our study goals (G1-G4 in the Introduction).

6.8.1 Proxy Modalities are Viable for Crowdsourcing (G1, G2: Feasibility)

Our studies showed that proxy modalities can represent affective qualities of vibrations within reasonably chosen error bounds, depending on the vibration. These results largely translate to deployment on MTurk. Together, these two steps indicate that proxy modalities are a viable approach to crowdsourcing VT sensations, and can reach a usable state with a bounded design iteration (as outlined in the following sections). This evidence also suggests that we may be able to deploy directly to MTurk for future validation. Our two-step validation was important as a first look at whether ratings shift dramatically; and we saw no indications of bias or overall shift between locally running proxy modalities and remotely deploying them.

6.8.2 Triangulation (G3: Promising Directions/Proxies)

Most vibrations received equivalent ratings for most scales in at least one proxy modality. Using proxy modalities in tandem might help improve response accuracy. For example, V6 could be rendered with LOFIVIB for a pleasantness rating, then as VISEMPH for Urgency. Alternatively, we might develop an improved proxy vibration by combining modalities - a visualization with an accompanying low-fidelity vibration.

6.8.3 Animate Visualizations (G3: Promising Directions)

Speed and Urgency were not as effectively transmitted with our visualizations as with our vibration. Nor was Duration well portrayed with VISDIR, which had a shorter time axis than the exaggerated VISEMPH. It may be more difficult for visual representations to portray time effectively: perhaps it is hard for users to distinguish Speed/Urgency, or the time axis is not at an effective granularity. Animations (e.g., adding a moving line to help indicate speed and urgency) might help to decouple these features. As with triangulation, this might also be accomplished through multimodal proxies which augment a visualization with a time-varying sense using sounds or vibration. Note, however, that Duration was more accurately portrayed by VISEMPH, suggesting that direct representation of physical features can be translated.

6.8.4 Sound Could Represent Energy (G3: Promising Directions)

Our high-fidelity reference is a voice-coil actuator, also used in audio applications. Indeed, in initial pilots we played vibration sound files through speakers.
Sound is the closest modality to vibration in the literature, and a vibration signal's sound output is correlated with the vibration's energy and sensation.

However, in our pilots, sometimes the vibration sound did not match the sensation; was not audible (low-frequency vibrations); or the C2 could only play part of the sound (i.e., the sound was louder than the sensation).

Thus, while the raw sound files are not directly translatable, a sound proxy definitely has potential. It could, for example, supplement where the VISDIR waveform failed to perform well on any metric (aside from Duration) but a more expressive visual proxy (VISEMPH) performed better.

6.8.5 Device Dependency and Need for Energy Model for Vibrations (G4: Challenges)

Energy did not translate well. This could be a linguistic confusion, but also a failure to translate this feature. For the visualization proxies, it may be a matter of finding the right representation, which we continue to work on.

However, with LOFIVIB, this represents a more fundamental tradeoff due to characteristics of phone actuators, which have less control over energy output than we do with a dedicated and more powerful C2 tactor. The highest vibration energy available in phones is lower than for the C2; this additional power obviously extends expressive range. Furthermore, vibration energy and time are coupled in phone actuators: the less time the actuator is on, the lower the vibration energy. As a result, it is difficult to produce a very short pulse with very high energy (V1, V3, V8). The C2's voice-coil technology does not have this duty-cycle-derived coupling. Finally, the granularity of the energy dimension is coarser for phone actuators. This results in a tradeoff when designing (for example) a ramp sensation: if you aim for accurate timing, the resulting vibration will have lower energy (V10); if you match the energy, the vibration will be longer.

Knowing these tradeoffs, designers and researchers can adjust their designs to obtain more accurate results on their intended metric. Perhaps multiple LOFIVIB translations can be developed which maintain different qualities (one optimized on timing and rhythm, the other on energy). In both these cases, accurate models for rendering these features will be essential.

6.8.6 VT Affective Ratings are Generally Noisy (G4: Challenges)

Taken as a group, participants were not highly consistent with one another when providing these affective ratings, whether local or remote. This is in line with previous work [240], and highlights a need to further develop rating scales for affective touch. Larger sample sizes, perhaps gathered through crowdsourcing, may help reduce or characterize this error. Alternatively, it gives support to the need to develop mechanisms for individual customization. If there are "types" of users who do share preferences and interpretations, crowdsourcing can help with this as well.

6.8.7 Response & Data Quality for MTurk LOFIVIB Vibrations (G4: Challenges)

When deploying vibrations over MTurk, 8/29 participants (approximately 31%) completed the survey using non-Android OSes (Mac OS X; Windows 7, 8.1, NT) despite these requirements being listed in the HIT and the survey. One participant reported not being able to feel the vibrations despite using an Android phone. This suggests that enforcing a remote survey to be taken on the phone is challenging, and that additional screens are needed to identify participants not on a particular platform.
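A first, partial line of defense is to screen responses programmatically. The sketch below combines a user-agent check with the self-report and a "could you feel the vibrations?" question; the function and field names are ours for illustration, and none of these checks is conclusive on its own (user-agent strings can be spoofed, and not every Android browser implements the Vibration API).

```python
import re

ANDROID_UA = re.compile(r"Android \d", re.IGNORECASE)

def screen_response(user_agent, self_reported_android, felt_vibrations):
    """Keep a LOFIVIB-style response only if the browser user agent, the
    participant's self-report, and a felt-vibration check all agree."""
    if not ANDROID_UA.search(user_agent or ""):
        return "reject: non-Android user agent"
    if not self_reported_android:
        return "reject: self-report mismatch"
    if not felt_vibrations:
        return "flag: vibration possibly not rendered"
    return "accept"

# Example user-agent string is illustrative only.
print(screen_response(
    "Mozilla/5.0 (Linux; Android 4.4.2; SM-G900) Chrome/39.0 Mobile",
    self_reported_android=True, felt_vibrations=True))
```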
Future work might investigate additional diagnostic tools to ensurethat vibrations are being generated, through programmatic screening of platforms,well-worded questions and instructions, and (possibly) ways of detecting vibra-tions actually being played, perhaps through the microphone or accelerometer).6.8.8 Automatic Translation (G4: Challenges)Our proxy vibrations were developed by hand, to focus on the feasibility of crowd-sourcing. However, this additional effort poses a barrier for designers that mightnegate the benefits of using a platform of MTurk. As this approach becomes betterdefined, we anticipate automatic translation heuristics for proxy vibrations usingvalidated algorithms. Although these might be challenging to develop for emo-111tional features, physical properties like amplitude, frequency, or measures of en-ergy and roughness would be a suitable first step. Indeed, crowdsourcing itselfcould be used to create these algorithms, as several candidates could be developed,their proxy vibrations deployed on MTurk, and the most promising algorithms latervalidated in lab.6.8.9 LimitationsA potential confound was introduced by VISEMPH having a longer time axis thanVISDIR: some of VISEMPH’s improvements could be due to seeing temporal fea-tures in higher resolution. This is exacerbated by V10 being notably longer thanthe next longest vibration, V8 (13.5s vs. 4.4s), further reducing temporal resolutionvibrations other than V10.We presented ratings to participants by-vibration rather than by-rating. Be-cause participants generated all ratings for a single vibration at the same time, itis possible there are correlations between the different metrics. We chose this ar-rangement because piloting suggested it was less cognitively demanding than pre-senting metrics separately for each vibration. Future work can help decide whethercorrelations exist between metrics, and whether these are an artifact of stimuluspresentation or an underlying aspect of the touch aesthetic.Despite MTurk’s ability to recruit more participants, we used the same samplesize of 40 across both studies. While our proxies seemed viable for remote deploy-ment, there were many unknown factors in MTurk user behaviour at the time ofdeployment. We could not justify more effort without experiencing these factorsfirsthand. Thus, we decided to use a minimal sample size for the MTurk study thatwas statistically comparable to the local studies. In order to justify a larger remotesample size in the future, we believe it is best to iterate the rating scales and to testdifferent sets of candidate modalities.As discussed, we investigated two proxy modalities in this first examination butlook forward to examining others (sound, text, or video) alone or in combination.1126.9 ConclusionIn this paper, we crowdsourced high-level parameter feedback on VT sensationsusing a new method of proxy vibrations. We translated our initial set of high-fidelity vibrations, suitable for wearables or other haptic interactions, into twoproxy modalities: a new VT visualization method, and low-fidelity vibrations onphones.We established the most high-risk aspects of VT proxies, namely feasibility inconveying affective properties, and consistent local and remote deployment withtwo user studies. 
Finally, we highlighted promising directions and challenges ofVT proxies, to guide future tactile crowdsourcing developments, targeted to em-power VT designers with the benefits crowdsourcing brings.113Chapter 7Breadth: Focused Design ProjectsIn Chapter 7, we complement the vibrotactile tools and techniques in Chapters 3-6, broadening our scope to include application areas like gaming and education,non-vibrotactile haptic devices, and other design concerns like customization. Weadopt a haptician’s role and practice research through design, gaining first-handknowledge into HaXD in a more natural design setting than our one-session lab-based evaluations.These focused design projects contributed to our inquiry in three ways: individ-ually informing our in-depth case studies with practical findings about implemen-tation, implicitly enriching our final conclusions (Chapter 9) with reflection in ourdesign process, and collectively suggesting concrete conclusions about the diver-sity of haptic experiences (Section 9.2.2) and conceptually framing haptic designs(Section 9.2.3).We include five design projects:7.1 FeelCraft: Sharing Customized Effects for Games12, a plug-in architec-ture for distributing customizable feel effects, implemented with the gameMinecraft. FeelCraft showed that we needed a cohesive experience with vi-sual, audio, and haptic feedback all carefully coordinated to be understand-1Schneider, Zhao, and Israr. (2015) FeelCraft: User-Crafted Tactile Content. Lecture Notes inElectrical Engineering 277: Haptic Interaction.2Zhao, Schneider, Klatzky, Lehman, and Israr. (2014) FeelCraft: Crafting Tactile Experiencesfor Media using a Feel Effect Library. Proceedings of the Annual Symposium on User InterfaceSoftware and Technology – UIST ’14 Demos.114able and engaging to users.7.2 Feel Messenger: Expressive Effects with Commodity Systems34, a designproject creating expressive shareable VT icons on commodity smart phones.With Feel Messenger, we found even low-fidelity vibrations could be en-gaging, but again needed a clear, engaging story, e.g., with colourful visualemoji.7.3 RoughSketch: Designing for an Alternative Modality, a drawing applica-tion using programmable friction with the TPad phone. With RoughSketch,we found haptic feedback could be successfully structured around differentmetaphors: does friction feedback literally represent how the act of draw-ing feels (e.g., slippery finger painting) or how the finished product (e.g.,stucco-like spray paint).7.4 HandsOn: Designing Force-Feedback for Education5, a conceptual modelfor DIY force-feedback haptics in education. We found that haptic learningenvironments need to be designed around the stimuli and modality used, notsimply added to existing lesson plans.7.5 CuddleBit Design Tools: Sketching and Refining Affective Robot Be-haviours6, Voodle and MacaronBit are design tools for CuddleBits, simpleaffective robots. Voodle and MacaronBit confirmed the utility of explicitlysupporting sketching and refining in a suite of tools.Most of this chapter (Sections 7.1-7.3 and 7.5) were primarily presented as de-mos. As such, we present this chapter’s work in a summary format rather than fullreproduction, and exclude prefaces.3Israr, Zhao, and Schneider. (2015) Exploring Embedded Haptics for Social Networking andInteractions. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems –CHI EA ’15.4Schneider, Zhao, and Israr. 
(2015) Feel Messenger: Embedded Haptics for Social Networking.World Haptics ’15 Demos.5Minaker, Schneider, Davis, and MacLean. (2016) HandsOn: Enabling Embodied, CreativeSTEM e-learning with Programming-Free Force Feedback. EuroHaptics ’16.6Bucci, Cang, Chun, Marino, Schneider, Seifi, and MacLean. (2016) CuddleBits: an iterativeprototyping platform for complex haptic display. EuroHaptics ’16 Demos.1157.1 FeelCraft: Sharing Customized Effects for GamesAs shown in prior work [238, 239], and as we discuss in Chapter 8, customizationis an important feature for haptic experiences. In addition, haptic media must bebuilt around existing infrastructure, as it is not directly supported by most mediatypes. In this chapter, we describe the plug-in architecture, envisioned applications,and our implementation for VT grid arrays displaying Feel Effects (FEs) [129] fora popular video game, Minecraft.This project put Tactile Animation (Chapter 4) in context, as we designed forthe same VT grid and domain: multimedia entertainment. We used a variety ofmetaphors to create our designs, from established effects (heartbeat, rain [129])to new ones (explosion and horse galloping), and found only synchronized visual-audio-haptic effects were effective. We had to develop the software vertically,writing the Minecraft plugin in tandem with the rendering system and final effects.Iteration was slow without a mature animation tool. However, we were able to startquickly, as our software architecture was able to use the same low-level renderingplatform as Tactile Animation, and we were able to refine designs easily usinghuman-readable, declarative JSON files.FeelCraft is a media plugin architecture that monitors events and activities inthe media, and associates them to user-defined haptic content in a seamless, struc-tured way. The FeelCraft plugin allows novice users to generate, recall, save, andshare haptic content, and play and broadcast them to other users to feel the samehaptic experience, without requiring any skill in haptic content generation. Ourimplementations uses the Marvel Avengers Vybe Haptic Gaming Pad by ComfortResearch (http://comfortresearch.com), a chair-shaped pad with 12 actuators (6voice coils and 6 rumble motors). We designed effects that leveraged this display,e.g., voice coils simulating rain on the user’s back when there is rain in-game, andrumble motors creating a galloping sensation on the chair’s seat when the user ridesan virtual horse.7.1.1 FeelCraft Plugin and ArchitectureA FeelCraft plugin maps media to haptic sensations in a modular fashion, sup-porting arbitrary media types and output devices. By using a FeelCraft plugin,116users can link existing and new media to the haptic feedback technology, use anFE library to find appropriate semantically defined effects, author, customize, andshare a common, evolving repository of FEs, and play and broadcast haptic expe-riences to one or more user(s). A pictorial description of the FeelCraft architectureis shown in Fig. 1 architecture.The conceptual framework of FeelCraft revolves around the FE library intro-duced in [129]. The FE library provides a structured and semantically correct as-sociation of media events with haptic feedback. By using the authoring interface totailor FE parameters, a repository of FEs can remain general while being used forunique, engaging, and suitable sensations for different media. 
The playback sys-tem, authoring and control interface, Event2Haptic mappings, and media pluginsupport seamless flow of the media content to the haptic feedback hardware.Feel Effect RepositoryFigure 7.1: FeelCraft architecture. The FeelCraft plugin is highlighted ingreen. The FE library can connect to shared feel effect repositories todownload or upload new FEs. A screenshot of our combined authoringand control interface is on the right.Media (1) can be entertaining, such as video games, movies, and music, or socialand educational. The media can also be general user activity or embeddedevents in applications. In our implementation (Figure 7.3), we use the popu-lar sandbox indie game Minecraft (https://minecraft.net).Media Plugin (2) is a software plugin that communicates with the media and out-117puts events and activities. This plugin can be as simple as receiving mes-sages from the media or as complicated as extracting events and activitiesfrom a sound stream. With existing media, common plugin systems are au-tomatic capture of semantic content from video frames [189], camera angles[63], or sounds [40, 157], or the interception of input devices (such as gamecontrollers or keyboard events). We use a CraftBukkit Minecraft server mod-ification to capture in-game events.Event2Haptic (3) mappings associate events to FEs, which are designed, tuned,and approved by users using the FE library. This critical component links themedia plugin’s output to the haptic playback system. Currently, six FEs aretriggered by six recurring in-game events: the presence of rain, low playerhealth, movement on horse, strike from a projectile, in-game explosions, andplayer falls. Our implementation provides the option to store this mappingdirectly in the source code, or in a text-based JavaScript Object Notation(JSON) fileFE Library (4) is a collection of FEs. A key feature of an FE is that it correlatesthe semantic interpretation of an event with the parametric composition ofthe sensation in terms of physical variables, such as intensity, duration, andtemporal onsets [129]. Each FE is associated with a family, and semanti-cally, similar FEs are associated with the same family. For example, theRain family contains FEs of light rain and heavy rain; as well that that ofsprinkle, drizzle, downpour, and rain.In our implementation, each FE familyis represented as a Python source file that defines parametric composition ofthe FE and playback sequences for the FeelCraft Playback system, and eachFE is coded as preset parameters in a JSON file. FE family files are neces-sary to play corresponding FEs in the family, and new FE families can bedeveloped or downloaded through the shared FE repository. The FE can alsobe created, stored, and shared. FE family and FE files are stored in a localdirectory of the plugin and loaded into FeelCraft on startup.Authoring and Control Interfaces (5, 6) allow users to create and save new FEsand tune, edit, and play back existing FEs. Users modify an FE by varying118sliders labeled as common language phrases instead of parameters such asduration and intensity (Fig. 1). Therefore, users can design and alter FEs byonly using the semantic logic defining the event. 
The interface also allows users to map game events to new FEs and broadcast to other users, supporting a What-You-Feel-Is-What-I-Feel (WYFIWIF) interface [228].

Playback and Communication Protocols (7) render FEs using the structure defined in FE family files and output them through a communication method (8) to one or more devices (9). Our implementation includes an API controlling the commercially available Vybe Haptic Gaming Pad via USB.

Figure 7.2: Mockup for FeelCraft demo system.

7.1.2 Application Ecosystem

FeelCraft plugins are designed to make haptics accessible to end users using existing media and technology. For example, a user may want to assign a custom vibration to a friend's phone number, or add haptics to a game. In this case, a user would download a FeelCraft plugin for their device, browse FEs on an online feel repository, and download FE families they prefer. Once downloaded, the FeelCraft authoring interface allows for customization, as a rain FE for one video game may not quite suit another game. The user could create a new FE for their specific application and, once they were happy with it, upload their custom FE for others to use. If the user wanted to show a friend their FE, they could use the playback system to drive output to multiple devices, or export the FE to a file and send it to them later. Figure 7.3 illustrates this ecosystem with application areas. Just like the Noun Project for visual icons (http://thenounproject.com) and downloadable sound effect libraries, we envision online repositories of FEs that can be continually expanded with new FEs by users. Our current FE repository includes six original families described in [129] and an additional four new families: Ride, Explosion, Fall, and Arrow.

Figure 7.3: Application ecosystem for FeelCraft and an FE repository.

7.2 Feel Messenger: Expressive Effects with Commodity Systems

In Section 7.1, we designed expressive spatial VT Feel Effects using existing infrastructure, using a plugin architecture to link desktop applications to new VT hardware. In Section 7.2, we look at the expressiveness of existing infrastructure and actuation methods with Android smartphones for customizable VT effects, by implementing customizable VT emojis in a chat program, Feel Messenger. Both projects were a chance to practice VT design and contextualize our in-depth design tools.

Figure 7.4: Users exchanging expressive haptic messages on consumer embedded devices.

With Feel Messenger, we found that even extremely simple VT icons could be engaging. As illustrated previously by Figure 6.6 in Chapter 6, APIs for VT feedback are currently limited to a series of pulses. With Feel Messenger, we were able to produce expressive VT icons for emojis using the built-in Android API, including customizable effects like a heartbeat that varies in both rate and intensity.

However, we had to ensure that the haptic feedback fit into a narrative. We found VT icons were effective only when they had an engaging visual icon to frame the vibration: a cartoon cat emoji helped the user understand that a purring vibration was a purr, but a more abstract motor icon was ineffective. We also found that each phone had different dynamics and motors, so the set of designs needed to be adapted for each device.
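One lightweight way to handle this adaptation is to keep a small per-device profile and invert a simple perceived-intensity model when rendering a design, in the spirit of the first-order response functions described for the Feel Messenger architecture below (Section 7.2.1). The profile values and names here are invented for illustration, not measured characteristics of these phones.

```python
# Hypothetical per-device profiles: a linear ("first-order") map from duty
# cycle to perceived intensity, plus the shortest pulse the motor renders
# reliably.  Values are invented for illustration, not measured.
DEVICE_PROFILES = {
    "galaxy_nexus":      {"gain": 1.00, "offset": 0.00, "min_pulse_ms": 20},
    "xperia_z3_compact": {"gain": 0.70, "offset": 0.15, "min_pulse_ms": 40},
}

def duty_for_intensity(target, device):
    """Invert the device's perceived-intensity model: given a target
    perceived intensity in [0, 1], return the duty-cycle fraction to use."""
    p = DEVICE_PROFILES[device]
    duty = (target - p["offset"]) / p["gain"]
    return max(0.0, min(1.0, duty))

# The same "strong purr" design asks for different duty cycles per phone:
for device in DEVICE_PROFILES:
    print(device, round(duty_for_intensity(0.8, device), 2))
```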
Figure 7.4 presents a concept sketch.7.2.1 Feel Messenger ApplicationIn this section, we present the architecture (backend) and user interface (frontend)of a messenger application that allows users to create and share haptic contentthrough a network connection.121Architecture – To account for the limited computation, storage and commu-nication capabilities of a simple microcontroller unit, we introduce feelbits andfeelgits. Feelgits (short from “feel widgets”) are installed piece of software thatdefine parametric compositions of a set of haptic patterns (called a family). Feel-bits are parametric settings of a feelgit to produce a particular haptic pattern (calleda feel effect [129]). For example, the feelgit of pulse is defined as two successiveonsets of vibration, separated by a timing parameter. The feelbits are timing andintensity of onset parameters. Therefore, by varying feelbits, a user can personal-ize the haptic effect to be calm (low intensity, long temporal separation) or racing(high intensity, short temporal separation) heartbeat.A library of haptic patterns is stored as parametric models (feelgits) with presetparameters (feelbits). New feelgits and feelbits can be downloaded, personalizedand saved. The haptic engine idly waits for incoming haptic messages and rendershaptic patterns on demand. Once the message is received, the corresponding feelgitis executed with parameters defined as feelbits. Once the pattern is completelyrendered, the engine waits idly for the next message.Additionally, the response characteristics of the VT motor are also stored inthe memory. These characteristics are generally represented by simple first-orderfunctions relating the digital value (such as data byte) to the perceived intensityjudged by users [137], which could be used to maintain the quality of experienceacross wide variety of mobile phones and hardware technologies.Finally, we introduce a communication protocol that shares feelgits and feelbitsalong with text messages. For example, the frontend application sends a functionplaypattern(“pulse”, p1, p2) to play the feelgit pulse with parameters defined asfeelbits p1 and p2; or playpulse(“soft”) plays a predefine soft pulse. Note thatin order for the device to play a haptic pattern, the corresponding feelgit must bestored in the device; the communication packet includes feelbits and the name orid of the corresponding feelgit.Predefined Patterns – The predefined patterns allow users to quickly attach ahaptic pattern to the IM. These patterns can be stored from incoming messages orcreated by using stored haptic families. Each pattern is defined by a set of feelbitsthat plays when the corresponding feelgit is executed. These presets can be shownas text, images or emotion icons.122Authoring interface – The Feel Editor displays available FE families (feelgits)and allows users to personalize, play, save and share haptic patterns. By clicking aFE icon, sliders corresponding to parameters (feelbits) are enabled. These slidersmay have labels corresponding to physical parameters, such as amplitude or dura-tion of vibration; however, we have used semantic labeling that may correspond tosingle or multiple parameters. Once the sliders are adjusted, the user can play, saveor attach the haptic pattern to the IM.7.2.2 Haptic VocabularyThe vocabulary of haptic effect is critical for expressive and precise communicationbetween users. In this preliminary implementation, we explore three types of hapticvocabularies. 
Type 1 is adapted from feel effects defined in [129], where hapticpatterns are semantically characterized by a phrase. Type 2 is change in physicalparameters as in [20, 167] but can also be simultaneously played with feel effects.Type 3 is predefined coded patterns. Figure 7.5 shows the icons for haptic language.Note, that the two feel effects cannot be simultaneously played. This will result inoverflow of the user’s bandwidth, especially with a low-fidelity VT actuator.Type 1: Feel Effects – A set of feel effects is defined that delivers emotional,attentional and contextual effects. They are:Pulse: Two successive onsets of vibration; speed (slow/fast) and intensity (weak/strong).Used as pulsation and heartbeat (calm/racing).Motor: A 4-second modulated vibration; intensity (soft/loud) and speed (slow/-fast) are parameters. Used as snoring, breathing, purring, engine rumble,etc.Strike: A single onset of vibration; duration (short/long) and intensity of vibrationare parameters. Used for tap, poke, jab and punch.Urgency: a burst of vibrations; intensity (weak/strong) and temporal separationbetween pulses (low/high urgency) are parameters. Used for alerting usersand expressing urgency.Type 2: Physical Effects – These effects are associated with direct variation in123Figure 7.5: Graphical representation of haptic vocabularies and icons.Figure 7.6: Some examples of expressive haptic messages embedded withnormal text messages.124tactile patterns. Previous studies (e.g., [20, 167]) have used variation in amplitudeand duration as typical variation. Our library includes:Ramp-up: gradual increase in intensity; parameters are peak intensity and the rateof increase.Ramp-down: gradual decrease in intensity; parameters are peak intensity and therate of decrease.Spacer: keeps steady intensity; parameters are intensity and duration. This can beused for putting a delay (or spaces) between two haptic effects.These effects create new haptic effects and can also be combined with feeleffects. Such as the message Ramp-up — Motor followed by Ramp-down createsa new pattern that gradually increases the rumbling and then decays linearly asshown in Figure 7.6.Type 3: Coded Effects – This type demonstrates symbolic vocabularies, suchas one adapted from International Morse Code that consists of pre-stored pulses ofdots and dashes. Other examples can be vibratese language [263], emoticon, andinput from peripheral sensors.7.2.3 DemoWe developed an Android application on a two Samsung S5 smartphones runningAndroid 4.4.2. The Android API allows ON/OFF control of the embedded VTmotor. A rough relationship between duty cycle and perceived intensity was deter-mined to create effects.In this prototype, we explored both predefined effects and Type 1 icons (FeelEffects). Our predefined effects were designed with 6 emoji. Our four Type 1FEs were: Heartbeat (Pulse FE), Lightning (Strike FE), Cat Purr (Motor FE), andCoffee (Urgency FE). These VT emoji could be embedded in chat messages, sentbetween two Android phones using UDP. VT effects are felt when editing, whenreceived, and when the user taps a message. All effects were implemented usingbuilt-in Android APIs.125Figure 7.7: Implemented Feel Messenger demo at World Haptics 2015.7.3 RoughSketch: Designing for an Alternative ModalityIn Sections 7.3 to 7.5, we investigate other modalities in other applications: pro-grammable friction for touchscreen drawing, force-feedback for education, and af-fective robots for emotional expression. 
Here, in Section 7.3, we describe RoughS-ketch, a drawing application for the TPad Phone.The TPad Phone (www.thetpadphone.com) is a programmable friction displaymounted on an Android phone. It uses piezo-actuated mechanical vibration to cre-ate a cushion of air, reducing friction [277]. As part of the World Haptics 2015Student Innovation Challenge, we built RoughSketch, a mobile drawing applica-tion to explore friction displays for digital mark-making.When working with friction feedback, some design activities were more sup-ported than others. The TPad Phone was accompanied by a mature API which sup-ported sketching for spatial friction profiles: an image could be easily and quicklyloaded to create a static profile. We found it was more difficult to sketch complexinteractions that reacted to input velocity or multitouch. In both cases, refining126designs was slow, as the program needed to be recompiled each time. This projectalso revealed the complexities of creating a design language, that is, a set of princi-ples for a consistent look and feel, when designing for haptics. We found this in thediverse metaphors we used to frame friction feedback for different mark makingtechniques.We looked at six mark-making interaction techniques:• Paintbrush, where you feel paint leaving your finger,• Pen and eraser, based on real-world writing utensils,• Spray paint, where you feel the roughness of paint on the screen as you spray,• Pinch/zoom, inspired by compressing and stretching rubber, and• Feel finger, the ability to feel your drawing on the paper.To implement RoughSketch, we adapted an open-source Android drawing appli-cation, Markers (https://github.com/dsandler/markers) and used the TPad PhoneAPI to control friction using two methods: static textures defined by bitmaps, andtemporal envelopes that programmatically adapt friction based on input values ortime. We used a variety of real-world metaphors to inspire our designs; these are il-lustrated in Figure 7.8. While designing and developing RoughSketch, we exposeda design space, finding conflicts for our metaphors, specifically, should TPad sen-sations feel like their real-world equivalent, or are they unique to the TPad; andshould rendered textures represent the drawing process, or the finished product?Our findings are outlined in Figure 7.9. In addition to developing different ef-fects, we informally compared haptic feedback to non haptic feedback by includinga toggle to friction feedback. Although some effects were subtle, once disabled,users immediately noticed the difference and preferred to have haptic feedback.We also explored stylus use, finding that a rubber tip would barely transmit anysensation, while a more rigid tip would propagate the (dampened) effect.7.4 HandsOn: Designing Force-Feedback for EducationIn Section 7.4, we investigate creative control of 1-degree of freedom (DOF) force-feedback display for education. 
Figure 7.8: RoughSketch handout, illustrating interaction techniques and textures.

Figure 7.9: RoughSketch poster, describing interaction techniques and high-level findings.

Force-feedback is interactive, with output dependent on input, but also controllable when the user holds their hand stationary, unlike the programmable friction feedback explored in Section 7.3. The application area, science education, offers important design constraints: feedback must enhance learning without distraction and, in this project, enable creative exploration for students. We thus both design haptic feedback and enable students to design while they learn. To manage this, we model feedback as a system of springs, easy to adjust and design, but scalable to more complex tasks by combining multiple springs in series or parallel.

In this project, we found that haptic experiences were not compelling when simply added to an existing lesson. Instead, we found that lessons must be designed around intended modalities, and might be most appropriate when used creatively by students, i.e., through active learning. We also found that low-cost haptic paddles could render discriminable forces, as long as they fit into the lesson. Most importantly, when well-designed, haptic lessons might be able to make lessons more engaging and ground more abstract concepts.
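The spring abstraction above reduces naturally to code: springs in parallel add stiffness, springs in series add compliance, and the rendered 1-DOF force follows Hooke's law. The sketch below is an illustrative model of this composition, not the SpringSim implementation.

```python
def parallel(*ks):
    """Effective stiffness of springs side by side: stiffnesses add."""
    return sum(ks)

def series(*ks):
    """Effective stiffness of springs end to end: compliances (1/k) add."""
    return 1.0 / sum(1.0 / k for k in ks)

def spring_force(k, displacement_m):
    """Hooke's law for a 1-DOF device: restoring force opposes displacement."""
    return -k * displacement_m

# Two 50 N/m springs in parallel, in series with a 100 N/m spring:
k_eff = series(parallel(50.0, 50.0), 100.0)   # = 50 N/m
print(k_eff, spring_force(k_eff, 0.02))       # force at 2 cm displacement
```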
Figure 7.10: Students, teachers, and researchers can explore science, technology, engineering, and math (STEM) abstractions through low-fidelity haptics, incorporating elements into system designs.

7.4.1 Introduction

Recognition of the value of a hands-on, embodied approach to learning dates to 1907, when Maria Montessori opened a school where she used manipulatives to teach a wide array of concepts ranging from mathematics to reading, e.g., by introducing the alphabet through children tracing their finger along large, cut-out letters [184]. Constructivist learning theories posit that well-designed manipulatives can assist understanding by grounding abstract concepts in concrete representations [200, 203], and are an accepted core principle in early math and science education, confirmed empirically [34]. More recently, digital technologies are radically altering learning environments. Massive Open Online Courses (MOOCs) expand access, games motivate, and with graphical simulations (e.g., PhET [274]), students can interact with abstractions to develop their understanding. However, these experiences are disembodied. Indirect contact via keyboard, mouse and screen introduces a barrier of abstraction that undermines the connection and path to understanding.

Haptic (touch-based) technology should bring benefits of physicality and embodied learning [66] to interactive virtual environments. It adds a sensory channel as another route to understanding [31]; when deployed appropriately, active exploration can improve understanding [176] and memory [96] of new concepts. Haptic tools have already shown promising results in many specializations, demographics and age groups, both to enhance lesson fidelity and to increase engagement and motivation through tangibility and interactivity; e.g., with devices like Geomagic Touch7 [275] and SPIDAR-G [222].

Unfortunately, existing approaches have both hardware and software limitations. Actuated learning tools introduce physical issues of cost, storage, and breakage; devices are too bulky, complex, or expensive for schools or self-learners. For software, it is hard for users to construct and explore their own haptic environments. Typically, users load a virtual system to interact with it haptically. This sidelines the rich learning potential of involving users with model construction [200]. We address hardware with the HapKit [198], a $50, simple, low-fidelity device constructed from 3D-printed materials.

Our focus here is on software, with a new learning environment that lets users both construct and explore haptic systems.
Until now, the only way for a user toconstruct a haptic system was by programming it herself. Our approach, inspiredby Logo [200] and Scratch [174], is to ultimately provide much of the power of aprogramming language while hiding distracting complexity.Approach and Present Objectives:To study how to unlock the potential of hapticized virtual environments in STEMeducation, we need a viable front-end. To this end, we first established a conceptualmodel (HandsOn): central interface concepts, supported operations and language[134] that can be employed in a broad range of lessons involving physical explo-ration and design.Next, we implemented the HandsOn conceptual model (CM) in SpringSim, afirst-generation learning interface prototype narrowly focused in a module on me-chanical springs and targeted at high school physics students. To render forces weused the HapKit, a simple device with a 3D-printable handle providing affordable,self-assembled 1 DOF force-feedback for about $50 USD. As an evaluation instru-7Prev. Sensable Phantom www.geomagic.com/en/products/phantom-omni/overview131ment, this single-lesson implementation allows us to (a) measure a given hardwareplatform’s fidelity for a representative perceptual task; (b) attain insight into thekinds of lessons such a system can leverage; and (c) assess its learning-outcomeefficacy relative to conventional methods. With these answers, we will be able todesign a more powerful tool.We report results from two user studies: (1) the HapKit’s ability to displaydifferentiable springs with and without graphical reinforcement, and (2) a qualita-tive evaluation of SpringSim for a carefully designed set of educational tasks. Weconfirm that the SpringSim interface and its conceptual model HandsOn are under-standable and usable, describe the role of haptics compared to mouse input, andprovide recommendations for future evaluation, lesson and tool design.7.4.2 Tool Development: Conceptual Model and InterfaceOur goal was to find a software model to use and evaluate low-cost force feedbackin an educational setting. We began by choosing a device, establishing require-ments, and exploring capabilities through use cases and prototypes. From this, wedefined HandsOn. We then implemented essential features in a medium-fidelityprototype, SpringSim, for our user studies.Initial design (requirements):We established six guiding requirements. First, we developed initial prototypeswith HapKit 2.0 through two pilot studies with middle school students (describedin [198]). These highlighted two aspects of a practical, accessible approach forjunior students: 1) no programming; instead 2) a graphical implementation of anexploratory interface within a lesson plan. We also needed to build on known ben-efits of traditional classroom practices, and enable learning-outcome comparison.We must 3) support the same types of traditional education tasks, e.g., let studentscompare and assemble spring networks as easily as in a hands-on physics lab; butalso 4) extend them, to leverage the flexibility offered by a manipulative that isalso virtual. Similarly, to support future formal comparisons, our model needs to5) support both haptic and non-haptic (mouse) inputs. Finally, to ensure general-ity we also needed to 6) support diverse STEM topics, like physics, biology, and132mathematics. 
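Requirement 5's haptic path ultimately bottoms out in a standard 1-DOF impedance-rendering loop: read the handle position, evaluate the virtual model, command a force. The sketch below shows that generic pattern with placeholder I/O callbacks; it is not the HapKit firmware, which runs as Arduino code on the device itself.

```python
import time

def render_spring(read_position_m, write_force_n, k=50.0, rest_m=0.0,
                  rate_hz=500, duration_s=5.0):
    """Generic 1-DOF impedance loop: read the handle position, compute a
    virtual spring force, and command the actuator.  read_position_m and
    write_force_n are caller-supplied callables standing in for device I/O.
    """
    period = 1.0 / rate_hz
    t_end = time.time() + duration_s
    while time.time() < t_end:
        x = read_position_m()              # sensor reading, metres
        force = -k * (x - rest_m)          # virtual spring (Hooke's law)
        write_force_n(force)               # motor command, newtons
        time.sleep(period)

# Dry run with stand-in I/O (no hardware): constant 1 cm displacement.
render_spring(lambda: 0.01, lambda f: print(f"{f:+.2f} N"),
              duration_s=0.01, rate_hz=200)
```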
Further design yielded a model that addressed these requirements: HandsOn.

Conceptual Model:

HandsOn is a programming-free (R1) graphical interface supporting learner exploration (R2), with a number of key concepts: Interactive Playground, Hands, Design Palette, Objects, Properties, and Haptic and Visual Controls. Exploration is supported at various levels (Figure 7.11).

The Interactive Playground provides a virtual sandbox where users can interact with virtual environments (VEs). Hands allow users to select, move, and manipulate components in the Interactive Playground; control occurs with either the mouse or a haptic device that provides force feedback (Figure 7.11A) (R5). In the design and modification phase, users can add or remove objects like springs, masses, gears, or electrons by dragging them to and from a Design Palette (R3). Once objects are added to the scene, users can modify their physical properties (e.g., a spring constant k) and make changes to the VE (Figure 7.11B). After construction, the user can customize their interaction with the VE by adjusting Visual Controls and Haptic Controls, options that extend interactions in new ways afforded by haptics (R4) (Figure 7.11C). Because of the flexibility afforded by multiple objects in the playground, multiple Hands as interaction points, and customization of interaction and feedback, HandsOn can support different STEM topics (R6), from biology to mathematics. To confirm the viability of this approach, we built an initial prototype with essential features: SpringSim.

Figure 7.11: The HandsOn CM enables three kinds of exploration based on requirements: A) Interact with the system in the Interactive Playground using a selected Hand, manipulating and monitoring state via multimodal feedback; B) Create and Modify the system with a Design Palette, adding or removing Objects and changing object properties; C) Customize interaction itself for learners, teachers, and researchers, adjusting input/output modalities with Visual Controls and Haptic Controls.

Figure 7.12: SpringSim interface, a HandsOn sandbox for a single lesson module on springs.

Implemented Prototype:

Our first HandsOn interface is SpringSim (Figure 7.12), which supports a spring lesson – spring systems are natural as a virtual environment of easily controlled complexity. In SpringSim, objects include single springs and parallel spring systems, with the properties spring rest length (cm), stiffness (N/m), and label. The Design Palette includes the Spring Properties and Spring Generator UI components. Implemented Visual Controls toggle numerical displays of spring stiffness and force; Haptic Controls toggle HapKit feedback and output amplification. The open-source repository for SpringSim is available at https://github.com/gminaker/SpringSim.

7.4.3 Study 1: Perceptual Transparency

Before evaluating SpringSim, we needed to confirm that the HapKit could render spring values sufficiently for our qualitative analysis.

Methods:

14 non-STEM undergraduate students (8 female) participated in a two-alternative forced-choice test with two counterbalanced within-subject conditions: HapKit + Dynamic Graphics and HapKit + Static Graphics (Figure 7.13). Three spring pairs (15/35, 35/55, and 55/75 N/m) were each presented five times per condition, in random order. For each pair, participants indicated which spring felt stiffer, and rated task difficulty on a 20-point scale.
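For readers who want the trial structure at a glance, the following sketch generates a randomized, counterbalanced schedule matching the design just described (two conditions, three spring pairs, five repetitions each). It is an illustration of the design, not the study's actual experiment software.

```python
# Illustration of the Study 1 design (not the actual experiment software):
# 2 counterbalanced conditions x 3 spring pairs x 5 repetitions, randomized.
import random

CONDITIONS = ["HapKit + Dynamic Graphics", "HapKit + Static Graphics"]
SPRING_PAIRS = [(15, 35), (35, 55), (55, 75)]   # stiffness values in N/m
REPS = 5

def schedule(participant_id):
    # Alternate condition order across participants for counterbalancing.
    order = CONDITIONS if participant_id % 2 == 0 else list(reversed(CONDITIONS))
    trials = []
    for condition in order:
        block = [(condition, pair) for pair in SPRING_PAIRS for _ in range(REPS)]
        random.shuffle(block)                   # random order within a condition
        trials.extend(block)
    return trials                               # 30 comparison trials in total

for condition, (k1, k2) in schedule(participant_id=3)[:3]:
    print(f"{condition}: which feels stiffer, {k1} or {k2} N/m?")
```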
Following each condition, participants rated overall condition difficulty, mental demand, effort, and frustration on 20-point scales derived from the NASA TLX [106]. Following the completion of both conditions, a semi-structured interview was conducted to address any critical incidents. Each session lasted 20-30 minutes.

Results:

All tests used a 5% level of significance and passed test assumptions.

Accuracy: A logistic regression model was trained on task accuracy with spring-pair and condition as factors. No interaction was detected; spring-pair was the only significant factor. Post-hoc analysis revealed that spring-pair #1 (15/35 N/m) was significantly less accurate than spring-pair #2 (35/55; p=0.0467). Performance averaged 88.57% (15/35), 96.49% (35/55), and 94.45% (55/75).

Time: Task time ranged from 3-160s (median 117s, mean 96.41s, sd 47.57s). In a 3-way ANOVA (participant, spring-pair, and visualization condition), only participant was significant (F(13,336) = 4.17, p = 1.947e-06).

Difficulty rating: A 3-way ANOVA (factors: participant, spring-pair, and visualization condition) detected one two-way interaction, between participant and spring-pair (F(26,336) = 2.10, p = 0.00165).

Discussion:

Study 1 revealed that (a) for stiffness intervals 15/35/55/75 N/m, the HapKit provides distinguishability equivalent to dynamic graphics. Individual differences influenced difficulty and speed, suggesting that learning interfaces may need to accommodate this variability. (b) Accuracy was not dependent on individual differences, suggesting that learning interfaces can consider task time and perceived difficulty separately from accuracy when using the HapKit (at least, for these force ranges). (c) Performance was mostly above 90%, and confidence intervals for our small sample size estimate no lower than 82% accuracy at the lowest pair (15/35). We speculate that the HapKit's natural dynamics are more pronounced at lower rendered forces, and may interfere with perceptibility.

Figure 7.13: In the HapKit+Dynamic Graphics condition, graphical springs responded to input (left); static images were rendered in the HapKit+Static Graphics condition (right); in both, HapKit 3.0 [198] was used as an input/force-feedback device (far right).

7.4.4 Study 2: Tool Usability and Educational Insights

Methods:

10 non-STEM participants (1st and 2nd year university undergraduates with up to first-year physics training, 6 female, 17-20 years) volunteered for 45-60 minute sessions. After an introductory survey, participants were randomly assigned to one of two conditions, Mouse (4 participants, M1-4) or HapKit (H1-6). HapKit 3.0 was calibrated for force consistency between participants. After allowing participants to freely explore SpringSim, a survey assessed understanding and usability of various SpringSim interface components; misunderstood components were clarified.
Three exit surveys elicited, respectively, the value of SpringSim components on 7-point Likert scales; cognitive load [138], understanding, and curiosity on 20-point scales; and preferred learning modality [83].

Task 1 (Understand, Bloom 2): Rank three springs in order from least to most stiff.
Task 2 (Understand, Bloom 2): Plot the relationship between displacement and force for two springs.
Task 3 (Apply, Bloom 3): Estimate the stiffness of an unknown spring, given two reference springs with known stiffness values.
Task 4 (Analyze, Bloom 4): Predict the behaviour of springs in parallel.
Task 5 (Create, Bloom 6): Design a parallel spring system that uses two springs to behave like an individual spring of stiffness 55 N/m.
Task 6 (Apply, Bloom 3): Predict the behaviour of springs in series.
Task 7 (Evaluate, Bloom 5): Describe any relationships you have noticed between spring force, displacement, and stiffness.

Table 7.1: Learning tasks used with SpringSim in Study 2. Bloom level is a measure of learning-goal sophistication [15].

Learning Tasks:

We iteratively designed and piloted a task battery of escalating learning-goal sophistication [15] to expose strategies for force-feedback use and general problem-solving (Table 7.1). Tasks did not require physics knowledge, and were suitable for both mouse and HapKit input.

Analysis:

We conducted t-tests on self-reported understanding, cognitive load, engagement, and curiosity, and on the objective metrics of time-on-task and number of spring interactions. Qualitative analysis of video and interview data used grounded theory methods of memoing and open & closed coding [49]. Together, these yielded insight into the usability of SpringSim and the HandsOn CM, and several themes describing the role of haptics in our tasks. Two participants were excluded from analysis of Task 1 due to technical failure.

Results - Usability:

After free exploration of SpringSim, participants rated their understanding of CM objects (yes/no) and their ease-of-use [1-7]: Ruler (10/10, 7.0), Numerical Force Display (10/10, 6.5), Playground (10/10, 6.0), Hand (9/10, 6.0), Spring Properties (9/10, 6.0), Spring Generator (7/10, 5.0), HapKit (6/6, 4.5), and Haptic Feedback Controls (5/6, 4.5). While usability was generally good, interface clarity needed improvement in the highlighted cases. Participants specifically noted confusion about radio button affordances and Spring Generator input fields (due to redundant availability in Spring Properties).

Results - Task Suitability for Haptic Research:

Regardless of prior physics knowledge, all participants were able to complete education tasks 1-6 (Table 7.1) in the allotted 60 minutes. We found no evidence that any task favoured one condition over another. When participants in the mouse condition were asked how their workflow would change with physical springs, they weren't sure: "I don't know if that would've given me more information" (M4).

Results - Haptics & Learning Strategies:

We observed several themes relating to the influence of force feedback on a student's learning strategy.

Haptics creates new, dominating strategies. Learning strategies used by participants in the HapKit condition (H1-6) were more diverse than those in the mouse condition (M1-4). In Task 1, M1-4 all followed the same strategy, displacing all 3 springs the same distance and comparing the numerical force required to displace them.
They then correctly inferred that higher forces are associated with stiffersprings (the displace-and-compare strategy).By contrast, all 5 H participants included in analyses (H2 excluded due to tech-nical failure) used force-feedback as part of their approach to Task 1. H1 describesapplying the same force to the HapKit across all 3 springs, recording displacementto solve the task, while H5 described looking at the speed at which the HapKit wasable to move back-and-forth in making his determination of stiffness, rather thanthrough direct force-feedback of the device. Only H6 indicated that he “lookedat the numbers for a sec”, but no participant fully used the displace-and-comparestrategy we observed for M participants.While the single-strategy approach worked for easy tasks, it was linked to er-138rors and dead-ends in at least one instance in the mouse condition. In Task 5, M2-4used displace-and-compare to validate their newly designed spring; M1 did notseek verification of his design. In contrast, H1,2,5,6 used haptic feedback to verifytheir designs. They did this by comparing how stiff their parallel spring systemfelt to a target reference spring. H4 guessed at an answer without verification. H3used the displace-and-compare strategy, checking that equal forces were requiredfor equal displacement.Haptic impressions of springs are enduring and transferrable. HapKit partici-pants were able to use their previous explorations to solve problems. In Task 3,M1-4 interacted with all three springs to find a ratio between force and stiffness.However, H participants interacted with springs fewer times (mean 1.5, sd 3.21)than M (6, sd 1) (p=0.018). H2-4,6 did not interact with any springs, and H1 inter-acted with only one. This was because they had already interacted with the springsin previous questions: “I remember spring C was less stiff” (H3). Further sug-gesting the strength of haptic impressions, when H1 designed an inaccurate springsystem for Task 5 (k=80N/m vs. expected k=55N/m), she described the haptics asoverriding the visual feedback: “they just felt similar. Even though the numbersweren’t really relating to what I thought.” Similarly, H2 arrived at an approximateresult (k=40N/m), after using force-feedback and acknowledges “... [it’s] slightlyless than the reference spring, but it’s closer.”Haptics associated with increases in self-reported curiosity and understanding.Participants’ self-reported curiosity significantly increased over the course of Hap-Kit sessions from a mean of 6.3 (sd 3.83) to 10.8 (sd 3.92) in the Hapkit condi-tion (p=0.041). No significant changes in curiosity were detected in the mousecondition. Participants’ self-reported understanding significantly increased overthe course of HapKit sessions from a mean of 3.67 (sd 4.03) to 11.83 (sd 3.19)(p=0.014). No significant changes in understanding were detected in the mousecondition (before: 9.25, sd 5.32; after: 9.25, sd 5.32; p=0.77).In interviews, participants commonly made references to how the HapKit in-fluenced their understanding: “I can use this thing for help if I really need some139physical, real-world stimuli” (H5); “almost all of my thinking was based on howthe spring [HapKit] ended up reacting to it” (H6). 
M2, who had a stronger physicsbackground than others (IB Physics), was the only user to report a drop in curiosityand understanding over the course of the physics tasks, despite initial excitement:“the fun part is messing around with [SpringSim],” he exclaimed near the begin-ning of the exploratory phase.7.4.5 Study 2 DiscussionTool and Tasks: Suitability for Learning and as Study PlatformAdequacy and comprehensibility of underlying model: Overall, HandsOn con-cepts proved an effective and comprehensible skeleton for SpringSim. Specificimplementations rather than concepts themselves appeared to be the source of thereported confusions, and we observed that HandsOn should be extended with ad-ditional measurement tools (e.g., protractors, scales, calculators, etc).SpringSim performance: This SpringSim implementation adequately supportedmost students in finishing learning tasks; extending available objects, propertiesand tasks will support advanced students as well. Future iterations should moreclearly map Design Palette elements to the objects they support, increasing render-ing fidelity and reconsider colors to avoid straightforward affordance issues. Whileparticipants did not heavily use haptic and visual controls, we anticipate these willbe important for instructor and researcher use.Learning task suitability: The learning tasks used here were fairly robust to timeconstraints of user-study conditions, did not require previous physics knowledge,avoided bias from standardized physics lessons, and exposed haptics utilizationstrategies without penalizing non-haptic controls. Currently, the task set ends byasking students to predict a serial system’s behavior; some students found pre-dicting new configurations a large jump. Future task-set iterations could supportintegrative, prediction-type questions with interface elements that are successively140exposed to allow prediction testing.Evidence of the Role of Force Feedback in LearningCuriosity and understanding leading to exploration: Self-reported curiosity andunderstanding increased when forces were present. While these trends must beverified, curiosity is of interest since it can lead to more meaningful and self-driveninteractions. Iterations on both tasks and tool should support this urge with aninterface and framing that supports curiosity-driven exploration.Alternative strategies enabled by force feedback: The HapKit’s additional feed-back modality enabled alternative task workflows, e.g., estimations of force ap-peared to supplant mathematical strategies for stiffness estimation. While possi-bly risky as a crutch, force assessments might be a useful step for students notready for technical approaches (e.g., M3/Task 3 when stalled in attempting cross-multiplication). Future task-set iterations could encourage more balanced strategyuse, e.g. mathematical and perceptual rather than primarily perceptual.HapKit salience, resolution & implications: Overall, HapKit 3.0’s fidelity wasenough to assist participants verify a correct hypothesis. However, those whostarted with an incorrect hypothesis and used only HapKit to test it generally ar-rived at solutions that improved but were still inaccurate. Given the confidence thatforces instilled, this is an important consideration. 
A formal device characterization will allow us to keep tasks within viable limits; we can also consider using low-fidelity forces more for reinforcement and exploratory scenarios.

Limitations and Next Steps:

Our studies were small and used non-STEM university students as a proxy for high-school learners. Despite both limitations, they were useful for our current needs (rich, initial feedback establishing suitability and usability for HandsOn through SpringSim), but may overestimate general academic ability and maturity. As we move into evaluation of learning-outcome impact, larger and more targeted studies are imperative.

Future interfaces can both increase physical model complexity and breadth (e.g., complex mass-spring-damper systems) and extend HandsOn to more abstract education topics, such as trigonometry. We also plan to extend the Playground to support more engaging, open-ended student design challenges, such as obstacle courses using trigonometry concepts; this in turn requires new measurement tools and tasks that are more exploratory and open-ended.

7.4.6 Conclusions

Haptic feedback's potential in STEM education can only be accessed with a comprehensible, extendable, and transparent front-end. We present HandsOn, a conceptual skeleton for interfaces incorporating virtual forces into learning tasks, and assess its first implementation, SpringSim, and its task set. Our findings (on interface usability, task effectiveness, and the impact of haptic feedback on learning strategies, understanding, and curiosity) underscore this approach's promise as we proceed to study haptic influence on learning outcomes themselves.

7.5 CuddleBit Design Tools: Sketching and Refining Affective Robot Behaviours

In Section 7.5, we explore a third non-vibrotactile modality – furry, affective, breathing robots called CuddleBits (Figure 7.14). These robots are multimodal: they visibly move, and their breathing can be felt. Their form factor and affordances can vary; for example, they can be flexible (Figure 7.14a) or rigid (Figure 7.14b). We use the CuddleBits as a rich design problem – supporting engaging, emotional, lifelike behaviour design – and as a means to explore the interplay between two design tools, each supporting different activities: sketching and refining.

As robots begin to take a larger role in our lives, they require natural ways of interacting with people. Notably, they need to communicate affectively with humans, recognizing and expressing emotion, or behaving with a believable personality. This is important both for everyday interactions with robots and for targeted health applications: robot-based therapy can measurably relax people by breathing [237].

Figure 7.14: Two examples of CuddleBits, simple DIY haptic robots: (a) "FlexiBit", a furry, flexible CuddleBit; (b) "RibBit", a rigid CuddleBit.

The Haptic Creature project [283, 284] explores the role of touch-based interactions with furry, zoomorphic robots. However, such an early prototype of a multi-DoF haptic robot suffers from slow iteration for both hardware form-factor and software behaviours. To explore these concepts more thoroughly, we developed the CuddleBits [33]: simple, affective robot pals built with a rapid prototyping (sketching) ethic.
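To give a sense of the signals being designed here, the sketch below is a deliberately simple, hypothetical rendering of a 1-DoF "breathing" behaviour built from a slow oscillation, a faster "shakiness" component, and a bias term. The actual CuddleBit software is not written this way, but the same ingredients reappear in the tool parameters described next.

```python
# Hypothetical 1-DoF CuddleBit-style behaviour: breathing + shakiness + bias.
# Output is a normalized actuator position in [0, 1], sampled at 50 Hz.
import math

def behaviour(t, breath_amp=0.35, breath_hz=0.25,
              shake_amp=0.05, shake_hz=8.0, bias=0.5):
    breathing = breath_amp * math.sin(2 * math.pi * breath_hz * t)   # slow rise/fall
    shakiness = shake_amp * math.sin(2 * math.pi * shake_hz * t)     # fast tremor
    return min(1.0, max(0.0, bias + breathing + shakiness))          # clamp to range

# A calm breath versus an agitated variant (faster, shallower, shakier).
for i in range(5):
    t = i / 50.0
    calm = behaviour(t)
    agitated = behaviour(t, breath_amp=0.15, breath_hz=1.2, shake_amp=0.15)
    print(f"t={t:.2f}s  calm={calm:.2f}  agitated={agitated:.2f}")
```

In this simple framing, the expressive content lives in how a handful of parameters are set and varied over time; making those parameters manipulable in different ways is what the two design tools below are for.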
To control CuddleBit behaviour and inform HaXDsupport tools in this complex domain, we developed two software design tools:Voodle and MacaronBit (Figure 7.15).Voodle (Figure 7.15a), from “vocal doodling”, is a novel sketching interface toeasily create 1-DoF behaviours using non-speech voice, in particular, ideophones[65] like “Ooooh” and “Bwooop.” It is inspired by the onomatopoeia descrip-tions found with our initial exploration (Chapter 3) and previous work [240, 273].Through a series of user studies, we developed a prioritized set of ideophones andhow they mapped to movements of the CuddleBit. At the time of writing, Voodle isin active development; we are using participatory design to further identify criticalfeatures, Voodle’s expressive capability, and how it might fit into a design tool suitefor the CuddleBit alongside MacaronBit.MacaronBit (Figure 7.15b) is an adaptation of Macaron (Chapter 5) to control1-DoF CuddleBit using a familiar track-based metaphor. Instead of two tracks con-trolling amplitude and frequency, MacaronBit has five: low-frequency amplitudeand frequency (for breathing); high-frequency amplitude and frequency (for “shak-iness” or “noise”), and bias (asymmetry in the signal), determined during piloting.143(a) Voodle, a vocal doodling interface that uses voice to control the CuddleBit. The circlein the middle visualizes the CuddleBit’s movement on-screen, while additional controls adjustalgorithms for vocal processing.(b) MacaronBit, a version of Macaron (Chapter 5) extended to control CuddleBits.Figure 7.15: CuddleBit design tools. Voodle enables initial sketching of af-fective robot behaviours, while MacaronBit enables refining.144Voodle and MacaronBit are symbiotic. Users can record voodles and exportthem to MacaronBit; initial result suggest that they each support different goalsfor users. Together, Voodle and MacaronBit represent sketching (Chapter 3) andrefining (Chapter 4), showing that this dichotomy provides a useful framing whencreating HaXD support tools and applies to other display types beyond VT icondesign. Research is ongoing in a series of user studies, exploring expressivenessand consistency of designed behaviours, the specific capabilities and roles of thetwo tools, and important considerations for future development.7.6 Takeaways from Focused Design ProjectsOur focused design projects reinforced our findings from our in-depth studies andexpanded our understanding of HaXD. We consistently found that the narrativeframing or chosen metaphors were important to frame haptic experiences, both tofacilitate design and to help users interpret intended meaning. Visual, audio, andhaptic feedback need to be tightly coordinated and designed together, necessitatinga holistic and vertical view for designs. Design activities of sketching, refining,browsing, and sharing are sometimes supported, but help to identify opportunitiesfor new tools and techniques. Different feedback modalities, and even differentdevices (e.g., mobile phones), need customized haptic feedback. When success-ful, the haptic actuation can be rudimentary or high-fidelity, as long as the entireexperience is considered and designed. 
These results informally enrich our final takeaways in Chapter 9, and directly inform our suggestions for handling the diversity of haptic experiences (Section 9.2.2) and conceptually framing haptic designs (Section 9.2.3) with narrative and metaphor.

Chapter 8

Haptic Experience Design

Preface – In Chapters 3-7 we take a design perspective, investigating by doing. The VT design tools each enabled short in-lab sessions of design, which we could study directly; however, this approach lacks external validity. Proxy design in Chapter 6 and the focused design projects in Chapter 7 offered ample opportunities to gain implicit knowledge about design; these results are ecologically valid but specific to only one group (our lab). To triangulate both these approaches, we study the wider community: expert haptic designers in the wild. Here, we report findings from six interviews with expert haptic experience designers, augmented by a workshop we coordinated at a major international haptics conference. We found themes at three levels of scope: 1) the holistic nature of haptic experiences, 2) the collaborative ecosystem in which hapticians work, and 3) the broader cultural context of haptics. Chapter 8 both grounds our work and serves as a capstone: we define and characterize HaXD as it occurs in practice, codify challenges for HaXD, and develop recommendations, grounded in this new understanding of designers, to further develop the field. We conclude with a vision for how HaXD might manifest in the upcoming years.

1 This work has been prepared as a manuscript and is presented as one.

8.1 Overview

From simple vibrations to roles in complex multimodal systems, haptic technology is often a critical, expected component of user experience – one face of the rapid progression towards blended physical-digital interfaces. Haptic experience design is thus now becoming part of many designers' jobs. We can expect it to present unique challenges, and yet we know almost nothing of what it looks like "in the wild" due to the field's youth and the difficulty of accessing practitioners in professional and proprietary environments. In this paper, we analyze interviews with six professional haptic designers to document and articulate haptic experience design, observing designers' goals and processes and finding themes at three levels of scope: the holistic, multimodal nature of haptic experiences, a map of the collaborative ecosystem, and the cultural contexts of haptics. Our findings are augmented by feedback obtained in a recent design workshop at an international haptics conference. We find that haptic designers follow a familiar design process, but face specific challenges when working with haptics. We capture and summarize these challenges, make concrete recommendations to conquer them, and present a vision for the future of haptic experience design.

8.2 Introduction

Haptic feedback provides value in several ways, especially accessibility [14], low-attention feedback [171], and motor skill training [181]. Recently, high-fidelity haptic technology has expanded user experience. Emotional therapy [270, 283], education [221], and entertainment [225] are increasingly employing haptic feedback. Technological advances enable more compelling haptic sensations in consumer products by making it possible to render variable friction on direct-touch surfaces [161, 277], and produce forces without needing to ground devices to a table or wall [57, 278].
Even commodity vibrotactile displays are increasing in expressiveness, with high-quality actuation a priority in devices like the Apple Watch (www.apple.com) and the Pebble watch (www.pebble.com), although often at the cost of painstaking and costly design effort. Touch is now increasingly studied within market research because it improves the quality of product opinions and encourages consumer purchases [132]. Part of the power of touch is the emotional, visceral [190] value it has within a design, giving haptics a close relationship with user experience.

Figure 8.1: Our three themes, each exploring different levels of scope through 5 emergent sub-themes. [Ex] Holistic Haptic Experiences: "It doesn't end at the actuator" – Ex1: Haptic components are vertical: "Changes are to the guts"; Ex2: Reinforcement and substitution: "Have that solid click"; Ex3: Latency and timing: "A reliable clock"; Ex4: Constraints and unknown context: "Feelable but not seeable"; Ex5: Tailoring and customization: "Very individual". [Co] Collaboration: "Rally the ecosystem" – Co1: Internal roles are interdisciplinary: "I'm not so much of a psychologist"; Co2: Engineering support: "Go through the technical levels"; Co3: External roles are international: "Different divisions, different companies"; Co4: Facilitators and advocates: "Sales Reps"; Co5: Demos and documentation: "Your piezo demo, we love it". [CC] Cultural Context: "A standard feature, in the future" – CC1: Understanding requirements: "Hard to express what they need"; CC2: Evaluation: "It felt right"; CC3: Secrecy and intellectual property: "Kept confidential"; CC4: UX and branding: "Articulating the value"; CC5: Overcoming risk and cost: "A tough sell".

8.2.1 Haptic Experience Design (HaXD)

We define HaXD as:

The design (planning, development, and evaluation) of user experiences deliberately connecting interactive technology to one or more perceived senses of touch, possibly as part of a multimodal or multi-sensory experience.

Our focus is on gaining a better understanding of the workflow and processes currently used by hapticians. We define a haptician as:

One who is skilled at making haptic sensations, technology, or experiences.

We use the term "haptician" to capture the diversity of people who currently make haptics, and the diversity of their goals. Many people with a need to design haptics may not have formal design training, and may focus on subsets of the entire experience, e.g., technical demonstrations or creating stimuli for psychological tests. We describe two studies examining how contemporary hapticians design haptic experiences for use in real-world products. We begin by identifying current obstacles to good HaXD and the target audience for our work, then provide a roadmap to the rest of the paper.

8.2.2 Obstacles to Design

The academic literature suggests many challenges to design for haptic experience. Haptic content remains scarce and design knowledge is limited. Some issues are technological, arising in the hardware and software, such as highly variable hardware platforms and communications latency [141].
Other issues are human-centered, arising from individual user characteristics in perception and preferences:low-level perceptual variation [165], responses to programmed [161] and natu-ral [117] textures, sensory declines due to aging [253, 254], and varied interpreta-tion and appreciation of haptic effects and sensations [238, 240] – often because ofpersonal experience [228].These research findings are reinforced by many interactions the authors havehad with practitioners in industry. We suspected that there are many challengesrelated to haptics, but had little direct evidence to back this up and guide our re-search. We further suspected that it is somewhat rare for professionals to designhaptic experiences explicitly rather than doing so in the course of larger designefforts. We thus conducted two studies of the workflows used by designers whenthey are engaged in HaXD – something that has been largely unexplored in theliterature.In our studies, we take a first in-depth look at haptic designers’ experi-ences to describe HaXD, identify unique challenges, and connect HaXD toother fields of design. We focus specifically on HaXD instead of the more generalnotion of “haptic design,” which can also refer to design practices related to hapticsnot directly involving user experience, e.g., mechanical design of a new actuatoror software design of a new control method. Our definition encompasses pseudo-haptics [208] and other illusions that trick a user into thinking haptic feedback is149occurring without direct tactile or kinesthetic stimulation. Much of what we dis-cuss can also be gainfully applied to the design of tangibles, even with their lack ofactuation, although we leave them out of our scope to focus on actuated interfaces.8.2.3 Target AudienceWe primarily target readers who are one step removed from HaXD, but who haveother design, haptics, or business expertise relevant to haptics.We expect that haptic experience design experts (hapticians) will be unsur-prised by the insights herein. Although they are not our primary audience, we hopethat the articulated challenges and recommendations will nevertheless still be use-ful for their practice because it consolidates their ad hoc knowledge into a formalframework.We expect that non-haptic design experts will find our discussion of the spe-cific challenges to HaXD informative because it reveals processes of design that areinvisible or are taken for granted in other fields. We also hope non-haptic designersmight lend their expertise to accelerate the generation of tools and techniques forcreatively working with these complex interactive systems.We expect that non-design haptic experts will develop a further appreciationfor how UX design is important for successful haptic technology, and will gain anunderstanding of how their devices or research findings are applied in practice. Therecommendations we provide may also motivate several avenues of either basic andapplied haptic research that these experts could pursue.We expect that industry practitioners will gain insight into how the businesscase for haptic technology might be more quickly built. This includes those al-ready involved with haptics or similar technologies such as wearables, as well asthose looking to become involved. 
We believe our findings may help cultivateconnections between the diverse stakeholders involved with HaXD, and that thechallenges (and thus the opportunities) that we identify will inspire people to workmore with this emerging modality.1508.2.4 Roadmap for the ReaderWe describe two studies in which we sought to gain a solid understanding of HaXDas it is currently practiced “in the wild” by actual practitioners (hapticians) in theirday-to-day work. After a review of the existing literature in Section 8.3, we re-port on the first study in Section 8.4: a grounded theory [49] analysis of intensiveinterviews with six professional haptic designers. In our results, we describe obser-vations about haptic designers’ process organized in three cross-cutting themes: thecomplex, holistic nature of the experiences they design; the collaborative ecosys-tem in which haptic experience designers play multiple roles; and the influences ofthe cultural contexts in which haptic experiences are used and the value and riskthis poses. In Section 8.5 we describe the second study, conducted in a workshopat a major international haptics conference (World Haptics 2015). The secondstudy complements the first by collecting quantitative and qualitative feedbackfrom a broader sector of industry and academic designers regarding tool use, col-laboration, evaluation methods, and challenges facing hapticians In Section 8.6,we summarize and discuss our overall findings in three major areas:1. A description of current HaXD practice showing how it has already emergedas a distinct field of design.2. A list of challenges facing haptic experience designers, and some uniqueconsiderations HaXD requires compared to other more established fields ofdesign.3. Recommendations for accelerating the development of HaXD as a full-fledgedfield of design.We conclude with a few remarks imagining what a mature discipline of HaXDmight look like in the near future.8.3 Related WorkIn this section, we discuss key elements of contemporary thinking about user ex-perience design (UX design or XD) and a specific approach known as “designthinking.” We then broadly review haptic technology (hardware and software) and151relevant aspects of human perception before providing a critical summary of pre-vious efforts to understand and support HaXD.8.3.1 Design Thinking as a Unifying FrameworkDesign thinking is an empowering way to approach technology and user experi-ences. At the heart of this practice is the rapid generation, evaluation and iterationof multiple ideas at once [30]. There are several general design activities that weobserved in our participants that reflect design thinking, most notably, problempreparation, sketching-like iteration, and collaboration.Many advocates of design thinking refer to an explicit problem preparationstep preceding initial design [235, 244, 272], which involves “getting a handle onthe problem” and drawing inspiration from previous work. Designers find value inthis stage because creative acts can be accurately seen as recombination of existingideas, with a twist of novelty or spark of innovation by the individual creator [272].This stage draws from the designer’s experience, including their understanding ofthe domain (symbolic language of the field) [54], and the ability to frame a de-sign problem to match it to their repertoire, their their collected professional (andpersonal) experience [235]. External examples are especially useful for inspira-tion and aiding initial design [30, 114]. 
Early and repeated exposure can increasecreativity, although late exposure carries a risk of conformity [150].Later in this paper, we describe the evidence we found in our study that hapticdesigners’ work naturally includes a dedicated problem preparation step, e.g., byemploying collections of examples in a number of ways.Sketching is another critical design activity. It supports ideation, iteration, andevaluation. Here, more generally than pen and paper, we refer to general tech-niques to suggest, explore, propose, and question [30], including physical ideation[185]. Some researchers declare sketching to be the fundamental language of de-sign, much like mathematics is considered the language of scientific thinking [51].Sketching is rapid and exploits ambiguity, allowing partial views of a proposeddesign or problem. Detail can be subordinated, allowing a designer to zoom-in,solve a problem, and then abstract it away when returning to a high-level view. Itcan also support multiple, parallel designs, delaying commitment to a single de-152sign [107, 211]. The fluidity and ad hoc nature of sketching extends to softwaretools: designers must be able to rapidly undo, copy and paste, and see a history ofprogress [211].We discuss techniques for haptic sketching in prior work to support HaXD, andfind major barriers to achieving fluidity that were identified by participants in ourstudy.Collaboration improves design. Involving more people increases the potentialfor generating more varied ideas [272], and is recognized as being important forcreativity support tools [211, 244]. Although group dynamics can influence thedesign process negatively, proper group management and sharing of multiple ideasquite often results in more creativity and better designs [114], and can even influ-ence the work of crowds [68]. Collaboration can be categorized by intent, suchas informal conversations with colleagues or widespread dissemination [244], orby physical and temporal context: collocated (collaborators in the same location)or distributed (in different locations), and synchronous (simultaneous) or asyn-chronous (at different times) [75].We find these categorizations useful to identify where collaboration can breakdown for haptic design, especially remotely, asynchronously, and with limitationson informal or widespread sharing. In Section 8.4.2 we present the first data-informed description of collaboration in HaXD.8.3.2 Haptic Perception and TechnologyHaptic technology is typically separated into two broad classes based on the com-plementary human sense modalities: tactile sensations, perceived through the skin,and proprioception, or the sense of body location and forces; the latter includeskinaesthetic senses of force and motion. On the human side these are further sub-divided into different perceptual mechanisms, each targeted with different actua-tion techniques. We overview the complexity of the different senses that make uptouch, then describe common actuation technologies for these senses, focusing onthose mentioned by participants in our study. Finally, we review major applicationareas that use haptics for both utility and emotional value.Human perception of touch is synthesized from the tactile and proprioceptive153senses, and is influenced by vision and hearing. 
Tactile sensations rely on mul-tiple sensory organs in the skin, each of which detect different properties, e.g.,Merkel disks detect pressure or fine details, Meissner corpuscles detect fast, lightsensations (flutter), Ruffini endings detect stretch, and Pacinian corpuscles detectvibration [48]. Proprioception, the sense of force and position, is synthesized frommultiple sensors as well: the muscle spindle (embedded in muscles), golgi-tendonorgan (GTO) in tendons, and tactile and visual cues [142]. Humans use these sensestogether to learn about the world, e.g., stroking, bending, poking, and weighing ob-jects in active exploration [152]. Haptic perception is also heavily influenced byother senses. In the classic size-weight illusion [44], when two weights have thesame mass but different sizes, the smaller is perceived to be heavier, whether size isseen or felt [110]; similarly, sound can affect how a texture feels [110]. Interactivesystems can exploit cross-modal perception to reinforce or improve haptic sensa-tions. To be effective, these effects need to be temporally synchronized, sometimesas closely as 20-100ms [141]. For more information about haptic perception, wedirect the reader to [48, 142, 153].Haptic technology to produce stimuli for humans to feel is at least as diverseas the human senses that feel it. Today, the most common approach is vibrotactile(VT) feedback, where vibrations stimulate Pacinian corpuscles in the skin, e.g.,smartphone vibrations. VT actuators can take may forms. Eccentric mass motors(sometimes “rumble motors”) are found in many mobile devices and game con-trollers, and are affordable but inexpressive. More expressive mechanisms suchas voice coils offer independent control of two degrees of freedom, frequency andamplitude. Piezo actuation is a very responsive technique that is typically more ex-pensive than other vibrotactile technology. Linear resonant actuators (LRAs) shakea mass back and forth to vibrate a handset in an expressive way; a common re-search example is the Haptuator [280]. Currently, LRAs are increasingly deployedin mobile contexts (e.g., the Apple Watch Taptic engine). Our participants alsoemploy force-feedback, which engages proprioception. Common force-feedbackdevices include Geomagic Touch (previously the Sensable PHANTOM) and Fal-con devices, offering three degrees-of-freedom: force in three directions. At othertimes, entire screens might push back on the user in a single degree-of-freedom.These are only the most common feedback methods discussed by our participants.154Many other types of feedback can be used, e.g., temperature displays [139] or pro-grammable friction display on touch screens [161, 277].8.3.3 Efforts to Establish HaXD as a Distinct Field of DesignResearchers have developed several approaches to support HaXD. Some have di-rectly applied design metaphors from other fields to haptics. Others have built col-lections of haptic sensations and toolkits that facilitate programming. A number ofhaptic editors, analogous to graphical editors like Adobe Illustrator, have emergedto support specific haptic modalities through parameterized models or other ab-stractions. These approaches have developed focused understandings of particularaspects of HaXD, but they do not adequately describe the process as it is actuallypracticed.There are many examples of designers drawing from other fields to framethe practice of haptic design. 
Haptic Cinematography [63] uses a film-makingmetaphor, discussing physical effects using cinematographic concepts and estab-lishing principles for editing based on cinematic editing [100]. Similarly, TactileMovies [146] and Tactile Animation [233] draw from other audio-visual experi-ences, and Cutaneous Grooves [103] draws from music to explore “haptic con-certs” and composition as metaphors. Academic courses on haptics are taught witha variety of foci, including perception, control, and design that provide studentswith an initial repertoire of pre-existing skills drawn from other disciplines [135,196]. These and other ways of framing HaXD have been incorporated into rapidprototyping techniques that allow for faster, easier iteration of haptic designs. Sim-ple Haptics, epitomized by haptic sketching, emphasizes rapid, hands-on explo-ration of a creative space [185, 186]. Hardware platforms such as Arduino (ar-duino.cc) and Phidgets (phidgets.com) [97], as well as the recent trend of DIYhaptic devices [85, 89, 198], encourage hackers and makers to include haptics intheir designs.The language associated with tactile perception (terms related to haptic sensa-tion and how they are used), especially affective (emotional) terms, is another wayof framing haptic design. Many psychophysical studies have been conducted to de-termine the main tactile dimensions with both synthetic haptics and real-world ma-155terials [76, 193]. Language is a promising way of capturing user experience [191],and can reveal useful parameters, e.g., how pressure influences affect [290]. Toolsfor customization by end-users, rather than by expert designers, are another placethat efforts have been made to understand perceptual dimensions using a language-based approach [239, 240]. However, this work is far from complete; touch isdifficult to describe, and some researchers even question the existence of a tactilelanguage [132].Meanwhile, software developers who want to incorporate haptics into theirsystems are supported by large collections of haptic sensations and programmingtoolkits. Sensation collections most commonly support VT stimuli. The UPennTexture Toolkit contains 100 texture models created from recorded data, renderedthrough VT actuators and impedance-type force feedback devices [56]. The FeelEffect library [129], implemented in FeelCraft [225], lets programmers controlsensations using semantic parameters, e.g., “heartbeat intensity.” Immersion’sHaptic SDK (immersion.com) connects to mobile applications, augmenting An-droid’s native vibration library with both a library of presets, and on some mo-bile devices, low-level drivers for effects like fade-ins. VibViz [240] is a freeon-line tool with 120 vibrations organized around five different perceptual facets.Force-feedback environments tend to be supported through programming toolkits.CHAI3D (chai3d.org), H3D (h3dapi.org), and OpenHaptics (geomagic.com) aremajor efforts to simplify force-rendering. Table-top haptic pucks can use the Hap-ticTouch Toolkit [154], which includes parametric adjustment (e.g., “softness”) andprogramming support.Finally, several software-based editing tools support haptic design for differentdevices. These tend to focus on VT stimuli or simple 1-degree-of-freedom forcefeedback. Many editors [76, 180, 219, 231, 258, 259] use graphical mathematicalrepresentations to edit either waveforms or profiles of dynamic parameters (torque,frequency, friction) over time. 
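To make concrete what such editors let a designer manipulate, here is a deliberately minimal sketch that synthesizes a vibration from separate amplitude and frequency profiles over time. It is our own illustration and does not reproduce any particular tool's representation or file format.

```python
# Our own illustration of a track-based effect: a vibration synthesized from
# separate amplitude and frequency profiles over time (no tool's real format).
import math

def interp(profile, t):
    """Piecewise-linear lookup in a [(time_s, value), ...] profile."""
    t0, v0 = profile[0]
    for t1, v1 in profile[1:]:
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        t0, v0 = t1, v1
    return profile[-1][1]

amplitude = [(0.0, 0.0), (0.1, 1.0), (0.5, 0.3), (1.0, 0.0)]  # quick rise, long decay
frequency = [(0.0, 80.0), (1.0, 250.0)]                       # rising pitch, in Hz

def render(duration=1.0, rate=2000):
    samples, phase = [], 0.0
    for i in range(int(duration * rate)):
        t = i / rate
        phase += 2 * math.pi * interp(frequency, t) / rate    # integrate frequency
        samples.append(interp(amplitude, t) * math.sin(phase))
    return samples  # in practice this buffer would drive a voice coil or LRA

wave = render()
print(len(wave), "samples, peak amplitude", round(max(wave), 2))
```

A track-based editor essentially hands the designer curves like these as directly draggable graphical objects instead of code.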
Of these, Vivitouch Studio [259] offers the most in-tegration with other modalities in games, and Macaron [231] is the most availabletool (online and web-based). The Vibrotactile Score [158] uses a musical metaphor,shown to be preferable to a programming metaphor as long as the designer hasmusical experience [156]. Mobile “sketching” tools like the Demonstration-BasedEditor [118] and mHIVE, a Haptic Instrument [228] are useful for exploration, but156not refinement. Since iOS 5 (2011), Apple has let end-users create on/off vibra-tions as custom vibration ringtones. Immersion’s Touch Effects Studio lets usersenhance a video from a library of tactile icons supplied on a mobile platform. Actu-ator sequencing [199], movie editing [146], and animation [233] metaphors enablemulti-actuator, spatio-temporal VT editing.Some of these tools are founded in an understanding of haptic designers’ needs[233, 259] and begin to capture a slice of the HaXD process [231], but they do notfully capture the context and activities of contemporary haptic design.8.4 Part I: Interviews with Hapticians about HaXD inthe WildIn this section, we present findings from our first study, a qualitative analysis ofinterviews with six professional hapticians.8.4.1 MethodOne researcher (the first author) analyzed the interview transcripts through groundedtheory [49] influenced by phenomenology [187] and thematic analysis [218]. Theanalyzing author, who was trained in qualitative methods, first transcribed inter-views and then examined every participant statement, tagging each with relevantand recurring concepts and keeping written notes for reflection and constant com-parison. Emergent sub-themes (sub-categories) [218] were discovered using qual-itative techniques of memoing, iterative coding [49], clustering and affinity dia-grams [187]. Statements were later grouped according to tags, organized usingaffinity diagrams and clustering, and iteratively developed with further writing andreflection. The 15 sub-themes clustered into three themes (categories) [49, 218].We describe the themes in Section 8.4.2 after an introduction to the designers them-selves and the procedure that was followed for the interviews. We delay a detaileddiscussion of the results until Section 8.6 so we can include the findings of thesecond part of the study, presented in Section 8.5.157ParticipantsSix participants were recruited, 5 male and 1 female. Initial contact was by emailfrom a list of potential interviewees developed by the researchers. We describeeach in terms of experience and training, area of focus within HaXD, types ofprojects, and constraints or other factors that might situate or provide insight intothe interview. Experience and position are reported as of the interview year (2012).P1 (M, over 15 years of human factors experience, PhD) held a design andhuman factors position at major healthcare company. He worked with auditoryalarms, signals, and emotional experience. Despite a focus on audio, he frequentlyrelated his work to haptics and works with physical controls, designing characteris-tics like force profiles and detents, and described the haptic and audio processes asbeing the same. Working in health care means there are tight regulations that needto be followed, and a noisy, diverse environment. 
P1 used a number of psychologyand human factors techniques, such as semantic differential scales, factor analysis,and capturing meaning.P2 (M, 5-6 years in haptics, PhD) described two projects: his experienceadding mechanical feedback to touch screens at a major automotive company, andhis PhD work on remote tactile feedback, where feedback was displayed on onehand while the other interacted with a touch screen. P2’s main concern is “richfeedback”, communicating information like affordances to the user. This is bothpragmatic, such as “consequences” of the button, and affective, aiming to havesensations “feel right.” P2 focused on button presses on a touchscreen, rather thanexploring “roughness” of a touchscreen or other surface.P3 (M, 10 years leadership experience with actuation, sensing, and multimedia,M.Eng.) worked at a company that sells actuators used to add haptics to technology(like a tablet computer, game controller, or mobile phone). P3 had 20-30 projectsgoing on at any time, each with their own level of size, goals, constraints, and othercontexts. His main goal was to sell a developed actuator (with several variants).P4 (M, 11 years of design, development, and analysis/simulation experience,PhD) also puts actuators into new form factors (e.g., touch screens in cars). Whenhe worked, he had limited time and resources, so there is not much time to changethings.158P5 (M, 12 years of haptics UX experience, M.Sc.) held a user experience lead-ership position at major haptics company that sells haptic control technology andcontent; he described mostly software solutions. His company worked with dif-ferent domains, but most examples are from mobile phones (handhelds), with abrief mention of automotive haptic feedback. They worked with extremely high-end piezo vibration actuators with high bandwidth (frequency and mechanical),and delivered software solutions for Android to their customers: OEMs (origi-nal equipment manufacturers). He described handheld feedback as two differentclasses: confirmation haptics, like a vibration to indicate a widget has been used,and animations/gestural feedback, which is more complicated.P6 (F, 5-6 years in haptics, PhD) worked at a major car manufacturor. Sheprimarily designed “feel” properties such as friction, inertia, and detents of phys-ical controls inside automobiles. P6 also works on active haptic controls. Designaspects include measuring force vs displacement profiles and maintenance of alarge scale haptic design specification repository that spans user and technical re-quirements. This haptic specification repository is used by many engineering andbusiness stakeholders across many sites in different countries.ProcedureAnother co-author, trained in interviewing techniques, interviewed the 6 partici-pants in April-May 2012 using Skype. Each interview lasted 30-60 minutes andconsisted of initial ice-breaker and general open-ended questions. To both coverour initial research questions and allow for emergent findings, interviews weresemi-formal: a single set of prepared questions were asked from most general tomost specific, but the interviewer flexibly and opportunistically followed up oninteresting points.Interviews with P2-P5 were fully recorded and transcribed. Interviews with P1and P6 were collected only as interviewer notes. In the presentation of our findings,double quotation marks (“...”) denote direct transcription quotes for P2-P5 whilesingle quotation marks (‘...’) denote interviewer notes for P1 and P6. 
We usequalitative writing techniques like rich or “thick” description [93], in-vivo codes(where participants’ words are used to describe concepts) [49], and quotations to159provide the reader with a sense of verisimilitude and give our participants a moredirect voice. For example, we use the word “guts”, from P3, to refer to the tightly-coupled internal components of a system (8.4.2/Ex1).8.4.2 ResultsMost of the emergent themes that we identified persist throughout the design pro-cess (Figure 8.1). We found participants generally followed a process typical ofexperience design (UX) [30] in which they first tried to gain an understandingof the design problem, then iteratively developed ideas and evaluated them. Wefirst outline these confirmatory observations on process, then go on to report onthe themes, our main findings. Throughout, we cross-reference themes by sectionnumber and theme label (e.g., 8.4.2/Co5).Observations on Design ProcessParticipants described the initial stages of a project as a time to establish and under-stand requirements, gather initial design concepts, and define or negotiate projectparameters. Designers often collected examples of haptics, such as mechanicalbuttons and knobs, for inspiration (8.4.2/Co5), and they gathered requirements –both direct requirements for haptic designs (8.4.2/CC1), and project parametersaround the value, cost, and risk of haptic technology (8.4.2/CC4,CC5).P2-P6 explicitly referred to an iterative process. They all found different waysto fit it into their collaborative ecosystem and constraints. As we elaborate below,prototyping and assessment in the physical medium of haptics has many challengesthat set it apart from graphical or auditory domains even as designers navigate verycommon-place objectives. For example, initial requirements were often not ac-tually what clients wanted, and our designers would have to iterate (8.4.2/CC1).P5’s teams explicitly follow a conventional user-centered design process, iterat-ing simultaneously on prototypes and their understanding of customer needs. P3sometimes has to ship mockups and devices back and forth with their customers(8.4.2/Co5). Each design problem faced by our participants had to be treatedas a unique problem, with designers fine-tuning their design to fit the problem(8.4.2/Ex5). Our designers used a variety of evaluation techniques to choose their160final designs (8.4.2/CC2).We now proceed with our cross-cutting themes, organized by scope (Figure 8.1):the haptic experience and its implementation (Section 8.4.2), the designers’ collab-orative ecosystem (Section 8.4.2), and implications from the wider cultural contextof haptic technology and business requirements (Section 8.4.2).[Theme Ex] Holistic Haptic Experiences: “It doesn’t end at the actuator”Context is crucial to experience at multiple levels, but is difficult for a designerto foresee or control. Aspects of context range from immediate, very local elec-tromechanical environment (material properties, casing resonance, computationallatencies), through the user’s manner of touching the haptic element (grip, forces,longevity of contact), to the user’s momentary environment, attention, and goals.At the local end, the complexity of the haptic sense itself is a major factor inexpanding the haptic experience design space substantially beyond what are usu-ally its initial requirements – for example, for the changing feel of a modal physicalcontrol in an automobile cockpit. 
As we've discussed, the haptic sense is really a collection of subsenses [48, 142], working together to construct an overall percept, e.g., material properties deduced from stroking, tapping, or flexing a surface or object [152]. Grip, materials, and dynamics, as well as visual and audio aspects, all play a part in the result.

"The problem is it doesn't end at the actuator, there's a lot to do with the case of the device, the mass of the device, the mechanical coupling between the device and the hand...this all comes into play because it's a tangible experience, and so if there's mechanical resonances that get stimulated by the actuator that make it sound noisy, then it becomes a cheap experience, even if it has a piezo actuator." (P5)

Thus, designers both face multifaceted constraints and have opportunities to circumvent those constraints. We begin by discussing implications for implementation, wherein haptic components are directly related to the internal mechanics – the "guts" – of the system (Sub-theme Ex1). Then, we move on to opportunities for improving design: strategies like reinforcement and substitution are powerful tools for haptic designers (Ex2). Timing is critical, enabling the abovementioned opportunities while imposing constraints: designers must introduce no new delays and must carefully synchronize feedback (Ex3). However, the full extent of a sensory context is sometimes uncontrollable or unknown, which prevents designers from using their tricks (Ex4). We finish this section by discussing how haptic experiences are often bespoke, tailored to the constraints of known contexts, or customizable to unknown contexts (Ex5).

Ex1 – Haptic components are vertical: Changing a haptic component may influence the larger hardware/software system, and vice versa.
Ex2 – Tricks to create great feels: Haptic designers can improve designs and work around constraints through multimodal tricks.
Ex3 – Latency and timing: Without fast feedback and synchronized timing, haptic experiences fall apart.
Ex4 – Constraints and unknown context: Other modalities may impose constraints; constraints may not always be knowable.
Ex5 – Tailoring and customization: Designers tailor their solutions to each application; end-users benefit from customization.

Table 8.1: Sub-theme summaries for the Holistic Haptic Experiences (Ex) theme.

Ex1: "Changes are to the guts" – Haptic components are vertical. Haptic experiences are created when the actuating component physically interacts with other system components. Changing a haptic component can thus affect the entire system's design, unlike many other upgrades, like improving memory in a mobile phone: "you get the impression every other month they have a new phone...but the guts of it do not change much" (P3). New phones often just have a faster CPU or more memory swapped into an essentially unchanged system; but when adding or modifying haptic components, designers must consider the entire system, including the physical casing, and possibly modify it as well:

"First we had to get the outer dimensions [of the prototype's case] roughly about right, to get the visual impression close to what it resembles later in the application" (P4).

This effect is bidirectional. Changing the size or material of the casing can have a profound effect on the sensation; correspondingly, any changes to the haptics will have an effect on the entire structure of the device.
Changes to software are also cross-cutting: "we're digging into the source code of Android...we need to make sure that we have the right hooks in the right locations...that's a software architecture issue, right?" (P5).

Ex2: "Have that solid click" – Tricks to create great feels. Haptic designers have an array of techniques to create great experiences, working around constraints and uncertainty. The first step is to have a fast, responsive actuator when possible. Previously, creating good actuators was a goal for our participants: "[what we] strived in the past significantly to do was to push the market towards high mechanical bandwidth actuators, so actuators that can respond in 15 milliseconds or less" (P5). Now, high-quality actuators are a main competitive advantage:

"High-definition feels over a very broad frequency range, with enough strength and small enough, and especially very fast response time, that's our business" (P3).

As discussed in Ex1, the actuator does not determine the experience alone, but interacts with physical materials and non-haptic senses. When a haptic device's ultimate situation is known at design time – like a car dashboard – designers can modify properties of the larger physical system to improve the overall haptic experience: metal makes unwanted sound, so replace it with plastic (P6). The designer can also make a sensation more convincing with multimodal reinforcement, e.g., adding visual or audio feedback:

"Need to have that solid [haptic] click at 150 [Hz] plus some audio at 300 or 400 Hz, which is going to give you that sense of quality, and, consistency across the whole dashboard" (P5).

When a known physical context has constraints, designers also use substitution to enable or improve the haptic interaction. P2 describes two such occasions, one for sensing input and one for displaying output. Because P2 could not sense input pressure, he instead used how long the user was pressing the screen ("dwell time"): "we were substituting the forces that are needed on the actual buttons with dynamic dwell times" (P2). This was only possible because P2 had knowledge of how the user would be touching the control, and thus could deduce that dwell time was a reasonable proxy for pressure. In another case, P2 could not actuate a touch screen, so he used tactile feedback on the other hand – again, requiring knowledge of and considerable design access to the device's and user's larger situation; here, the steering wheel.

Ex3: "A reliable clock" – Latency and timing. One underlying requirement for great haptic experiences is responsive timing. Feedback must be fast; modalities must be synchronized. Effective reinforcement requires simultaneity and hence tight (millisecond) control over timing. This is well established in the literature [141, 162] and known to our designers: "I think, audio feedback and tactile feedback and visual feedback has to happen at a certain time to have a real effect" (P2).

Latency accumulates throughout the computational pipeline, with actuator responsiveness the very last stage and rarely the most impactful. Designers must minimize computational delays wherever possible. P2 describes unintentionally adding latency to one project: "we had this Python program and Arduino and all this communication going on" and how he "threw out some of the serial communication which [had] made the whole thing a little slow", and thus, the "latency again felt right".
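To make this kind of pipeline trimming concrete, consider a minimal, purely illustrative host-side sketch; it is not drawn from any participant's system, and the port name, baud rate, and effect identifier are our own assumptions. Rather than streaming verbose text commands to a microcontroller, the host triggers an effect that is pre-stored in firmware with a single byte, keeping serial traffic – one common source of software-added latency – to a minimum.

# Illustrative only: trigger a pre-stored haptic effect with a single byte
# over serial to minimize communication overhead. The port, baud rate, and
# effect ID are assumptions; the firmware is presumed to map each byte to a
# stored effect and reply with a one-byte acknowledgement.
import time
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumed serial port
EFFECT_CLICK = 0x01     # assumed effect ID stored in firmware

with serial.Serial(PORT, baudrate=115200, timeout=0.05) as link:
    start = time.perf_counter()
    link.write(bytes([EFFECT_CLICK]))   # one byte instead of a text command
    link.flush()
    ack = link.read(1)                  # wait briefly for the acknowledgement
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"host-side round trip: {elapsed_ms:.1f} ms (ack: {ack.hex() or 'none'})")

Measuring the host-side round trip in this way cannot capture actuator response time, but it can expose software-added delays of the kind P2 describes removing.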
Timing problems between components can happen at any time: "we've gotten in situations before where we've been very near to completion in design projects, and for whatever reason we can't get a reliable clock, from the CPU, then the whole thing falls apart" (P5).

When adequate simultaneity constraints are met, the user perceptually fuses these non-collocated events (activating a graphical element on a screen, and feeling a tick on the steering wheel) into a single percept: "somehow you connect these two things, the action with the dominant hand and the reaction that is happening somewhere else" (P2). Haptic designers thus need access to the computational pipeline to circumvent physical constraints with multimodal tricks.

Ex4: "Feelable but not seeable" – Unknown user constraints and context. Haptic designers sometimes contend with unavoidable constraints emerging from physical context or application space. Some constraints not only limit multimodal synergies, but go on to actively limit haptic display. For example, eyes-free interaction in cars means that visual reinforcement is unavailable; indeed, visual movement may have to be avoided altogether for safety reasons. P4 is tasked with creating a "feelable but not seeable" sensation to "avoid having to use visual feedback", as "driver distraction is always a big topic" (P4). This means P4 has limited control over his designed haptic sensation, as it cannot visibly move, but P4 can use audio reinforcement or substitution to handle constraints.

Perhaps even more difficult is when the experience's context is unknown. This can derive from at least two sources: protection of intellectual property (IP) through secrecy, and an unconstrained end-user situation. Stakeholders often keep key contextual information such as the visual interface secret from third-party designers (e.g., OEMs [original equipment manufacturers] or consulting): "we can suggest components, and suggest characteristics of the HMI [human-machine interface] system, but the exact visual design of the HMI system is the OEM's knowledge" (P4). P3 has an evaluation kit to send to potential customers when customers' IP is a concern:

"[An evaluation kit is] basically a little box that consists of our actuator and some electronics, and that box is connected and driven through the USB port of a computer, and you can then mechanically integrate the box in your own way, so we don't need to know what their design looks like" (P3).

We discuss IP and secrecy more in Section 8.4.2/CC3. Meanwhile, designers must deal with sometimes unknowable end-user context, especially in mobile scenarios. A high-quality LRA-type actuator on a metal table can sound cheap, while an affordable eccentric mass actuator can sound like purring if it's on rubber, and "there's not much you can do from a haptic perspective, other than allow the user to turn it up or down" (P5).

Ex5: "Very individual" – Tailoring and customization. Because the context of haptic technology can vary so much, haptic designs need to be tailored for each client's problem and are often made customizable for end-users. For the former, several participants' business models are directly based on tailoring. P4's group makes a small set of actuators, adapting them to each specific request.
This is exacerbated because it is "hard for [customers] to really express what they need" (P4) (discussed more in Section 8.4.2/CC1), so designers must rapidly and collaboratively fine-tune their solutions.

Even if customer goals are clear, tailoring is necessary because of requirements (e.g., branding or "trademark" (P2), 8.4.2/CC4) and hardware setup: "it's important to tune the experience depending on whatever kind of motor they decide to put in" (P5).

"Depending on the outer design, what's given to us by the customer, we have to choose the direction of movement. For some applications, for some ideas, it's possible to move the surface directly perpendicular, away from the user, and other applications, you have to move the surface perpendicular towards the user, so the same actuation module could feel completely different" (P4).

Meanwhile, individual differences of end-users further complicate matters: "feeling right is...something that is very individual" (P2). As P5 mentioned, volume controls can help end-users and adapt to unknowable context.

[Theme Co] Collaboration: "Rally the ecosystem"

In this section, we describe the collaborative ecosystem. First, we provide an overview of group structure and interdisciplinary roles found in our participants' groups (Sub-theme Co1), including a focus on the role of engineering (Co2). We then discuss the dispersion of stakeholders internationally and in different organizations (Co3), including a focus on the connecting role of sales representatives (Co4), and the use of demos and documentation (Co5). We distinguish the Collaboration theme (Section 8.4.2) from the Cultural Context theme (Section 8.4.2) by focusing on specific communication methods and roles rather than underlying values and widespread public consciousness.

Code | Sub-theme descriptor | Explanation
Co1 | Internal roles are interdisciplinary | It takes a multidisciplinary team to create a haptic design.
Co2 | Engineering support | Prototyping is necessary and often delegated to engineers.
Co3 | External roles are international | Haptic design teams work with other stakeholders around the world.
Co4 | Facilitators and advocates | Sales reps handle demos and fight for a deal.
Co5 | Demos and documentation | Designers often show instead of telling.
Table 8.2: Sub-theme summaries for the Collaboration (Co) theme.

Role | Descriptors | Description
UX | User division (P6), User research (P5), Ergonomics (P6), Human factors (P1), Psychologist (P2, P6) | The UX team does research: facilitate prototypes, validate, communicate those results (P5). Here we include psychologists and human factors roles because they conduct user research such as evaluation: psychologists there who do usability tests (P6), study how effectively users interact with goals (P1).
Design | Design team (P5) | Related to UX but a separate and in some ways higher-level role. The design group ideates and communicates vision, developing a value proposition. Designers usually have a similar background to the UX group (P5).
Engineering | Tech manager (P3), Engineering (P3, P5), Electronics, mechanics, tech team (P6) | Often a separate division, handling prototyping and implementation (P5). They might test components, do physical construction, take requirements from design, ergonomics, electronics, mechanics, etc., and generate required (haptic) feedback (P6). This can involve both hardware and software.
Table 8.3: Internal roles, the various descriptors used to label them, and descriptions. Roles were grouped and named by the authors based on participant-provided descriptors.
All six participants indicated collaboration was an important part of their work and design process. Haptic designers are part of interdisciplinary, international teams, and do not make haptic experiences alone:

"We basically have to rally the ecosystem...we have to go and find, y'know, somebody to supply the amplifier part, somebody to make the motor, somebody who knows enough about the Android kernel...we have to be, kind of, renaissance men if you like" (P5).

Co1: "I'm not so much of a psychologist" – Internal roles are interdisciplinary. Haptic design is interdisciplinary; hardware, software, psychology, and business all play a role. P5 describes his company's job as "rallying the ecosystem", finding diverse expertise and establishing a production chain. P6 describes different roles in her team, who work more closely together at different stages: user [research], design, ergonomics, haptics, electronics, all come together (P6). This is reflected by the diverse internal roles (Table 8.3), but also in the diverse work within single projects and individuals:

"We do some mechanical integration work, we help [our international customers] with designing the electronics, we have reference designs there, we have a couple of reference effects, and then we ship the part back and they go on with further doing the software integration and designing the haptic effects." (P3)

Our participants worked in groups of various sizes. P2 worked with a student in a team of two, while P5 describes several teams: design, UX, engineering, each with different responsibilities. This collaboration can be collocated or remote: P6 describes the different divisions in her company as being physically close together, while P3 has sales representatives ("reps") overseas to help with external collaboration.

Especially in smaller groups, team members fill multiple roles. Sometimes this falls naturally into their background: "I guess [phone vibrations are] similar to mechanical control design, except that it's all virtual" (P5). Otherwise, a lack of expertise reduces confidence: "I don't know, I'm not so much of a psychologist to really, to dare to say I can evaluate subjective responses to tactile feedback" (P2).

Co2: "Go through the technical levels" – Engineering support. Larger groups are able to have more specialized individuals. Especially common was a dedicated engineering or technical support team, tasked with implementing prototypes for design and user research teams.

"In our design research team we don't do any internal prototyping, we rely on engineering resources to do all our prototyping" (P5).

P5's group says that neither the design team nor the UX team build prototypes, though the UX team facilitates and evaluates them. P1's team is similar: give qualitative feedback and ranges to the technicians (P1). Engineering departments are sometimes physically very close to other departments (P6), presumably to interact with different divisions and groups.
However, separating expertise can cause gulfs of collaboration, e.g., when P3 tries to propose a deal:

"If you try to go through the technical levels from a technology scout to a technical manager and then maybe to a senior manager, you usually get blocked with something, because nobody wants to take the risk or the blame" (P3).

Those in engineering roles are risk-averse: "[it's] risky to suggest changes to their component" (P3). P3 says that to pitch to other companies, you need to reach "C-level people" like the CEO, or other business or manager types: "engineers look at it from a perspective well I'm going to take a risk if I change something in my design, and if it doesn't work everybody's going to blame me" (P3); technicians won't give pushback if there is a problem (P1).

Co3: "Different divisions, different companies" – External roles are international. Haptic designers also worked closely with external stakeholders like potential customers and manufacturers. Our designers have diverse suppliers, especially hardware suppliers, and often sell to manufacturers who then sell their own product to the end-user. Table 8.4 provides details on these external-facing roles.

"Automotive is very much a tiered and compartmentalized manufacturing business, and so the person who makes the control surface is different than the person who makes the mounting for it...and those people often never talk to each other, and so for us it's even worse than different divisions in a company, it's different companies" (P5).

Often these groups are distributed internationally. P5's group, based in North America, received international demographics to research: "here's phone X from OEM Y and it's targeted at Asian ladies from 15 to 30 years old" (P5). P3, who has a headquarters in North America and clients in Asia, describes sales reps as critical team members who can bridge language and cultural barriers.

Role | Descriptors | Description
Connections | Sales rep, technology scout (P3) | Sales reps from haptic companies handle local expertise (language and culture), haptics expertise (they run demos), and can be advocates for products. Technology scouts from large companies talk to haptics companies to learn their technology.
Business | Business dev people, C-level people (P3) | Internal business development people are "here [in HQ]" (P3), while external business people make decisions; they're who you need to persuade, rather than technology-focused roles.
Supply chain | Vendor, developer, manufacturer, OEM (P5), supplier (P4, P6), content provider (P3) | Haptic designers are heavily embedded in a supply chain involving hardware and software manufacturers. Some manufacturers provide hardware (e.g., actuators) and software (e.g., Android API) to the haptician; others are the intended customer (phone or car manufacturers, software developers). It is unclear who creates haptic content in this ecosystem.
Table 8.4: External roles, the various descriptors used to label them, and descriptions.

Co4: "Sales reps" – Facilitators and advocates. P3 describes sales reps in depth as key team members. Sales reps are trained locally at headquarters in North America, then are sent to the customers' area, often in countries like Korea, Japan, China, and Taiwan, which have large consumer electronics and gaming markets. It is important that they speak the local language and understand the local culture; they also facilitate demos and persuade customers to pursue business with the designer's team.
If a demo is sent to a company without a sales rep, customers may respond by shipping the device back and requesting assistance, but often don't respond at all:

"If we try to just ship them a part...in the best case they come back and say well it doesn't work as we thought, can you help us?...in the worst case they don't even contact us back and we never learn why they didn't pursue an idea or an opportunity. It's still a complicated setup to make haptics work, there's lots of aspects that you have to take into account, and if you don't do it properly, you're going to be most likely very disappointed about what the outcome is" (P3).

Big tech companies sometimes invert this from a push model (where the haptics company uses a sales rep) to a pull model with tech scouts (who reach out to haptics companies). Sometimes, companies fill this role without dedicated sales reps: P4 goes to customers regularly in confidential meetings, receiving specifications and working collocated with the customer to get their product to feel "just right":

"There is always [the] option, as we did with one of our customers, that we simply went into the lab for a day or two, and just worked on simulated button feel, together with the customer, to get the feel just right" (P4).

In all cases, content can fall through the cracks. P3's company provides technology, but "the issue that we are having with uh, the content providers that need to get interested and believe in it...creating the haptic effects is something that we haven't been involved in in a lot of detail in the past" (P3). P5's company does have a set of 150 effects, from which they select themes. The other participants all mention technology they develop, with content directly related to their hardware solution.

Co5: "Your piezo demo, we love it" – Demos and documentation. Demos are essential both to showing the value of a haptic experience and to enabling two-way communication with the customer. They can clarify requirements and grab attention from clients: "we'll often get the OEMs who will say, well you showed us your piezo demo, and we love it, it feels great" (P5). Demos can be conducted in person (synchronously) at events like tech days or one-on-one meetings: "the customer either comes directly to us, we go towards our customers regularly, have our tech days, similar to automotive clinics" (P4), or asynchronously, via remote shipping.

However, demos are complicated and need an experienced handler like a sales rep. Once set up, demos are often adjusted, but this is easier than the setup: "From the moment the actuation module was working...it was just cranking up the maximum current or reducing the maximum current" (P4).

Demos are often collected into groups. P5 describes downloading apps that use his technology and "sticking those in [their] demo suite". P1 and P2 talk about collecting examples for inspiration and guidance early in design: it's quicker to go out and buy examples, like 15 or 16 appliances that had notably different feelings (P1). P2 instructed his student to "collect physical push buttons just to get in contact with all the diversity of stuff", and ended up with a "button board" to guide design.
He also talks about company guidelines:

"When I was at [a major automotive company] 3 years ago...they had this guideline book...they had guidelines on the design of physical widgets like sliders, physical sliders, push buttons, rotary things...they defined thresholds basically where these forces have certain thresholds and if you get over the threshold something is happening" (P2).

Demo setups can thus be stored long term as internal documentation (button board, guideline book), but they can also be ephemeral (tech days). In both cases, they help to articulate the value of haptics, which is especially valuable when most people do not yet understand haptic technology.

[Theme CC] Cultural Context: "A standard feature, in the future"

Haptic technology has yet to fully penetrate the public consciousness. Participants reported major difficulty when working with both customers and users, including a limited understanding of what haptic technology is and how to work with it:

"People really don't know what to do with [haptics] and I think within the haptics community we need to...continue to push it into the market, but once it's there I think it's going to add to the user experience and will be a standard feature in the future" (P3).

Specifically mentioned were the difficulty of understanding customer requirements (CC1) and of knowing how to appropriately evaluate haptic experiences (CC2). Because haptics is a technological field, secrecy and intellectual property are important concerns for both designers and customers (CC3). Designers had ways to pitch the value proposition of haptics, often tied to UX and branding (CC4), but the risk and cost of adopting the technology often make it a hard sell (CC5).

Code | Sub-theme descriptor | Explanation
CC1 | Understanding requirements | Customers and designers have trouble articulating and understanding goals.
CC2 | Evaluation | Getting experiences to feel right, usually with acceptance testing and deployment.
CC3 | Secrecy and intellectual property | Haptic technology and sourced components are often cutting edge and secret.
CC4 | UX and branding | Tactile experiences provide intangible benefits.
CC5 | Overcoming risk and cost | Haptics are risky and expensive to include in a product.
Table 8.5: Sub-theme summaries for the Cultural Context (CC) theme.

CC1: "Hard to express what they need" – Understanding requirements. Customers found it difficult both to understand and to express their needs. Our participants focused on the end result because it gives them and their colleagues the ability to solve problems: Don't specify elements. Only give end product. Don't tell how to restrict; can give hints (P6). However, requested end results are often vague or confusing, like "good variable feel" (P4):

"The customer only came with a question, yeah, how [can the design] feel variable? Here it did not really describe how it should feel variable" (P4).

To make these impressions concrete, customers initially give engineering parameters as their best guess. P4 in particular talks about his customers, who might point to a "reference button which is available directly on the market, from companies like [company 1] or [company 2], and they say it has to feel exactly like this button", or request "a surface acceleration of 10 to 20 G perpendicular and a travelling distance of .2-.3 mm" (P4).
This might have little relation to the final result, after the designers iterate with the customer: "we ended with an acceleration of 2G and a travelling distance of .4 of a mm, so, due to the size of the module, simply the high accelerations were too high for a good variable feel" (P4). The goal function of good variable feel was achieved, but the initial engineering-level specification was completely off.

Other participants showed this duality between high-level affective goals and low-level guesses. P1 especially used affective and psychological terms when considering design, such as semantic differential scales: good/bad; gender (robust/delicate; size); intensity (sharp/dull; bright/dim; fast/slow); novelty (P1). Haptic designers often connected low-level and high-level terms through iteration, or with their own way of representing features like quality: "[audio click gives] quality, and, consistency across the whole dashboard" (P5); mass is big for quality...for the haptics, nice feedback w/ good snap gives a sense of quality (P6).

CC2: "It feels right" – Evaluation. Our designers all evaluated their designs but demonstrated different methods of evaluation, consistent with our workshop survey (Section 8.5). P2 explicitly evaluates both low-level, pragmatic concerns (e.g., task accuracy and speed) and high-level affective concerns like feeling personally involved (with the AttrakDiff questionnaire [108], http://attrakdiff.de). P5's user experience team conducts validation (but was unable to share details). Small-scale acceptance testing was employed by both P2 and P4: when iterating in person with the customer, P4 kept iterating until the customer said it "felt right"; P2 only had himself and his student evaluate their designs in an academic context, despite indicating a desire to do a more thorough evaluation. P3's group doesn't create content, but indicated a desire to look into that and investigate it with studies.

Our participants expressed a clear desire for stronger evaluation, but reported mostly lightweight, ad hoc acceptance testing. This is consistent with our workshop findings, which suggest little real-world or in situ evaluation. One reason may be that evaluation tools need to be adapted. P2 describes having to "throw out" terms on the AttrakDiff questionnaire that did not fit, and to iterate on the questionnaire. However, deployment seems to be a natural way to see if the design is good enough, as the ultimate acceptance test. P5 described the most memorable moment of his software project as being when his product had been deployed and used by a software development team. Seeing a haptic-enabled app available for download, and feeling it in context, was impressive:

"I think the most memorable day was when we started downloading apps, and realized that, yes, in fact this does work, and not only does it work but it works pretty well for a variety of apps... we ended up just sticking those in our demo suite even though we had no relationship whatsoever to the developer. So, their app just worked, and it worked really well" (P5).

CC3: "Kept confidential" – Secrecy and intellectual property. Sometimes the customer does not know what they want, but in other cases, they have important information they need to withhold.
As mentioned in Section 8.4.2/Ex4, secrecy in haptics has major implications that inhibit design, especially given the verticality of haptic technology:

"Somebody wants to design a completely new gaming controller for a gaming console, so they might just have some CAD drawings or they might have something they don't want to share with us, so in that case we provide them an evaluation kit...we don't need to know what their design looks like, they can really work on it internally" (P3).

P3's clients are able to receive an evaluation kit and create content with audio editors. P4 describes meetings with customers that preserve confidentiality: "on these tech days it's usually only one customer and not that many suppliers at the same time, sometimes only the customer and us, to make sure our development is kept confidential" (P4). Once P4's company's technology is on the market, it is no longer secret – rivals can copy or reverse-engineer the devices – so there are many demonstrations to customers before the technology's release. P4 wants to show their technology to potential buyers, not competitors.

Secrecy can cause delays for software too. P5 delivers a modified Android kernel to his customers, who are software developers. However, they are not given an early release, and thus they "always lag the market by two months at least, to get an update [for Android]...it's annoying because, you know, as soon as the OEMs get the source code they want to put it in their product right away" (P5).

CC4: "Articulating the value" – UX and branding. Our participants were all passionate about haptic technology and its benefits. The value of haptics can be connected to better performance on various tasks: P2 tried to "support people interacting bimanually to find out if they are more accurate in drag and drop tasks, [or] faster", but also whether they would "feel more personally involved in the interaction somehow" (P2). This latter goal, of user experience or rich feedback, was the primary value for haptics:

"It's like having a touchscreen now on smartphones which nobody expects any other way anymore...sometimes pull out my old, uh, tom-tom navigation device in my car, and that one didn't have a touch screen back then (P3 laughs) so I tap on that one [expecting it to respond to touch input], and so it's the same thing with haptics, at some point it's just going to expect that you get some nice haptic feedback, but getting there is still a couple of years out" (P3).

Of course, "a couple years out" has already gone by as of the time of this writing; and indeed, haptic feedback is now normal and expected in many touchscreen products, although quality and range continue to be challenging.

As mentioned in Section 8.4.2/Ex5, tailoring and customization are important for each implementation. This is also true for value: differentiable sensations are important to help distinguish overall user experience and provide branding: look for alarms that were different; branding. Effective, but different (P1). Companies and products need to have both a cohesive and differentiable feel. P2's company "guideline book", which defined force profiles for buttons, was helpful to "coin a trademark" (P2).

"[We] provide differentiated tactile experiences to our customers, who are major mobile phone manufacturers. Since Android is pretty generic across the board, um, they like to have custom themes, which are sets of these 150 effects" (P5).

With software libraries, themes are essential to the haptic design process.
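To illustrate what a theme might look like in software, the sketch below is purely illustrative; it does not reflect any participant's implementation, and the event names, effect parameters, and per-device gains are our own assumptions. It treats a theme as a named mapping from interface events to effect parameters, with a separate device-specific tuning factor applied at render time.

# Illustrative only: a "theme" as a mapping from UI events to effect
# parameters, with a per-device tuning gain applied when rendering.
# Event names, parameters, and gains are assumptions for illustration.
CRISP_THEME = {
    "key_press":    {"duration_ms": 15, "frequency_hz": 170, "amplitude": 0.9},
    "scroll_tick":  {"duration_ms": 8,  "frequency_hz": 200, "amplitude": 0.4},
    "notification": {"duration_ms": 60, "frequency_hz": 130, "amplitude": 0.7},
}

DEVICE_GAIN = {"piezo_module": 1.0, "eccentric_mass_motor": 0.6}  # assumed tuning

def render_event(event, theme, device):
    """Look up an event's effect and scale its amplitude for the target device."""
    effect = dict(theme[event])           # copy so the theme itself stays intact
    gain = DEVICE_GAIN.get(device, 1.0)
    effect["amplitude"] = min(1.0, effect["amplitude"] * gain)
    return effect

# The same theme keeps a product line feeling consistent, while each device
# (or an end-user preference slider) adjusts intensity:
print(render_event("key_press", CRISP_THEME, "eccentric_mass_motor"))

A structure like this separates the branded, consistent content (the theme) from the device- and user-specific tuning that participants described.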
The desire for consistent output is in tension with customization and fine-tuning: "it's also important to tune the experience depending on whatever kind of motor they decide to put in" (P5). This is part of the persuasive capability of touch: improve comfort and differentiate based on branding (P6).

CC5: "A tough sell" – Overcoming risk and cost. Despite its value, haptic technology is a risky, costly feature to add. Providing an improved user experience requires "high-definition haptics", not "some rumble feedback that has been around a long time" (P3). This often means "going up in fidelity" from a "cheap, poor quality motor" (P5). P5's company argues that "the end-user is going to prefer this quality of experience" with improved hardware, like a piezo actuator.

"[If we were to perform this project again,] I think we would spend a bit more time up front articulating the value, the specific value prop, of individual features" (P5).

P5 notes the challenge of convincing non-end-users to buy or deploy their technology: "[our company] has the unique challenge that our customers are not the people who use our products" (P5). Since the main benefit is to the end-user's experience, it is challenging to connect to the bottom line, especially compared to other haptics components. According to P3, designers need to

"...get up to the decision-making level and more on the business side...[business roles] know nothing about technology, I mean, they don't care, but we are trying to demo parts to them, present business cases to them, and show them what they can do in order to gain market share, or increase their retail price when they add our technology" (P3).

P3 further commented on the lack of knowledge among decision makers about haptics compared to other technologies:

"Let's assume we were to work on a completely different product like memory chips, so everybody understands what this is for, what it can do, and you probably have a memory chip that is faster or, whatever, smaller. Now for haptics, this approach is kind of difficult because the technology scouts themselves they kind of understand what this is for, but how it it's going to add value to their device, and how much they can increase the retail price, or if they can increase it at all, or gain market share, that's completely open" (P3).

Newer technologies are hard to explain: "[Gesture-based haptic feedback is] a much more complex task to design, and also to explain, to the OEM" (P5). It can also make persuading a customer difficult. P3 finds that "there's always discussions on the cost", and proposes "alternative business models" to no avail. Cost concerns are perfectly captured by P5:

"[The customer says,] 'we love [the piezo demo], it feels great, we're building this phone that has a 10 cent eccentric mass motor in it, can you make it feel the same?' The answer of course is no" (P5).

P5 notes that "cost pressures are pretty extreme". Mobile phones in the US cost "$199 on contract, that's sort of a fixed price and you can add more features to the phone, but that just reduces the profit margin, right?", so "the addition of haptic feedback technology...can be a tough sell" (P5). Haptic technology is especially risky because of previously discussed challenges: it involves separate risk-averse engineering divisions, and changes to the "guts" of a product. Designers need to set up complicated demos to persuade decision makers of the value of improved user experience: if you only compete on cost, then this is tough (P1).
Of course, "it's hard to get through to the right level", like "C-level kind of persons, so, talking to the CTO of Sony, those kinds of people" (P3). The combination of high risk, increased cost, and indirect connection to the bottom line makes haptics a very tough sell indeed.

8.5 Part II: Validating the Findings in a Follow-Up Workshop

Our second study was conducted during a workshop on haptic experience design at World Haptics 2015, the largest academic haptics conference, held that year in Chicago, IL, USA (http://haptics2015.org).

8.5.1 Method

The workshop was organized by two of the authors to initiate a conversation between researchers and industry practitioners about HaXD status and needs, and to complement our findings from the first study by connecting with a broader set of hapticians.

Participants

Over thirty people participated in the workshop brainstorm session and the panel discussion. Sixteen workshop participants responded to a questionnaire at the close