Periodic Vibrotactile Guidance

by

Idin Karuei

B.S., The University of Tehran, 2003
M.A.Sc., Concordia University, 2005

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Doctor of Philosophy

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Computer Science)

The University of British Columbia
(Vancouver)

October 2014

© Idin Karuei, 2014

Abstract

The emergence of mobile technologies, with their ever-increasing computing power, embedded sensors, and connectivity to the Internet, has created many new applications such as navigational guidance systems. Unfortunately, these devices can become problematic through inappropriate usage or overloading of the audiovisual channels. Wearable haptics has come to the rescue with the promise of offloading some of the communication from the audiovisual channels.

The main goal of our research is to develop a spatiotemporal guidance system based on the potentials and limitations of the sense of touch. Our proposed guidance method, Periodic Vibrotactile Guidance (PVG), guides movement frequency through periodic vibrations to help the user achieve a desired speed and/or finish a task in a desired time. We identify three requirements for a successful PVG system: accurate measurement of the user's movement frequency, successful delivery of vibrotactile cues, and the user's ability to follow the cues at different rates and during auditory multitasking.

In Phase 1, we study the sensitivity of different body locations to vibrotactile cues with and without visual workload and under different movement conditions, and examine the effects of expectation of location and of gender differences. We create a set of design guidelines for wearable haptics.

In Phase 2, we develop the Robust Realtime Algorithm for Cadence Estimation (RRACE), which measures momentary step frequency/interval via frequency-domain analysis of the accelerometer signals available in smartphones. Our results show that, with 95% accuracy, RRACE is more accurate than the published state-of-the-art time-based algorithm.

In Phase 3, we use the guidelines from Phase 1 and the RRACE algorithm to study PVG. First we examine walkers' susceptibility to PVG, which shows that most walkers can follow the cues with 95% accuracy. Then we examine the effect of auditory multitasking on users' performance and workload, which shows that PVG can successfully guide the walker's speed during multitasking.

Our research expands the reach of wearable haptics and guidance technologies by providing design guidelines, a robust cadence detection algorithm, and Periodic Vibrotactile Guidance – an intuitive method of communicating spatiotemporal information in a continuous manner – which can successfully guide movement speed with little to no learning required.

Preface

All work reported in this dissertation was conducted under the supervision of Dr. Karon E. MacLean (Department of Computer Science), who is my co-author on all work presented here. I was the primary contributor to all aspects of this research; however, there are several collaborators without whom this work could not have happened. In this preface, I describe the level of involvement of my collaborators, the resulting publications, and the ethics approval for conducting experiments with human participants.

This research was done in three phases, and each phase started with software and/or hardware development followed by two experiments.

Phase 1: Sensitivity to Vibrations in Mobile Contexts

This phase started out as a course project under the supervision of Dr. Karon E. MacLean and in collaboration with graduate students Zoltan Foley-Fisher, Russel MacKenzie, Sebastian Koch, and Mohamed El-Zohairy; it consisted of two experiments.
We all participated equally in conducting the first experiment. Zoltan Foley-Fisher helped with hardware development and the final statistical analysis; Russel MacKenzie helped with software development and the initial statistical analysis; Sebastian Koch helped with software development; and I led the experimental design, conducted the statistical analysis, and documented the study. I was the sole conductor of the second experiment: I modified the hardware and software that we used in the previous experiment, made changes to the design of the experiment, conducted the study, and performed the statistical analysis. Both experiments were published and presented at The ACM CHI Conference on Human Factors in Computing Systems (CHI) in 2011:

• I. Karuei, K. E. MacLean, Z. Foley-Fisher, R. MacKenzie, S. Koch, and M. El-Zohairy. Detecting vibrations across the body in mobile contexts. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (CHI '11), pages 3267–3276, 2011.

This phase is explained in Chapter 3.

Phase 2: Cadence Detection

The premise of the second phase was to develop a cadence detection algorithm, deploy it on an Android phone, test it, and use it in the next phase. Initially, I supervised Bryan Stern, an undergraduate, who implemented the algorithm on Android. He then helped me conduct a short indoor experiment on a treadmill. I conducted the statistical analysis alone. I then supervised Michelle Chuang to further improve the software. Oliver Schneider, a master's student at the time who would later use our cadence estimation algorithm in his research, helped me plan the main experiment. Oliver and Michelle both helped me conduct the experiment. Oliver also reimplemented the time-domain algorithm that we compared our algorithm against. I then analyzed the results of the experiment and compared our algorithm with the time-domain one. Oliver helped me with the writing of these results, and we published them in the journal Pervasive and Mobile Computing:

• I. Karuei, O. S. Schneider, B. Stern, M. Chuang, and K. E. MacLean. RRACE: Robust Realtime Algorithm for Cadence Estimation. Pervasive and Mobile Computing, (0):52–66, 2014. ISSN 1574-1192.

This phase is explained in Chapter 4.

In work not covered in this dissertation, Oliver Schneider supervised Mike Wu, an undergraduate, to extend our algorithm into a library called GaitLib. I provided advisory input to this paper, and am therefore included as a co-author.

Phase 3: Study of Periodic Vibrotactile Guidance

In the third and last phase of this research, I conducted two experiments in which I used the RRACE algorithm we developed in the previous phase and the Haptic Notifier, developed by Diane Tam, a master's student in our lab. I developed my own code for the Haptic Notifier and an Android application, built on GaitLib, for use in the experiments. I conducted the first experiment.

In the second stage of this work, I supervised James Bigland, a cognitive science undergraduate, who helped me with the design of the auditory tasks and the conducting of the experiment. Finally, I analyzed the results of both experiments. The first experiment was published and presented at the Haptics Symposium in 2014:

• I. Karuei and K. E. MacLean. Susceptibility to periodic vibrotactile guidance of human cadence. In Haptics Symposium (HAPTICS), 2014 IEEE, pages 141–146, 2014.
This phase is explained in Chapters 5 and 6.

Research with Human Participants and Ethics

All research with human participants was reviewed and approved by the University of British Columbia (UBC) Research Ethics Board under ethics approvals B03-0490 and B01-0470. The amendment numbers and project titles for the associated certificates of approval are listed below:

B03-0490: Sensitivity to Vibrations in Mobile Contexts, Part 1 (Chapter 3)

B01-0470:
• H01-80470-021: Sensitivity to Vibrations in Mobile Contexts, Part 2 (Chapter 3)
• H01-80470-024: Cadence Detection, Part 1 (Chapter 4) and Study of Periodic Vibrotactile Guidance, Part 1 (Chapter 5)
• H01-80470-027: Cadence Detection, Part 2 (Chapter 4)
• H01-80470-034: Study of Periodic Vibrotactile Guidance, Part 2 (Chapter 6)

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
Dedication

1 Introduction
1.1 Motivation
1.2 Approach
1.2.1 Requirements of PVG
1.2.2 Applications of PVG
1.2.3 Speculated Closed-loop Control of PVG
1.3 Research Goals
1.4 Research Approach
1.4.1 Phase 1: Sensitivity to Vibrations in Mobile Contexts
1.4.2 Phase 2: Cadence Detection
1.4.3 Phase 3: Study of Periodic Vibrotactile Guidance
1.5 Summary of Contributions
1.6 Dissertation Roadmap

2 Related Work
2.1 Guidance Systems
2.1.1 Non-haptic Guidance
2.1.2 Stationary Haptic Guidance for Object Manipulation
2.1.3 Haptic Guidance and Shared Control of Inland Vehicles
2.1.4 Haptic Guidance for Training
2.1.5 Haptic Situational Awareness Aid
2.1.6 Wearable and Handheld Haptics and Spatial Guidance
2.1.7 Non-haptic Temporal Guidance
2.1.8 Haptic Feedback in Music
2.1.9 Haptic Communication through Rhythm
2.2 Spatial, Temporal, and Spatiotemporal Guidance
2.2.1 Spatial Guidance
2.2.2 Temporal Guidance
2.2.3 Spatiotemporal Guidance
2.3 Benefits and Drawbacks of Guidance
2.3.1 Benefits of Guidance
2.3.2 Drawbacks of Guidance
2.4 The Haptic Channel
2.4.1 Advantages of Haptic Channel
2.4.2 Disadvantages and Limitations of Haptic Channel
2.5 Tactile Display
2.5.1 Technology
2.5.2 Degrees of Freedom
2.6 Summary

3 Detecting Vibrations Across the Body in Mobile Contexts
3.1 Introduction
3.1.1 Approach
3.2 Related Work
3.2.1 Sensitivity to Vibrotactile Stimuli
3.2.2 Wearable Haptic Systems
3.3 Apparatus and Setup
3.3.1 Vibrotactile Array and Calibration
3.3.2 Movement Setup and Task
3.3.3 Visual Workload Setup and Task
3.3.4 Metrics and Analysis Technique
3.4 Experiment 1: Random Site With Visual Load
3.4.1 Design
3.4.2 Procedure
3.4.3 Results
3.5 Experiment 2: Random vs Expected Site
3.5.1 Design
3.5.2 Results
3.6 Summary and Discussion
3.7 Conclusion and Future Work
3.7.1 Design Guidelines
3.7.2 Future Work
3.8 Acknowledgment

4 Cadence Measurement
4.1 Introduction
4.2 Related Work
4.2.1 What is Realtime Cadence Detection Good For?
4.2.2 Sensor Type
4.2.3 Estimating Cadence
4.2.4 Performance Assessment of Cadence Estimation Algorithms
4.3 Approach: The RRACE Algorithm
4.3.1 Overview
4.3.2 Implementation Details
4.3.3 Pseudocode
4.3.4 Android-Based Validation Platform
4.4 Experimental Validation of RRACE
4.4.1 Treadmill-based Pilot Validation
4.4.2 Primary Outdoor Walking Task and Measurement Apparatus
4.4.3 Experiment Design, Metrics, and Subjects
4.4.4 Results for Outdoor Validation of 4 Second Window RRACE
4.4.5 Analysis of The Effect of Window Size on RRACE
4.4.6 Power Consumption
4.5 Comparing RRACE with a Threshold-based Time-domain Algorithm
4.5.1 Implementation of Time-based Algorithm for Comparison
4.5.2 Tuning of the Time-based Algorithm
4.6 Discussion
4.6.1 The Nature of RRACE's Error
4.6.2 RRACE Meets Criteria for 4/6 of Tested Locations; Time-Based for 0/6
4.6.3 RRACE is Robust to Subject Differences
4.6.4 RRACE is Sensitive to Very Slow Speeds
4.6.5 RRACE Window Length of 4 Seconds is Best
4.7 Conclusion and Future Work
4.8 Acknowledgment

5 Susceptibility to Periodic Vibrotactile Guidance of Human Cadence
5.1 Introduction
5.2 Approach
5.2.1 Contributions
5.3 Related Work
5.3.1 Perceptual Overload and Safety
5.3.2 Spatial Vibrotactile Guidance
5.3.3 Periodic Guidance of Locomotion
5.3.4 Controlling Step Rate
5.4 Experiment
5.4.1 Apparatus and Context
5.4.2 Experiment Design
5.4.3 Computing Experimental Guidance Rates
5.4.4 Procedures
5.4.5 Metrics
5.4.6 Analysis Technique
5.4.7 Results
5.4.8 Discussion
5.5 Conclusions and Future Work
5.5.1 Future Work
5.6 Acknowledgment

6 Periodic Vibrotactile Guidance of Human Cadence, Performance during Auditory Multitasking
6.1 Introduction
6.2 Approach
6.2.1 Contributions
6.3 Related Work
6.3.1 Vibrotactile Guidance
6.3.2 Guidance of Human Locomotion
6.3.3 Temporal Guidance and Auditory Task
6.3.4 Performance and Workload
6.3.5 Measuring Cadence
6.4 Experiment
6.4.1 Experiment Design
6.4.2 Guidance Conditions
6.4.3 Auditory Tasks
6.4.4 Metrics
6.4.5 Apparatus and Context
6.4.6 Procedures
6.4.7 Data Preparation
6.4.8 Analysis Technique
6.5 Results
6.5.1 Presentation of Results
6.5.2 Cadence Error %
6.5.3 Cadence
6.5.4 Speed, Stride length, and Speed Ratio
6.5.5 Workload
6.6 Discussion
6.6.1 Guidance Cue
6.6.2 Effect of Auditory Task on Performance
6.6.3 The User's Response Time
6.6.4 Effect of Guidance on Workload
6.6.5 Effect of Auditory Task on Workload
6.6.6 Interpreting Subjective Workload Measures
6.7 Conclusion
6.8 Future Work
6.9 Acknowledgment

7 Conclusion
7.1 Primary Research Contributions
7.1.1 Study of Sensitivity to Vibrations in Mobile Contexts
7.1.2 Development and Evaluation of RRACE
7.1.3 Study of Periodic Vibrotactile Guidance of Human Walking
7.2 Secondary Research Contributions
7.2.1 Experimental Design and Methodology
7.2.2 Data
7.3 Reflections on Research Approach
7.3.1 Visual Workload
7.3.2 Sensory Adaptation, Learning, and Fatigue
7.3.3 Step Detection
7.3.4 Speed Measurement
7.3.5 Robust Realtime Algorithm for Cadence Estimation
7.3.6 Choosing the Range for Cadence and Speed
7.3.7 Workload Measurement
7.4 Future Directions
7.4.1 Susceptibility to Periodic Guidance in Other Movements
7.4.2 Study of PVG in Medium and Long Term
7.4.3 Effect of PVG on Attention
7.4.4 Other Use Cases for RRACE
7.4.5 PVG's Performance in Closed-loop Control Settings
7.5 Closing Remarks

Bibliography

A Supporting Materials: Detecting Vibrations Across the Body in Mobile Contexts
A.1 Ethics Documents
A.2 Questionnaires

B Supporting Materials: Cadence Measurement
B.1 Ethics Documents

C Supporting Materials: Susceptibility to Periodic Vibrotactile Guidance of Human Cadence
C.1 Ethics Documents

D Supporting Materials: Periodic Vibrotactile Guidance of Human Cadence, Performance during Auditory Multitasking
D.1 Ethics Documents
D.2 Experiment Setup
D.3 NASA Task Load Index (NASA-TLX) Screenshots
D.4 Descriptive Statistics

List of Tables

Table 3.1 Body sites used in Experiments 1 and 2
Table 3.2 Generalized Linear Mixed Model (GLMM) of Detection Rate (DR) in Experiment 1
Table 3.3 Results of Kruskal-Wallis tests on Reaction Time (RT), Experiment 1
Table 3.4 GLMM of DR in Experiment 2
Table 3.5 Results of Kruskal-Wallis tests on RT, Experiment 2
Table 4.1 Walking Speeds During the Experiment
Table 4.2 Experiment design
Table 4.3 Error Ratio (ER) differences by Location on Person (LOP) for four-second window RRACE through an unpaired Z-test
Table 4.4 RRACE ER differences by speed condition for 4 LOPs with a four-second window (unpaired Z-test)
Table 4.5 ER differences by window sizes of RRACE, with walking speed and LOP lumped
Table 4.6 RRACE ER differences by LOP for all window sizes and walking speeds (unpaired Z-test)
Table 4.7 Power consumption
Table 4.8 Unpaired Z-test comparison of error ratios of the best and the worst versions of the frequency-based algorithm and the best of each category of time-based algorithm
Table 5.1 Summary statistics of cadence error % by guidance condition for cue-on and cue-off
Table 6.1 Auditory task conditions used in evaluation
Table 6.2 Elements of Android wrist display
Table 6.3 Vibrations used in study
Table 6.4 Pairwise comparisons of cadence error % of auditory task levels per each guidance condition
Table 6.5 Pairwise comparisons of cadence of auditory task levels per each guidance condition
Table 7.1 Method of choosing speed or cadence rates in our experiments
Table D.1 Tempos used for techno music conditions
Table D.2 Descriptive statistics of all metrics
Table D.3 Descriptive statistics of performance metrics by guidance condition
Table D.4 Descriptive statistics of performance metrics by auditory task
Table D.5 Descriptive statistics of workload scores by guidance condition
Table D.6 Descriptive statistics of workload scores by auditory task
Table D.7 Mean cadence error % per each level of auditory task and guidance condition
Table D.8 Mean cadence (Hz) per each guidance and auditory task condition
Table D.9 Statistical significance of all performance metrics
Table D.10 Guidance condition's significant effect on cadence and pairwise comparisons of guidance conditions per each level of auditory task
Table D.11 Statistical significance of all NASA-TLX scores and total workload

List of Figures

Figure 1.1 Main parts of Periodic Vibrotactile Guidance
Figure 1.2 PVG in a control setting
Figure 1.3 A user's location and speed response to the PVG system in three control settings
Figure 1.4 The three phases of our research
Figure 3.1 Setup of Experiment 1 during "walking with visual workload" condition
Figure 3.2 VPM2 eccentric-mass tactor
Figure 3.3 Body sites used in Experiments 1 and 2
Figure 3.4 Mean Detection Rate and Reaction Time per body location in Experiment 1
Figure 3.5 Mean Detection Rate and Reaction Time for different intensities in Experiment 1
Figure 3.6 Mean Detection Rate per body location and condition in Experiment 1
Figure 3.7 Mean Reaction Time of high intensity stimuli per body location and condition in Experiment 1
Figure 3.8 DR per body location on the body map
Figure 3.9 Mean Detection Rate and Reaction Time per body location in Experiment 2
Figure 3.10 Mean Detection Rate per body location and condition in Experiment 2
Figure 3.11 Mean Reaction Time of high intensity stimuli per body location and condition in Experiment 2
Figure 4.1 RRACE Pseudocode
Figure 4.2 Experiment walkway, start and end points
Figure 4.3 ER as a function of Speed Condition for 4-Second Window RRACE
Figure 4.4 ER as a function of Window Size per each LOP for all Speed Conditions lumped
Figure 4.5 ER as a function of Window Size per each Speed Condition for all LOPs lumped
Figure 4.6 Example of the MPTrain time-based step detection algorithm
Figure 4.7 ER compared for all algorithm variants and ordered by median
Figure 5.1 PVG regulates a walker's step frequency with subtle cues
Figure 5.2 The Haptic Notifier and the Xbee USB radio
Figure 5.3 Cadence by guidance rate when cue is on and off
Figure 5.4 Cadence ratio by guidance rate, when cue is on and off
Figure 5.5 Cadence error % by guidance rate, when cue is on and off
Figure 5.6 Scatter plot of P4's cadence during trials 6-10 by guidance rate
Figure 5.7 Scatter plot of P4's cadence error % during trials 6-10 by guidance rate
Figure 6.1 Experiment setup
Figure 6.2 Data flow, throughout the experiment and during data processing
Figure 6.3 The Haptic Notifier and the Xbee USB radio
Figure 6.4 Flowchart of the experiment
Figure 6.5 All statistical effects visualized
Figure 6.6 Cadence error % per guidance condition and auditory task
Figure 6.7 Cadence per guidance condition and auditory task
Figure 6.8 Speed, speed ratio, and stride length per guidance condition and auditory task
Figure 6.9 NASA-TLX results colour-coded by guidance condition and auditory task
Figure D.1 NASA-TLX Screenshots - Part 1
Figure D.2 NASA-TLX Screenshots - Part 2 - 1/4
Figure D.3 NASA-TLX Screenshots - Part 2 - 2/4
Figure D.4 NASA-TLX Screenshots - Part 2 - 3/4
Figure D.5 NASA-TLX Screenshots - Part 2 - 4/4
Figure D.6 NASA-TLX Screenshots - Results

Glossary

ANOVA: Analysis of Variance, a set of statistical techniques to identify sources of variability between groups
CHI: The ACM CHI Conference on Human Factors in Computing Systems
GPS: Global Positioning System
DGPS: Differential Global Positioning System, an enhancement to the Global Positioning System (GPS)
PGS: Personal Guidance System
PFG: Potential Field Guidance
LAG: Look-ahead Guidance
DOF: Degree of Freedom
MIDI: Musical Instrument Digital Interface
MDS: Multidimensional Scaling
CPU: Central Processing Unit
PI: Proportional-Integral
PID: Proportional-Integral-Derivative
DTG: Dynamic Tour Guide
MRI: Magnetic Resonance Imaging
DR: Detection Rate
RT: Reaction Time
GLMM: Generalized Linear Mixed Model
SI: The primary somatosensory cortex
TTC: Time to Collision
PW: Pulse-width
RRACE: Robust Realtime Algorithm for Cadence Estimation
FASPER: Fast Calculation of the Lomb-Scargle Periodogram
ER: Error Ratio
FFT: Fast Fourier Transform
LOP: Location on Person
EWMA: Exponentially-Weighted Moving Average
FSR: Force Sensing Resistor
IMU: Inertial Measuring Unit
JNI: Java Native Interface
SPM: Steps per Minute
SPS: Steps per Second
PVG: Periodic Vibrotactile Guidance
PVC: Periodic Vibrotactile Cue
VT: Vibrotactile
LOESS: Locally Weighted Regression, a way of estimating a regression surface through a multivariate smoothing procedure
NASA-TLX: NASA Task Load Index, an instrument for gauging the subjective mental workload experienced by a human in performing a task
SWAT: Subjective Workload Assessment Technique
CLC: Closed-loop Control
GLM: Generalized Linear Model
HANS: Haptic Notification System
MRT: Multiple Resource Theory

Acknowledgments

First and foremost, I would like to express my sincere gratitude to my supervisor, Dr. Karon MacLean, who supported and guided me from the moment I applied to the Ph.D. program through the end of this work. Whenever I felt discouraged by the results or an unsolvable dilemma, her advice helped me think out of the box to find a solution, and her encouragement motivated me to work harder.

Besides my supervisor, I would like to thank the members of my supervisory committee, Drs. Todd Handy, Tania Lam, and Michiel van de Panne, for their invaluable feedback throughout this work.
I am also grateful to my examining committee members, Drs. Nadine Sarter, Machiel Van der Loos, and Darren Warburton, for their very interesting questions and suggestions, which further improved this dissertation.

I would like to give special thanks to Steve Yohanan, who always gave me great advice like a big brother, and Oliver Schneider, a wonderful collaborator and a great person to talk to about anything, Computer Science related or otherwise. I must also thank my other collaborators: James Bigland, Bryan Stern, Mike Wu, Michelle Chuang, Zoltan Foley-Fisher, Russel MacKenzie, Sebastian Koch, and Mohamed El-Zohairy, without whom this research would not have happened. I am also thankful to my colleagues in the Sensory Perception and Interaction Research Group (SPIN) for their technical and moral support.

Furthermore, I thank my parents, Hamid Karuei and Haydeh Pishdad, my teachers ever since I started exploring the world, who always praised curiosity and skepticism. Finally, words are powerless to express my gratitude to my wife, Mahtab Ghavami, who encouraged me at every step and supported me in many different ways during this journey.

Dedication

To my parents, Haydeh and Hamid,
and to my wife, Mahtab.

Chapter 1: Introduction

If you want to find the secrets of the universe, think in terms of energy, frequency, and vibration.
– Nikola Tesla

It is a sunny afternoon in a beautiful city, where you are attending a conference. You have just had lunch with an old colleague whom you had not seen for many years. You had planned to see him for an hour during the lunch break, but your conversation got very interesting and continued longer than expected. Despite trying very hard not to be seen checking your watch, you were caught, and it felt uncomfortable. Eventually the conversation ended, and now you are walking back to the conference hotel thinking that you have missed more than half of the first session of the afternoon. It is the second session that you should definitely attend, because it is closely related to your field. You are nervous; you do not have much time, so you check your watch and take the conference schedule out of your pocket and look at it, that long list of talks and their lengths in awfully small fonts, while walking very fast; the lunch break was 80 minutes, and you had not accounted for the coffee break between sessions. You actually have about 55 minutes until the next session. You feel relieved and calm, for about two seconds, until you hear a loud car horn and a man shouting at you in a language you do not understand, either because it is not your mother tongue or because it is too fast and unexpected. You jump to the sidewalk with fear and guilt. When you look up you see the conference hotel; you really did not think you could be there so fast. You enter the hotel and walk towards the conference rooms. Now you take the conference schedule out of your pocket to check the room number; this time you stop walking. On your way to room B2, where you should be for part of the first session and all of the next one, you grab a glass of water instead of the black coffee you usually drink. You open the door and feel that all eyes are on you, entering the room in the middle of a talk. There is no empty seat, so you must remain standing for the next 49 minutes. The slides seem very interesting and the speaker is great, but you have no clue what the talk is about because you missed one third of it. Your eyes are on the screen but your mind is somewhere else. You are looking but not seeing, and hearing but not listening, just like when you were walking a few minutes ago.
You are wondering whether you could have talked longer with your old colleague. You have already completely forgotten that you could have been run over by a car.

1.1 Motivation

The world we perceive has four dimensions, three spatial and one temporal. We are constrained by these dimensions, but we try very hard to free ourselves from them. We created telescopes and maps to understand where we are on earth and in the universe, and we built ships to cross oceans, cars to travel on land, and airplanes to conquer the skies; but moving very fast was not sufficient, so we invented the telephone, video conferencing, and teleoperated robots to perform tasks in faraway locations, and to be, almost, in two or more places at the same time. With all those achievements, location has become less relevant in our lives and time has become the more important constraint. Albert Einstein once said that "The only reason for time is so that everything doesn't happen at once." In a sense, time is one more "degree of freedom" in our lives, but we do not have much control over it. One strategy is to multitask: we talk over the phone while driving, send text messages while walking, and listen to radio shows while writing an essay; sometimes we are just lucky not to lose our lives, or others', for the sake of saving a few seconds. Another strategy is to fill all spaces between tasks with other tasks: we have meetings at lunch breaks, send emails between talks, and take the garbage out during TV commercials. This strategy is prone to failure too, because it is very sensitive to uncertainties, although its direct consequences, such as being late for the next task, may not be as terrible as those of the previous strategy. In reality, whenever possible we use both strategies at the same time. To reduce the likelihood of failure we plan ahead of time, but that is not sufficient and there is not much more we can do. In most cases, the events that jeopardize the timing and performance of our tasks happen at the micro level: when we fail to notice the passage of time during a conversation, or when we see someone by chance on our way to the conference, for example. Most of these "micro" events cannot be accounted for in a plan. Nevertheless, planning at the micro level can be very time and energy consuming.

Powerful Computers in our Pockets

Technology has come to our help, to "save time" whenever possible. We use Global Positioning System (GPS) devices that constantly receive traffic updates, smartphones (or smart watches nowadays) that update their time zone based on location, and application software such as to-do lists, calendars, and alarms that are improved every day to accommodate us better and save us more time and energy. Most of these devices do magical things that ordinary users take for granted or fail to notice: they make mostly correct assumptions about the location of the user and the time of day, and only inform him/her of relevant events; they account for the different daylight saving times of the countries that use them, and update them whenever countries decide to start or stop using them. However, these technological advances often fail to help us perform better, or even put us at more risk. Pedestrians use their smartphones while walking and even when crossing streets, not just to talk to somebody, but to read and send text messages or use very engaging applications on their phones, with their heads down and their ears not hearing the sound of approaching cars, which are getting faster and quieter.
Drivers do the same thing with one hand on the steering wheel and a foot on the gas pedal. One way to mitigate the negative effects is to reduce usage, by passing laws for example, which is like erasing a question instead of answering it. Another way to reduce the negative effects, and possibly increase positive effects, is to make changes to the technology to address the user's needs and constraints.

The tools that assist people with their daily planning provide them with temporal and/or spatial information (e.g., watches, calendars, maps) or guide them through their tasks (e.g., GPS). However, these tools fall short of ideal assistance for the following reasons:

1. They occupy visual and auditory senses that should be dedicated to the primary task (e.g., watching ahead while walking) or a parallel secondary task (e.g., talking to another person).

2. They use vision and audition in situations where vision and audition are impaired (e.g., fire fighters' vision impaired by smoke) or are not preferred (e.g., an alarm clock in a library).

3. They do not take the context and environment of the user into account (e.g., a driver who may not hear GPS directions because of loud music or too much noise in the car).

4. They provide us with numerical values (e.g., meeting in 30 minutes, 23 kilometers to destination) or abstract messages, some of which may be completely arbitrary from the user's point of view (e.g., a beep sound representing a calendar alarm), all of which require cognitive processing.

Partial Fixes for the Issue

To address the first and second issues, many have proposed substituting vision and audition with tactile and haptic perception [15, 37, 70, 158]. Tactile messages (or Haptic Icons) [14, 155] and haptic/tactile guidance for pedestrians [11, 35] are examples of substituting (or augmenting) audiovisual channels with the haptic channel. However, the most globally adopted example of this is the vibration alert on mobile phones, which replaces auditory ringtones for two reasons: to be felt in noisy (e.g., a concert) and quiet (e.g., a library) environments. Due to their success in improving mobile phone interfaces, vibration alerts are the most widely used vibrotactile interactions; however, as will be discussed in Chapter 3, these vibrotactile cues do not take into account the fact that the user might be in motion and less sensitive to vibrations, especially on his/her thighs (i.e., the user's pocket), where the device usually is.

Many interfaces present information as numerical values or abstract messages, since that is the most straightforward solution in most situations (e.g., a car's speedometer), but it does not translate very well to the user's perception of time and space. For example, most GPS devices notify the driver of the distance to a "highway exit" or a "left/right turn" in metric or imperial units, and the driver must try to estimate the distance and make a judgment on where to turn or change lanes while the car is moving very fast. In contrast, humans use a much easier to understand language when they guide each other in space: "turn right after the gas station" or "see the red car in the right lane? follow that". The same issue exists with the presentation of temporal information, though it may not be quite as evident; the fact that we round periods of time up to hours and half hours, while in reality we care about smaller fractions such as five-minute periods (e.g., we take 10-minute breaks, or allow 5-minute question periods), shows that we care about minutes but usually count hours to make life simpler.
It may be harmless to take a one-hour exercise instead of 62 minutes, but you may miss a bus if you try to be at the bus stop at 11:25 when the bus actually arrives at 11:23.

What is Really Needed

We believe guidance systems (and any other interface, for that matter) can be very efficient and intuitive if we (a) use the right medium, (b) avoid unnecessary abstractions, and (c) use the right mapping for information presentation.

Using the right medium: The right medium is the one that is not blocked, prohibited, or overly occupied in the context in which it is employed; we should note that this applies to all types of interfaces. Imagining a fire fighter looking at the graphical display of a handheld device in thick smoke is as absurd as a construction worker on a jackhammer wearing a vibrotactile belt.

Avoiding unnecessary abstractions: It is well understood that sometimes abstractions (e.g., converting time to numbers, using different alarms for different purposes) are inevitable. However, there are times when we abstract information out of habit or tradition. For example, when a presenter looks at the timer on the podium, all he wants to know is whether he is on time and should continue his talk in the same way, or whether he is behind schedule and should do something about it. If instead of seeing a four-digit number, which could distract him for a moment several times during the talk, he sees a smiley face on the timer, he can just continue his presentation and will only worry when the face changes.

Using the right mapping for information presentation: When we present information, we map it from its actual form, and we use units, numbers, or even colour and sound to communicate it to the user. This mapping can always be done in infinitely many ways, but most of them are hard for the user to understand. Understanding the user's abilities and needs can help us make the presentation of information more beneficial to the user. In the above example, a smiley face is a good indicator, but it ignores an important aspect of time: continuity. The presenter may know whether he is on track, but he cannot know to what extent. If the smiley face moved to the right (when the presenter was ahead) or to the left (when the presenter was behind), the presenter could easily understand the extent to which he is early or late and whether his current pace is compensating for that.

In the next section, we present our idea for a new solution called Periodic Vibrotactile Guidance.

1.2 Approach

Periodicity: Our solution to the above problems comes from a simple idea: instead of providing users with abstract high-level information (e.g., remaining time) that requires cognitive work to be translated into task-related parameters (e.g., how fast some aspect of a task should be done), we can provide them with lower-level parameters that are directly related to accomplishing the task. For example, when a person wants to catch a bus, he/she needs to check the bus schedule, subtract the current time from the bus arrival time to know how much time he/she has, and then decide when to walk towards the bus station. There are applications, mobile or otherwise, that go one step further and tell the user, based on the average walking speed of people, approximately when he/she should start to walk (e.g., Google Maps [55]). These applications partially solve the problem by doing the math for the user, but they are still bound in abstractions. What if we could communicate this information to the user in a way that he/she would know, based on his/her own typical speed, when to start walking?
We believe that instead of communicating when, we can communicate "how fast" the user should be walking at any point in time, and the user gets to choose. Obviously, the earlier the user starts to walk, the slower he/she needs to walk. This idea seems promising until you realize that there is no consistent and accurate language for communicating absolute speed, and this new solution relies heavily on the accuracy and consistency of communicating speed; otherwise users will not know what the right speed is and when the right time is. However, walking speed is tied to something else that is more closely related to our bipedal motion control: cadence (or stride frequency). Walking speed is the product of cadence and stride length, as shown in Equation 1.1, where v is walking speed, f is stride frequency (cadence), and l is the stride length.

v = f × l    (1.1)

These parameters are automatically adjusted for energy efficiency in unconstrained normal walking [144]; however, each one of them can be constrained, and this affects the other ones as well [67, 89]. As will be explained later in Section 6.3.2, stride length and frequency also vary as a function of speed.

Instead of trying to guide a user's speed, we can guide his/her cadence, by providing the desired cadence and requiring the user to synchronize his/her cadence with it; this is based on the assumption that if we constrain cadence, stride length will stay constant or change in the same direction as cadence. As will be discussed later in Chapter 6, by controlling the guidance cadence and adjusting it we can achieve high-level goals such as the user's desired speed or desired time of arrival at a destination, as shown in Equation 1.2, where t is the time until arrival, d is distance, and v is speed.

t = d / v    (1.2)
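To make this arithmetic concrete, here is a minimal sketch, assuming a fixed stride length, of how Equations 1.1 and 1.2 combine to turn a deadline and a remaining distance into a guidance cadence. The function names and example numbers are illustrative only; the planner described in Section 1.2.1 replaces the fixed stride length with a per-user speed-cadence model.

```python
# Minimal sketch of the planner math behind Equations 1.1 and 1.2.
# A fixed stride length stands in for the per-user speed-cadence model.

def desired_speed(distance_m: float, time_left_s: float) -> float:
    """Equation 1.2 rearranged: v = d / t."""
    return distance_m / time_left_s

def guidance_cadence(speed_mps: float, stride_length_m: float) -> float:
    """Equation 1.1 rearranged: f = v / l, in strides per second."""
    return speed_mps / stride_length_m

# Example: 800 m to cover in 10 minutes, assuming a 1.4 m stride.
v = desired_speed(800.0, 600.0)   # ~1.33 m/s
f = guidance_cadence(v, 1.4)      # ~0.95 Hz -> tempo of the vibrotactile cue
print(f"desired speed {v:.2f} m/s, guidance cadence {f:.2f} Hz")
```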
Vibrotactile Cueing: At first glance, an auditory cue with the desired tempo seems to be the perfect solution. Walking to the tempo of a metronome is very similar to dancing to the beat of music. In fact, it has already been shown that users can synchronize their cadence with the tempo of a metronome [29]. However, as explained in Section 1.1, the medium should fit the context. Auditory cues can easily be suppressed by noise in the environment. Moreover, the auditory channel might be occupied by other activities, such as participating in a conversation or listening to a podcast or music during daily commutes and physical activities; this makes using auditory cues for guidance nearly impossible. In the face of these challenges, and considering that vibrotactile cues offer the same means of communicating tempo without the disadvantages of the auditory channel, we propose to use periodic vibrotactile signals to directly guide users through synchronization of cyclical movements. Furthermore, we emphasize the inclusion of temporal parameters in addition to spatial parameters for guidance. In this method, a wearable/handheld tactile display periodically vibrates, taps, or squeezes the user's skin to indicate pedaling, paddling, or stride frequency; it gives the user a new sensation of velocity or urgency by mapping spatiotemporal constraints into parameters of the haptic rhythm, such as tempo. This sensation is similar to the sound of a car engine that indicates its revolutions per minute, or the beat of music that helps a dancer synchronize with it. Because this method of speed control uses a very simple vibrotactile cue, it is fairly easy to learn and does not rely on memory, which we believe imposes very little workload.

1.2.1 Requirements of Periodic Vibrotactile Guidance (PVG)

A Periodic Vibrotactile Guidance system has four main parts: a speed and location measurement unit (i.e., GPS), a cadence detection unit, a vibrotactile display, and the planner. These can be seen in Figure 1.1. Throughout this thesis, we assume that a user will be wearing a standard smartphone on his/her body (with few or minimal restrictions on how it is worn), with basic accelerometer and GPS functionality at a quality that was commonly available in 2012. In some examples, continuous location (GPS) data are important to an algorithmic variation, and in other cases not. The power-draw implications of this continuous sensing and computation in a wearable device were significant at the time of this writing, but are expected to improve substantially in the near future due to advances in computational efficiency and more fine-grained control over processor function.

Speed and Location Measurement: At every point in time, the user's location and speed are measured by a GPS and sent to the planner.

Cadence Detection: The user's cadence (stride frequency) is also measured in realtime and sent to the planner. In Chapter 4 we will present the Robust Realtime Algorithm for Cadence Estimation (RRACE), an algorithm that uses the input from accelerometers (which are available in most handheld devices) to measure the user's cadence.

Figure 1.1: PVG consists of a GPS for speed and location estimation, a vibrotactile display, a cadence detection unit, and a planner; the planner receives the task parameters, such as time and destination, and the user's location, speed, and cadence, and determines the tempo (and/or intensity) of the vibrotactile cues.

Planner: The planner consists of:

• A speed planner, which computes the user's desired speed based on time and destination.

• A speed-cadence model, which is created (and updated as necessary) based on the user's cadence and speed measurements. This model can estimate the cadence associated with a desired speed.

Vibrotactile Display: This consists of one or several eccentric-mass tactors, placed in a wearable device in such a way that vibrations from the tactors are easily felt on the skin; this means that the tactors should touch the skin directly or through a sufficiently thin layer of material. In Chapter 3 we study different options for the placement of vibrotactile displays.
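As a rough illustration of the frequency-domain idea behind the cadence detection unit, the sketch below picks the strongest spectral peak in a window of accelerometer magnitudes. It is a toy under assumed parameters (sampling rate, window of a few seconds, a made-up walking band), not the RRACE algorithm of Chapter 4, whose windowing and robustness machinery are considerably more involved.

```python
import numpy as np

def cadence_from_accel(accel_xyz: np.ndarray, fs: float) -> float:
    """Toy frequency-domain cadence estimate: the dominant spectral peak of
    the accelerometer magnitude within an assumed walking band.

    accel_xyz: a window of samples, shape (n, 3); fs: sampling rate in Hz.
    Assumes the window spans several seconds of walking.
    """
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    magnitude -= magnitude.mean()              # remove the gravity/DC component
    spectrum = np.abs(np.fft.rfft(magnitude))
    freqs = np.fft.rfftfreq(magnitude.size, d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 3.0)     # plausible step-frequency range
    return float(freqs[band][np.argmax(spectrum[band])])
```

A real implementation must also decide when no walking is present at all; Chapter 4 describes how RRACE handles this and why a 4-second window worked best.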
1.2.2 Applications of PVG

PVG has many temporal and spatiotemporal guidance applications in people's daily lives. Temporal guidance systems are suitable for assisting users with tasks that are independent of their location. Spatiotemporal guidance is a generalization of temporal guidance for cases where the location of the user also matters to the task.

Temporal Guidance

PVG can help users manage their time, control their body movement frequency and speed, and synchronize themselves with a reference. The dominant element of the temporal guidance we are proposing here is its periodic, tightly resolved nature. We should note that using start/stop alerts for an entire event is also temporal guidance (e.g., "start your run now", "stop it now", "the event is in 5 minutes and this is the only alert"), but it is not the focus of this dissertation. There are many potential applications for temporal guidance through PVG: handheld calendars, sport exercises, group synchronization, and the performing arts. The following scenarios explain some temporal applications of PVG.

Vibrotactile Timed Alerts: Instead of checking the calendar on the graphical display of a mobile phone, a user can keep it in his/her pocket until it starts vibrating slowly with a simple but identifiable rhythm. The user looks at the screen, remembers the event, and puts the mobile phone back into his/her pocket. Based on the type and priority of the task, the mobile phone starts vibrating with the same rhythm, and as it gets closer to the time of the event, the rhythm becomes faster and faster to give a feeling of time relative to the event (one possible mapping is sketched after these scenarios). This also conveys a natural feeling of urgency as the time of the event approaches.

Wearable Guidance for Exercising: Athletes who are preparing for future events, or who simply exercise to improve their abilities, try to improve their speed, stamina, or strength gradually over long periods of time. For them, it is very helpful to have a reference tempo for running, paddling, or cycling. It can be used to keep a constant tempo, or to keep a record of past tempos so that the tempo can be increased gradually over the training regimen (e.g., days or weeks of training). The guidance system only needs to display the required tempo through vibrations. It can also measure deviation from the desired frequency.

Wearable/Mobile Guidance for Collaboration: Most live performances involve many agents collaborating on tight schedules. A coordinator informs everybody about the timing of different actions or changes in the schedule. PVG can be used in this scenario to make collaboration easier. The coordinator may update the times of the events and use a central device to communicate those timings to wearable PVG devices worn by his/her agents. As they approach the time of an event, the corresponding agent feels a significant increase in the heartbeat of the wearable device and gets ready for the task he/she is responsible for.

Vibrotactile Guidance in Synchronized Sports: Moving at the same frequency and with the same phase is the most important aspect of synchronized sports. The tempo and phase of movement can be communicated by a PVG device, which does not rely on vision or audition; it therefore allows athletes to use their vision for the primary task and does not get masked by noise. A team of rowers or paddlers can be synchronized by the tapping of the guidance system on their shoulders or wrists. The frequency of the taps can be constant, or it can be controlled by the leader of the team, their coach, or an artificially intelligent system that optimizes speed based on information it collects about the athletes by monitoring signals from biosensors attached to their bodies.

Vibrotactile Guidance in Performing Arts: The speed and rhythm of a performing artist is a very important factor that needs to be precise. The artist has to memorize the speed of the performance and its ups and downs throughout. Sometimes the rhythm of the music is a reference (if there is rhythmic music), but even musicians need a reference for the speed and/or a reminder for different parts of the performance (e.g., a conductor). A PVG system can help musicians and other performing artists by displaying a gentle and precise haptic rhythm that is hidden from the audience and does not get masked by other sounds.
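One hypothetical way to realize the "rhythm becomes faster as the event approaches" behaviour of the timed-alert scenario above is to interpolate the interval between pulses over a countdown horizon. All thresholds below are invented for illustration; the thesis does not prescribe this particular mapping.

```python
def pulse_interval_s(time_left_s: float,
                     slowest_s: float = 10.0,   # interval far from the event
                     fastest_s: float = 0.5,    # interval just before it
                     horizon_s: float = 600.0) -> float:
    """Map remaining time to the gap between vibration pulses: sparse far
    from the event, dense near it (linear interpolation; values invented)."""
    fraction = max(0.0, min(1.0, time_left_s / horizon_s))
    return fastest_s + fraction * (slowest_s - fastest_s)
```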
Spatiotemporal Guidance

Spatiotemporal guidance can assist users with tasks that have both temporal and spatial requirements. In other words, it can help users make the right decisions about the time and location of tasks. Pedestrians, drivers, workers, and athletes can all benefit from spatiotemporal haptic guidance.

Wearable Guidance for Pedestrians: A pedestrian needs to know when he/she should start walking, in what direction, at what speed, and for how long, in order to reach his/her destination at a certain time. This destination can be a bus stop, a classroom, a workplace, or a meeting. People usually depend on their own estimations, which may cause them to be late or too early – e.g., if the bus, whose current location your device might know, is delayed. A PVG system can, in fact, calculate the precise speed based on the time of arrival, convert it to cadence, and communicate it to the user continuously until the user reaches his/her destination. Because the calculations can happen in realtime, some unexpected events, such as stopping to talk to someone or to get coffee, can be treated as disturbances and handled automatically, as long as they are relatively short.

Vibrotactile Spatiotemporal Navigation: A GPS device may visually or verbally communicate the distance or time to a turn to a pedestrian, cyclist, or driver, which requires the user to make estimations and depends on auditory and/or visual attention. Replacing this with a haptic rhythm that gets faster as the user approaches a turning point gives the user a feeling of closeness to a point in time and/or space in a more natural way that needs no verbal explanation.

Vibrotactile Speed Control in Sports: An athlete needs a strategy for his movement speed, which may need to increase or decrease at different points. A spatiotemporal guidance system can assist in the training of runners, cyclists, and rowers. Coaches can define a reference, fine-grained movement frequency (e.g., gait or pedaling frequency) and use the guidance system to communicate it to the athlete during movement. PVG can also be used as a sensory augmentation method for athletes in competitive sports; it can notify athletes of the relative position or speed of the closest competitor, the distance behind or ahead of the winning pace, or the speed required to break a record.

1.2.3 Speculated Closed-loop Control of PVG

A PVG system works on a simple principle: it collects information about a task and the user, and then creates appropriate Periodic Vibrotactile Cues to guide the user. This system can be built with up to two feedback loops. Although in this thesis we only implement open-loop control, it is useful to explore how a full implementation could play out. To understand this architecture better, and particularly to distinguish the two loops, we start with an open-loop system and build the closed-loop structure on top of it in two steps.

Open-loop Control

In its simplest form, PVG can be a completely open-loop system, as shown in Figure 1.2 (top). In this system, the guidance signal relies only on the input about the task and environment and is independent of the user's performance and state. In the case of cadence guidance, the system can compute the desired speed of the user based on the distance to the destination and the desired time of arrival.
If the user's performance is ideal (e.g., the user walks exactly at the desired cadence) and the system's estimation of the desired cadence is correct (e.g., the cadence estimated for the desired speed for that particular user is correct), the user will be at his/her destination at the desired time. However, we can be sure that there is always some error in the estimation of the desired cadence. On the other hand, the user may not walk exactly at the displayed cadence. As a result, error accumulates over time and, more likely than not, grows in amplitude. In Figure 1.3, the red lines show the speed and location of a user who is given a constant cadence cue. Because the user walks slower than desired, he/she arrives late at the destination.

Single Closed-loop Control

The PVG system can easily reduce the estimation error and the user's divergence from the guidance cue by constantly updating the guidance cue based on the user's state (i.e., time and location), as shown in Figure 1.2 (middle). If the user's speed is exactly as expected, the guidance cue stays the same (even if there is substantial divergence between the cadence cue and the actual step rate). However, if the user's speed is slower (or faster) than desired, the speed setpoint [1] will adjust in response and the guidance cue will change accordingly to compensate for the lateness (or earliness) of the user.

Figure 1.2: PVG in a control setting. Open-loop control (top): PVG consists of a Speed Planner and Speed-Cadence Model, which are shown as pink boxes; the cadence and speed measurement units (i.e., measured by a GPS) and the vibrotactile display are not explicitly presented as boxes but as red, blue, and purple (dashed-line) arrows from the user to the model. The Speed Planner estimates the desired speed based on time and distance to destination, and the Speed-Cadence Model – which is built based on the user's speed and cadence (and updated as needed) – estimates the tempo of the vibrotactile cue (i.e., desired cadence) according to the desired speed. Single closed-loop control (middle): the speed estimation (desired speed) is updated constantly based on the user's location (e.g., measured by the GPS), shown as a green dash-dotted arrow. Double closed-loop control (bottom): the input to the Speed-Cadence Model is adjusted based on the user's current speed (e.g., measured by the GPS) to minimize the difference between the user's speed and the desired speed.

One of the characteristics of this system is that if the user always walks slower (or always walks faster) than the setpoint, the setpoint constantly grows (contracts). To demonstrate this, let us assume that T is the desired arrival time, X the destination location, and x(t) the user's location at time t (where t < T). The controller's speed setpoint and the user's speed can be defined by Equations 1.3 and 1.4 respectively, where V_c(t) is the controller's speed setpoint and V_u(t) is the user's speed.

\[ V_c(t) = \frac{X - x(t)}{T - t} \tag{1.3} \]

\[ V_u(t) = x'(t) \tag{1.4} \]

where x'(t) (prime) indicates the derivative with respect to time t.

[1] Setpoint is the desired output that an automatic control system aims to reach.
Using the quotient rule [2], we can calculate V_c'(t) as shown in Equation 1.5.

\[ V_c'(t) = \frac{-x'(t)\,(T - t) + X - x(t)}{(T - t)^2} \tag{1.5} \]

Dividing both terms of the numerator of Equation 1.5 by (T − t) and substituting Equations 1.3 and 1.4, we can conclude that:

\[ (1.3), (1.4), (1.5) \;\Rightarrow\; V_c'(t) = \frac{V_c(t) - V_u(t)}{T - t} \tag{1.6} \]

If the user always walks slower (or faster) than the controller's speed setpoint, V_c'(t) will always be positive (negative) before reaching the destination (i.e., for t < T). This can cause a problem if the user always walks slower than the speed setpoint, because the setpoint will continue to increase until the user reaches his/her maximum possible speed (i.e., cannot walk faster), at which point arriving on time becomes impossible because the user's speed is not fast enough. This situation is demonstrated in Figure 1.3.

Double Closed-loop Control

User error and estimation error, including the estimation of the desired cadence based on the desired speed, can easily be minimized with an additional feedback channel: the user's estimated walking speed (e.g., via GPS) is fed back to the system, as shown in Figure 1.2 (bottom). To make sure error does not accumulate, we can use a PI controller [3] that tries to minimize the current error as well as the summation of error over time. This internal loop is responsible for constantly adjusting the guidance cue until the user walks at the desired speed, even though the desired speed might be changing over time according to the remaining time and distance to destination. The blue line in Figure 1.3 shows how this setting manages to bring the user up to the desired speed before it is too late. It is worth noting that with the addition of the new feedback loop, the "short-term response" of the system actually becomes slower (i.e., the gradual increase of speed under the double closed-loop system in contrast with the other two in Figure 1.3), but the long-term response is superior to the previous settings. It is possible to use a PID controller, or to combine this setting with one of the previous ones, in order to get a fast short-term response and an error-free long-term response, but that is beyond the scope of this discussion.

In this thesis, we have begun to explore the possibility of using Periodic Vibrotactile Cues for the fine-grained control of cadence and speed (i.e., guiding a walker's cadence and speed with precision). The above control systems are just examples of a very large design space of possible cadence controllers, none yet tested. Our focus in this thesis is on the user's ability to follow periodic cues and the immediate challenges and requirements of a PVG system, such as sensitivity in mobile contexts, cadence estimation, susceptibility to periodic cues, workload, and auditory multitasking.

[2] If f(x) = g(x)/h(x), the derivative of f(x) is f'(x) = (g'(x)h(x) − g(x)h'(x)) / h(x)^2.

[3] A Proportional-Integral-Derivative (PID) controller calculates the error of a system (the difference between a measured variable and a desired setpoint) and tries to minimize it by adjusting the input to the system. The proportional, integral, and derivative terms are responsible for reducing the current error, the accumulation of past errors, and the rate at which error increases (i.e., predicted future error), respectively. In many applications it is common to use just a PI controller rather than a PID controller, because derivative action is very sensitive to noise and can be problematic.
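The three settings can be compared with a small simulation of the scenario plotted in Figure 1.3. The sketch below is illustrative only: the PI gains, the user model (walking at 80% of the cued speed, capped at 0.3 m/s), and all names are our assumptions, not values or code from the thesis.

```python
X, T, dt = 200.0, 1000.0, 1.0      # destination (m), deadline (s), time step (s)
V_OPEN = X / T                      # open-loop setpoint: 0.2 m/s
V_MAX = 0.3                         # assumed maximum walking speed of the user

def simulate(mode, kp=0.5, ki=0.01):
    x, integ, v_user = 0.0, 0.0, 0.0
    t = 0.0
    while t < T:
        # Open loop: constant cue. Closed loops: Equation 1.3, location feedback.
        v_set = V_OPEN if mode == "open" else (X - x) / (T - t)
        cue = v_set
        if mode == "double":
            err = v_set - v_user                 # speed feedback (e.g., from GPS)
            integ += err * dt
            cue = v_set + kp * err + ki * integ  # PI adjustment of the cue
        v_user = min(0.8 * cue, V_MAX)           # user walks 20% slower than cued
        x += v_user * dt
        t += dt
    return x                                     # location reached at the deadline

for mode in ("open", "single", "double"):
    print(f"{mode:6s}: reached {simulate(mode):6.1f} m of {X} m at t = {T} s")
```

Consistent with Figure 1.3, the open-loop walker falls well short at the deadline, the location-only loop keeps raising the setpoint but still arrives somewhat late, and the added speed loop settles slightly above 0.2 m/s and arrives approximately on time.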
Figure 1.3: A user's location (upper) and speed (lower) response to the PVG system in three settings: open-loop (red), single closed-loop (green), and double closed-loop with a PI controller (blue). The user has 1000 seconds to be at a destination 200 meters away. The desired speed is thus 0.2 m/s, and the user always walks 20% slower than the speed setpoint. As a result, with the open-loop system (i.e., no feedback) the user arrives 250 seconds late. As Equation 1.6 shows, if the user always walks slower than the speed setpoint, the single closed-loop control system (i.e., location feedback), which estimates the setpoint based on the location of the user, increases the tempo of the guidance cue over time and rushes the user near the end until it reaches the user's limit (see the overshoot); the user still arrives 30 seconds late. The double closed-loop system (i.e., location and speed feedback) increases the tempo of the guidance cue based on the user's location too, but it also takes the user's departure from the desired speed into account and increases the setpoint even more until the user walks at the desired speed; as a result, before it is too late, the user reaches a steady speed (slightly larger than 0.2 m/s to make up for the time the user walked slower than desired) and eventually arrives on time.

1.3 Research Goals

The overall goal of this thesis is to develop a new haptic guidance method (PVG) to be used in mobile contexts. In particular, we try to find the best setting for wearable vibrotactile displays with a focus on guidance of human walking, develop a robust cadence estimation algorithm, and examine users' performance and workload under PVG and the extent to which their ability to utilize it is impeded by auditory stimuli. Our aim is to answer the following questions:

1. What is the most effective location on a user's body for placement of vibrotactile displays in mobile applications from a sensory standpoint?
2. How well can we measure cadence in realtime under a convenience-driven constraint that no a priori knowledge about the user is used?
3. Can walkers synchronize their step frequency with a Periodic Vibrotactile Cue?
4. How much workload does Periodic Vibrotactile Guidance cause?
5. How much do different types of auditory tasks affect walkers' performance?

1.4 Research Approach

This research is done in three main phases, as depicted in Figure 1.4. [4] First, we need to find the right vibrotactile display and the best location for its placement in mobile contexts and PVG in particular. Second, we develop a cadence estimation device to be used in the evaluation of PVG and eventually in the PVG control system. Finally, we study users' ability to follow Periodic Vibrotactile Cues at different rates and during auditory multitasking. In this section we will explain these phases.

[4] For a list of contributors and their level of involvement please refer to the Preface on page iv.

Figure 1.4: The three phases of our research are shown as three large rectangles.
Phase 1 (yellow): evaluation of vibrotactile stimuli under mobile conditions and finding the best body locations for their placement. Phase 2 (cyan): development and evaluation of a cadence detection algorithm. Phase 3 (pink): study of performance, workload, and the effect of auditory multitasking on PVG. Each phase starts with a development stage (small coloured rectangle) followed by two experiments (white rectangles). Solid arrows show dependencies between stages/phases: the starting point is a prerequisite for the ending point. Dashed-line arrows show a reiteration in a development stage, where the findings of an experiment (or requirements of the next experiment) dictate changes to the developed system.

1.4.1 Phase 1: Sensitivity to Vibrations in Mobile Contexts

The choice of haptic display technology and, more importantly, of the location for placement of the vibrotactile display was not trivial at first. As our first step, in Chapter 2, we review the haptic channel, tactile technologies, and the way they have been used in guidance systems to identify the potentials and challenges that we face in the development of PVG.

In Chapter 3 we compare several body locations for placement of vibrotactile displays in terms of Detection Rate (DR) and Reaction Time (RT) during walking, in two experiments. In the first experiment we also measure the effect of visual workload, and in the second experiment we compare vibrations on expected body locations with vibrations on unexpected body locations. Phase 1 is shown as a yellow rectangle in Figure 1.4 and is a prerequisite of Phase 3.

These are the questions we tried to answer:

1. Which body locations are more sensitive to vibrations?
2. Which body locations are more affected by movement?
3. Does visual workload impact performance?
4. Which body locations are preferred by users?

1.4.2 Phase 2: Cadence Detection

The PVG system that we proposed relies on a cadence estimation unit that can measure the user's cadence with high precision and in realtime. We required the system to be small enough to be carried by users, to work out of the box (i.e., no tuning required), and to be robust to user differences and placement on the body. At the time, no such solution was available for modification as open-source software. We knew we could use a smartphone's accelerometers to detect body movement during walking. Our idea was that the main component of the frequency-domain transformation of the accelerations would belong to the cadence or one of its harmonics. In Chapter 4 we explain the development and evaluation of the resulting algorithm: RRACE. This algorithm was also the cadence measurement instrument in the next phase of our research. Phase 2 is shown as a cyan rectangle in Figure 1.4 and is a prerequisite of Phase 3.

The main attributes of RRACE are as follows:

1. It can work across many body locations.
2. It is robust to change of orientation.
3. It works out of the box and does not require calibration.

We tested our algorithm on a treadmill and outdoors, under normal unconstrained walking conditions, and examined the effect of body location and speed. We also conducted a thorough comparison between our "frequency-based" gait detection method and the highest-performing published "time-based acceleration threshold" method.
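To make the core idea concrete, the sketch below estimates a stepping frequency as the dominant peak of the acceleration-magnitude spectrum. It is only an illustration of the frequency-domain approach described above, not the RRACE implementation: the windowing, the search band, and all names are our assumptions, and a real algorithm must still decide whether the peak is the cadence itself or one of its harmonics.

```python
import numpy as np

def dominant_step_frequency(accel, fs, lo=0.5, hi=4.0):
    """Sketch of frequency-domain cadence estimation (not RRACE itself).

    accel: (N, 3) array of accelerometer samples; fs: sampling rate (Hz).
    Returns the dominant frequency (Hz) of the acceleration magnitude
    within a plausible stepping band [lo, hi].
    """
    mag = np.linalg.norm(accel, axis=1)   # magnitude is orientation-robust
    mag = mag - mag.mean()                # drop the gravity/DC component
    windowed = mag * np.hanning(len(mag))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 1.8 Hz "gait" on top of gravity, sampled at 50 Hz for 8 s.
fs = 50
t = np.arange(0, 8, 1 / fs)
accel = 9.81 + np.column_stack([np.sin(2 * np.pi * 1.8 * t)] * 3)
print(dominant_step_frequency(accel, fs))  # prints 1.75, the nearest 0.125 Hz bin
```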
1.4.3 Phase 3: Study of Periodic Vibrotactile Guidance

The goal of the final phase of our research was to verify that people can follow Periodic Vibrotactile Cues, measure the workload caused by PVG, and examine the effect of auditory tasks on users' performance. This was done in two steps, each corresponding to a separate study; in step one we studied walkers' susceptibility to PVG, which is explained in Chapter 5. In step two, we measured the workload caused by PVG and the effect of auditory tasks on users' performance and workload, which is explained in Chapter 6. Phase 3 is shown as a pink rectangle in Figure 1.4; both previous phases are prerequisites of Phase 3.

These are the questions we tried to answer in this phase:

1. How well can walkers follow Periodic Vibrotactile Cues of different tempos?
2. Does repetition improve performance of walkers?
3. How much do auditory tasks of different kinds affect walkers' performance?
4. How much workload does PVG impose on walkers?
5. Does PVG affect walkers' stride length?
6. Does PVG affect walkers' speed?

1.5 Summary of Contributions

The research presented in this dissertation makes the following primary and secondary contributions:

Primary Contributions

1. Sensitivity to Vibrations in Mobile Contexts: Evidence for
(a) the positive effect of vibration intensity on Detection Rate (DR) of stimuli and reduction of Reaction Time (RT);
(b) higher DR at certain body locations;
(c) the negative effect of movement on DR and increase in RT;
(d) the effect of visual workload on increasing RT;
(e) faster RT to stimuli on expected locations versus random locations;
(f) gender differences in terms of DR and RT;
(g) subjective preferences.
These are encapsulated in guidelines for the design of wearable vibrotactile displays.

2. Cadence Detection:
(a) The Robust Realtime Algorithm for Cadence Estimation (RRACE).
(b) Evidence for the performance and robustness of RRACE.
(c) Evidence for the superior performance of RRACE over the readily available state-of-the-art time-based cadence estimation method.

3. Study of Periodic Vibrotactile Guidance:
(a) A new guidance method based on fine-grained measurement of movement.
(b) Tactile delivery of such guidance through the interval/tempo of periodic cues.
(c) Evidence for humans' ability to follow Periodic Vibrotactile Cues (PVCs).
(d) Evidence for the effect of Periodic Vibrotactile Guidance (PVG) on walking speed and stride length.
(e) Analysis of the effect of repetition on PVG.
(f) Measurement of the effect of auditory tasks on performance under PVG.
(g) Measurement of workload under PVG.

Secondary Contributions

1. Experimental design, methodology, and statistical analysis examples for:
(a) examining sensitivity to vibrations in mobile contexts;
(b) measuring cadence in indoor (treadmill) and outdoor settings;
(c) analyzing stride frequency and length, and walking speed and workload, under guidance/no guidance and different auditory tasks.
2. Shared/open-source data on:
(a) detection rate and reaction time to vibrotactile stimuli of different intensities across the body, in stationary and mobile conditions, and under visual workload or no visual workload;
(b) detection rate and reaction time to vibrotactile stimuli of different intensities across the body, on random or expected locations, in stationary and mobile conditions;
(c) acceleration of six body locations at different walking speeds and step frequencies, with accompanying gold-standard cadence measurements;
(d) performance under PVG and no guidance, with and without auditory tasks;
(e) workload under PVG or no guidance, with and without auditory tasks.

1.6 Dissertation Roadmap

This dissertation is organized as follows:

Chapter 2 gives a broad coverage of the literature on the haptic channel, tactile displays, and guidance systems, with some of its elements repeated in the following chapters in narrower scope.

Chapter 3 describes two experiments that examine sensitivity to vibrotactile stimuli under different conditions of movement, visual workload, and expectation of stimulus location.

Chapter 4 presents RRACE, our algorithm for measurement of cadence in realtime, and describes two experiments that verify its accuracy and robustness.

Chapter 5 describes an experiment that examines walkers' ability to follow Periodic Vibrotactile Cues of different tempos.

Chapter 6 describes an experiment that measures workload under PVG and the effect of auditory tasks on walkers' performance.

Chapter 7 summarizes this dissertation and provides future directions for the research.

Appendices A, B, C, and D document the supplemental materials used throughout this research, associated with Chapters 3, 4, 5, and 6 respectively.

Chapter 2
Related Work

If you rely only on your eyes, your other senses weaken.
— Frank Herbert, Dune (1965)

The goal of this chapter is to review guidance and explain our rationale for choosing the tactile channel for the particular guidance system that we are interested in. To achieve this goal, first, we study several guidance systems categorized by medium (e.g., visual or haptic) and application in Section 2.1, define spatial, temporal, and spatiotemporal guidance in Section 2.2, and explain the benefits and drawbacks of guidance in Section 2.3. Then we review the haptic channel in Section 2.4 and tactile display technologies in Section 2.5 to explain our choice of tactile display throughout this dissertation.

2.1 Guidance Systems

There is a wide variety of guidance systems, with different communication channels (e.g., audiovisual vs haptic), application areas (e.g., object manipulation vs control of vehicles vs spatial awareness), and ergonomics (e.g., stationary vs handheld vs wearable). In the next few sections we will explain these with examples.

2.1.1 Non-haptic Guidance

A large body of research is dedicated to the study of visual and auditory guidance for object manipulation tasks. Many of these applications are related to medicine and surgery, such as image-guided breast and laparoscopic surgery, neurosurgery, and robot-assisted minimally invasive surgery [45, 59, 109, 140]. Fuchs et al. developed a head-mounted display that provided visual cues during laparoscopic surgery [45], Grimson et al. developed an image-guided neurosurgery system that shows the location of instruments with regard to the Magnetic Resonance Imaging (MRI) [59], and Sato et al. used 3-D ultrasound images for the guidance of breast surgery [140]. More recently, Mourgues et al.
developed a visual guidance method for robot-assisted minimally invasive coronary artery bypass grafting that used a coronary tree model, updated in realtime and overlaid on the endoscopic images, to assist the surgeon during the operation [109].

Guidance methods can also be used in navigation. Golledge et al. outlined several hardware requirements for a Personal Guidance System (PGS) for blind users [53]. They proposed the employment of a Differential Global Positioning System (DGPS) in addition to a head-mounted compass. Their system would guide users by direct speech and relied on a virtual acoustic display consisting of binaural earphones that "allow features to call as if from their real location in objective space" to give spatial awareness to blind users. In the above examples, vision and audition were the communication channels from the guidance system to users. This can be problematic in cases where users need their eyes and ears for the primary task, such as looking at the road while driving, or for an important secondary task, such as talking to someone while walking. The requirements of such tasks suggest the use of the haptic channel, which is relatively free and may not compete for visual and auditory resources.

2.1.2 Stationary Haptic Guidance for Object Manipulation

One of the first areas that welcomed haptic guidance was telepresence and object manipulation. The sensors and haptic displays that were put together to give the feel of presence in a remote site to the operator were used in a different way: the haptic feedback link between the operator and the remote environment was "corrupted", as Rosenberg defined it, by overlaying perceptual information to improve task operation [131]. Rosenberg introduced virtual fixtures, which were computer simulations run on force-feedback displays with several Degrees of Freedom (DOF). The virtual fixtures would limit the user's movement like a ruler used for drawing lines, but with additional advantages that came from the nature of computer simulations. His studies showed that virtual fixtures could enhance the performance of a teleoperation by 70%.

In a similar work, Bettini et al. studied the employment of both hard and soft virtual fixtures for a multi-step task of surgical tool manipulation [8, 9]. They used admittance control to develop soft and hard virtual fixtures for haptic guidance at micro and macro scales. Their tests on the Steady Hand Robot of Johns Hopkins University showed significant performance improvements from both hard and soft virtual fixtures, including higher user confidence and speed and reduced positioning error. Soft virtual fixtures seemed to be ideal because, in addition to providing high-level guidance, they let users override the method. However, Bettini's work showed the superiority of hard virtual fixtures in terms of performance.

Haptic guidance has also been used in animation browsing and editing. Donald and Henle designed a system for haptic guidance of animators during motion capture data manipulation [32]. They mapped the multi-dimensional animation configuration space into a lower-dimensional space of vector fields that were displayed on a six-DOF PHANToM device by SensAble Technologies Inc. [1] (Woburn, MA, USA) [145]. These vector fields felt like virtual objects and guided animators along certain trajectories or resisted movement towards undesired regions.

[1] SensAble Technologies Inc. was acquired by Geomagic Solutions in 2012.
They also mentioned the use of higher-order, time-varying vector fields.

These examples show how the sense of touch can help users accomplish tasks while they are visually engaged in their primary tasks. However, the display mechanisms are stationary and support neither mobility of the user nor portability of the device.

2.1.3 Haptic Guidance and Shared Control of Inland Vehicles

In some other works, haptic guidance is referred to as shared control because of a difference in approach or in the point of view of the researchers. According to Steele and Gillespie, shared control can take advantage of both mechanized control and human abilities [153]. They developed a guidance method for land vehicles based on shared control of a steering wheel. The control agent, which can be viewed as a live fixture as opposed to the conventional virtual fixtures, exchanged power with the human user to avoid departure from the center of a road. The torque applied to the steering wheel by the system was proportional to the angular displacement from the desired steering angle, which was calculated based on the lateral displacement of the car sensed by the Global Positioning System (GPS). Their studies showed a 50% reduction in path-following error and 42% lower visual demand, but no significant reduction in cognitive load. They argued that the latter was because driving is highly learned and requires a minimal amount of processing load, and the verbal recitation of numbers – which was used as the secondary task – did not compete with driving over central processing-code resources.

In a follow-up work, Griffiths and Gillespie designed new experiments with obstacles in the middle of the road and audio signals to be identified by users, to compare cognitive load during and in the absence of guidance [58]. They found that the guidance system helped in maintaining the performance of the primary task when a secondary task was given to the user. The presence of the secondary task reduced the performance of non-guided driving by 20%, but it only reduced the performance of guided driving by 4%.

Forsyth and MacLean developed a haptic guidance method for inland vehicles based on a predictive control method [42]. They developed two guidance methods: one based on Potential Field Guidance (PFG) [132, 133] and the other based on a look-ahead control method [36]. These methods were employed in a driving scenario where users had to drive on a curvy road while keeping the vehicle in lane and as close to the center of the road as possible. Their experiments compared these two methods with a no-guidance scenario and showed that Look-ahead Guidance (LAG) significantly outperformed the other two (i.e., no guidance and PFG) in terms of smaller mean squared error from the desired trajectory. LAG also produced smaller forces than PFG and was preferred by subjects.

The haptic displays used in the above applications are not attached to the body of the user and are hence considered grounded. This limits their applicability to situations where the user is stationary on the ground or in a vehicle.

2.1.4 Haptic Guidance for Training

The idea of virtual fixtures evolved into virtual training in the work of Gillespie et al. [51]. Their virtual teacher was similar to virtual fixtures in the sense that it would facilitate task execution, but unlike virtual fixtures, the virtual teacher was present only during the training phase. They simulated moving a crane and used it as their task example.
During the training period, the user would try the virtual teacher's strategy. They argued that by showing the analytically obtained strategy to the user, he/she could bypass some parts of the usual practice time. Their experiments showed that the optimal strategy could be communicated successfully to users.

Yokokohji et al. investigated the possibility of transferring skills from one person to another through a visual/haptic display [175]. They proposed several applications of skill transfer: sports (tennis, golf), calligraphy, and surgeries (laparoscopic, endoscopic, and arthroscopic). They brought up the debate about training through guidance: on one hand, guidance prevented error, which was advantageous; on the other hand, trial and error could be very helpful for learning, which made guidance systems contradictory to the process of learning. Yokokohji et al. developed and tested five different training methods: visual cue only, visual cue and force playback, visual cue and motion playback, visual cue and hybrid playback, and visual cue and hybrid playback with inverted force. They argued that motion playback was the most promising method. However, they noted that their results were not statistically significant enough to derive any conclusion, probably because their task example was not difficult enough.

O'Malley et al. examined haptic guidance by comparing the performance and effectiveness of training under shared control, virtual fixture guidance, and no guidance [116]. They designed a target-hitting task under second-order manual control. Two masses with a spring-damper connection formed the underactuated 4-DOF system. The subject could only control two DOFs (x and y of one of the masses). Their first experiments showed performance improvement under both guidance methods, with no significant difference between them. Their second experiment showed no significant advantages for the guidance methods over practice in an unassisted mode.

The previous work was followed by Li et al., who elaborated on the negative effect of guidance on training [94]. The study of a target-hitting task showed that subjects who practiced in the absence of shared-control guidance had better performance during the actual task. It should be noted that the negative effect of shared control for training does not prove the inefficiency of other haptic guidance methods that are not based on force feedback or shared control. In other words, other haptic display methods may not have a negative effect on training. In addition, it is not always desired to remove the guidance method during the actual performance.

2.1.5 Haptic Situational Awareness Aid

Sklar and Sarter suggested event-driven domains such as aviation as potential applications of tactile communication [147]. They emphasized multiple-resource theory [171] and suggested distributing information to the haptic modality to reduce pilots' mode errors or automation surprises (failure to notice a change of status). They designed an experiment to compare three modes of feedback, visual-only, tactile-only, and visual-tactile, during four phases of flight differing in difficulty level. Their tactile display consisted of a wristband with one tactor attached to the inner wrist and another attached to the outer wrist. The pilots who received visual-only feedback showed significantly lower detection and reaction time performance. In addition, pilots assisted by visual and tactile feedback missed a few more status changes than the ones assisted by tactile-only feedback during the dynamic phase.
This was surprisingly inconsistent with multiple-resource theory, which suggested improvement of performance because of multiple modalities. Sklar and Sarter argued that during the dynamic phase of flight, pilots required a lot of visual attention for the primary task, and visual feedback would compete for it and cause visual scanning penalties.

Following many other researchers interested in potential uses of tactile signals, Ho et al. designed two experiments to study spatial information presentation through vibrotactile signals in cars [68]. They designed a pseudo-driving simulation and asked their participants to check if they were approaching the car ahead or being approached by the car behind whenever they received a vibrotactile signal from the back or front. One of their experiments was spatially predictive, meaning that 80% of the vibrotactile signals corresponded to the same direction (e.g., front tactors for the car ahead and vice versa), and the other experiment was spatially non-predictive, with random direction of vibrotactile signals (i.e., 50% likelihood of the vibrotactile signal having the same direction as the approaching car). Subjects responded faster and more accurately to visual events preceded by vibrotactile cues of the same direction; however, the difference between the mean cueing effects of the two experiments was not significant enough to prove the advantage of spatially predictive over spatially non-predictive signals.

Enriquez and MacLean confirmed the harmful effect of false positive warning signals (false alarms) [34]. They used a throttle pedal with force feedback in a driving scenario. Subjects had control over the position of the pedal, which defined the acceleration of the car. They were asked to avoid collision with another car ahead of them by controlling the speed of their own car while being occupied with a secondary task of identifying objects shown on the same graphical screen. The drivers would feel pressure from the pedal when their car approached the leading car. This pressure was produced by the force-feedback system as a warning signal. Enriquez and MacLean examined the effects of error in warning signals by adding false positives (false alarms) and false negatives (misses) and found that only false positives had a negative effect on the use of haptic warning signals. They argued that false positives could destroy the user's trust and willingness to use the information presented by the system.

The above papers suggest that haptic guidance can be used in event-driven situations such as aviation and transportation. In addition, most of our tasks that have temporal parameters are in fact event-driven; for example, the arrival time of a bus, train, or ferry and changes in traffic patterns are important events in an urban commute. One may argue that these tasks are not sophisticated enough, compared to aviation, to require assistance. However, people normally try to maximize their use of time by accomplishing their daily tasks in parallel with as many secondary tasks as possible. A situational awareness aid seems promising in these conditions too.

2.1.6 Wearable and Handheld Haptics and Spatial Guidance

Skin is the largest organ; it covers the whole body with a vast number of heat, touch, and pain sensors, which is a great opportunity for interaction designers if they intend to build handheld or wearable haptic devices. Baumann et al.
used iterative low-fidelity prototyping, or "physical brainstorming", to explore the potentials of wearable or holdable haptic displays for attentional cueing [6].

Navigational guidance has been an area of interest for over a decade. Ertan et al. introduced a wearable navigation system for guidance of blind users in unfamiliar indoor areas [35]. They used a vibrotactile display consisting of a 4-by-4 array of micromotors embedded in the back of a vest to communicate a stop signal or the four cardinal directions to the user. Due to the bad reception of GPS signals indoors, they instead used infrared transceivers in the ceilings of the hallways for sensing the position of the user. A wearable computer in the user's backpack was responsible for route planning.

Bosman et al. developed a wearable haptic guidance system that could be attached to both wrists of a pedestrian to guide him/her inside unknown buildings [11]. Although their design could be modified to help blind or visually impaired users, they claimed it to be a great match with regular users' vision and perception of the space around them. The advantages of their haptic guidance method were its objective performance and subjective desirability. They used vibrations to indicate directions and a stop signal.

Tsukada and Yasumura developed a belt with eight vibrotactile haptic displays to guide a pedestrian towards destinations, predefined locations, or valuables left behind [163]. They used GPS to locate the user and geomagnetic sensors to detect the orientation of the user. The eight vibrotactors were located around the user's waist, four of them pointing front, back, left, and right, and the other four pointing between those directions. Vibration of each tactor showed a desired direction to the user. They found that subjects could feel vibrations when stopped but often failed to recognize vibrations with intervals less than 500 ms when walking. However, subjects could stop for a moment to recognize the direction of the vibration; we will explain the negative effect of movement on sensitivity to vibrations in Chapter 3. They reported subjects' preference for receiving signals only when they were lost, and not all the time. Van Erp et al. used a similar system for waypoint navigation [167]. They mapped the four cardinal and four oblique directions to vibrations on eight tactors embedded in a belt. In order to display the distance to the next waypoint, they developed four different schemes in addition to a control condition (i.e., no coding of distance). Two of the schemes were based on a monotonic relation between distance and the tempo of the rhythm (faster tempo indicated shorter distance), and the other two were based on communicating departure, arrival, and the intermediary phase by three fixed but different tempos of the rhythm. They found no significant difference between the schemes; users maintained a below-normal but acceptable average walking speed. They also examined the directional-only guidance in two operational scenarios, for a helicopter pilot and a fast-boat driver, and the system proved successful despite the vibrating environments, which could have blocked the perception of the vibrotactile signals.

Wearable haptics seems to be an obvious choice for spatial guidance because of the easy mapping between directions and the locus of stimuli. This direct and intuitive mapping is the main reason for the success of those applications.
However, the temporal aspects of haptic signals can also be used for the mapping of temporal and spatiotemporal parameters, which has not been explored yet.

2.1.7 Non-haptic Temporal Guidance

Maruyama et al. developed a personal navigation system called P-Tour that provided users with temporal guidance in addition to the regular map-based navigation [104]. P-Tour computed the nearly best schedule for visiting multiple tourist attractions based on the user's preferences and restrictions. It would find a semi-optimal solution to the modified traveling salesman problem through a genetic algorithm, which would give a suboptimal subset of tourist attractions with a suboptimal order and times of visits.

Rhythm consists of several temporal parameters, such as frequency and time, which can be used in temporal guidance. In [177], Zelaznik and Lantero studied the effect of spatial visual guidance and temporal auditory guidance on the execution of a repetitive circular movement. They found that withdrawal of visual guidance affected the topocinetic aspects (size and location) of the task but did not affect the morphocinetic aspects (shape) of the task. They also found that the temporal guidance of the metronome had almost no effect on the spatiotemporal aspects of the task. Their conclusion was based on the high precision of subjects' mean interval duration: "the overall within-subject standard deviation was about 2.5% of the mean interval". One of their most important findings was that subjects could maintain the proper average cycle duration in all conditions but needed a few cycles to get synchronized with the rhythm.

These studies support the idea of temporal guidance in two ways: (a) feasibility of temporal guidance from a software and hardware development standpoint and (b) practicality of rhythmic signals and the ability of users to synchronize with them. Use of vision can be questionable in situations where the user's vision is engaged by the primary task.

2.1.8 Haptic Feedback in Music

Almost all acoustic and mechanical musical instruments form a closed-loop system with the user. While the user manipulates the instrument to make sounds, the auditory and tactual channels provide feedback and close the loop. Chafe argued that these two forms of feedback are necessary to the control of sound making in music composition and vocal communication [20]. He proposed the incorporation of haptic feedback in new musical instruments to solve the problem of an instrument's non-determinism. He set up an experiment to test if vibrotactile feedback at the fingertip would improve the problem for an electronic French horn. This vibrotactile feedback was simply made by sending the audio output to an actuator that vibrated the controlling device. He concluded that the resulting device improved the user's perception of the music creation.

Haptic feedback can also be used in musical motor learning, as Grindlay proposed [60]. He studied the effect of haptic guidance on percussion training by building a single-axis system to record and play back the rotational movement of a wrist during drumming; this system could be considered a spatiotemporal haptic guidance system. He measured users' accuracy on note timing and drumstick velocity under three guidance conditions: audio only, haptic only, and audio+haptic. His results showed the superiority of audio only over haptic only, and of audio+haptic guidance over the other two.
He suggested generalizing audio and haptic guidance to other applications such as dance, sports, and remote medicine.

Pedrosa studied the effect of haptic guidance in helping users follow and learn drum-beat percussion patterns [121]. He asked the users to follow a rhythmic pattern on a device similar to an electronic drum machine under four different conditions: no guidance, visual guidance, haptic guidance, and visual+haptic guidance. The results showed no statistically significant difference among the guidance conditions. Two of the reasons discussed in his thesis are more important for us: (a) the rhythmic patterns were harder than expected for users with little or no musical background, and (b) users' delay in following the rhythm, and the variation in that delay, contributed a lot to the error even if they seemed to be playing the right rhythm.

Hearing and the sense of touch are closely related. On one hand, the vibration bandwidth perceivable by the receptors in humans' skin overlaps with the range of sounds that they can hear [62, 134]. On the other hand, vibration and sound (music in particular) both have temporal parameters such as duration, tempo, and rhythm. Aside from the natural coupling of music and haptics (e.g., vibrations on the body caused by loud music or touching the instrument during performance), there have been some attempts to take this relationship further. Gunther et al. introduced the idea of tactile composition and performance [62]. They designed a wearable system consisting of thirteen transducers across the entire surface of the body, with most of them close to glabrous (non-hairy) skin to increase sensitivity. They used the standard Musical Instrument Digital Interface (MIDI) protocol to compose thirteen tactile tracks to be played on tactile displays in the presence and absence of music. They held a series of concerts and collected feedback from the audience, whose experience was very similar to their expectations from music. For example, the audience would be surprised if a repeated pattern varied suddenly. Some of their audience also reported that it felt as if the interface was making their body move, which showed the potential of using wearable tactile displays to guide movement of the body.

The applications of haptic guidance in music show how haptic guidance can also be used for temporal aspects of tasks. The papers presented above suggest the use of haptic temporal and spatiotemporal guidance where human movements are cyclical.

2.1.9 Haptic Communication through Rhythm

Periodic haptic signals have been used to communicate information to users. Haptic icons were introduced as a means of communicating abstract information through the sense of touch to users under visual/auditory workload. Chan et al. suggested the use of a set of haptic icons as a turn-taking protocol in a collaborative environment [21]. They developed a design protocol that perceptually optimized an icon set to address the requirements of such applications: being easily learnable, detectable, and identifiable. They designed three families of icons, one of which consisted of two-tone vibrations to convey status transitions, with the two other families consisting of periodic icons to indicate collaborators' status (e.g., in control of the device or waiting to gain control of the device) to themselves. For their studies they used the Logitech iFeel mice [96], developed by Logitech (Lausanne, Switzerland) and Immersion (San Jose, CA, USA), which are capable of displaying vibrations from 0.01 to 500 Hz.
Users learned all seven icons in approximately 3 minutes and identified them within 2.5 seconds in the absence of workload. In the presence of workload, identification time increased to an average of 4.3 seconds, which is still acceptable in many applications. In both conditions they had an accuracy of 95%.

Ternes and MacLean used the Multidimensional Scaling (MDS) method in a protocol to design a large set of haptic icons using rhythm, frequency, and amplitude as parameters to distinguish between them [160]. They defined rhythm as a "repeated monotone pattern of variable-length notes" that can be manipulated by changing the number of notes, their lengths, and the gaps between them. They suggested limiting the length of icons to 2 seconds and 4 repeats (500 ms each), with the shortest perceivable note being 31.25 ms followed by the same length of rest. The user studies revealed that evenness/unevenness (i.e., the regular repeating nature versus irregularities of the rhythm) could be felt distinctively. They concluded that haptic rhythms could be distinguished by note length and evenness, but suggested that other parameters such as melody, emphasis, and tempo would be effective too.

The above papers show the possibilities and advantages of communicating through rhythmic haptic signals; they are reliable and they give us additional degrees of freedom such as note/rest length, evenness/unevenness, emphasis, and tempo. These parameters have been used in indirect communication through abstraction of meanings. However, one may use a direct mapping of information to these parameters, which seems to be less cognitively demanding.
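As an illustration of this parameterization, the sketch below encodes a rhythm the way Ternes and MacLean describe it: variable-length notes and gaps on a 31.25 ms grid, repeated four times for a 2-second icon. The representation and helper function are our own hypothetical example, not their design tool.

```python
# Illustrative encoding of a haptic rhythm: a repeated monotone pattern of
# variable-length notes on a 31.25 ms grid, four 500 ms repeats per icon.
UNIT_MS = 31.25                 # shortest perceivable note (and rest) length
REPEAT_MS = 500.0               # one repeat; four repeats give a 2 s icon

def render_icon(pattern, repeats=4):
    """pattern: list of (on_units, off_units) pairs, e.g. [(2, 2), (1, 3)].
    Returns a list of (state, duration_ms) segments for a vibrotactile display."""
    segments = []
    for _ in range(repeats):
        total = 0.0
        for on, off in pattern:
            segments += [("on", on * UNIT_MS), ("off", off * UNIT_MS)]
            total += (on + off) * UNIT_MS
        assert total == REPEAT_MS, "pattern must fill exactly one 500 ms repeat"
    return segments

# An "even" rhythm: four identical notes per repeat (4 x (2+2) x 31.25 = 500 ms).
print(render_icon([(2, 2)] * 4)[:4])
```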
2.2 Spatial, Temporal, and Spatiotemporal Guidance

Guidance systems can be categorized by the dimension(s) of tasks they deal with: space or time. In this section we will explain spatial, temporal, and spatiotemporal guidance systems.

2.2.1 Spatial Guidance

A spatial guidance system is an assistive tool that deals with tasks with time-invariant dynamics; i.e., the guidance system is responsible for assisting the user with a task that is not explicitly dependent on time. A compass is one of the simplest and oldest spatial guidance tools. Global Positioning Systems are also spatial guidance systems that traditionally use audiovisual channels. Many haptic guidance systems [8, 34, 42, 52, 58, 131, 153] and many non-haptic guidance systems such as [53, 109] can be categorized as spatial guidance according to the above definition. These devices only deal with spatial aspects of the task, such as direction and distance. If they are used in contexts where time (and eventually speed) is important, the user will have to deal with those aspects on his/her own.

2.2.2 Temporal Guidance

A temporal guidance system is an assistive tool that is location-invariant and deals with time-variant tasks; i.e., the guidance system is only dependent on time. Timers, alarms, metronomes, and many devices that are used to help users keep track of time or of the frequency of a repetitive task are in fact temporal guidance tools for general use, with very little or no knowledge about the user's state and goals. Temporal haptic guidance systems could potentially reduce visual and auditory attention by keeping the user informed about future events only when necessary. Haptic feedback is a good candidate for this because there are many locations on the skin that are not engaged in any tactile communication and can be used in interrupt-based communication [156], whereas the visual and auditory channels are busy most of the time. The option can always remain open for the user to check the graphical display of the device. However, if the user trusts the guidance system's judgements, he/she no longer needs to double-check the time to future events, as he/she does with a clock alarm before it rings or with a calendar of future events. In addition, a temporal guidance system can assist users in the micromanagement of time. In the context of this research, micromanagement of time includes controlling movement frequency or speed, and synchronization with a reference (e.g., the tempo of music) or another user (e.g., the lead rower of a rowboat).

2.2.3 Spatiotemporal Guidance

A spatiotemporal guidance system is a generalization of both temporal and spatial guidance because the system deals with time-variant, location-variant dynamics [104, 159]. The guidance system assists the user in space and time. Passage of time affects spatial constraints and vice versa; e.g., if the user takes longer than expected to take a bus at a certain station, the system may realize that the next best choice is another bus at a different station. A simple example of spatiotemporal guidance can be seen in [32], where the user has to follow a 3D trajectory at a certain speed.

Mobile tour guides are one of the few guidance systems that take both time and location into account [104, 159]. Maruyama et al.'s P-Tour computes the near-best schedule and navigation for visiting several of a tourist's destinations and modifies the schedule based on the tourist's location [104]. In addition to scheduling and navigation, ten Hagen et al.'s Dynamic Tour Guide (DTG) also provides location-based interpretations [87, 159]. Although there are not very many spatiotemporal guidance systems, many of us retrofit existing technology to create our own. For example, on most smartphones, tapping on an address opens the GPS-enabled map software, which may also provide turn-based navigational cues (a spatial guidance system); a mother who has to get through a complex route/itinerary on Saturday, dropping off and picking up kids at their events at the right time, often in unfamiliar locations, may add the addresses of the places she has to be to events in her phone's calendar application; at the time of each event (e.g., dropping her son at a soccer field) she can just tap on the address and be guided to the destination.

2.3 Benefits and Drawbacks of Guidance

In this section we will explain some of the major benefits as well as drawbacks of guidance systems.

2.3.1 Benefits of Guidance

Temporal and spatiotemporal guidance have many potential benefits for their users. They can improve the overall performance, decrease the amount of effort needed for task completion, decrease the anxiety level of the user, or facilitate learning of the task.

Decrease in Human Effort

In order to do a task, a person uses his/her own knowledge and may acquire additional information from other sources before and during the task, then bases his/her decisions on approximate calculations about time, location, or other parameters. A guidance system can assist the user by collecting information, calculating and estimating dependent parameters, and participating in decision making to some extent.
Any of these can decrease the amount of processing load required from the user to accomplish the task [58].

Performance Improvement

A guidance system can improve human performance in two ways: improving the information collection process qualitatively and quantitatively, and increasing the precision and speed of calculations; more importantly, it executes the actual movements from the start to the end.

Firstly, guidance systems have access to sources of information that are not otherwise accessible by the user alone. Maps, GPS data, the exact times of future events (e.g., train, plane, and bus arrivals/departures), and even the accurate traveling speed of the user at every moment are available to the guidance system through the Internet, satellites, and wearable sensors, but the user has no direct access to them. This information is directly related to the user's tasks and can be used to make or change decisions. Secondly, the user has to make decisions based on approximate calculations. At best, he/she can use other devices (e.g., watches, maps, schedules) to improve precision, but that takes him/her a lot of time. The guidance system, however, has a built-in computation unit that takes care of calculations in a fraction of a second. As a result, guidance systems have the potential to make decision making faster and more reliable, which improves the overall performance of the user. In addition, a guidance system can increase the frequency of access to information as needed, which is a luxury a user with no guidance system cannot afford.

Decrease in Anxiety Level

Guidance systems can lower the anxiety level by offloading some of the user's workload. In addition, after successfully assisting the user on several occasions, the guidance system gains the user's trust. The guidance system's decisions will prove to be reliable, and the user will understand that the system has alternative solutions in hand just in case the primary solution is invalidated. The user can then depend on the guidance system and worry less. Bodrov introduced many causes of stress [54] and grouped them as semantic (i.e., related to facts, concepts, strategies), temporal, and organizational.

Semantic causes of stress are:

1. subjective task complexity,
2. deficient or controversial information,
3. dangerous situations,
4. uncertain time of information presentation.

Employment of a guidance system can greatly reduce stress by removing the above causes. The guidance system can reduce the complexity of the task by taking responsibility for parts of it. It can also help the user avoid deficient information by improving its collection and using it directly in high-precision calculations. Increasing safety is an important goal of some guidance systems, which directly reduces the user's stress. Temporal and spatiotemporal guidance systems can decrease the level of uncertainty of information presentation by developing a gradual awareness of the time of future events.

Temporal causes of stress are:

1. time deficit,
2. high rate of information presentation,
3. increased information flow.

Guidance systems can also decrease the stress level by removing the temporal causes of stress. They can solve the problem of time deficit by helping users accomplish tasks faster. In addition, they can reduce the rate and speed of information presentation by filtering out the unnecessary parts and using non-abstract forms of communication.

2.3.2 Drawbacks of Guidance

In addition to their benefits, guidance methods have some drawbacks.
Some of them are more critical because they interfere with the primary task or annoy the user. The rest hurt the performance of the guidance system and reduce its efficiency. These drawbacks and possible ways of removing them will be briefly discussed in this section.

Intrusiveness

Unfortunately, guidance systems are no exception to the general intrusiveness problem of many devices in multitasking environments. As MacLean discussed, in some cases the interaction with the device may just become another distraction for the user [100]. For example, looking at the screen of a GPS device after hearing an audio signal for direction can make the driver miss a road sign or an obstacle. Using the haptic channel instead of vision or hearing can reduce this effect to some extent by simply not interfering with the senses (usually vision and hearing) that are already engaged with the primary task. However, because haptic signals can still distract users and guidance systems have a multitasking nature, haptic guidance designers should balance the level of intrusiveness of haptic signals with their priority level; an urgent signal should attract more attention from the user, while a less important message should be less interrupting. More importantly, the level of attention the user needs to give to the primary task should be taken into account; if it is unsafe to distract the user from his/her primary task, the system should be less interrupting [100].

Cumbersomeness

Because guidance systems are supposed to be carried by users all the time, they should be light and small. However, guidance systems require several pieces of hardware, which can make them big and heavy. In order to avoid cumbersomeness, one can use simple designs with a minimum number of sensors and smaller parts, such as miniature sensors and vibrotactile displays, if possible; this will also be better in terms of lower power consumption for a device that depends on battery power for portability.

Reliance on External Data

Guidance systems rely heavily on external data sources, such as the GPS network or the Internet, to acquire navigational and temporal information for planning. This makes the guidance system vulnerable to accessibility problems. When the data networks are not in range, the guidance system will become unusable/unreliable unless the information is provided from an alternative source. Using indoor infrared navigational signals where there is no reception is an example of providing an alternative source of information [35].

Sensory Adaptation

When people are exposed to stimuli for a significant amount of time, they adapt to them, which means that their sensitivity threshold increases; i.e., they become less sensitive to the stimuli [23]. Most guidance systems continually send signals to users, and because of that there is a likelihood that after some usage users will become less sensitive to the guidance signals. To prevent this from happening, we should avoid long periods of stimulation when not necessary. One way of doing this in vibrotactile communication is to use vibration cycles that are as short as possible (which might be perceived as taps) and to embed the information in the length of the silence between vibrations. Of course, this is only feasible when the communication is as minimal as mapping a single guidance parameter to just one degree of freedom, namely the rest (silence) between notes.

Error Situations

Guidance systems are vulnerable to several types of errors.
Error Situations: Guidance systems are vulnerable to several types of errors. The information supplied to the system can simply be wrong as a result of sensor noise, inaccuracy of measurements, or network errors. In addition to machine-related errors, there are errors that happen on the user side, such as misunderstanding a signal, missing a signal, or even confusing stimuli from another source (e.g., coins moving in the pocket or touching each other) with the guidance signal. Noise can be avoided by appropriate filtering of the signal. Network errors can be overcome by repetition and minimal use of bandwidth. Errors in perception of the signal can be reduced by using better contact and choosing the right locus of stimuli. Also, if the guidance method works by repetitive communication (such as the proposed rhythmic method), a user who misses or misunderstands a single signal can be corrected by the signal's repetitions.

Hindering Skills and Attention: Guidance systems may also disengage their users from their environment [92]. By using guidance systems we stop relying completely on our own cognitive functions. Over time, those functions do not get practiced as much as before, which may hinder their development, and they may even be lost altogether. For example, GPS devices have long been criticized for obstructing the development of cognitive maps [18].

2.4 The Haptic Channel

Skin, our largest organ, is covered by a vast number of receptors that support proprioception (sense of relative position of body parts), mechanoreception (touch), thermoception (temperature), and nociception (pain). The haptic channel has advantages over vision and audition that make it a better choice in some applications, but it also has limitations which should be considered in the design of those applications.

2.4.1 Advantages of Haptic Channel

The haptic channel has some unique features which make it a great match for guidance systems and very advantageous according to Van Erp, Grindlay, Feygin, and many others [11, 40, 61, 163, 165, 167]:

1. The haptic channel is available most of the time to receive new information.
2. It is private.
3. It can help capture and direct attention to audiovisual displays.
4. It can free the overloaded visual and auditory channels.
5. It can replace a visual display when vision is blocked (e.g., firefighters in dense smoke or divers in dark waters).
6. The haptic channel can be used in environments which must be auditorially silent.

The single biggest advantage of the haptic channel is being dispersed. There are simply contexts where the convergence of mobility, attention, and other contextual factors not only renders visual and auditory modalities inappropriate, but where touch can be actively preferred and more comfortable. It keeps things compartmentalized and in the background in a way that feels good and optimal.

In situations where users are overloaded with (mostly visual) information, auditory and haptic notifications can help direct their attention [39] and make them notice visual changes [158]. In contrast to peripheral vision – which can also be used for attention allocation [114] – touch and audition are omnidirectional (i.e., they do not require a particular orientation of the head) and they do not require screen real estate [139]. Touch is often preferred to audition because it also avoids overloading the auditory channel, which could be occupied by alarms and conversations [39]. The haptic channel has also been shown to be capable of distinguishing levels of urgency (e.g., "ignorable" vs. "demand action") when capturing attention [179].
Capturing users' attention selectively (with users' priorities in mind), reducing distractions, and shortening response time are especially important in the face of the exponential increase in the amount of information presented to users in contexts such as driving and navigation [91].

The above arguments particularly support the idea of using the haptic channel in temporal and spatiotemporal guidance when users are already occupied in a primary task which involves visual and/or aural attention, or when those senses are blocked or using them is not desired in a particular environment. They also explain the increasing use of touch as an information channel in many settings as a response to the problems that arose with the increase in audiovisual information and the opportunity that haptic technologies provide [73].

2.4.2 Disadvantages and Limitations of Haptic Channel

The haptic channel also has limitations, and some disadvantages compared to the visual and auditory channels:

1. Haptic wearables are intrusive, and sometimes the stimulus is irritating.
2. Site availability is a problem.
3. Physiological sensitivity to tactile stimuli decreases if the body part receiving them is in motion [23].
4. With current technology and human sensory training, the amount of information that can be transmitted is very limited compared to the visual and auditory channels [163].

However, some of these are not unique to the haptic channel. For example, headphones (auditory) are also intrusive and imperfect: they can fall out of ears, they are cumbersome, socially irritating, and subject to interference from ambient sound, and they can even cause ear damage if overused or misused. Yet, over the last few years they have become quite accepted, socially and by individuals, for public use at levels that seemed unthinkable even a decade ago. The really important disadvantage of haptics relative to vision and audition is information transmissibility, at least when defined as a "bit-rate" deliverable by current technology. We should point out that this only applies to synthetic touch; real-world touch is somewhat different.

Good application candidates for the haptic channel are those which require modest information transmission and respect the other limitations.

2.5 Tactile Display

Stimulus display is essential to the proper functioning of a guidance system, whether open or closed loop. As mentioned in Section 2.4, haptic displays have advantages relative to audiovisual displays but relatively lower information transmissibility, which should be considered in the design of haptic interfaces. In this section we present several types of haptic displays and our rationale for choosing one of them.

There are two types of haptic displays: force-feedback and tactile. Force-feedback displays are bidirectional physical interfaces that exert force or torque on the input device (e.g., steering wheel, joystick, mouse, etc.). While manipulating the input device, the user may feel forces that encourage or resist the movement of the device. Some examples of this type of guidance can be seen in cars: force-feedback enabled steering wheels and pedals [34, 42, 58, 153]. Force-feedback has also been used in guidance of hand motion in surgical operations or object manipulations [8, 32, 131]. The PHANToM haptic device from SensAble is the most common haptic display in these applications [145]. These devices must be grounded to be able to exert forces on the user.
Because force-feedback devices require physical grounding – and also tend to have significant power needs – they are less appropriate for mobile applications.

Another emerging subgroup of force-feedback displays is exoskeleton force-feedback systems [7, 12, 44, 63]; these are wearable haptic devices with limbs and joints that wrap around parts of the user's body and exert forces as the user moves his/her limbs. Among these, exoskeleton force-feedback for fingers, such as the Rutgers Master II [12], has potential for mobile applications if it can be made sufficiently lightweight and power efficient, because instead of being grounded it can be fixed to the user's body. The force-feedback display modality that seems like a serious candidate for mobile applications is pressure display (e.g., a compressive wristband), such as Baumann et al.'s ServoSqueeze, a wristwatch band that employed a micro-servo motor to emulate the sensation of being squeezed [6].

Tactile displays are unidirectional physical interfaces that employ vibration [166, 180], tapping [6], twisting or stretching of the skin [98], compression of the skin, and indentation to convey messages to users [118]. Unlike force-feedback displays, tactile displays are not necessarily collocated with the input device and need not be grounded. These two characteristics make them suitable for portable and wearable devices that can be carried by users. Users can hold these devices in their hands, keep them in their pockets, put them on, or even feel them whenever they touch the device in their environment.

In this dissertation we restrict our focus to ungrounded displays, such as tactile displays, because Periodic Vibrotactile Guidance (PVG) has to be portable and does not require bidirectionality. In the next section we discuss tactile display technologies.

2.5.1 Technology

Tactile displays can be put into different categories based on the kind of deformation applied to the user's skin (e.g., tapping, vibrating, pinching, squeezing, and twisting) or the technology they use [14, 50]; here, we use the latter categorization because the focus of this research is on periodic cues and the user's susceptibility to them, rather than perception of each single stimulus:

1. Eccentric-mass tactors
2. Voice coil speakers
3. Piezoelectric speakers
4. Pneumatic vibrators
5. Electrotactile displays

Of these, voice coil and piezoelectric speakers and eccentric-mass tactors are most commonly used for the development of tactile displays.

Eccentric-mass tactors are widely used in consumer electronics, and in handheld devices in particular. One way to produce mechanical vibration is by rotating an off-centered weight. These displays, commonly known as eccentric-mass tactors or buzzers, consist of a small motor with an off-centered weight attached to its shaft. When the motor is running, the centrifugal forces make the whole body of the display vibrate at the frequency of the motor; this means that the vibration frequency and amplitude of an eccentric-mass tactor cannot be changed independently. Another way of producing vibration or tapping is to move, push, or stretch the skin.

Voice coil and piezoelectric speakers are also common in the wearable haptics community, but not as widely used in consumer electronics as mechanical vibration. Like eccentric-mass tactors, both of the above display types are inexpensive, compact, and easy to control. Voice coils have an extra advantage too: they can provide a range of frequencies [62].
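The frequency–amplitude coupling of an eccentric-mass tactor can be made concrete with a toy model. The sketch below is an illustrative simplification, not a characterization of any tactor from this work: it assumes motor speed tracks the drive duty cycle, so that the centripetal force (which sets perceived amplitude) grows with the square of the rotation frequency, and the two cannot be set independently. All constants are arbitrary illustration values.

```python
# Toy model (assumption, not from the dissertation): in an eccentric-mass
# tactor, one drive parameter sets motor speed, which determines BOTH
# vibration frequency and amplitude.
import math

def erm_output(duty_cycle: float,
               max_freq_hz: float = 250.0,        # motor speed at 100% duty
               mass_eccentricity: float = 1e-6):  # m*r of offset weight (kg*m)
    """Return (vibration frequency in Hz, centripetal force in N)."""
    freq = duty_cycle * max_freq_hz        # speed roughly tracks drive level
    omega = 2 * math.pi * freq             # angular velocity (rad/s)
    force = mass_eccentricity * omega ** 2 # amplitude grows with freq^2
    return freq, force

# Halving the duty cycle halves frequency but quarters the force:
print(erm_output(1.0))  # (250.0, ~2.47)
print(erm_output(0.5))  # (125.0, ~0.62)
```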
2.5.2 Degrees of Freedom

Apart from their portability advantages, and in spite of their simplicity, vibration and tapping mechanisms have much potential for employment in the context of guidance systems. These mechanisms provide several degrees of freedom [14, 118, 165], some of which are correlated:

1. Amplitude of vibration/tapping
2. Frequency of vibration
3. Rhythm (note density, number of notes and rests and their length)
4. Location of stimuli
5. Tempo of a rhythm, or time interval between single vibrations/tappings
6. Duration of vibration
7. Duration of silence

These parameters can be used to communicate different types of temporal and spatiotemporal information to users.

Frequency and Amplitude: Human skin is sensitive to vibrations over a bandwidth of roughly 700 Hz [134]. Our ability to analyze vibration frequency is very limited. Rothenberg et al. found that people can differentiate seven levels of vibration frequency in the clearest region of sensation (80–90 Hz) with their forearm, and up to ten levels with their fingers [134]. Other papers report slightly different results. For example, Gunther et al. reported that humans could perceive vibrations from 20 Hz to 1000 Hz, with maximum sensitivity at 250 Hz [62]. The inconsistency of psychophysical parameters (frequency in particular) among different papers arises from the dependency of the results on the stimulation medium and the locus of stimulation. Sherrick investigated the interaction between frequency and amplitude and found that when frequency and amplitude are co-varied redundantly, people can differentiate more levels (5–8) than when amplitude is constant and only frequency varies (3–5 levels) [146]. However, as Sherrick discussed, the designer should be cautious, as low frequency at high amplitude can be confused with moderate frequency at medium amplitude.

Rhythm: Similar to music as a particular form of aural stimuli, tactile stimuli have a rhythmic characteristic. The number of notes and their timings in a repetition can form different rhythms. Swerdfeger et al. performed a set of studies which showed that rhythmic differences (i.e., evenness/unevenness) dominate other parameters in terms of being perceived by humans [155]. Intensity differences (co-variation of frequency and amplitude) came right after rhythm.

Location of Stimuli: The vast number of touch sensors in the skin that covers the whole body gives us another parameter to use in haptic communication: location of stimuli. An interaction designer can place vibrotactile displays on both wrists of a user to distinguish left and right and create other messages, as in the work of Bosman et al. [11]. Alternatively, eight vibrotactile displays at the four cardinal and four intermediate directions around the waist [163, 167], or an array of vibrotactile displays on the user's back [35], can communicate direction.

Tempo of Rhythm: Similar to rhythmic music, periodic vibrotactile cues have a tempo that can be used for mapping of continuous variables such as time or distance. For example, a faster tempo (i.e., a shorter interval between vibrations) can convey shorter time or distance to a destination [167].
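As a concrete illustration of such a mapping, the sketch below linearly maps remaining distance to the interval between pulses. It is a hypothetical example: the function name, the linear form, and the distance/interval bounds are illustrative choices, not parameters prescribed anywhere in this dissertation.

```python
# Minimal sketch (not from the dissertation): linearly mapping remaining
# distance to the interval between vibrotactile pulses, i.e., a tempo cue.
# Bounds are hypothetical illustration values.

def pulse_interval_s(distance_m: float,
                     d_near: float = 10.0,   # at/below this, fastest tempo
                     d_far: float = 500.0,   # at/above this, slowest tempo
                     t_min: float = 0.3,     # shortest inter-pulse interval (s)
                     t_max: float = 2.0) -> float:  # longest interval (s)
    """Shorter distance -> shorter interval (faster tempo)."""
    d = min(max(distance_m, d_near), d_far)  # clamp to the mapped range
    frac = (d - d_near) / (d_far - d_near)   # 0 when near, 1 when far
    return t_min + frac * (t_max - t_min)

# e.g., 500 m from the destination -> a pulse every 2.0 s; 10 m -> every 0.3 s
```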
2.6 Summary

There are many examples of haptic and non-haptic guidance systems. Most of them are spatial; very few are temporal or spatiotemporal guidance systems. Despite the fact that one of the greatest potentials of tactile sensation lies in its temporal aspects, such as rhythm and tempo, it is not employed very often in temporal, and particularly spatiotemporal, guidance systems. To the best of our knowledge, the tempo of a cue (auditory or tactile) has not been used or suggested as a fine-grained control method for any temporal or spatiotemporal guidance system. We believe periodic vibrotactile cues can be used in spatiotemporal guidance of human movement. As we discussed in Section 2.5, our ability to distinguish between vibration frequencies is very limited, so frequency of vibration is not a good fit for the fine-grained guidance that we are interested in. Therefore, eccentric-mass tactors, which are cheaper and more powerful than piezoelectric and voice-coil speakers, are our tactile display of choice throughout this work.

Chapter 3
Detecting Vibrations Across the Body in Mobile Contexts

Touch comes before sight, before speech.
It is the first language and the last, and it always tells the truth.
— Margaret Atwood, The Blind Assassin (2000)

This chapter appears with minimal modifications in [78]: I. Karuei, K. E. MacLean, Z. Foley-Fisher, R. MacKenzie, S. Koch, and M. El-Zohairy. Detecting vibrations across the body in mobile contexts. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (CHI '11), pages 3267-3276, 2011. For a list of contributors and their level of involvement, please refer to the Preface.

In this chapter we explore the potential and limitations of vibrotactile displays in practical wearable applications, by comparing the user's detection rate and response time to stimuli applied across the body in varied conditions. We examine which body locations are more sensitive to vibrations and more affected by movement; whether visual workload, expectation of location, or gender impact performance; and whether users have subjective preferences for any of these conditions. In two experiments we compare these factors using five vibration intensities on up to 13 body locations. Our contributions are comparisons of tactile detection performance under conditions typifying mobile use, an experiment design that supports further investigation in vibrotactile communication, and guidelines for optimal display location given intended use.

3.1 Introduction

Graphical and auditory interfaces prevalent today are information-dense, but also lead to problems such as perceptual overload and inefficiency of the visual and auditory channels [70, 162], decline in primary-task performance from secondary-task competition for perceptual resources [163], and situations where vision and/or audition are unavailable or inconvenient [167]. In mobile environments, phones, GPS guidance tools, and music players contribute to sensory resource starvation, where vision is heavily occupied and auditory channels are compromised by external noise and social concerns. Tactile display is seen as a promising conduit for mobile communication, lacking the drawbacks of visual or auditory display; but it brings its own challenges. Vibrotactile displays embedded in a handheld device can notify users without visual load and in private or noisy situations. However, the device must be held in the hand (a condition incompatible with the secondary or monitoring tasks that typically trigger such alerts) or stowed close to the skin.
Tactile sensitivity varies widely by body location [73, 90] and with movement [124]; many users have experienced this variance through missed calls and messages. This flaw undermines the whole notion of mobile tactile notification.

One solution is for users to wear a tactile display driven through a local body network, which can then be located to optimize tactile communication rather than access to an associated graphical display. With this distributed approach, bodily location of the tactor becomes a design parameter which we do not adequately understand. Local skin sensitivity is critical, but so are context of use, convenience, appearance, and sometimes the tactor technology; some sensitive body regions are impractical for reasons of mobility and wearability. In the absence of a single correct answer, designers need guidelines based on the relative perceivability of body sites under conditions that typify mobile contexts. Of particular interest are bodily movement, for its known impact on sensitivity, and visual workload, for possible mental-resource competition.

The present experiments were constructed to inform such guidelines. While some of the needed data exist, gaps and disparate sources make comparisons difficult. We aimed to systematically address the questions of

(a) which body locations are more sensitive to vibrations and
(b) which are more affected by movement;

whether

(c) visual workload,
(d) gender, or
(e) expectation of location impact performance;

and

(f) whether users subjectively prefer any of these locations.

Our specific contributions are:

1. A comprehensive assessment of the effect of loci, movement, and expectation on detection probability;
2. An experiment design that can be replicated to answer more questions about vibrotactile communication; and
3. Compilation of our results into design guidelines for optimal display location for a given purpose.

3.1.1 Approach

We conducted two experiments. The first, Experiment 1, varied factors identified in research questions (a-d), with stimuli applied in a random and unanticipated sequence; Experiment 2 varied expectation (e). For Experiment 1, we chose 13 body sites based on practicality for wearable use; Experiment 2 employed the 9 most promising of these. Experiment 1 varied body site, movement (sitting or walking on a treadmill), presence or absence of visual workload, and signal intensity (5 levels), counterbalanced by gender. Experiment 2 varied expectation of stimulus site in place of workload. A trial consisted of a single vibration at a single site. We measured the subject's response time, logged undetected stimuli, and collected subjective preferences. A statistical analysis informed our guidelines.

3.2 Related Work

In recent years, tactile displays (individual elements are known as "tactors") have emerged from specialized uses to become accepted consumer gadgetry, with innovation in size, power use, and controllability. Vibrotactile variants (piezo and eccentric-mass tactors are most common) tend to be lowest in cost and power needs and most deployable; designers are already embedding tactors in clothing. A substantial body of psychophysical and design research explores tactile sensitivity and wearable potential; here we highlight the most relevant works.

3.2.1 Sensitivity to Vibrotactile Stimuli

Spatial Location

Considerable research has examined the sensitivity of particular body locations to vibrotactile stimuli.
One of the most recent and comprehensive works is Jones and Sarter's review compilation of the effect of vibrotactile stimulus frequency, duration, intensity, and locus on detection [73]. They present sensitivity thresholds for many body locations of interest at different frequencies, and suggest ideal ranges of frequency that are most perceivable by humans. Most commercial vibrotactile displays already work within these frequency and intensity ranges.

Lederman and Klatzky provide a research summary on haptic perception. The research cited there is based on two-point and point-localization threshold methods for comparing the sensitivity of different body locations [90]. While completely appropriate for the design of closely-spaced tactor arrays, these methods are mismatched to a large class of mobile contexts. For single-tactor displays (e.g., a held or worn cellphone), users do not identify exact vibratory location or spatial pattern; the relevant metrics are likelihood and speed of detection and response. Furthermore, consumer-grade vibrotactile display diameters exceed the body's largest point-localization threshold (e.g., on the back).

Hoggan et al. used consumer-grade vibrotactile displays in a handheld device and compared location recognition of vibration on fingers under different stationary conditions [70], finding promise for loci and rhythm for encoding information. However, two factors that remain unexamined in a practical context are (a) movement and its interference with other factors and (b) expectations about stimulus locus.

Movement

Studies connecting movement to tactile sensitivity have involved animal and human models, and vibro- and electrotactile stimulation. For example, Chapin and Woodward found suppression, in movement conditions, of the cortical response in SI (the primary somatosensory cortex) of rats to electrical stimulation through electrodes implanted in the forepaw, when comparing treadmill locomotion, spontaneous grooming, quiet resting, and "tensed-up" mobility [22].

Using electrotactile stimulation on the forefingers of human subjects, Angel and Malenka [3] found correlations between sensory suppression and movement speed in detection rates. In a similar experiment, Chapman et al. found that both active and passive movement of the ipsilateral arm increased the detection threshold by 50% on the mid-ventral aspect of the right forearm [23].

Post et al. studied the same effect but with vibrotactile stimulation [124] on the operant arm (forearm, thenar eminence, and distal digit) under different motor activity levels. Voluntary motor activity increased the vibrotactile detection threshold. The above papers consistently indicate that body motion directly affects the detection of vibro- or electrotactile stimuli. However, none compare relative vibrotactile sensitivity by site for activities of interest here, such as natural walking.

3.2.2 Wearable Haptic Systems

Bosman et al. developed a dual-wrist system to guide a pedestrian inside an unknown building; vibrations indicated directions and stops [11]. Although their design could help blind or visually impaired users, it was intended to augment unimpaired space perception, and it improved performance. In a different strategy, Rukzio et al. developed a guidance system based on a single palmar vibrotactile phone display and a public display with 8 lights [136]. The lights toggled in a rotation, while the phone vibrated when the public display direction matched the user's route direction.
Tsukada and Yasumura developed a belt with eight vibrotactile displays distributed evenly around the waist to guide a pedestrian towards destinations, given realtime user location and orientation [163]. Subjects felt vibrations when stopped; but when walking, they often failed to recognize vibrations with intervals less than 500 ms, and stopped to assess them.

Driving support systems are natural targets for body-situated guidance and alerts. Ho et al. examined spatially informative vibrotactile signals in a driving simulation where front vs. back stimuli might indicate the direction of an oncoming car [68], and found promise for encoding directional information in the locus of stimuli. Meanwhile, Straughn et al. compared auditory and tactile pedestrian warning systems for drivers, finding two vibrotactile displays on the driver's biceps more effective than auditory signals [154]. For short Time to Collision (TTC), the warning signal was best utilized to generate a reactive motor response (warning direction = safe direction), whereas for long TTC, attention is best served with warning = hazard direction.

In summary, numerous tactile display setups have been prototyped, including these and others featuring the back and arm. Their use confirms reduced performance during movement, which might, however, be confounded with workload. To our knowledge, relative site sensitivity has not been systematically explored in mobile contexts.

3.3 Apparatus and Setup

Our setup consisted of a tactor array, a treadmill, a tall chair, and a large-screen display, which were deployed to create the conditions described below (Figure 3.1).

Figure 3.1: Setup of Experiment 1 during the "walking with visual workload" condition. Tall chair is not shown.

3.3.1 Vibrotactile Array and Calibration

We built an array of tactors of which different subsets could be activated (Figure 3.2), using inexpensive VPM2 eccentric-mass tactors from Solarbotics Ltd. (Calgary, AB, Canada) [152], 12 mm in diameter and 3.4 mm thick. A Duemilanove Arduino processor, developed by Smart Projects Srl (Strambino, Italy) [149], drove a tactor drive circuit with quick-release connectors. Resistor networks and Darlington transistor arrays provided 80 mA at 3 V to each motor (Figure 3.2). The tactors were energized with pulse-width (PW) modulated signals.

To maintain resolution despite variable site sensitivity, but without concern for discriminability, we specified five intensity levels spanning all site detection thresholds. We performed an iterative perceptual calibration in which we recorded pilot-subject detection rate, beginning with a logarithmic PW distribution and adjusting it to achieve satisfactory perceptual separation. To check for inter-unit variability, we measured the output of all the tactors used with a piezoelectric accelerometer (PCB Inc.) aligned normal to the eccentric-mass rotation plane and sampled at 5 kHz, with the tactor restrained by a magnetic mount screwed onto the clamped accelerometer. A Welch power spectrum analysis on 20 s samples indicated frequency varied by 16% (mean = 190 Hz, SD = 30) and power by 5% (mean = 59.0 dB/Hz, SD = 2.87). We addressed this variance by placing tactors on body sites with a different random layout for each participant.

Figure 3.2: VPM2 eccentric-mass tactor.
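A verification step of this kind is easy to script. The sketch below shows one plausible way to extract a tactor's dominant frequency and its power from such a recording via a Welch power spectrum; it is an assumed workflow rather than the scripts used in this work, and the file name and segment length are hypothetical.

```python
# Minimal sketch (assumed workflow, not the thesis's script): estimating a
# tactor's dominant vibration frequency from a 20 s accelerometer recording
# sampled at 5 kHz, using a Welch power spectral density estimate.
import numpy as np
from scipy.signal import welch

fs = 5000.0                            # sampling rate (Hz)
signal = np.load("tactor_sample.npy")  # hypothetical saved 20 s recording

freqs, psd = welch(signal, fs=fs, nperseg=4096)
peak_hz = freqs[np.argmax(psd)]        # dominant vibration frequency
peak_db = 10 * np.log10(psd.max())     # its power in dB/Hz
print(f"peak: {peak_hz:.0f} Hz, {peak_db:.1f} dB/Hz")
```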
3.3.2 Movement Setup and Task

The sitting and walking conditions were chosen as typical and distinctive movement states in wearable contexts. For the former, participants sat in a tall chair for a consistent screen view. When walking, participants chose a comfortable treadmill pace that they could maintain for twenty minutes. The mean speed chosen was 2.4 km/h (SD = 0.5).

3.3.3 Visual Workload Setup and Task

During trials with visual workload, participants sat and walked approximately two meters from a simple geometric scene on a 3 (H) × 4 (W) meter display (Figure 3.1). The scene showed twenty-five red, green, blue, yellow, and pink blocks in equal quantities, each numbered between 1-5, bouncing slowly around a three-dimensional room. Participants were asked to count the times a single highlighted block hit any walls in the room, including the invisible wall represented by the screen. This task was chosen as a controllable, continuous visual workload characteristic of a pedestrian's everyday attention and memory tasks, but not so distracting that participants were liable to stumble. The collision count was meant to reproduce the mental activity of a pedestrian keeping track of nearby cars and pedestrians. The other blocks simulated local objects that are distracting but need not be tracked.

3.3.4 Metrics and Analysis Technique

Our primary metric to assess site sensitivity as a function of condition was the number of detected vs. missed vibrations, or Detection Rate (DR); we also used Reaction Time (RT) as a secondary indicator. Because detection data are distributed binomially, we statistically analyzed detections with a Generalized Linear Mixed Model (GLMM) using R and the glmmML package [16]. We refined the model with backwards selection, beginning with many terms and then iteratively removing the one with the largest p-value until all remaining terms had significant p-values (p < 0.05). Only main effects and significant interactions are reported.

The presence of missed-stimuli trials prevented a normal RT distribution and the use of ANOVA. We replaced the censored data points (missed trials) of RT with a "sufficiently" large value and used a Kruskal-Wallis analysis, which uses metric rank rather than value to compute a test statistic. The value chosen for censored data points then need only be larger than the maximum; we set RTm = 3500 ms for "miss" trials. RTm renders RT means meaningless for conditions with many miss trials, which are common at low amplitudes for some body sites. Therefore, in graphical comparisons of RT (but not DR) between conditions, we focus on high-intensity stimuli with their higher detection rates. We also ran the Kruskal-Wallis test on the high-intensity subset, which was detected at ≥98% for all factors except intensity, and on the "all detected" subset for intensity, to confirm that the results are not simply due to the missed data points.
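Because the Kruskal-Wallis test operates on ranks, this censoring trick is simple to reproduce. The sketch below illustrates it in Python (the analysis in this work was done in R); the data arrays are hypothetical placeholders, and only the censoring value of 3500 ms comes from the text above.

```python
# Minimal sketch (not the thesis's R code): Kruskal-Wallis on RT with missed
# trials censored at a value larger than any observed RT. Since the test uses
# ranks, any value above the maximum works equally well.
import numpy as np
from scipy.stats import kruskal

RT_MISS = 3500.0  # ms; censoring value for missed-stimulus trials

def censor(rts):
    """Replace NaN (a missed trial) with the censoring value."""
    rts = np.asarray(rts, dtype=float)
    return np.where(np.isnan(rts), RT_MISS, rts)

# Hypothetical RT samples (ms) for two movement conditions; NaN = miss.
sitting = censor([412, 390, np.nan, 505, 488])
walking = censor([610, np.nan, np.nan, 702, 655])

h_stat, p_value = kruskal(sitting, walking)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```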
3.4 Experiment 1: Random Site With Visual Load

In our first pass (Experiment 1), we tested potentially relevant body sites at five amplitudes while addressing the initial experimental factors of visual load and movement. We balanced gender to allow consideration of its impact, which could arise through, for example, gender-linked differences in body fat composition. Specifically, we examined the following hypotheses:

H1 Intensity increases DR and decreases RT.
H2 Body sites will differ in terms of DR and RT.
H3 Movement decreases DR and increases RT; and it affects different body sites to different degrees.
H4 Visual workload decreases DR and increases RT.
H5 Gender differences in DR and RT exist.

3.4.1 Design

Experiment size imposed a limit of 15 tactor sites. We chose seven sites corresponding to common or potential wearable locations, and mirrored these to address possible response asymmetry (Figure 3.3 and Table 3.1). 500 ms vibrations were presented in randomized order across the body sites. Per condition, each intensity was displayed twice at each right and left site, or four times at the spine.

Table 3.1: Body sites used in Experiment 1. '*' indicates sites used in Experiment 2.

Body site    Number   Location
Foot*        0, 1     top surface of the foot, e.g., tongue of a shoe
Thigh*       2, 3     outer thigh, halfway between knee and hip joint, e.g., hem of shorts on the sides
Wrist*       4, 5     posterior, between small bones, e.g., watch face
Stomach      6, 7     halfway between navel and hip bone, e.g., belt or waist band
Upper arm*   8, 9     halfway between shoulder and elbow on the sides, e.g., arm band
Chest        10, 11   below collar bone, e.g., necklace or shirt collar
Spine*       12       four centimeters below C7 vertebra

Half the male and half the female participants first sat in a chair and subsequently walked on a treadmill, while the other half walked first and then sat in a chair. During half of the walking and half of the sitting trials, we asked participants to direct their attention to the visual scene, which was turned off during the other trials.

Using a full-factorial design, we ran 5 × 4 × 7 × 2 × 2 (intensity × repetitions × site × movement × visual workload) trials, for a total of 560 trials per participant.

3.4.2 Procedure

After signing consent forms, participants changed into sports clothing. We attached tactors (which vibrated normal to the skin without slip or shear) directly to the skin at defined locations with Lightplast Pro sports tape. Except for the feet (tactors covered with socks but no shoes), no clothing covered the tactors. The interval between tactor vibrations was randomized to between four and six seconds, with the interval length doubled on random trials (odds of 1 to 7) for a more arrhythmic pattern. We asked participants to press the right button on a modified computer mouse when they detected a vibration. We recorded RT up to a cutoff of 3500 ms, noting missed responses. No feedback was given to responses.

Figure 3.3: Body sites used in Experiments 1 and 2; sites 6, 7, 10, 11 were omitted in Experiment 2.

Training conducted before experiment trials:
1. Experience maximal vibrations on each site.
2. Experience each of the five intensities on the wrist.
3. Respond to ten maximal vibrations at random sites.
4. Count ten wall collisions in the visual task.
5. Respond to ten maximal vibrations in four conditions: Sit+No Workload, Sit+Workload, Walk+No Workload, Walk+Workload.

Experiment: Respond to 140 vibrations (location × intensity × repetitions) in each of four conditions, order counterbalanced by participant.

Participants took a short break after each condition and a longer break before switching movement state. After training, between conditions, and at experiment end, tactor function was verified. Participants answered online survey questions between the sitting and walking conditions and at experiment end. During trials, participants wore noise-canceling headphones. Sessions lasted 90-110 minutes.

3.4.3 Results

For this experiment, 16 participants (8 male) volunteered. These were distributed in age as 18-25 (n = 12), 31-40 (n = 2), and 40-60 (n = 2); in height as tall (n = 8), average (n = 3), and short (n = 5); and in body type as ecto (n = 6), meso (n = 7), and endomorph (n = 3). In the prior year, participant use of portable devices with tactile feedback was distributed as daily (n = 10), 2-3 times/week (n = 4), and <1 time/week (n = 2).
Participants used a treadmill ≤1 time/month (n = 14) and 1 time/week (n = 2). All reported themselves right-handed.

Figure 3.4: Mean Detection Rate (left) and Reaction Time (right) per body location in Experiment 1.

Detected/Missed Stimuli (DR)

Intensity initially had a nearly linear effect on the estimated odds ratio of DR in our GLMM model. Therefore, we treated it as a continuous variable to increase model readability, causing only slight differences in estimates and corresponding p-values for other covariates. Finding no differences between sides, we merged left and right body sites except for the spine. Feet are the baseline for sites, male for gender, sitting for movement, no workload for workload, and first trial for trial number.

In the GLMM results (Table 3.2), the p-value indicates effect significance (p < 0.05). For a significant p, a negative coef decreases and a positive coef increases the odds of detection, i.e., the quotient of the probability of detecting (p) and missing (1 − p) a signal: p/(1 − p). The odds ratio of a particular factor (e.g., wrist in Table 3.2) is the ratio of the odds of detection under that condition (e.g., wrist) to the odds of detection under the reference condition (e.g., foot). There were very few false positives (1.2%), so we neglected their effect in the analysis.

Figure 3.5: Mean Detection Rate (left) and Reaction Time (right) for different intensities in Experiment 1.

Main effects: As we can see in Table 3.2 and Figure 3.4, all body sites except thighs are significantly different from feet. In terms of detecting vibrotactile signals, thighs are as bad as feet; stomach, chest, and arms are slightly better; wrists and spine are best. Walking greatly decreases the odds of detection. Intensity has a significant effect (Figure 3.5), as expected. Gender and the presence of the visual task do not have a significant effect on detection of vibrations. Trial number, which accounts for the opposing effects of learning and fatigue, is marginally significant (p = 0.048). Since its coefficient is very small (−5.7E−4), we computed the odds ratio of detecting a vibration after 100 trials as exp(coef × 100) = 0.94; i.e., the odds of detecting a vibration decrease by 6% after 100 trials, suggesting minimal practical impact.

Interactions: Several factors interact with body sites. By Gender: females detect significantly more vibrations on their thighs. By Intensity: higher intensity increases detection on the spine, arms, wrists, and stomach less than at other sites, with the spine least sensitive.

Figure 3.6: Mean Detection Rate per body location and condition in Experiment 1.

For all sites except spine and stomach (e.g., Wrists:Walking), movement decreases DR, but it affects the chest, arms, and wrists least (Figures 3.6 and 3.8). The positive coefficients for the interactions between Movement and these body sites do not compensate for the negative main-effect movement coefficient.
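As a worked illustration of how such logit-model coefficients are read, the snippet below reproduces the odds-ratio arithmetic used above (the exponential of the coefficient times the covariate change). It is an illustration only, not the thesis's analysis code, which was written in R.

```python
# Minimal sketch: converting a logistic (logit-link) coefficient into an
# odds ratio, reproducing the trial-number example in the text above.
import math

def odds_ratio(coef: float, delta: float = 1.0) -> float:
    """Multiplicative change in detection odds p/(1-p) when the covariate
    increases by `delta` units."""
    return math.exp(coef * delta)

print(odds_ratio(-5.7e-4, delta=100))  # ~0.94: odds drop ~6% after 100 trials
print(odds_ratio(2.57))                # wrist vs. foot: ~13x the odds (Table 3.2)
```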
Reaction Time (RT)

We ran two sets of Kruskal-Wallis tests for RT: one on the entire dataset, using 3500 ms for missed vibrations, and one on a data subset containing only high-intensity trials, where most of the vibrations (99.2%) were detected. Both sets show that Intensity, Site, Movement, and Task have a significant effect on RT, but gender and trial ID do not (Table 3.3). Intense vibrations are detected faster (Figure 3.5), and movement and visual workload increase RT (Figure 3.7).

We also ran the Kruskal-Wallis test on Intensity for only the trials that were detected (excluding misses), finding a significant effect of Intensity on RT (p < 0.05).

Figure 3.7: Mean Reaction Time of high-intensity stimuli per body location and condition in Experiment 1.

Subjective Results

Users preferred wrists and arms the most, feet and thighs the least. When we asked which site they would choose for notifications, directional guidance, and cues during exercise, they chose wrists, arms, and spine.

Figure 3.8: Detection Rate (DR) per body location on the body map; pink bars show DR during sitting conditions and blue bars show DR during walking conditions.

Table 3.2: Generalized Linear Mixed Model (GLMM) of Detection Rate (DR) in Experiment 1. Pr smaller than 0.05 indicates that DR is significantly different from the reference for that factor (e.g., from feet, for body sites). '*' indicates statistical significance.

                     coef    se(coef)   z        Pr(>|z|)   O.R.
(Intercept)*        -4.02    0.31      -12.81    <0.001     0.02
Female              -0.40    0.25      -1.60     0.11       0.67
Wrists*              2.57    0.36       7.07     <0.001     13.11
Stomach*             1.28    0.36       3.54     <0.001     3.60
Thighs              -0.36    0.41      -0.89     0.37       0.69
Chest*               1.07    0.38       2.82     <0.001     2.90
Arms*                1.62    0.36       4.49     <0.001     5.06
Spine*               2.28    0.35       6.43     <0.001     9.73
Intensity*           2.02    0.11       18.73    <0.001     7.55
Walking*            -1.95    0.20      -9.56     <0.001     0.14
Workload            -0.06    0.07      -0.85     0.40       0.95
TrialID*             0.00    0.00      -2.82     <0.001     1.00
Female:Wrists        0.23    0.25       0.91     0.36       1.26
Female:Stomach       0.18    0.25       0.75     0.46       1.20
Female:Thighs*       0.62    0.25       2.47     0.01       1.87
Female:Chest         0.37    0.25       1.47     0.14       1.45
Female:Arms          0.11    0.25       0.43     0.67       1.11
Female:Spine        -0.21    0.25      -0.82     0.41       0.81
Wrists:Walking*      0.58    0.28       2.07     0.04       1.78
Stomach:Walking     -0.26    0.28      -0.93     0.35       0.77
Thighs:Walking*     -1.33    0.31      -4.25     <0.001     0.26
Chest:Walking*       0.86    0.28       3.14     <0.001     2.37
Arms:Walking*        0.66    0.27       2.42     0.02       1.93
Spine:Walking        0.16    0.28       0.59     0.56       1.18
Wrists:Intensity*   -0.35    0.15      -2.30     0.02       0.70
Stomach:Intensity*  -0.38    0.14      -2.72     0.01       0.69
Thighs:Intensity    -0.11    0.15      -0.72     0.47       0.90
Chest:Intensity     -0.10    0.15      -0.68     0.50       0.90
Arms:Intensity*     -0.35    0.14      -2.42     0.02       0.71
Spine:Intensity*    -0.41    0.14      -2.86     <0.001     0.67
Table 3.3: Results of Kruskal-Wallis tests on Reaction Time (RT), Experiment 1. '*' indicates statistical significance.

Full Set
             chi-squared   df    p-value
BodySite*    434.3         12    <0.001
Task*        24.1          1     <0.001
Movement*    422.7         1     <0.001
Gender       11.3          1     0.596
Intensity*   4517.2        4     <0.001
TrialID      509.4         559   0.162

Subset: High Intensity
             chi-squared   df    p-value
BodySite*    130.9         12    <0.001
Task*        48.4          1     <0.001
Movement*    62.6          1     <0.001
Gender       1.3           1     0.249
TrialID      487           528   0.899

Subset: All Detected
             chi-squared   df    p-value
Intensity*   4517          4     <0.001

3.5 Experiment 2: Random vs. Expected Site

In Experiment 1, participants did not know which of the 13 sites would receive the next vibration, whereas in actual wearable use, usually only one site would be used. We theorized that there could be a performance cost associated with scanning multiple body sites, and therefore performed a second experiment (Experiment 2) where site expectation mode is controlled. To maintain experiment size, we also removed the two least likely body-site pairs (stomach and chest), and the visual-task condition, because it did not have a significant effect on DR, our primary metric. All other aspects were identical to Experiment 1. In addition to verifying H1-H3 and H5 from Experiment 1, we examined the following Experiment 2 hypotheses:

H6 Expectation of site increases DR and decreases RT.
H7 Expectation reduces the effect of movement.
H8 Expectation impacts different genders differently.

3.5.1 Design

In Experiment 2, we used five paired body sites (Table 3.1). Half the male and half the female participants first sat in a chair and subsequently walked on a treadmill, while the other half walked on a treadmill first and then sat in a chair. During half of the walking and half of the sitting trials, the vibrations were displayed in 10-trial clusters (5 intensities × 2 repetitions) at each body site, and participants were informed of the site (Expectation condition). During the other half, the vibrations were randomly displayed on any site and participants were not informed of the location.

After signing consent forms, participants completed the following training steps (1-3 are the same as in Experiment 1):

Training 1:
1. Experience maximal vibration on each site.
2. Experience each of the five intensities on the wrist.
3. Respond to ten maximal vibrations at random sites.

Training 2: Respond to 4 counterbalanced conditions:
4. Sitting+Expectation: sets of four vibrations, random intensity, on three randomly selected sites, sitting.
5. Sitting+No Expectation: twelve vibrations of random intensity on randomly selected sites, sitting.
6. Walking+Expectation: sets of four vibrations, random intensity, on three randomly selected sites, walking.
7. Walking+No Expectation: twelve vibrations of random intensity on randomly selected sites, walking.

Experiment: Respond to 100 vibrations (location × intensity × repetitions) in each of four conditions, order counterbalanced by participant. Participants took a short break after training and between the second and third conditions, and filled in questionnaires at the beginning (profile) and end (preferences) of the experiment. Each person was compensated $15 for participation. Total experiment time was 90 minutes.

3.5.2 Results

For this experiment, 16 participants (8 male) volunteered; none were from Experiment 1. These were distributed in age as 22-24 (n = 4), 25-27 (n = 6), and 28-30 (n = 6); in height as tall (n = 4), average (n = 10), and short (n = 2); and in body type as ecto (n = 5), meso (n = 9), and endomorph (n = 2).
In the prior year, participant use of portable devices with tactile feedback was distributed as daily (n = 9), 2-3 times/week (n = 2), 1 time/week (n = 1), and <1 time/month (n = 4); participants used a treadmill <1 time/year (n = 3), ≤1 time/month (n = 9), 2-3 times/month (n = 1), and 1 time/week (n = 3). 4/16 reported themselves left-handed. As in Experiment 1, the false-positive effect was negligible (0.8%).

Detected/Missed Stimuli (DR)

Our GLMM analysis was conducted as for Experiment 1. With expectation is the reference for the new expectation factor.

Figure 3.9: Mean Detection Rate (left) and Reaction Time (right) per body location in Experiment 2.

Figure 3.10: Mean Detection Rate per body location and condition in Experiment 2.

Figure 3.11: Mean Reaction Time of high-intensity stimuli per body location and condition in Experiment 2.

Main effects: All body sites are significantly different from feet (Table 3.4, Figure 3.9), with wrists and spine best and feet worst at detecting vibrations. Walking greatly decreases detection odds. As expected, intensity is significant. Gender has a significant effect on the odds of detecting a vibration (females seem to have a higher DR), but it is cancelled out by the interaction effects (see below). TrialID (time into the experiment) and Expectation have no significant effect on the odds of detecting a vibration. An interaction between Intensity and spine reduces the main effect of Intensity, suggesting Intensity plays a less important role for the spine than for other body sites. Movement decreases detection odds at all sites (Figure 3.10): wrists and spine least, thighs and feet most. Again, the positive interaction coefficients for Movement and sites do not compensate for the negative main Movement coefficient. Movement:Intensity reduces the main effect of Movement. The interaction effect between Gender and body sites indicates that females have higher odds of detection only on their feet.

Reaction Time (RT)

As with Experiment 1, we ran two sets of Kruskal-Wallis tests: one on the entire data set, using 3500 ms for missed vibrations, and another on the subset of high-intensity trials, where most of the vibrations (98.4%) were detected (Table 3.5). Both tests show that Intensity, Movement, Expectation, and Gender have a significant effect on RT, but Trial ID does not (Figure 3.11). More intense vibrations are detected faster, movement and lack of expectation increase RT, and males are slightly faster to respond than females. A Kruskal-Wallis test on Intensity for the trials where vibrations were detected showed a significant effect of Intensity on RT (Table 3.5, Subset: All detected).

Subjective Results

Experiment 2 participants preferred spine and wrists the most, feet and thighs the least (relative to Experiment 1, spine replaced arms as a preferred site).
For notifications and directional guidance they chose wrists, and for exercise cues they chose the spine.

3.6 Summary and Discussion

We begin our discussion with an examination of our hypotheses, then further reflect on their implications.

H1 - Vibration Intensity: Both Experiment 1 and Experiment 2 showed that increasing vibration intensity strongly increases detection odds and reduces reaction time, supporting H1. However, the impact on DR varies across the body. In Experiment 1, DR increases with intensity for all body sites but less so for the spine, wrists, arms, and stomach; in Experiment 2, less so for the spine.

H2 - Body Sites: Experiment 1 and Experiment 2 consistently show that the wrists and spine are most sensitive in detecting vibrotactile signals, whereas the feet and thighs are least sensitive. As described for H1, body sites are differentially sensitive to intensity in terms of absolute detection. However, Experiment 1 and Experiment 2 also demonstrate that response time for high-intensity signals (≥98% detection) is similar across the body. Thus, H2 is confirmed for detection rate, but not for response time.

H3 - Movement: Walking significantly reduces the odds of detecting a vibration, and increases reaction time even to high-intensity vibrations. Both experiments further confirmed that the DR of thighs and feet is most affected by walking. H3 is thus confirmed.

Table 3.4: GLMM of DR in Experiment 2. Pr smaller than 0.05 indicates that DR is significantly different from the reference for that factor. coef greater than zero indicates increased odds of detecting a vibration. '*' indicates statistical significance.

                      coef    se(coef)   z        Pr(>|z|)   O.R.
(Intercept)*         -1.36    0.35      -3.85     <0.001     0.26
Female*               0.89    0.34       2.63     0.009      2.42
Thighs*               0.65    0.28       2.32     0.021      1.92
Wrists*               1.84    0.29       6.29     <0.001     5.32
Arms*                 0.76    0.29       2.61     0.009      2.14
Spine*                1.84    0.28       6.66     <0.001     6.28
Intensity*            1.70    0.11       15.72    <0.001     5.50
Walking*             -2.91    0.25      -11.48    <0.001     0.05
Randomized           -0.01    0.19      -0.05     0.960      0.99
TrialID               0.00    0.00      -1.78     0.075      1.00
Female:Thighs*       -1.02    0.25      -4.14     <0.001     0.36
Female:Wrists*       -1.39    0.26      -5.26     <0.001     0.25
Female:Arms*         -0.62    0.26      -2.40     0.016      0.54
Female:Spine*        -1.45    0.25      -5.91     <0.001     0.23
Female:Randomized*   -0.42    0.16      -2.63     0.009      0.65
Thighs:Intensity     -0.17    0.14      -1.27     0.204      0.84
Wrists:Intensity     -0.08    0.16      -0.47     0.638      0.93
Arms:Intensity        0.24    0.16       1.51     0.132      1.27
Spine:Intensity*     -0.39    0.13      -3.00     0.003      0.68
Thighs:Walking*      -1.26    0.32      -3.99     <0.001     0.28
Wrists:Walking*       1.84    0.30       6.20     <0.001     6.30
Arms:Walking*         1.16    0.29       3.96     <0.001     3.18
Spine:Walking*        1.54    0.27       5.70     <0.001     4.68
Thighs:Randomized*   -0.09    0.24      -0.38     <0.001     0.91
Wrists:Randomized*    0.87    0.26       3.31     0.001      2.38
Arms:Randomized       0.16    0.26       0.63     0.531      1.17
Spine:Randomized     -0.31    0.24      -1.27     0.204      0.73
Intensity:Walking*    0.28    0.09       3.17     0.002      1.33

Table 3.5: Results of Kruskal-Wallis tests on RT, Experiment 2. '*' indicates statistical significance.

Full Set
                 chi-squared   df    p-value
BodySite*        284.450       8     <0.001
Randomization*   10.320        1     0.001
Movement*        402.495       1     <0.001
Gender*          31.916        1     <0.001
Intensity*       3116.700      4     <0.001
TrialID          358.017       399   0.931

Subset: High Intensity
                 chi-squared   df    p-value
BodySite*        149.65        8     <0.001
Randomization*   9.3279        1     0.002
Movement*        72.8988       1     <0.001
Gender*          36.237        1     <0.001
TrialID          380.7202      384   0.538

Subset: All detected
                 chi-squared   df    p-value
Intensity*       3116.7        4     <0.001
We note that while the thighs and feet moved the most during walking in this experiment, participants also swung their arms. Walking was chosen as a representative movement in mobile contexts. Further work is required to establish more generalizable patterns of body sensitivity to different types of movement, but the present result is highly relevant to designing for mobile uses.

H4 - Visual Workload: Our visual workload task did not have any apparent effect on vibration detection. It did significantly impair reaction time, which increased even for the most intense vibrations. H4 is thus partially rejected and partially confirmed. There is no evidence in our results of body-site specificity in the impact of the workload task.

Wickens proposes four qualities to describe workload: mental stage, modality, channel, and processing code [171]. Stage can be perceptual or responsive. Modality is typically visual or auditory, and it is better to spread work across modalities than to time-share a single modality. Visual workload can be focal or ambient without competition. Codes are analogue/spatial or categorical/symbolic. Typically, people perform simultaneous manual and focal tasks well. Thus, our visual task (focal) and vibration response modality (manual) do not compete heavily for the same resources.

Ferris et al. presented vibration patterns from back-mounted tactors to participants in a driving simulation, with categorical (TC) or spatial (TS) visual tasks [38]. Their visual task had a significant effect on RT but, similarly to our results, the overall effect of task on accuracy (detection of the type of visual stimuli) did not reach significance; in particular, while their TC task impacted accuracy, their TS task (which seems more similar to our visual task) did not.

We did not choose a harder visual task, or one which more specifically interfered with detecting and responding to signals, because we aimed to simulate a typical mobile context, i.e., watching for other pedestrians and cars over a wide field of view. However, there will be situations when more severe competition does occur, even if not endemic.

H6 and H7 - Expectation: Expectation had a significant effect on detection only at the wrists where, surprisingly, it reduced detection odds. One possible explanation is that in the no-expectation mode, where in recent trials a perceptually weaker stimulus had been felt elsewhere, the wrist percept was relatively more salient. Another possibility is that sensory adaptation acted as a side effect of sending a number of signals to the wrists. Because the wrists detect more vibrations than other sites, the adaptation effect on the wrists should be larger than elsewhere. However, the positive effect of expectation (which cancels adaptation on other body sites) is not large enough at the wrists to compensate for adaptation. Finally, there is a one in 20 chance that this result is simply due to chance; our analysis employed a 95% confidence level.

Expectation significantly reduced response time: scanning the whole body when the stimulus site is unknown slows the process of vibration detection and response. Thus, H6 is confirmed with respect to reaction time. Expectation did not have a significant effect on detection rate, and (compared to movement) it had a very small effect on reaction time. Therefore, expectation alone cannot cancel the effect of movement, and H7 was not confirmed.

H5 - Gender: In Experiment 1, males were better than females at detecting vibrations on the chest and stomach, the sites omitted from Experiment 2. For the remaining sites, males always detected vibrations on the wrists and spine better than females.
However, Experiment 1 and Experiment 2 disagree as to the body sites where females were best: thighs in Experiment 1, arms and feet in Experiment 2. In general, females' reaction times were slightly longer than males', with the exception of the feet, where females were faster. Thus, overall, while H5 is confirmed (gender does have some impact), the difference is not consistent or large.

Subjective Results

On average, Experiment 1 participants preferred vibrations on their wrists most, arms second; Experiment 2 participants preferred spine, then wrist. Grouping the 32 participants of both experiments, there is a tie for highest preference between spine and wrists. Both groups disliked vibrations on their feet by far the most; the thigh is second least preferred.

Both groups chose wrists for notification applications, arms and wrists for directional guidance, and spine as the most appropriate spot for vibrotactile signals during exercise.

3.7 Conclusion and Future Work

We ran two experiments to study the differences in sensitivity of several body sites to vibrotactile signals. We narrowed down the number of body sites to those most practicable for wearable haptics and mobile applications: wrists, upper arms, outer thighs, feet, chest, stomach, and spine. Most of these locations have been suggested or used in past wearable tactile systems such as belts, back arrays, wrist and arm bands, tactile shoes, and, most commonly, cellphones in pockets (on the thighs).

We compared these body sites under conditions of presence or absence of a visual workload, sitting in a chair or walking on a treadmill, and with or without knowledge of the location of the next stimulus.
We note that these heuristics have particular relevance for applications with either of two attributes: intolerance to missed signals, and/or a requirement for fast responses. The first is typified by tasks that rely on background processes, such as notification, or those where signals carry notable content, e.g., haptic icons [100], where inattention could distort the signal's meaning. The second includes gaming and time-and-safety-critical guidance systems. Others need both, e.g., driving systems that use both guidance and notifications.

Location, Location, Location: Wrists and spine are generally best for detecting vibrations, and are also the most preferred, with arms next in line. Feet and thighs are poor candidates for vibrotactile displays, exhibiting the worst detection performance of those we tested and ranking lowest in user esteem. However, for reaction time, location does not matter.

Stronger Vibes Are Felt Faster: Unsurprisingly, increasing intensity increases detection rate and reduces reaction time, particularly on the lower-body sites tested here. This result does not imply that strong vibrations will always be preferred or appropriate; but when a notification must get through, intensity increases salience.

Don't Take Movement For Granted: Movement can decrease detection rate and increase response time. Walking (the movement we tested) affects lower-body sites the most. For applications that involve considerable movement, other factors such as intensity and body location need to be adjusted to compensate.

Visual Workload Slows Users Down: Although workload of the type we employed (visual search) does not apparently impact vibration detection rate, it does increase response time. Therefore, expect some lag and irregularity in user response to vibrotactile displays in visually demanding situations.

Users React Slower to Unexpected Vibrations: Multiple-site tactile interfaces mean surprises for the user; single-site interfaces mean the user always knows where to “watch”. If reaction time is critical, designers should be cautious in proliferating display sites across the body. If only detection matters and time is not critical, the number of sites does not matter, and the redundancy may in fact prove more robust to local interference.

Gender Differences Do Not Change Our Suggestions: Men detect vibrations on their wrists and spine a little better than women. Women detect vibrations somewhat better on thighs and arms. However, wrists and spine are still the best choices for both genders, and the differences are not large.

3.7.2 Future Work

We embarked on this study because we required guidelines of this sort to reduce design errors and shorten the iterative design process for our wearable haptic systems. These results solve our immediate needs, and the body sites investigated are a good sample of those that might ever be successfully used in wearable contexts.

However, other factors deserve broader investigation. Of greatest importance will be to encompass a broader set of workload tasks and movement types beyond visual search and walking, and to incorporate the auditory and vibrotactile noise of typical environments such as moving vehicles.

3.8 Acknowledgment

This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC).
User data were collected under the University of British Columbia's Research Ethics Board approval H01-80470.

Chapter 4

Cadence Measurement

Everywhere is walking distance if you have the time. — Steven Wright

We[1] present an algorithm that analyzes walking cadence (momentary step frequency) via frequency-domain analysis of accelerometer signals available in common smartphones, and report its accuracy relative to published state-of-the-art algorithms based on data gathered in a controlled user study. We show that our algorithm, Robust Realtime Algorithm for Cadence Estimation (RRACE), is more accurate in all conditions, and is also robust to speed change and largely insensitive to orientation, location on person, and user differences.

RRACE's performance is suitable for interactive mobile applications: it runs in realtime (∼2 s latency), requires no tuning or a priori information, uses an extensible architecture, and can be optimized for the intended application. In addition, we provide an implementation that can be easily deployed on common smartphone platforms. Power consumption is measured and compared to that of current commercially available mobile apps.

This chapter appears with minimal modifications in [79]:

• I. Karuei, O. S. Schneider, B. Stern, M. Chuang, and K. E. MacLean. RRACE: Robust Realtime Algorithm for Cadence Estimation. Pervasive and Mobile Computing, (0):52–66, 2014. ISSN 1574-1192

[1] For a list of contributors and their level of involvement, please refer to the Preface.

We also describe a novel experiment design and analysis for verification of RRACE's performance under different conditions, executed outdoors to capture normal walking. The resulting extensive dataset allows direct comparison (conditions fully matched) of RRACE variants with a published time-based algorithm.

We have made this verification design and dataset publicly available, so they can be re-used for gait (general attributes of walking movement) and cadence measurement studies, or for gait and cadence algorithm verification.

4.1 Introduction

Contemporary smartphones carry a wealth of sensors which can be used to estimate aspects of a user's context and activities that are of value in a multitude of applications. One notable example, walking cadence (“the beat, time, or measure of rhythmical motion or activity” – Merriam-Webster; used hereafter to refer to step frequency as estimated in realtime), has broad utility for applications that support fitness, rehabilitation, gaming, navigation, and context awareness. But available cadence detection methods require unrealistically specific placement and sensor calibration to achieve viable performance. There is a need for realtime cadence detection that is robust to carrying method.

Current realtime mobile cadence detection methods are largely based in the time domain, detecting the timing of individual footfalls, which are themselves estimated when an accelerometer signal exceeds a threshold. This threshold dependency is not ideal from a usability standpoint because the threshold is specific to many parameters – for example, Melanson et al. show that threshold-based pedometer accuracy changes dramatically with age, weight, and height [106].
Detection accuracy consequently necessitates device (or additional sensor) placement in a location known to the algorithm, on one of a small number of body sites with highly regular movement – e.g., the pocket, on the hip, or the leg [43] – at a specific orientation, and with user-specific calibration to adjust for the weight, height, and body shape of the user. This invokes a harsh tradeoff between reliability and usability [47].

Frequency methods for cadence detection have received little attention to date, yet in contrast to acceleration thresholds, there is substantial qualitative commonality in frequency profiles as a function of position in various body locations [93]. Because the frequency and wavelength of the acceleration depend on the time interval between footfalls, while the wave's shape and amplitude depend on the individual and the location on the body, theoretically the major frequency component of the acceleration should be more robust to the individual-, location- and model-specific amplitude concerns which make time-based thresholds so problematic.

In this chapter, we describe an algorithm, Robust Realtime Algorithm for Cadence Estimation (RRACE), to analyze cadence through a frequency-domain analysis of movement, and report its accuracy based on data gathered in a user study. RRACE's basic structure is a computationally efficient moving window that is subjected to a spectral analysis followed by an analysis of frequency peaks. Empirically, we found that performance peaks at a window length of 4 s, producing about 2 s latency including computational delay. This algorithm is extendable, allowing for improvements with advanced filtering or harmonic analysis, and can be used to provide spectral information for classification of gait (general attributes of walking movement) and other gait analysis applications.

While others have reported using frequency-based approaches [93, 178], our approach's exceptional robustness is due in part to its ability to utilize non-uniformly sampled data (the most readily available) and in part to its reliance on acceleration vector magnitude (the component unaffected by orientation) to determine cadence without knowledge of the placement of the device on the user's body.

Our contributions are (a) a cadence detection algorithm that can work across many body locations, is robust to change of orientation, and does not require calibration; (b) an experimental setup for assessing the accuracy of a gait detection method across many body locations, outdoors and under normal, unconstrained walking conditions; (c) performance data examining the effects of body location and speed on the algorithms we tested; (d) a thorough comparison between our frequency-based gait detection method and the highest-performing published time-based acceleration threshold method, hereafter referred to as the time-based method; and (e) an implementation that can be easily deployed on common smartphone platforms.

After discussing related work, we describe the RRACE algorithm and present our pilot and main validation experiment, with RRACE running in realtime on a smartphone.
We then compare RRACE to the time-based method, and conclude with a discussion of our findings and plans for future work.

4.2 Related Work

To ground the presentation of our algorithm and evaluation, we first discuss realtime gait and cadence detection and its applications, then examine the state-of-the-art in time-domain and frequency-domain methods of cadence detection.

4.2.1 What is Realtime Cadence Detection Good For?

Gait and cadence information is relevant to many current and future mobile applications. Often attributed to Thomas Jefferson [172], the modern pedometer has long been a fitness tool for dedicated walkers and runners. Today's ever-expanding lineup of smartphone app versions further supports logging, mapping, calorie-burning estimates, and social media [46, 115, 164].

Kavanagh and Menz point out the popularity of accelerometer-based systems for human gait measurement and give a broad overview of accelerometer-based gait measurement systems, with suggestions on optimal use conditions, reliability, and applications [80].

A number of fitness applications and products focus on automaticity, personalization, and direct feedback to increase motivation. As early as 2008, UbiFit used persuasive technology in its visual displays (using the metaphor of a garden's healthiness) of activity and goal achievement [25]. The Nike+iPod Nano, developed by Nike Inc. (Beaverton, OR, USA), measures distance, speed, and energy expenditure and can be programmed to play a motivational song when necessary [113]. The Endomondo app, developed by Endomondo (Copenhagen, Denmark) and claimed to be the most highly-reviewed activity monitoring Android app, uses a GPS signal to track speed, distance, duration, and calories burnt for running, cycling, and other sports [33]. Runtastic Pedometer by Runtastic GmbH (Linz, Austria), another Android app, uses accelerometer data to count steps and measure calories burnt [137].

MPTrain (later extended to become TripleBeat) goes further by selecting and playing music with specific features to support pace goals like speeding up and slowing down [31, 115]. Garmin (Olathe, KS, USA) produces the Forerunner 910XT, a multi-sport watch that can be used for running, biking, and swimming [49]. It can detect walking steps and swimming and cycling strokes using its 3-axis accelerometers, measure elevation with a barometric altimeter, and be paired with a heart rate monitor; as with many such tools, users can plan their workouts and analyze their activity through a number of metrics. The burgeoning area of exercise games could benefit from realtime knowledge as well; previous work has linked game performance to step count [95] and overall physical activity [48].

All-day wearable activity monitoring currently includes successful commercial products like the Nike+ Fuelband [112], developed by Nike Inc. (Beaverton, OR, USA), and the FitBit [41], by FitBit Inc. (San Francisco, CA, USA), and mobile apps such as Endomondo [33] and Runtastic Pedometer [137]. These products, using 3-axis accelerometers along with various extras like GPS and ambient light sensors, aim to track and support goal achievement including steps taken, calories burned, and hours of sleep, providing a global view of activity levels as distributed over the day, week, and longer periods of time.
Based on our informal measurements, the Fuelband (worn on the wrist) appears to be less precise in measuring steps, while we saw FitBit's error remain within a 5% bound and Runtastic Pedometer's within a 10% bound when counting steps.

These devices are representative of the current market selection, which is rapidly moving. Their popularity and supported price points (presently $100-200 USD) highlight growing consumer interest in holistic, conveniently acquired perspectives on activity.

Meanwhile, it is possible to fuse cadence estimates with other data to identify more complex user states. When Global Positioning System (GPS) data are unavailable, cadence estimates can augment navigation algorithms through dead-reckoning [105, 178]. Accurate cadence information provides a valuable feature for mobile fitness games and detailed guidance tasks that require higher-resolution data (e.g., skipping, hopping, “turn here”) in addition to GPS and biometrics [47]. Context-aware applications benefit from discerning walking, running, or sedentary states by using gait along with posture, auditory, and other data to optimize notification timing [69, 81, 82]. Cadence can also supplement interior GPS and localization systems [1]. In all of these examples, accuracy and convenience of mobile collection are paramount.

A number of specialized, commercially available devices record and analyze human movement with good accuracy for medical purposes such as clinical, biomechanical, physical therapy, and movement disorders research, as well as athletic tuning. Movement Monitors by APDM Movement Monitoring Solutions (Portland, OR, USA) are watch-sized Inertial Measuring Units (IMU), intended to be worn on multiple sites simultaneously (wrist, ankle, belt, and sternum straps), using accelerometers, gyroscopes, and magnetometers [4]. Industrially, IMUs are a valuable diagnostic and research tool for industrial vibration or movement monitoring, inertial guidance, virtual reality, or any application where precise monitoring of subtle movement is required. However, in these specialized situations it is feasible to wear or install a potentially expensive specialized device, precisely calibrated and location constrained. This is not the case for most potential consumer uses of cadence or gait detection.

4.2.2 Sensor Type

Traditional pedometers identify individual steps using mechanical or piezoelectric sensors. Purely mechanical sensors detect a step if acceleration surpasses a threshold, measured when a sensor element strikes a surface. Piezoelectric sensors vary in form and sophistication. Like their mechanical cousins, many operate on an acceleration-threshold principle, while more sophisticated devices compare an acceleration time series to a model of a step. The variants found in contemporary smartphones and IMUs, however, typically rely on 3D accelerometers. Their output can be processed in the same way as that of a piezoelectric or mechanical sensor, but it also gives rise to new algorithmic possibilities, as described below.

There has been some sensor-based improvement in pedometer accuracy observed for piezoelectric relative to mechanical sensors, in particular at very slow walking speeds, likely due to increased sensitivity. A 2004 treadmill-based analysis of mechanical and piezoelectric pedometers found that error reduced with speed for slow walking: 29% (< 0.89 m/s), 9–26% (0.89 to 1.34 m/s), and 4% (> 1.34 m/s)[2] for mechanical pedometers.
In a second variant, at speeds between 0.80 and 0.89 m/s, the piezoelectric's error was < 3% as compared to the mechanical sensors' 5−48% [106]. The lowest error reported (0.3%) is for a piezoelectric sensor worn on the ankle [43]. In these studies, all pedometers were tested while worn at their optimal, calibrated, specified location with a specific orientation.

[2] 1 m/s = 2.24 mph

Accelerometer-based instruments are sampled using an embedded CPU, and accessed by an application through its operating system. Thus, their usable accuracy is due both to the sensor itself and to the quality, rate, and latency of access to its output permitted by the operating system; these parameters all vary widely, and their relative impact is not generally discussed. Current overall performance levels are discussed below.

4.2.3 Estimating Cadence

Time Domain

Whether standalone or in an iPhone app, an algorithmic (programmed) cadence estimate derived in the time domain is based on thresholds or peaks and step-model parameters. These are in turn generated from user-supplied information such as body mass and height, as well as the sensor's known or constrained Location on Person (LOP) and details of the hardware platform. Without this, accuracy is poor (as we will demonstrate in Section 4.5.2), and this need for substantial context and/or limitations in where and how they can be worn is their major drawback.

The most straightforward solution for cadence estimation of any type is to analyze acceleration in the time domain and detect individual footfalls. This requires just a single axis of acceleration, and produces algorithms that are computationally lean. Time-domain approaches can be quite effective when context information is known or constrained, being simple and reasonably accurate. A brief time-based peak-detection algorithm (such as the one we compare later in this chapter) delivers a latency equivalent to the last two steps.

Yang et al. sampled a waist-mounted tri-axial accelerometer module with a built-in low-pass filter, and computed autocorrelation in the time domain to measure cadence in realtime [174]. They reported a mean absolute percentage error of 4.89% when comparing their results with cadence measurements from synchronized video. Their algorithm used a 3.5 s window.

The specifics of most commercial pedometer algorithms, such as Runtastic Pedometer [137], are unavailable. However, an MPTrain publication [115] identifies its step detection as an adaptive accelerometer threshold with a low-pass filter, and reports its accuracy as comparable to standard piezoelectric pedometers [115]. We further describe the MPTrain algorithm in Section 4.5.1, as we use it for comparison with our algorithm.

Frequency Domain

Frequency analysis has been instrumental in revealing interesting characteristics of gait (e.g., discriminating the steps of the left and right foot [181], or comparing the acceleration frequency content of two devices to determine whether they are carried by the same person [93]). Zhao et al. use gait detection in assisted GPS systems [178], and identify the uncertainty of the sensor location as an important issue.
Their solution is to classify sensor location by extracting time- and frequency-domain features, then choosing a dead-reckoning algorithm according to the classification result.

Unlike cadence estimation in the time domain, where each footfall is recorded and time-stamped, a frequency-based algorithm looks at a bigger picture: it identifies the signal's major frequency components during a given window of time. At the cost of not detecting single footfalls and (typically) a larger delay to collect multiple samples, a frequency-based algorithm is far less dependent on signal shape and amplitude. This is because a frequency-based algorithm can distinguish between the major frequency component of the signal, influenced by the repetition intervals (i.e., step duration), and the harmonics, influenced by the placement of the sensor, the subject's unique walking pattern, and noise. A frequency-based cadence estimation algorithm is thereby theoretically more robust to individual differences and sensor location than a time-based approach.

Thus, in applications where the exact time of footfalls is not required but ease and flexibility of use are valued, a frequency-based algorithm seems a promising approach, and is the one we took here. For related reasons, autocorrelation is another avenue that deserves attention, although it is beyond the scope of this chapter. For either, achievable latency and accuracy become the crucial issues, which necessitated a development plan that included careful validation.

Of published algorithms, the frequency-based ones state that they depend on proprietary information such as placement on the body, or on adjustments to the parameters to compensate for user differences. Kavanagh and Menz note the necessity of user-specific calibration procedures and the errors caused by changes of orientation [80]; they present an elaborate list of accelerometer attachment methods from past research, with every one of them using a single location for the placement of the sensors. Zijlstra and Hof, for example, like the majority of other researchers, placed accelerometers on the lower trunk [181]; specifically, they fixed the position of the accelerometers at the dorsal side of the trunk with a fixed orientation. To our knowledge, no realtime frequency-based method has been reported for measuring cadence that uses the built-in sensors of a commodity smartphone and works out-of-the-box (i.e., without calibration).

4.2.4 Performance Assessment of Cadence Estimation Algorithms

Published performance data are a rarity in cadence and gait estimation. Schneider et al. elaborate on the challenges of comparing the performance of such algorithms for realtime gait classification and accelerometer-based activity recognition [143], which are partly due to the large number of possible parameters and settings, and to the format of testing. The sheer logistical effort of precise validation may be an even more significant problem: natural walking is best done outdoors, and the technique in question must be compared with one or ideally two additional, independent and highly accurate ‘gold standard’ methods that are sampled at the same time. As can be seen in the following pages, this entails a considerable commitment in setup and data collection that most published works have omitted.

Furthermore, many of the examples we have cited are proprietary algorithms in commercial products released within a fast-moving market, with minimal or zero information available about their function or performance.
Without easy access to their internal realtime data streams (for example, FitBit must compute realtime step frequency, but does not share it with the consumer even post-hoc), it is difficult for a 3rd party to independently verify their accuracy and other parameters.

Much of this difficulty would disappear with the availability of standardized datasets: published trajectories of carefully collected and documented acceleration data, ideally as streams obtained simultaneously from multiple points on the body during a range of walking conditions. Different algorithms could then easily be compared. This practice is common in other communities, such as machine learning, but there is no standard data set for gait detection that we know of. For this reason, we are making our own dataset available, as detailed in Section 4.4.

4.3 Approach: The RRACE Algorithm

To support other research in our lab, we required a reliable, outdoor-ready cadence detection method that is both unconstrained in body location and does not require users to acquire or wear specialized hardware.

Our goal was therefore to develop an algorithm that measures cadence at the same rate of accuracy as the best on record [35] (5% error) or better; that works on typical smartphones; that is independent of orientation, placement on the body, and the individual wearer's physiology; and that works out-of-the-box and in realtime. We also predicted that, due to their growing ubiquity, a highly usable, smartphone-ready cadence detection algorithm would enable many new possibilities beyond our immediate needs. We chose a frequency-based approach for the reasons cited above, and developed an implementation that solved a number of inherent complexities, as described below.

4.3.1 Overview

Our cadence-detection algorithm, RRACE, performs a spectral analysis on a four-second window of sampled 3-axis accelerometer data. Our approach has three characteristics that make it appropriate for realtime cadence estimation on mobile phones: (a) it is independent of body location and subject differences (as discussed before), (b) it is robust to orientation, and (c) it is robust to sampling irregularities.

Without published details or even the identity of other frequency-based algorithms, it is difficult to compare our approach to others on theoretical grounds. However, all the frequency-based algorithms of which we are aware report using fixed-rate sampling (e.g., [30]), and, as some of them point out, for smartphone signals this would likely be a source of considerably reduced accuracy.

4.3.2 Implementation Details

Supporting orientation-invariant information: To estimate overall movement, we use the magnitude (Euclidean or L-2 norm) of the three accelerometer axes (x, y, z) as our signal, as in [93]. This is a simple path to orientation invariance, which we later show to be effective.

Accommodating Non-uniform Sampling (FASPER): Most smartphones supply accelerometer data which are not sampled at a constant rate (e.g., 25±5 Hz); our data indicate that irregularities in accelerometer sample intervals are endemic. For example, the variance in the data analyzed for this chapter is:

• Sampling period: mean = 40.0 ms, median = 31.0 ms, SD = 37.7 ms
• Sampling frequency: mean = 127.7 Hz, median = 32.3 Hz, SD = 231.5 Hz

Spectral analysis of such irregular data is not possible with the Fast Fourier Transform (FFT), which computes a Fourier decomposition under the assumption that samples are equispaced.
Attempts to ‘repair’ the data, e.g., with interpolation, obviously introduce new sources of uncertainty, and this renders the most common spectral analysis methods inappropriate.

However, the Lomb-Scargle periodogram approach (also known as least-squares spectral analysis), derived by Lomb [97] and later validated with a mathematical proof by Scargle [141], accurately handles non-equispaced data by, effectively, fitting a sine wave and estimating its frequency spectrum.

In particular, Fast Calculation of the Lomb-Scargle Periodogram (FASPER) [126] employs four parameters: the vector time series along with the time coordinate of each sample, an output gain, and an oversampling parameter to control the resolution of the computed spectrum. FASPER computes the significance level for each of a discrete set of frequencies.

RRACE uses FASPER to find the spectrum of the overall movement of the device. We then make the key assumption that cadence is the most significant frequency peak in the spectrum for a given computational window. We define our algorithm's latency as half the window length – e.g., a 4-second window has a latency of 2 seconds.[3]

[3] Although it may be possible to improve the performance of our algorithm through signal processing, e.g., by employing a smoothing filter, for the current analysis we did not use any filter or other processing components other than the parts we describe here. This permitted us to make the fairest comparison possible to other algorithms, since we were not aware of what optimization they had undergone.

4.3.3 Pseudocode

Pseudocode for our implementation is shown in Figure 4.1. We obtained the highest accuracy using 0.25 and 4.0 for FASPER's output gain (“hifac”) and oversampling (“ofac”) parameters, respectively.

    function RRACE():
        (timestamps, xs, ys, zs) := get_accelerometer_values(from 4s ago to now);
        n := length(timestamps);
        magnitudes := new array of length n;
        for i := 1 to n do
            magnitudes[i] := sqrt(xs[i]^2 + ys[i]^2 + zs[i]^2);
        size := 128*n;
        hifac := 0.25;
        ofac := 4;
        frequencies := fasper(timestamps, magnitudes, size, ofac, hifac);
        cadence := most_powerful(frequencies);

Figure 4.1: RRACE Pseudocode.
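To make the procedure concrete, the following is a minimal, runnable Python sketch of the same pipeline. It substitutes SciPy's Lomb-Scargle routine for the FASPER implementation we used on the phone; the function name, frequency range, and grid resolution are illustrative assumptions, not our deployed Java/C parameters.

    # A minimal sketch of the RRACE idea, assuming SciPy's Lomb-Scargle
    # periodogram in place of FASPER; frequency range and grid resolution
    # are illustrative choices, not the deployed parameters.
    import numpy as np
    from scipy.signal import lombscargle

    def estimate_cadence(t, x, y, z, f_min=0.5, f_max=3.0, n_freqs=512):
        """Estimate cadence (Hz) from one ~4 s window of 3-axis accelerometer
        samples with timestamps t (seconds, possibly non-uniformly spaced)."""
        # Orientation-invariant signal: L-2 norm (magnitude) of the three axes.
        mag = np.sqrt(np.asarray(x)**2 + np.asarray(y)**2 + np.asarray(z)**2)
        mag = mag - mag.mean()                  # remove the DC/gravity offset

        # Evaluate the periodogram directly at candidate walking frequencies;
        # lombscargle() expects angular frequencies (rad/s).
        freqs_hz = np.linspace(f_min, f_max, n_freqs)
        power = lombscargle(np.asarray(t, dtype=float), mag, 2 * np.pi * freqs_hz)

        # Key RRACE assumption: cadence is the most significant spectral peak.
        return freqs_hz[np.argmax(power)]

Because the periodogram is evaluated directly at the irregular sample times, no resampling or interpolation is needed; this is the property that makes the approach suitable for smartphone accelerometer streams.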
4.3.4 Android-Based Validation Platform

The results of Section 4.4 are based on data from up to six simultaneously-worn Google Nexus One smartphones running Android OS version 2.3.4 (Gingerbread). Our main application was implemented in Java, the primary programming language for Android development. Numerical programming algorithms (including FASPER) were implemented in C for speed benefits and because of readily-available implementations [126]. We used the Java Native Interface (JNI) to connect the two languages.

4.4 Experimental Validation of RRACE

In laying out RRACE's formal validation, we first summarize a pilot study which informed our subsequent methodology. We then describe our full study's walking task, apparatus, and measurement, and its design, metrics, analysis, and subjects. Next, we present the results of the analysis for the optimally configured (4-second window) RRACE and compare it with the other RRACE variants. Finally, we measure the power consumption of our algorithm and compare it with similar Android apps in the market.

Our dataset is available at: http://www.cs.ubc.ca/labs/spin/data/.

4.4.1 Treadmill-based Pilot Validation

Before conducting our full outdoor experiment, we built confidence in our general approach (algorithm and smartphone implementation) with a preliminary study based on four participants (one female) who volunteered without monetary compensation, out of interest in the research. The setup consisted of a treadmill, the three Google Nexus One smartphones available at the time, and a PC x86-64 for manual logging of footfalls. Smartphones were synchronized with the PC; each phone recorded accelerometer signals (average sampling frequency = 24.8 Hz) and estimated cadence using RRACE in realtime.

Subjects walked for 15 minutes at a selection of speeds chosen to represent slow to fast walking, based on similar studies and pedestrian speeds [43, 85], while wearing smartphones on 3 of the 6 LOP sites at a time, randomly selected each trial. They then walked for another 15 minutes wearing the phones on the other three locations. For both segments, they were instructed to adjust their walking speed to keep up with the changes in treadmill speed, but were given no instructions as to step frequency.

We assessed accuracy of cadence measurement relative to the manually recorded step interval (T_m). Our primary metric – Error Ratio (ER) – was thus the ratio of RRACE's measurement “error” (the difference between the frequency measurement produced by RRACE, F_a, and the reference frequency, F_r) to the reference frequency:

    ER = |F_a − F_r| / F_r    (4.1)

where F_r = 1/T_m.

Pilot Study Results: In an Analysis of Variance (ANOVA) on ER, our independent variables were LOP (6 sites), Speed (10 speeds ranging from 0.45 m/s to 1.65 m/s), and Window Size (2 lengths: 4 s and 8 s). We used a significance level of 0.05, applying a Bonferroni correction to counteract the multiple comparisons problem.

Location had a significant effect on ER. Front pocket (mean = 6%, SD = 11%), belt (mean = 14%, SD = 30%), arm (mean = 15%, SD = 30%), and bag (mean = 15%, SD = 32%) were much more reliable than back pocket (mean = 30%, SD = 38%) and hand (mean = 31%, SD = 27%). Speed also had a significant effect on ER. RRACE had a lower ER at higher speeds across all LOPs except hand. Arm, bag, belt, and front pocket reached their low ER at a much lower speed than did back pocket. Surprisingly, hand did better at lower speeds. The impact of window size was statistically significant but of a small numerical value, with the 8 s window outperforming the 4 s window.

The pilot study confirmed general accuracy for our approach and suggested a better choice of factors for use in a more naturalistic outdoor-walking study. Specifically, because the 4 s and 8 s window sizes both produced good performance with minimal difference, we concluded that the accuracy of window sizes larger than 4 s is not worth the extra latency, and we decided to try smaller window sizes for comparison. We reduced the number of speed levels to five.

4.4.2 Primary Outdoor Walking Task and Measurement Apparatus

The primary experiment to validate RRACE was run on a concrete sidewalk in an open area on a university campus, with no nearby buildings to block GPS signals.[4] Subjects were asked to walk twice at each of five different speeds, and were instructed with the definitions provided in Table 4.1.
We further instructed all subjects that ‘leisurely’ walking speed meant their slowest normal walking speed, and ‘typical’ walking speed their usual walking speed. Allocation of walking speed order was randomized.

[4] While GPS data were collected for possible use in validation, RRACE does not use GPS data itself.

We note that subjects' walking speed and cadence were not expected or required to be perfectly consistent (either within or between subjects) for measuring the accuracy of cadence detection. Our goal was to observe walking at a larger variety of speeds for every individual, and this loosely controlled mechanism allows a more finely resolved spectrum of actual speeds; meanwhile, it allowed a large dataset, mitigating the effect of imbalances and imperfections.

Table 4.1: Walking Speeds During the Experiment.

  Label     Definition                                             Mean (m/s)  SD
  Speed -2  leisurely (slowest) walking speed                      1.14        0.22
  Speed -1  slower than typical but faster than leisurely          1.34        0.14
  Speed 0   typical walking speed                                  1.52        0.08
  Speed 1   faster than typical but slower than the fastest speed  1.67        0.12
  Speed 2   fastest walking speed                                  1.95        0.12

Apparatus: The experimental setup consisted of six Google Nexus One smartphones; an external GPS receiver connected to one of the phones via Bluetooth; our reference cadence measurement, consisting of two shoe-mounted Force Sensing Resistor (FSR) sensors [72] to detect footfalls, connected to a Bluetooth-enabled Arduino board developed by SmartProjects (Strambino, Italy) [150]; two laptops (one for logging trials and a second, a small netbook, to log footfalls sent from the Arduino board via Bluetooth); a backpack; a stopwatch; and two flags for experimenters to send timing signals to each other. The study required three experimenters to run.

Prior to the experiment, subjects were asked to wear pants with front and back pockets, but pocket locations were not controlled. The six phones and the Arduino board were synchronized with the main computer at the start of the experiment. One of the phones, the GPS receiver, the netbook, and the Arduino were put in the backpack (bag). The bag had a filled weight of approximately 2 kg. See Table 4.2 for general phone locations, which were chosen as the places people used most frequently for their mobile phones while commuting [28].

FSR Footfall Detection: We used timestamped FSR data as our reference footfall detection method:

    ER = |F_a − F_r| / F_r    (4.2)

where F_r = 1/T_FSR.

FSRs are ideal for detecting changes in force. We placed an FSR force sensor, by Interlink Electronics (Camarillo, CA, USA) [72], inside each shoe (to measure the force exerted by the subject's feet and compare it with a threshold), and connected both to the Arduino. The Arduino timestamped the FSR readings (avoiding the impact of Bluetooth latency) and sent them on to the netbook via Bluetooth. The footfall detector system was calibrated and verified for each subject at the beginning of the experiment. To analyze the data, we used the median of the last three intervals of each of the two feet (T_FSR in Equation 4.2) to filter errors caused by false positives (extra footfall detected) or false negatives (footfall missed).
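For clarity, the following short Python sketch shows one literal reading of this reference computation and of the ER metric; it illustrates Equations 4.1 and 4.2 and is not the study's actual logging code.

    # A sketch of the reference-frequency filtering and the ER metric
    # (Equations 4.1 and 4.2); illustrative only, not the study's logging code.
    import numpy as np

    def reference_frequency(left_intervals, right_intervals):
        """F_r = 1 / T_FSR, taking T_FSR as the median of the last three
        footfall intervals of each of the two feet; the median filters
        false positives (extra footfalls detected) and false negatives
        (footfalls missed)."""
        pooled = np.concatenate([left_intervals[-3:], right_intervals[-3:]])
        return 1.0 / np.median(pooled)

    def error_ratio(f_algorithm, f_reference):
        """ER = |F_a - F_r| / F_r (Equation 4.1)."""
        return abs(f_algorithm - f_reference) / f_reference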
Trial Length and Speed Measurement: We wished to collect 20 seconds of walking data for each trial (twice the length of our largest window size, with a 25% safety margin) and to compute step frequency every 200 ms. We asked subjects to walk a known distance, either 30 m or 60 m (marked by small flags along the walkway), depending on whether 20 seconds had elapsed by the time the 30 m point (first end time) had been reached (Figure 4.2). Timespan was manually recorded via stopwatch.

Figure 4.2: Experiment walkway, start and end points.

4.4.3 Experiment Design, Metrics, and Subjects

The design was within-subjects repeated-measures, with independent variables of window size, LOP, and speed condition (Table 4.2). The five speed conditions and their repetitions (10 trials) were randomized.

Table 4.2: Experiment design

  Factor       Number of Levels  Factor Levels
  Window Size  4                 1, 2, 4, or 8 seconds
  LOP          6                 back pocket, bag (backpack), dominant hand (held),
                                 front pocket, hip (mounted on belt), upper arm (mounted)
  Condition    5                 typical (0), fastest (2), leisurely (-2),
                                 faster than typical (1), slower than typical (-1)
  Repetition   2                 first time, second time

Metrics and Analysis: We assessed RRACE's accuracy by comparing it to our shoe-located force sensor reference (Section 4.4.2). As in the pilot, our primary metric was ER (Equation 4.1 in Section 4.4.1). We conducted our analysis with Generalized Linear Models (GLM), using unpaired Z-test comparisons for post-hoc analysis at p = 0.05 significance, again applying a Bonferroni correction; a sketch of these comparisons appears at the end of this subsection. Note that for sample sizes as large as our dataset, a Z-test produces the same result as a t-test. Also, we report differences between effect levels as z-scores, and because z-scores are normalized by standard deviation, differences between means in our analysis are analogous to Cohen's d statistics of effect size.

Subjects: Eleven individuals (6 female and 5 male), aged 21−30 years (mean = 25.2, SD = 3.3), 155−179 cm tall (mean = 165.9, SD = 7.0), and weighing 46−80 kg (mean = 59.1, SD = 10.0) volunteered. No subjects had physical impairments.

Speed / Frequency Relationship: As a basic check of our measurements, we verified a correlation between walking speed and cadence (r = 0.84 using Pearson's Correlation), which is consistent with [67].
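The sketch below illustrates, under our interpretation, the two quantities that the comparison tables which follow report: whether a difference in mean ER is significant under an unpaired Z-test with Bonferroni correction, and the maximum difference that remains statistically significant (the lower confidence bound of the difference), which is what the cells of Tables 4.3-4.8 contain. It is not the original analysis script.

    # A sketch (our interpretation, not the original analysis script) of the
    # unpaired Z-test quantities reported in Tables 4.3-4.8.
    import numpy as np
    from scipy.stats import norm

    def z_compare(er_a, er_b, alpha=0.05, n_comparisons=1):
        """Compare two samples of per-trial Error Ratios.

        Returns (significant, max_sig_diff): whether the means differ at the
        Bonferroni-corrected level, and the largest difference that remains
        significant, i.e., the lower bound |m_a - m_b| - z_crit * SE.
        """
        m_a, m_b = np.mean(er_a), np.mean(er_b)
        se = np.sqrt(np.var(er_a, ddof=1) / len(er_a) +
                     np.var(er_b, ddof=1) / len(er_b))
        z_crit = norm.ppf(1 - (alpha / n_comparisons) / 2)   # two-sided
        significant = abs(m_a - m_b) > z_crit * se
        return significant, (abs(m_a - m_b) - z_crit * se) if significant else None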
4.4.4 Results for Outdoor Validation of 4-Second Window RRACE

We chose the 4-second window RRACE as our analytical baseline, and describe its analysis first: both theoretically and in our pilot, 4 seconds is enough to detect a wide range of walking cadences. We then present results from alternative window sizes. Because the phones were prone to dropping data (14%), we used GLM for its robustness to this situation.

Main Effects: LOP has a significant effect on ER. Results were consistent with our pilot: front pocket, belt, arm, and bag (light-green box plots of Figure 4.3) are much more reliable than back pocket and hand (dark-red box plots of Figure 4.3). Speed condition also has a significant effect on ER; ER is generally lower at the typical and fast speeds and higher at the slowest and the fastest speeds.

Interaction Effects: Both LOP / speed condition and LOP / window size interact significantly on ER. Arm, bag, and front pocket ERs remain consistently below 5% under all speed conditions, with their minimum at the middle (typical) speed. Belt produces its lowest ER at the typical speed and its largest ER at the fastest speed. Back pocket produces lower ERs at higher speeds, and hand produces lower ERs at lower speeds (Figure 4.3). As we will see in Section 4.4.5, the interaction between LOP and window size does not affect our general conclusions about LOPs.

Quantitative Comparisons

Location on Person (LOP): Table 4.3 compares ER as a function of LOP – four out of six locations have an ER of 5% or below. Front pocket and bag, with the lowest ERs, significantly outperform the other locations. For example, arm has an ER of 3.6% on average and is 0.3% different from front pocket, while bag and front pocket are not significantly different from each other.

This accuracy is approximately the same as the best reported elsewhere and has proved acceptable for most applications [43, 106]. As noted earlier, it is not currently possible to make a direct comparison (i.e., based on running the algorithms on the same dataset, or on confirmation that the datasets / experimental conditions are fully comparable) with other reported results, given the level of implementation detail available. However, to the best of our knowledge the comparison is conservative: we asked RRACE to do the same or a harder task, in that our setup was far less constrained.

Figure 4.3: ER as a function of Speed Condition for 4-Second Window RRACE. [Six boxplot panels – Arm, Back Pocket, Bag, Belt, Front Pocket, Hand – plotting Error Ratio (%) against Speed, Slowest to Fastest.] Dark-red boxplots have a larger range for ER. The boxplot's central bar indicates the sample median.

Table 4.3: ER differences by LOP for four-second window RRACE through an unpaired Z-test. The second column contains the mean ER of each LOP; the remaining cells contain the difference between two LOPs where the difference is significant. The differences are the maximum possible while maintaining statistical significance, and thus are less than the distance of the ER means from each other. A large value means a larger distance between ERs.

                        Difference from
  LOP           ER (%)  Front Pocket  Bag      Arm  Belt  Back Pocket
  Front Pocket  2.8     -             -        -    -     -
  Bag           3.1     not sig       -        -    -     -
  Arm           3.6     0.3           0.1      -    -     -
  Belt          5.5     2.2           2.0      1.4  -     -
  Back Pocket   7.9     4.5           4.3      3.7  3.0   -
  Hand          11.4    7.8           7.6      7.1  5.2   2.6

Speed Condition: Figure 4.3 shows that ER decreases as speed increases only when the phone is placed in the back pocket, and the opposite happens when the phone is held in the hand. However, the differences among speed conditions are not very obvious for the other LOPs in the figure. We can exclude those two LOPs and quantitatively compare speed conditions for the other LOPs; if we do so, we see that ER is lower at the typical and fast (one level above typical) speeds and generally highest at the slowest and/or fastest speed conditions (Table 4.4).

Table 4.4: RRACE ER differences by speed condition for 4 LOPs with a four-second window (unpaired Z-test). Hand and back pocket – the inconsistent LOPs with more obvious reactions to speed – are excluded to focus on the effect of speed in the absence of the interaction effects and on the more similar LOPs. See Table 4.3 for more information.

                           Difference from
  Speed Condition  ER (%)  (0)    (1)  (-1)  (2)
  Typical (0)      2.5     -      -    -     -
  Fast (1)         2.8     0.04   -    -     -
  Slow (-1)        3.4     0.6    0.3  -     -
  Fastest (2)      4.0     1.1    0.8  0.2   -
  Slowest (-2)     6.3     3.2    3.0  2.3   1.7

4.4.5 Analysis of the Effect of Window Size on RRACE

Four-second and eight-second processing windows produced similar ERs, consistent with our pilot results (Figures 4.4, 4.5, and Table 4.5). While the ER of a
While the ER of aone-second window is double that of four or eight seconds, the two-second win-dow is only 1% (significant) different from four and eight-second windows, andmay be usable in some circumstances (Table 4.5). As shown in Figure 4.4, increas-ing window size reduces ER for all LOPs but has a smaller effect on locations withlower ER in general. By comparing Table 4.3 (ER of LOPs for four-second windowRRACE ) with Table 4.6 (ER of LOPs for all variations of RRACE ) we see that win-dow size only affects the rank of front pocket among other LOPs; front pocket is notthe best LOP when we choose smaller window sizes. Other five LOPs stay in thesame relative order when we change window size.As anticipated, the effect of increasing window size on reducing ER is morenoticeable at lower speeds (Figure 4.5). Since smaller windows capture fewer stepsthan larger windows, with decreasing speed the chance of capturing enough stepsis reduced. In effect, increasing window size compensates for the effect of slowingdown.1001 2 4 8010203040ArmWindow (s)Error Ratio (%)1 2 4 8010203040Back PocketWindow (s) 1 2 4 8010203040BagWindow (s) 1 2 4 8010203040BeltWindow (s) 1 2 4 8010203040Front PocketWindow (s) 1 2 4 8010203040HandWindow (s)Figure 4.4: ER is a function of Window Size per each LOP for all Speed Conditions lumped.1 2 4 8010203040Slowest (−2)Window (s)Error Ratio (%)1 2 4 8010203040Slow (−1)Window (s) 1 2 4 8010203040Typical (0)Window (s) 1 2 4 8010203040Fast (1)Window (s) 1 2 4 8010203040Fastest (2)Window (s)Figure 4.5: ER is a function of Window Size per each Speed Condition for all LOPs lumped.4.4.6 Power ConsumptionWe used PowerTutor [125] to measure RRACE’s power consumption on a SamsungGalaxy Nexus smartphone running the Android 4.1.1 Jelly Bean operating system,and compared it with Endomondo [33], a sport tracking app, which is claimedto be the highest rated app of its kind on Android, Runtastic Pedometer [137],a pedometer app that uses accelerometers to count steps, and Angry Birds, thefamous game (Table 4.7).Table 4.5: ER differences by window sizes of RRACE, with walking speed and LOP lumped.Window sizes are ordered by increasing ER mean. See Table 4.3 for more information.Difference fromWindow Size ER (%) 8 Seconds 4 Seconds 2 Seconds8 Seconds 5.8 - - -4 Seconds 5.8 not sig - -2 Seconds 7.1 1.1 1.1 -1 Second 11.5 5.4 5.4 4.1101Table 4.6: RRACE ER differences by LOP for all window sizes and walking speeds (unpairedZ-test). Locations are ordered by increasing ER mean. See Table 4.3 for more information.Difference fromLOP ER (%) Bag Arm Front Pocket Belt Back PocketBag 4.1 - - - - -Arm 4.8 0.5 - - - -Front Pocket 5.5 1.2 0.4 - - -Belt 7.1 2.8 2.0 1.3 - -Back Pocket 10.5 6.1 5.3 4.6 3.0 -Hand 12.8 8.3 7.6 6.9 5.3 1.9Like most activity measurement algorithms, RRACE does not require the dis-play to be on; but for consistency, all of these apps were compared with screenon. PowerTutor is able to distinguish between LCD power usage, which Table 4.7shows is similar for all of them. CPU power usage varies: RRACE uses 10× theCPU power of Endomondo, 5× more than Runtastic Pedometer, and is comparableto Angry Birds.With the screen off, our algorithm will consume considerably less power thanmobile games even before improving the CPU efficiency. 
Until now, our development has focused on proving accuracy rather than power efficiency, so the low power consumption of other activity measurement apps is promising in terms of what RRACE can achieve with optimization, e.g., with methods such as “code offload” [27] and “µSleep” [13].

Table 4.7: Power consumption.

  App Name             Duration (s)  Average Usage (mW)  LCD Usage  CPU Usage
  RRACE                361           772.85              528.53     244.32
  Endomondo            308           555.52              529.87     25.65
  Endomondo (no GPS)   369           548.78              530.08     18.70
  Runtastic Pedometer  322           575.47              521.74     53.73
  Angry Birds          559           735.78              516.00     225.33

4.5 Comparing RRACE with a Threshold-based Time-domain Algorithm

As detailed in Section 4.2, the many pedometers available commercially use proprietary algorithms that have not been released to the public. We therefore compared our frequency-based algorithm to MPTrain's algorithm [115]. MPTrain uses two low-pass filters. One removes noise in the original accelerometer signal, producing a smoothed signal; the second has a lower cutoff frequency, and its output is used as a dynamic threshold. Footsteps are detected when the smoothed signal crosses the dynamic threshold from above to below (Figure 4.6). Because the MPTrain accelerometer is required to be situated on the user's torso and oriented to detect accelerations in the superior-inferior axis, it detects the footsteps of both feet. Footfalls are translated to instantaneous (i.e., sampled) Steps per Minute (SPM) using the following formula:

    SPM_i = (int)(60.0 × SamplingRate / #SamplesSinceLastStep)    (4.3)

Finally, the MPTrain algorithm applies a median filter to the instantaneous SPM to calculate the estimated SPM. The MPTrain study reported a uniform sampling rate of 75 Hz for accelerometer data, achieved with an external chest-mounted sampler. The authors report its cadence measurement accuracy as comparable to that of the commercial pedometers evaluated in [106], but provide no specifics.

4.5.1 Implementation of Time-based Algorithm for Comparison

We reconstructed parameterizations for the MPTrain algorithm, since details were not reported for either of the low-pass filters, and no window was given for the median filter. We also accommodated the variable sampling rate found in smartphones, and measured cadence in Steps per Second (SPS) instead of SPM to compare it with RRACE.

Finally, given that we do not have a sensor in a known orientation, we also consider each of four different axes in our analysis: x, y, z, and m (the magnitude of the vector, i.e., m = sqrt(x^2 + y^2 + z^2)).

Figure 4.6: Example of the MPTrain time-based step detection algorithm, for Subject 10's arm. [The plot shows Accelerometer Value (x, y, z, or magnitude) against Time (seconds), with the raw signal, smoothed signal, dynamic threshold, and detected steps.] The y-axis shows the L-2 norm (magnitude) of the accelerometer signal.

Two low-pass filters (accelerometer data smoothing and dynamic threshold) employ parameters α and β (β < α), which were trained on our data (Section 4.5.2). For efficiency and simplicity, we implemented these as Exponentially-Weighted Moving Averages (EWMA). An EWMA is defined as follows:

    S_i = α·x_i + (1−α)·S_{i−1}    (4.4)

where S_i is the i-th smoothed (low-passed) value, x_i is the i-th raw accelerometer value, and α is the smoothing parameter (0 ≤ α < 1).

As for MPTrain, steps are detected when the smoothed signal crosses the dynamic threshold from above to below. The difference between step times is used to calculate instantaneous cadence by the formula:

    Cadence = 1 / CurrentDifferenceBetweenSteps    (4.5)

For example, if the two previous footsteps were detected at StepTime_i = 100 ms and StepTime_{i+1} = 600 ms and we wanted the instantaneous cadence at any time t ≥ 600 ms, we would compute 1/(600−100) = 0.002 steps per millisecond, or 2.0 SPS. Final cadence estimates were the average of each instantaneous cadence estimate and one previous estimate (i.e., a 2-sample smoothing filter).
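The following Python sketch shows our reading of this reconstructed detector end-to-end; alpha and beta are the tunable filter parameters of Section 4.5.2, and the default values below are placeholders rather than trained settings.

    # A runnable sketch of the reconstructed MPTrain-style time-based detector
    # (Equations 4.4-4.5); alpha/beta defaults are placeholders, not trained values.
    import numpy as np

    def time_based_cadence(t, a, alpha=0.4, beta=0.1):
        """Detect steps in a single acceleration signal `a` (one axis, or the
        magnitude m) with timestamps `t` in seconds; returns (step_times, SPS)."""
        assert beta < alpha                   # the threshold filter is slower
        smoothed, threshold = a[0], a[0]
        step_times = []
        for i in range(1, len(a)):
            prev_s, prev_th = smoothed, threshold
            smoothed = alpha * a[i] + (1 - alpha) * smoothed    # EWMA, Eq. 4.4
            threshold = beta * a[i] + (1 - beta) * threshold    # slower EWMA
            # A step: the smoothed signal crosses the threshold from above to below.
            if prev_s >= prev_th and smoothed < threshold:
                step_times.append(t[i])
        if len(step_times) < 2:
            return step_times, None
        inst = 1.0 / np.diff(step_times)      # instantaneous cadence, Eq. 4.5
        return step_times, float(np.mean(inst[-2:]))   # 2-sample smoothing filter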
4.5.2 Tuning of the Time-based Algorithm

To compare the MPTrain time-based algorithm as favourably as possible to RRACE, we optimized the low-pass filter smoothing parameters α and β (Section 4.5.1) for several data subsets involving different combinations of subjects and LOP:

• All data (all subjects and LOPs): 1 set
• Each subject (over all LOPs): 11 sets
• Each LOP (over all subjects): 6 sets
• Each subject-LOP combination (e.g., Subject 1, Arm), minus 9 with missing data: 11×6−9 = 57 sets

This thorough search thus used 75 parameterizations of the time-based algorithm. During analysis (below), data were only scored on the dataset on which the algorithm was trained. This represents a best-case scenario of an algorithm trained for a certain individual and/or LOP, which could occur in real-world use cases with one individual using a personal device in a consistent way.

Within a dataset, we used a uniform search for the best combination of smoothing parameters (α and β) with a granularity of 0.05 (i.e., α ∈ {0.05, 0.1, ..., 1.0} and β ∈ {0, 0.05, ..., 1.0}), one of the three axes or the magnitude (γ ∈ {x, y, z, m}), as well as four scaling factors (δ_{x,y,z} ∈ {1/2, 1, 2, 4} for individual axes and δ_m ∈ {1/4, 1/2, 1, 2} for magnitude); the sketch below illustrates this search. The best of all these combinations for each dataset was determined as the one with the lowest mean squared ER when compared to the FSR gold standard. The scaling factors accommodate harmonics by scaling the calculated cadence; for example, if the user walks at 1.5 Hz and the time-based algorithm calculates a cadence of 0.75 Hz (i.e., detects either left or right steps), a scaling factor of 2 would fix this (0.75×2 = 1.5).
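A compact sketch of this search follows; `score` stands in for an evaluation harness (running the detector over a dataset and returning mean squared ER against the FSR reference) that we do not reproduce here.

    # A sketch of the uniform tuning search; `score` is an assumed evaluation
    # harness returning mean squared ER against the FSR gold standard.
    import itertools
    import numpy as np

    ALPHAS = np.arange(0.05, 1.001, 0.05)          # smoothing parameter
    BETAS  = np.arange(0.00, 1.001, 0.05)          # threshold parameter
    AXES   = ['x', 'y', 'z', 'm']                  # single axis or magnitude
    SCALES = {'x': [0.5, 1, 2, 4], 'y': [0.5, 1, 2, 4],
              'z': [0.5, 1, 2, 4], 'm': [0.25, 0.5, 1, 2]}

    def tune(dataset, score):
        best, best_err = None, np.inf
        for alpha, gamma in itertools.product(ALPHAS, AXES):
            for beta in BETAS[BETAS < alpha]:      # enforce beta < alpha
                for delta in SCALES[gamma]:
                    err = score(dataset, alpha, beta, gamma, delta)
                    if err < best_err:
                        best, best_err = (alpha, beta, gamma, delta), err
        return best, best_err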
Analysis of Time-based Algorithm

We found it was not possible to train the time-based algorithm to work on all LOPs and for all subjects with an ER below 5%; the minimum attained was ER = 74%. The time-based algorithm for all LOPs of one subject reached ER = 18%, but this was only for the best-case scenario.

Tuning the time-based algorithm for one best-case LOP across all subjects was more feasible: this achieved ER = 12% for bag. If we tune the algorithm for one LOP of each subject we may get an even lower ER: when tuned for Subject 10's arm, the algorithm reached ER = 7.8% (Table 4.8). In the next section we will show that these results are not nearly as good as the performance of RRACE, with ER = 5.8% for the 8 s window variant.

Comparisons with Frequency-based Algorithm

Figure 4.7 shows boxplots of ER for all the RRACE and time-based algorithm variants, ordered by the median of ER. We divided them into five categories, as identified in the figure's caption, differing by algorithm and breadth of training set, where a more specific (but unrealistic) training set generally leads to better performance in this test.

Because it is unproductive to compare each of these algorithms with the rest, we have chosen the best of each category, in addition to the worst-case RRACE variant (one-second window); these are marked by blue ticks and blue dashed-line box plots in Figure 4.7. This is a highly conservative comparison which tends to favor the time-based algorithm: first, we used the same data for verification of each time-based algorithm that was used for its training; and second, the ER of all versions of the frequency-based algorithm is measured across all LOPs of all subjects. RRACE was not trained or tuned in this comparison.

Thus, the single “fair” comparison is between either version of RRACE (green in Figure 4.7) and the time-based algorithm trained on all subjects and all LOPs (red).

Figure 4.7: ER compared for all algorithm variants and ordered by median. [One boxplot per variant, e.g., “Subject3's Bag” for the time-based algorithm trained on a single subject-LOP combination.] (a: GREEN) 4 window sizes of RRACE (first four); time-based algorithm trained on: (b: RED) all subjects' LOPs (last one), (c: PINK) all LOPs of each single subject, (d: YELLOW) one LOP of all subjects, and (e: GREY) a single LOP of one subject. Algorithms chosen for quantitative comparison are marked by blue ticks and blue dashed-line boxplots.

Table 4.8 summarizes these comparisons, but in order of mean rather than median; thus Subject 10's arm comes after the 1-second RRACE in the figure but before it in this table.

The best of the time-based categories – Subject 10's arm, bag (across all subjects), Subject 10 (all body locations), and all subjects' body locations – and the 8-sec and 1-sec variants of RRACE appear in the first column of Table 4.8, with their respective ERs listed in the second column. It is statistically incorrect to compare these values without testing the statistical significance of their difference. Therefore,
Therefore,we used unpaired Z-tests (with Bonferroni correction for multiple comparisons) to(a) test the statistical significance of the difference between each two algorithms(one from the first column vs another one from the third to seventh column of thesecond row), and (b) measure the maximum difference while maintaining statisticalsignificance which does not apply to pairs that are not significantly different suchas bag vs 1-sec RRACE; this also applies to Tables 4.3, 4.4, 4.5, and 4.6.In particular, the difference between the best variant of RRACE and the bestsof all categories of time-based algorithm, 1.2, 5.5, 10.2, and 67.5 presented onrows 4, 6, 7, and 8 (row of Subject10’s arm, row of bag, row of Subject10, androw of all subjects’ body locations) and 3rd column (column of 8-Sec RRACE) areimportant to us; these values show that RRACE has a much lower ER than any ofthe time-based algorithms and this difference is statistically significant.4.6 DiscussionThe goal of this research was to develop a cadence measurement algorithm foraccelerometer-equipped mobile phones. We required this algorithm to be robustand work out-of-the-box with an ER of 5% or less (comparable to Yang et al.’swaist-mounted cadence measurement device [174] and MPTrain of Oliver & Flores-Mangas [115]). First, we will review the nature of RRACE’s error, its performanceon different LOPs and robustness to subject differences, and compare it with thetime-based algorithm. Then we will examine its main weakness, and finally wewill discuss the best choice for window size.108Table 4.8: Unpaired Z-test comparison of error ratios of the best and the worst versions of thefrequency-based algorithm and the best of each category of time-based algorithm. Algo-rithm variants are ordered by increasing ER mean. See Table 4.3 for more information.Difference withAlgorithm ER (%) 8-SecRRACESubject10’sArm1-SecRRACEBag Subject108-Sec Window 5.8 - - - - -RRACE (a)Subject10’s 7.8 1.2 - - - -Arm (e)1-Sec Window 11.5 5.4 2.9 - - -RRACE (a)Bag (d) 11.9 5.5 3.2 not sig - -Subject10 (c) 17.9 10.2 8.1 4.6 4.0 -All Subjects’ 73.5 67.5 65.0 1.8 60.9 53.7BodyLocations (b)4.6.1 The Nature of RRACE’s ErrorA small number of outliers are responsible for some of the error in RRACE’s read-ings. These are of two types: (a) random readings as a result of irregularities in thesignal, and (b) harmonic readings which happen when the main frequency compo-nent gets smaller than its harmonics. These outliers may be avoided by filtering theoutcome of RRACE. The rest of the error is caused by hardware measurement errorand delay from the 4 second window.4.6.2 RRACE Meets Criteria for 4/6 of Tested Locations; Time-Basedfor 0/6Movement at four LOPs (arm, bag, belt, and front pocket) contain sufficient consis-tent information for RRACE to make accurate estimates and RRACE does not needto be calibrated in order to work there. They each achieve a 3–5% ER, satisfyingthe criteria laid out above.In contrast, the time-based algorithm was highly sensitive to LOP. It was almostimpossible to tune the time-based algorithm for three of the LOPs, front pocketamong them. The LOP that fit the time-based algorithm the best was bag withalmost double the ER of the 8-second and 4-second window RRACE.1094.6.3 RRACE is Robust to Subject DifferencesThe time-based algorithm was very sensitive to subject differences. It could notbe trained to work on all LOPs of all subjects, and when trained on single LOPs,only 12% was achieved, in only one location (bag). 
4.6.4 RRACE is Sensitive to Very Slow Speeds

Our outdoor validation results showed that, like other pedometers, RRACE is sensitive to speed. The highest ER belongs to the slowest speed, with ER = 6.3%. We attribute this worsened performance to two possible causes:

(a) At lower speeds, walking cycles take longer and fewer cycles are captured in a fixed window size. As anticipated, this weakens RRACE. Mitigation requires a larger window size, e.g., by dynamically changing the window size to fit the speed.

(b) Walking becomes less autonomous and more irregular when subjects are asked to walk at very low speeds, especially because users can easily choose to walk as slowly and irregularly as they want, while at high speeds step interval is bounded by the subject's physique.

The time-based algorithm is less affected by walking speed because it just detects single steps, no matter how irregular or distant from each other they are. Thus one practical approach might be to shift to a time-based algorithm at low speeds.

4.6.5 RRACE Window Length of 4 Seconds is Best

Our results showed that the highest accuracy (lowest ER) is achieved at larger window sizes. The difference in ER is substantial for 1- vs. 2-second windows, and for 2- vs. 4-second windows, but not for 4 vs. 8 seconds. A 4-second window therefore seems the ideal length among our candidates, as a compromise between responsiveness and accuracy.

4.7 Conclusion and Future Work

In this chapter we introduced RRACE, a new algorithm for measuring cadence through a frequency-domain analysis of accelerometer data from smart phones. This algorithm's advantages are strong robustness to location on body, to orientation, and to individual physiological parameters, resulting in exceptional usability and suitability for a broad range of consumer-type applications.

We also presented an experiment design to verify our and other algorithms. Our user-based validation showed that RRACE performs well under different speed conditions, providing 5% or lower error for four of the six common LOPs examined – front pocket, bag, arm and belt, consistent with previous work in a single location [174] – and producing 8% and 11% for the other two: back pocket and hand. RRACE's primary weakness is a drop in performance for slow and irregular walking, a flaw which can be mitigated by dynamically adjusting the window size to maximize accuracy at the cost of more latency, and/or by switching to a time-based algorithm at slow speeds.

We compared RRACE with a state-of-the-art published time-based algorithm which we tuned in every way possible; our highly conservative comparisons show that RRACE is substantially more accurate than the time-based algorithm tuned for any subset of the data. Our results show that RRACE is also superior to the time-based algorithm in terms of independence from LOP and robustness to user differences. The exception is for very low and/or irregular speeds – situations which many applications of a cadence detection method might classify as a different gait and analyze using a different algorithm. We also plan to extend comparisons to include autocorrelation-based time-domain techniques, which may share some of the advantages of a frequency-based approach.

As well, our algorithm provides general guidelines for window size and robust spectral analysis. This information can be used to inform solutions to more complex realtime gait analysis problems, such as activity detection for fitness or rehabilitation applications, or individual gait identification for mobile security.
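The core of this frequency-domain approach can be sketched compactly: window the acceleration magnitude, take its spectrum, and report the dominant frequency within a plausible walking band. This is a simplified illustration rather than the production RRACE implementation; the 4 s window follows the guideline above, while the 50 Hz sampling rate and 0.5–3 Hz search band are assumptions:

    import numpy as np

    def estimate_cadence(acc_xyz, fs=50.0, window_s=4.0, band=(0.5, 3.0)):
        """Estimate cadence (Hz) from the most recent window of 3-axis
        accelerometer samples (shape [n, 3]) via an FFT peak search."""
        n = int(fs * window_s)
        mag = np.linalg.norm(acc_xyz[-n:], axis=1)   # orientation-independent magnitude
        mag = mag - mag.mean()                       # remove gravity / DC component
        spectrum = np.abs(np.fft.rfft(mag))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return freqs[in_band][np.argmax(spectrum[in_band])]

    # Synthetic check: a 2 Hz walking bounce should be recovered.
    t = np.arange(0, 4, 1 / 50.0)
    fake = np.c_[np.sin(2 * np.pi * 2.0 * t),
                 np.zeros_like(t),
                 9.81 + 0.1 * np.sin(2 * np.pi * 2.0 * t)]
    print(estimate_cadence(fake))   # ~2.0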
We are continuing to improve our algorithm. Some avenues likely to further increase its performance are reducing estimation outliers with smarter filters, and adjusting window size based on current cadence. We will also look into reducing the power consumption of our algorithm by reducing sampling and CPU usage when the subject is in a low-activity mode. Finally, we look forward to deploying RRACE in the real world: we are engaged in employing cadence to measure other useful information about gait, such as stride length and type of gait, and in exploring deployment in a variety of real applications [143].

4.8 Acknowledgment

This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the GRAND NCE. User data were collected under University of British Columbia's Research Ethics Board approval H01-80470.

Chapter 5

Susceptibility to Periodic Vibrotactile Guidance of Human Cadence

If everything seems under control, you're not going fast enough.
— Mario Andretti

In this chapter we¹ introduce a new guidance method that employs periodic vibrotactile cues to help users walk at a desired speed. We also explore walkers' susceptibility to Periodic Vibrotactile Guidance (PVG): specifically, adjustments of their stride frequency in response to cues that are clearly perceived; and finally, how long users can maintain their stride frequency after the guidance cue stops.

While wearing a vibrotactile display on one wrist, each participant was given five vibrotactile tempos, logarithmically spaced across the participant's walking frequency range. We measured stride frequency, and compared it with cue tempo under conditions that varied cue tempo and presence/absence.

This chapter appears with minimal modifications in: [76]

• I. Karuei and K. E. MacLean. Susceptibility to periodic vibrotactile guidance of human cadence. In Haptics Symposium (HAPTICS), 2014 IEEE, pages 141–146, 2014

¹ For a list of contributors and their level of involvement please refer to the Preface on page iv.

Our results suggest that most individuals (here, 13 out of 15) can synchronize their cadence with a vibrotactile cue with 95% accuracy (mean error, all participants: -1.5%, SD = 8.1) for a guidance tempo within their physical ability. Once a tempo was matched, walkers could maintain it for at least 30 seconds after the cue was turned off, showing promise for intermittent guidance as a solution to stimulus adaptation and annoyance.

This finding informs the design of spatiotemporal guidance systems, by showing how the informationally narrow but nevertheless underused haptic channel may have utility in guiding pedestrians' speed, without a need to learn abstracted signals, and through a continuous control system.

5.1 Introduction

New technologies emerge daily that aim to use sensing and computation to assist in our daily activities: task and time management, navigation and location services are but a few. Many are framed as guidance tools: they can save us time or improve our performance in some task (e.g., walking in an unknown neighborhood) by providing immediate information, or by making a task (e.g., finding the nearest coffee shop) easy enough to be done in parallel with another.

However, this potential is often undermined by usability challenges, one of the most crucial being sensory load.
Whatever the communication channel, signals deployed at a conscious level are likely to be intrusive. Additionally, most such tools rely on vision and audition as their medium for user communication. By their nature they are used in multi-task scenarios, so perceptual competition is the norm; the result often overwhelms, and routinely jeopardizes safety. Meanwhile, the tactile modality is often suggested as an underutilized alternative, but has other potential drawbacks (its own sensory load, nonperceptibility, annoyance).

In this research, we examine the use of Vibrotactile (VT) guidance cues to provide pedestrian cadence guidance, ultimately processed pre-attentively. We have previously reported sensorially optimal locations on the human body for processing pedestrian guidance cues (Karuei et al. 2011 [78]; and Chapter 3), and a validated algorithm that can measure realtime cadence well enough for interactive cadence guidance, with a commodity smartphone sensor (Karuei et al. 2014 [79]; and Chapter 4). Here, we demonstrate that given a periodic cue in a single-task scenario, walkers can adjust their step frequency to match it with minimal reported effort. In a final step reported elsewhere, we evaluate how this ability persists under varying types of sensory, physical and cognitive load.

Figure 5.1: PVG regulates a walker's step frequency with subtle cues – to help him arrive at the bus stop at just the right time. Or, help a runner train at the right cadence, or a rehab patient exert the right effort.

5.2 Approach

Human walking is a repetitive movement whose rate is primarily characterized by the stride's length and its frequency. Under normal circumstances, the walker (or runner) can control either one to achieve a desired speed: when one is constrained to increase or decrease, speed changes proportionally, while the unconstrained parameter is relatively independent of this change [89].

We propose a simple way of guiding human cadence with VT cues: we map a desired walking frequency to the tempo of a PVG cue, and ask the pedestrian to match walking tempo to it. This guidance can subsequently be incorporated into feedback control to maintain or adjust the walker's locomotion speed as desired or dictated by an application.

This means of communicating rate information fits well with known capabilities of the haptic channel, and could be helpful to pedestrians and athletes who need to efficiently manage the timing of repetitive movements (walking, running, rowing). Direct-mapped rather than abstract, PVG should require minimal learning, and have a lower steady-state impact on cognitive processing than symbolic cues [100, 156]. By freeing cognitive and attentional resources needed to attend to one's surroundings, such cues may improve safety directly and indirectly. Their simplicity may allow them to be combined with other methods of VT communication, for example to transmit higher-level activity information.

From a control perspective, PVG operates on a continuous spectrum; tempo and its inverse, the inter-cue interval, can be any positive real number. Continuous control affords many alternatives for control configuration and gain adjustment to achieve smooth, efficient regulation of cadence and speed. These include flexibility in the judicious deployment of 'silence' breaks: long periods of VT stimuli should be avoided, because too long or too many vibrations can become irritating to some users, and over-stimulation produces adaptation and loss of sensitivity [64].
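The underlying mapping is direct. As a minimal sketch – assuming the application knows or estimates the walker's stride length; names and values are illustrative:

    def pvg_cue_interval(desired_speed_mps, stride_length_m):
        """Map a desired walking speed to a PVG cue tempo (Hz) and the
        corresponding inter-cue interval (s), via speed = frequency x length."""
        tempo_hz = desired_speed_mps / stride_length_m
        return tempo_hz, 1.0 / tempo_hz

    # e.g., 1.4 m/s with a 0.7 m stride -> 2.0 Hz cue, one pulse every 0.5 s
    print(pvg_cue_interval(1.4, 0.7))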
5.2.1 Contributions

Our quantitative contributions demonstrate empirically the potential effectiveness of PVG, with:

1. Data on the effect of tempo and repetition on walkers' ability to match stride to a VT cue, confirming a broad ability to do so given a comfortably realizable tempo; and

2. Evidence of walkers' ability to maintain a cued frequency at least 30 s after cue-off, important for avoiding cue adaptation.

This creates new opportunities for systems to help pedestrians control walking speed easily and accurately. We also share an experimental methodology with utility for future cadence-control development, and discuss implications for application design.

5.3 Related Work

5.3.1 Perceptual Overload and Safety

The critique of our dependency on eyes and ears for interacting with consumer electronics (e.g., music players, GPS guidance tools, phones containing both) is well known. This reliance contributes to overload and inefficiency in visual and auditory perception [70, 101, 162], while the graphical and auditory interfaces themselves often fail when their target modalities are unavailable or inconvenient [167]. In other cases, in competing for required resources they undermine primary task performance [163]. Motor vehicle authorities increasingly acknowledge the risks inherent in electronic device usage while driving, citing distracted driving due to texting or talking on the phone as directly responsible for upticks in collision statistics [127]. But pedestrians are equally at risk of attentional lapses [66, 71, 111], rendering them vulnerable to crossing streets more slowly while using a phone [66] and to inattentional blindness [71].

The two obvious approaches to reducing visual and auditory, and ideally cognitive, load are to (a) limit the secondary task (e.g., by not using a guidance tool), which is less desirable to the user; or (b) replace audiovisual cues, and their conscious processing, with VT cues that require little effort to interpret and are ideally processed pre-attentively [11]. Examples include vibrations on the left or right side of the torso as turn direction indicators [163], alarms that warn of safety issues such as an unduly slow street-crossing or oncoming traffic, or cues that influence walking speed to make travel more efficient and retain mental capacity for other situated tasks.

5.3.2 Spatial Vibrotactile Guidance

Spatial guidance systems typically provide event-driven cues, not continuous control, but the relatively extensive efforts here are informative as to cue interpretability, attentional load and evaluation.

One class uses direct mapping of vibratory stimulus to direction, e.g., Ertan et al.'s system to guide blind users in unfamiliar indoor areas, with a 4-by-4 vest-embedded array which rendered a stop signal or a cardinal direction [35]; or Bosman et al.'s use of tactors on both wrists to augment space perception in unimpaired wearers [11]. Tsukada & Yasumura achieved 8-direction guidance outdoors with a tactor belt [163], and Koslover et al. compared VT and skin-stretch signals with visual and auditory cues [86]. All of these systems have found users able to interpret direct-mapped spatial guidance with high accuracy.

In a different shared-display approach, Rukzio et al. coordinated a palmar VT phone display with a public 8-light display.
The lights toggled on/off in a rotation, and the phone vibrated when the direction on the public display matched the user's route direction [136]. Van Erp et al. investigated more abstracted VT navigation cues, displayed around the waist using four distance-coding schemes. Two related distance and tempo of stepping rhythm (a faster tempo indicated a shorter distance), and the others communicated departure, arrival, and intermediary phase by three distinct tempos of one rhythm [167]. Their VT system was a successful direction indicator, but the distance indicators for walking needed improvement.

We envision a future system in which speed-control and direction cues are combined, with sufficient care taken to disambiguate them.

5.3.3 Periodic Guidance of Locomotion

Study of guiding how fast to walk is less common, yet pace guidance has obvious utility for mobile, Global Positioning System (GPS)-enabled navigation apps. These currently tend to assume an average walking speed, applied to everyone, to predict time-of-arrival and suggest departure times. In reality, people walk at different speeds. When arrival time is important (catching a bus or train, going to a meeting), walking speed may be as important as direction (Figure 5.1).

Walking is a repetitive task with a variable speed controlled as: walking speed = stride frequency × stride length [89]. Individuals walk at a preferred frequency, which minimizes energy expenditure and depends on the person's body. A walker may adjust both stride frequency and length to control walking speed [29]. Laurent & Pailhous measured walker response to both metronomic cues and constraints on step length, and found that good pace control can be accomplished by constraining and controlling just one of the two parameters, due to their relative independence [89]. One auditory study found that metronome beeps can also guide walking cadence [29].

Ferber et al. used haptic cues delivered through foot pedals to maintain a target intensity level on a stair-climber exercise machine while doing a mental task. Two methods embodied velocity control ("on" when outside a target zone), and another gave metronomic VT cues at 2x the desired stepping rate. Results showed issues with perceptibility and signal understandability, and reported increases in average parameters (velocity, power, and variance) rather than performance in step-level tempo matching. However, user reactions are relevant here: likeability and comprehensibility did not correspond to effectiveness at increasing effort, and the tempo-matching scheme was deemed hard to follow, and produced the greatest interference with a simultaneous task of any method tested.

In our own design we emphasized perceptibility, comprehensibility and low cognitive processing effort. Feet are not ideal for mobile cueing – sensitivity is low in the feet and degrades with movement body-wide, as explained in Chapter 3 – so we proceeded with wrist-worn tactors.

5.3.4 Controlling Step Rate

In the present work, we explore the use of continuous control of stepping frequency. The obvious alternative is discrete: a bang-bang (on-off) controller [5] that gives rate-control cues ("walk faster / slower") when speed goes outside a specified band. This approach is simple to implement, and can be attempted with sensor sources subject to noise and dropouts, such as GPS.
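A minimal sketch of such a bang-bang cue policy (the target band and return values are illustrative):

    def bang_bang_cue(speed_mps, target_mps, band_mps=0.15):
        """Discrete rate-control cueing: emit a 'faster'/'slower' cue only
        when measured speed leaves the target band; otherwise stay silent."""
        if speed_mps < target_mps - band_mps:
            return "faster"
        if speed_mps > target_mps + band_mps:
            return "slower"
        return None   # inside the band: no cue

    print(bang_bang_cue(1.2, 1.5))   # 'faster'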
However, when the control action is not well matched with system responsiveness (here, the walker's variable response to the cue; or a runner's heart rate in reaction to a change in pace on a hilly route), the result oscillates between thresholds. The resulting discomfort can be experienced with many currently available heart-rate and GPS-based running speed regulation products. Oscillation is best mitigated by widening the control band, undermining precision. Guidance into multiple bands of desired velocity (for greater precision) does not improve stability, and can make the system harder to learn or conceptually understand.

Continuous control does need reliable data, with accuracy, refresh and phase delays commensurate with control bandwidth requirements. Our implementation uses our Robust Realtime Algorithm for Cadence Estimation (RRACE) – which derives realtime step frequency estimates from a commodity smartphone accelerometer – with a phase delay within 2 steps (Chapter 4).

5.4 Experiment

To ascertain the feasibility of low-level VT guidance of stride frequency, we needed to measure how well humans can synchronize their walking frequency with PVG, and how well they can maintain their walking frequency once the cue stops.

We hypothesized that

H1 most people can follow the tempo of PVG with an accuracy >= 90%;

H2 tempos near an individual's natural walking frequency will be easier to follow (exhibiting lower cue divergence than extreme tempos);

H3 error will be negative for fast tempos (walking cadence < cue) and positive for slow tempos (walking cadence > cue); and

H4 magnitude of error will increase when the cue is turned off.

5.4.1 Apparatus and Context

Our setup consisted of a wrist-worn VT display, cadence sensing (four Android smartphones running a custom step-detection algorithm), and a control laptop, as explained below. The laptop managed the procedures (Section 5.4.4) and sent commands to the VT display wirelessly, while the phones constantly measured walking frequency.

To reduce measurement noise due to cornering, we collected data on a straight, wide, level walkway in a quiet residential area within a university campus. We found that 350 meters accommodated one minute of walking by the fastest-moving pilot participants.

Client Side: VT Cues

To deliver tactile cues to the participant's wrist, we used Tam et al.'s Haptic Notifier [156] (Figure 5.2). Relevant parts of this system are (i) an Arduino Fio microcontroller [151] with built-in XBee socket, (ii) an XBee series 2 radio to communicate with the experimenter's laptop, (iii) three synchronized eccentric-mass tactors with a vibration frequency of ∼190 Hz (Section 3.3.1), and (iv) a lithium polymer battery.

To avoid communication delay between the laptop and the Arduino wrist controller, the Arduino logged the start/end of each trial and the time when haptic cues were turned off during the trial, according to its own clock. These data were communicated to the laptop (server side) at the end of each trial. Arduino timestamps were converted to computer time in post-processing (Section 5.4.4).

We displayed two types of vibrations, all delivered at ∼190 Hz: the guidance cue (periodic vibrations, each 100 ms in duration, with an interval defined by the guidance tempo) and the stop signal (a single 5 s vibration, administered at the end of a trial). For example, for a guidance tempo of g = 2 Hz, a trial's wrist-display vibrations would consist of: [0-20 s]: 100 ms every 500 ms; [20-60 s]: no vibrations; [60-65 s]: 5 s vibration.
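This schedule is simple to express; below is a minimal sketch of a generator for the vibration events of one trial, using the durations above (names are illustrative, and actuator ramp-up is not modeled):

    def trial_schedule(tempo_hz, guide_s=20.0, silent_s=40.0, stop_s=5.0):
        """Yield (onset_time_s, duration_s) vibration events for one trial:
        100 ms pulses at the cue tempo for guide_s, silence for silent_s,
        then a single sustained stop signal."""
        t, interval = 0.0, 1.0 / tempo_hz
        while t < guide_s:
            yield (t, 0.100)                   # 100 ms guidance pulse
            t += interval
        yield (guide_s + silent_s, stop_s)     # 5 s stop signal at 60 s

    # g = 2 Hz -> pulses at 0.0, 0.5, ..., 19.5 s, then (60.0, 5.0)
    events = list(trial_schedule(2.0))
    print(events[:3], events[-1])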
Figure 5.2: The Haptic Notifier (top) and the XBee USB radio (bottom).

Server Side: the Experimenter's Laptop

The experimenter ran the main control code on a laptop that acted as the server, responsible for: (a) measuring the participant's fast and slow cadences, and deriving the mid levels from them, through the experimenter's key presses, which marked start, end, and number of strides; (b) logging synchronization times from the wrist-worn Arduino and the Android phones; (c) reading the trial order from a pre-generated table; (d) running the study step-by-step and sending commands such as "start the trial" to the Arduino; and (e) sending a request to the Arduino for logs at the end of each trial, receiving them, and saving them to a file.

Cadence Measurement: RRACE

We used four Android phones equipped with our custom RRACE algorithm for measuring users' walking frequency (Chapter 4). We placed two phones in participants' front pockets and the other two in a small backpack: while RRACE is especially robust to orientation and body placement, here we used locations previously shown to provide the highest accuracy. These phones logged the 3-D acceleration of the user's thighs and torso, and measured and recorded the user's cadence every 200 milliseconds. Duplication provided robustness to issues such as the Android operating system terminating RRACE due to perceived CPU over-usage, or inadvertent button presses. We used the median of all active cadence estimations (to discard outlier measurements) to improve measurement accuracy.

5.4.2 Experiment Design

Our experiment had two factors: guidance tempo (to assess response to divergence from natural step rate) and repetition (learning). Each trial consisted of 20 s with VT guidance and 40 s without.

An experiment session contained 16 regular trials (5 guidance rates × 3 repetitions + 1 dummy). Trials were put into out-and-back pairs for practical reasons; because 15 is an odd number, we added a dummy trial at the end (whose data were not used) to make sure the participant finished the experiment near the starting point.

Factor 1 – Guidance Rate: We matched five guidance rates to each individual's own fastest and slowest walking frequencies (Section 5.4.3).

Factor 2 – Repetition: To ascertain learning (performance improvement as a result of exposure), we presented every guidance rate three times, arranged in three blocks, each consisting of the five rates in random order.

5.4.3 Computing Experimental Guidance Rates

In an initial calibration step, we measured participant i's slowest and fastest cadences using RRACE, then matched that participant's two extreme custom guidance rates (cue tempos) g_i[1], g_i[5] to his/her slowest and fastest demonstrated cadences, respectively. We then distributed the middle rates evenly on a logarithmic scale; i.e., the ratio of each two consecutive tempos (g_i[n+1] / g_i[n]) is constant. The reference frequency f_r(t) was then set to one of g_i[1–5].
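Concretely, given the calibrated extremes, the five tempos form a geometric sequence; a minimal sketch (the cadence values in the example are illustrative):

    def guidance_rates(slowest_hz, fastest_hz, n=5):
        """Distribute n cue tempos from slowest to fastest, evenly spaced on a
        logarithmic scale: g[k+1] / g[k] = (fastest / slowest) ** (1 / (n - 1))."""
        ratio = (fastest_hz / slowest_hz) ** (1.0 / (n - 1))
        return [slowest_hz * ratio**k for k in range(n)]

    # e.g., a walker calibrated at 1.5-2.4 Hz
    print([round(g, 3) for g in guidance_rates(1.5, 2.4)])
    # [1.5, 1.687, 1.897, 2.134, 2.4]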
5.4.4 Procedures

After introduction and consent, we asked the participant to walk at his/her slowest and fastest walking speeds. For each, we measured the time required for twenty strides (t_20). Our experiment program computed the inter-step interval (τ = t_20 / 20) and thence walking frequency (f = 1/τ), to define this participant's g[1] and g[5] (slowest and fastest stride frequencies). We sent the tempos to the wrist-worn Arduino client, and synchronized the phone and Arduino clocks with the control laptop.

We next explained the task, the wrist display and the experiment format, then carried out a representative practice trial. Participants were explicitly instructed to try to (a) walk at the tempo of the cue, and (b) continue to walk at that same cadence after the cue stopped. This was repeated until the participant fully understood the protocol, and then the 15 actual trials (plus the dummy trial) were run. A session took about 45 minutes, and we thanked each participant with 10 dollars.

Pairing of Trials: Participants walked away from the experimenter on a straight walkway for odd-numbered trials, stopped when they felt the sustained VT stop signal, then turned around. When they felt the new guidance cue they began walking again, proceeding until they again felt the stop signal (in some cases passing the experimenter). To conclude close to the experimenter, the experiment ended with a dummy trial number 16 with a random cue frequency; its data were not used.

5.4.5 Metrics

We described users' stride frequency with cadence (f) and cadence ratio (f̄). Cadence is the walker's stride frequency, whereas cadence ratio is cadence divided by middle cadence, defined as the geometric mean of that walker's fastest (g_i[5]) and slowest (g_i[1]) stride frequencies, which was the guidance tempo g_i[3] in this study (Eq. 5.1). Cadence ratio was used to normalize participants' cadences to their own middle cadence, to minimize offset and scale deviation due to individual variability in natural walking frequency and range.

f̄_i(t) = f_i(t) / g_i[3]   (5.1)

We then measured departure from the guidance cue with cadence error %, defined as the difference between participant i's cadence (f_i) and the tempo of the j'th guidance signal (the tempo of the guidance signal at time t), normalized to the latter and presented in percentage points:

e_i(t) = ( f_i(t) − g_i[j(t)] ) / g_i[j(t)] × 100%   (5.2)

5.4.6 Analysis Technique

Cadence was measured every 200 milliseconds on all of the phones, each datapoint timestamped with the phone clock, and analyzed in (non-overlapping) two-second windows. We converted the timestamps of all the data from the phones to computer time. We grouped the cadence measurements from all the phones at each window, removed outliers and used their median for subsequent analysis, and removed the first 4 s, where the participant is transitioning from a stationary position to natural walking. One datapoint per 2 s in 56 s of usable trial yielded 28 datapoints per trial.

We separated the data into VT cue on/off regions, then used a Generalized Linear Model (GLM) for statistical analysis of each region, with post-hoc pairwise comparisons with Bonferroni adjustment for multiple comparisons. To assess the effect of cue-off over time, we compared datapoints at different times in the cue-off region.
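A minimal sketch of this per-window aggregation and of Eqs. 5.1–5.2 (names are illustrative, and the outlier-rejection step is reduced to the median across phones):

    from statistics import median

    def window_cadence(phone_estimates_hz):
        """Combine simultaneous per-phone cadence estimates for one 2 s window;
        the median discards outlier readings from individual phones."""
        return median(phone_estimates_hz)

    def cadence_ratio(f_hz, g3_hz):
        return f_hz / g3_hz                      # Eq. 5.1

    def cadence_error_pct(f_hz, g_hz):
        return (f_hz - g_hz) / g_hz * 100.0      # Eq. 5.2

    f = window_cadence([1.98, 2.02, 2.01, 3.95])   # one phone reads a harmonic
    print(cadence_error_pct(f, 2.0))               # ~0.75% faster than the cue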
Table 5.1: Summary statistics of cadence error % by guidance condition for cue-on (top) and cue-off (bottom).

    Cue On (18 s after the start of the trial)
    Guidance    mean    sd     median    min      max     skew    kurtosis   se
    g1         -0.50   2.47     0.00    -6.61     3.78   -0.35    -0.54      0.40
    g2          0.40   5.51    -0.04   -20.36    13.50   -0.13     4.32      0.84
    g3         -0.64   2.51    -0.54    -7.58     7.10   -0.01     1.91      0.39
    g4         -3.20   4.71    -1.04   -19.13     3.20   -1.35     1.71      0.73
    g5         -7.70   7.42    -9.10   -21.58     2.40   -0.16    -1.48      1.11

    Cue Off (58 s after the start of the trial)
    Guidance    mean    sd     median    min      max     skew    kurtosis   se
    g1          6.11   6.07     6.50    -8.14    25.06    0.70     1.59      0.96
    g2          3.43   5.26     2.74    -6.54    16.75    0.51    -0.36      0.83
    g3          0.33   6.41     0.47   -24.06    10.56   -1.30     3.15      1.00
    g4         -3.10   5.89    -1.47   -14.45     6.81   -0.47    -0.90      0.91
    g5        -10.29   7.07   -10.98   -26.25     1.57   -0.33    -0.85      1.05

5.4.7 Results

Data Summary

15 participants (9 male), aged 19–31 years (mean = 24.9, SD = 3.6), 152–196 cm tall (mean = 169.7, SD = 11.2), and weighing 39–90 kg (mean = 63.4, SD = 14.2) took part. 4, 2 and 9 participants respectively had none, <5 years, and >5 years of prior musical training.

Stride frequency increases with cue tempo (g1...g5) even 38 seconds after turning off the cue, i.e., at t = 58 s (Figure 5.3). The fastest VT cue shows less success at making users walk faster (g5 and g4 are too close in Figures 5.3 and 5.4).

Cadence error % demonstrates how well people are following the VT cues: positive (or negative) error % means the participant's cadence is faster (or slower) than the cue tempo. Figure 5.5 shows that when the cue is on, users closely follow the cue tempo (average error < 4%) except for the fastest (average error -7.7%). When we turn off the VT cue, step rate diverges more from cue tempo and (unsurprisingly) tends towards the middle stride rate.

Individual post-cue divergence is best seen by viewing data from a single participant (second repetition) as a set of time series. Figures 5.6-5.7 are scatter plots with a smooth curve fitted by the Locally Weighted Regression (LOESS) method [24]; Participant 4 was chosen randomly from 12 of the 15 participants showing a similar response pattern. Consistent with the aggregate views, 20 seconds into the trial, when the cue stops, cadence error starts to grow, although for some tempos it quickly plateaus. For slower guidance cues (g1 and g2) cadence error is generally positive, and negative for faster cues (g4 and g5).

Figure 5.3: Cadence by guidance rate (average of all participants and all repetitions), when cue is on (left/yellow, at 18 s); and off (right/gray, at 58 s). Despite inter-individual variability, the cue-linked cadence increase is clear in both cases. Guidance rates are individual-specific and thus cannot be shown.

Statistical Analysis

We separately analyzed guidance and non-guidance periods, to investigate whether cadence error % is significantly different (a) under different guidance conditions when the cue is on and off, (b) at different points in time since the start of the trial when the cue is on, and (c) at different points in time after the cue is stopped (see Table 5.1).

VT Cue On: Statistical analysis of the data with a Generalized Linear Model showed that for cue-on, guidance rate and time from trial start have a significant effect on cadence error % (p < 0.05). Pairwise comparisons show that each two of the guidance tempos differ significantly from each other. These factors also interact with each other (p < 0.05), with a simple explanation: under slower guidance tempos, walkers start with a positive error that shrinks as the cue continues, and under faster tempos participants start with a negative error that then shrinks.
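A minimal sketch of this style of analysis in Python (hypothetical file and column names; a Gaussian-family GLM with Bonferroni-corrected pairwise tests, as a simplification of the actual pipeline):

    from itertools import combinations
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import ttest_ind

    df = pd.read_csv("cue_on_windows.csv")   # hypothetical: error_pct, guidance, time_s

    # GLM with guidance tempo, time, and their interaction
    fit = smf.glm("error_pct ~ C(guidance) * time_s", data=df).fit()
    print(fit.summary())

    # Post-hoc pairwise comparisons with Bonferroni adjustment
    pairs = list(combinations(sorted(df["guidance"].unique()), 2))
    alpha = 0.05 / len(pairs)
    for a, b in pairs:
        _, p = ttest_ind(df.loc[df.guidance == a, "error_pct"],
                         df.loc[df.guidance == b, "error_pct"])
        print(f"{a} vs {b}: significant = {p < alpha}")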
Figure 5.4: Cadence ratio by guidance rate, when cue is on (left, at 18 s) and off (right, at 58 s). Cadences (normalized to the participant's middle tempo), in contrast to the non-normalized cadences of Figure 5.3, show less individual variance, and the difference between the 5 levels is clearer.

In the temporal response, the first measurement after the 4 s transition period removed for this analysis (Section 5.4.6) was significantly different from the rest of the measurements during the cue-on region, but there were no significant differences between subsequent 4 s windows in the guidance period. This indicates that participants aligned their walking rate with the cue tempo early on, attained stability by 4 s, then maintained it thereafter.

VT Cue Off: Similarly to cue-on, when the guidance cue is off, guidance rate and time into the trial (or since cue-off) significantly impact cadence error % (p < 0.05). They also interact with each other in the cue-off region, with an explanation similar to the above. Pairwise comparisons show that each two of the guidance tempos are significantly different from each other.

Temporally, two of the first measurements after stopping the cue were significantly different from two other times near the end. This means that the error grows when the cue stops, but the change in error is so slow that there is little difference except for points sufficiently far apart in time.

Figure 5.5: Cadence error % by guidance rate, when cue is on (left, at 18 s) and off (right, at 58 seconds). At the end of the cue-on phase (left), the smallest error is seen in the lower three levels, g1, g2, and g3 (means: −0.5%, 0.4%, −0.6% respectively) and the largest with g5 (−7.7%). After the cue stops, the absolute value of error grows faster for g1 and g2 (absolute values of means increase by 5.6 and 3.0 respectively) than for all other rates.

5.4.8 Discussion

Our experimental results confirm that periodic VT cues can easily affect a pedestrian's walking frequency when consciously followed (less than 5% divergence for four out of five cue rates, and less than 10% for the fastest) (H1 accepted). Our results showed that for tempos distributed across an individual's full walking range, divergence from cued tempos near and lower than the individual's natural walking frequency is lower (H2 rejected). Error increases when the cue is turned off (H4 accepted), but this increase happened at a subtle rate within the 40 s window we observed.

When a user tries to synchronize steps with a cue, the direction of error and its upper bound are generally predictable: positive when the cue is faster than the walker's typical cadence and negative when slower (H3 accepted). A benefit of this predictability is the possibility of mitigating overall error in a Closed-loop Control system by anticipating the worst-case scenario and adjusting the cue to compensate, i.e., by applying a model of the walker's response to this low-level stimulus.
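A minimal sketch of such feedforward compensation; the bias model here – a flat 7% lag above natural cadence and 5% lead below, loosely motivated by the error directions reported above – is illustrative and would need to be fit per user:

    def compensated_cue(target_hz, natural_hz, lag=0.07, lead=0.05):
        """Pre-distort the cue tempo to cancel the walker's predicted bias:
        walkers tend to fall behind cues faster than their natural cadence
        and run ahead of slower ones."""
        if target_hz > natural_hz:
            return target_hz / (1.0 - lag)    # cue slightly faster than target
        if target_hz < natural_hz:
            return target_hz / (1.0 + lead)   # cue slightly slower than target
        return target_hz

    print(round(compensated_cue(2.2, 2.0), 3))   # 2.366: anticipating a ~7% lag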
Figure 5.6: Scatter plot of P4's cadence during trials 6-10 by guidance rate, with a smooth curve fitted by the LOESS method. Bands represent the confidence interval of the LOESS method. From guidance cue off (20 s) to trial end (60 s), cadence converges toward the walker's typical cadence.

Figure 5.7: Scatter plot of P4's cadence error % during trials 6-10 by guidance rate (colour coded), with a smooth curve fitted by the LOESS method. From guidance cue off (20 s) to trial end (60 s), cadence error tends to grow (further from zero) at least initially, then stabilizes in some cases.

5.5 Conclusions and Future Work

In this chapter we proposed Periodic Vibrotactile Guidance (PVG) for regulating pedestrian stride frequency. An exemplar application is guiding a commuter toward the closest bus stop at the optimal walking speed – not sweating when there is time for a stroll, nor missing the bus when a slightly faster pace is sufficient. Other applications for PVG include athletic training (a long-distance or sprint runner or rower seeking to maintain a step-level pace) and rehabilitation (displaying a desired step frequency to a patient instructed to achieve a given effort or mobility level, and no more).

Our results confirm that taction, and in particular stimuli applied through a wearable to the wrist, is a viable choice for such applications. The wrist is not used in the larger task of locomotion, and cueing there does not compete for perceptual or motor resources that other tasks (listening, reading, even texting on a mobile device) might; simple to learn, it is likely to be cognitively lightweight as well.

Whether audible or tactile, periodic guidance has the potential for more stable, comfortable cadence regulation than the common alternative, bang-bang velocity control, although this premise remains to be tested. Specifically, its continuity allows for deployment in closed-loop control systems – most simply a Proportional-Integral-Derivative (PID) controller – that can further improve the user's performance by adjusting the cue based on current state, previous error and future predictions, with gains adjusted to the user's needs and physiological responsiveness. An interesting next step will be to explore the control parameter customization needed for different task scenarios and individual differences.

Our experiment tested individuals' ability to match stride frequency with a VT cue displayed to the wrist. Most (13/15) could synchronize with 95% accuracy across their full range of walking speed, with a 5-10% lag behind cues faster than their natural cadence, and a 5% lead ahead of slower cues, without significant training. In day-to-day applications such as pedestrian guidance this error ratio will be negligible relative to other factors: a 5% error over a 15 minute walk is equal to 45 s, and is predictable enough for a planning algorithm to compensate for it. In applications that require more accuracy, such as training athletes, users' focus and effort could improve accuracy.
Ideally, we would like users to "lock the buzz" to a particular point in their walk cycle to achieve maximum accuracy and stability; however, without data on the phase of the walking cycle we cannot be sure whether some users achieved that.

Walkers maintained their stride frequency within a manageable bound after cue-off; divergence was slow enough to contemplate the use of (at least) 30 s 'silence' breaks between cued periods, important for avoiding irritation and adaptation. The actual length of silence breaks can be further optimized by a Closed-loop Control algorithm.

5.5.1 Future Work

As we proceed stepwise to a fully viable control approach, the most immediate next step after verifying conscious cue-matching ability is to examine subconscious step-matching to VT cues. This is an essential component of a viable control approach for users unlikely or unable to fully concentrate on step rate for any length of time.

Set up as a dual-task scenario, important cases to consider will be distracting auditory, visual and cognitive tasks with qualities similar to those we engage in while walking and exercising (listening to music or podcasts, talking on the phone, navigating a map, or perhaps even regarding our surroundings). The workload imposed by the PVG system on any of these tasks, and by them on step-matching performance, is of keen interest.

Finally, we anticipate that using PVG in a simple closed-loop format will be key to its applicability. Many variables remain to be investigated on this topic: e.g., whether modifying vibration intensity in proportion to target tempo divergence will improve performance, and the many possible means of incorporating silence periods to mitigate adaptation and improve acceptability.

5.6 Acknowledgment

This work was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the GRAND NCE. User data were collected under University of British Columbia's Research Ethics Board approval H01-80470.

Chapter 6

Periodic Vibrotactile Guidance of Human Cadence, Performance during Auditory Multitasking

The degree of slowness is directly proportional to the intensity of memory.
The degree of speed is directly proportional to the intensity of forgetting.
— Milan Kundera, Slowness

In this chapter we¹ evaluate the viability of a haptic cueing approach for guiding pedestrian walking cadence with regard to workload, the walker's performance, and interference with auditory tasks. We previously demonstrated that pedestrians can synchronize and maintain walking frequency with vibrotactile pulses delivered to the back of the wrist from a wristband. Here, we examine walkers' guidability in the face of realistic auditory multitasking scenarios (listening to podcasts, or to music of varying rhythmicity). We measure workload and walkers' performance under three guidance rates and four auditory tasks. Our results suggest that while auditory tasks – in particular, those with verbal content – do undermine cadence-matching performance, stepping synchrony is generally achieved with >= 90% accuracy within 10 seconds. Vibrotactile guidance does thereby successfully affect walkers' speed. Perceived guidance-related workload is statistically significant but not related to cueing frequency; future work will assess its practical significance.

¹ For a list of contributors and their level of involvement please refer to the Preface on page iv.
6.1 Introduction

Multitasking has become one of the main themes of our lives; society encourages it, our ambitious lifestyles demand it, and technology facilitates it. Sadly, productivity does not always improve; instead, the competition for mental resources imposed by tasks conducted in parallel may slow us down or cause mistakes. When multitasking is unavoidable, technology needs to mitigate the negative impact – e.g., by simplifying a task, improving its timing, or diverting the required processing to a less-used cognitive resource. A good example is the auditory step-by-step directions that have become the norm for GPS navigation devices; augmenting the graphical interface with auditory signals enables drivers to keep their eyes on the road, and breaking the directions into small steps makes guidance signals easier to digest.

Guidance (e.g., for time management, navigation, and finding nearby services) is by definition multitasking: the guidance happens in parallel with a primary task (e.g., coordinating a meeting, walking to a destination). Replacing or augmenting the visual interface with an auditory one may reduce some negative effects (e.g., looking at the device instead of the road while walking or driving) but may also create new usability challenges. For example, audition can be as occupied as vision (listening tasks), while environmental noise can further interfere with perception. The tactile modality is often suggested as an underutilized alternative.

Haptic cues, most conveniently implemented as Vibrotactile (VT) stimuli, have the potential to impose less attentional load than visual and auditory cues, and to conflict less with situational awareness and other listening tasks. The larger goal of this project is to establish the degree to which this can be exploited in pedestrian guidance. In earlier steps, we identified sensorially optimal locations on the human body for processing pedestrian guidance cues (Karuei et al. 2011 [78]; and Chapter 3), and a validated algorithm that can measure realtime cadence well enough for interactive cadence guidance, with a commodity smartphone sensor (Karuei et al. 2014 [79]; and Chapter 4). We then determined the range and accuracy with which walkers are able to synchronize their stepping cadence with VT cues (Karuei and MacLean 2014 [76]; and Chapter 5), by asking the pedestrian to walk to the cue beat, and to continue with this cadence after the cue stopped.

Figure 6.1: Experiment setup. Left: during a trial, the participant carries four smartphones equipped with the RRACE algorithm for cadence measurement (two in front pockets and two in a backpack); another smartphone (audio player) is attached to the backpack with its screen facing out, for the experimenter to choose and play the audio tracks. The Haptic Notifier is worn on the participant's wrist. Right: the participant answers the NASA-TLX questionnaire on a laptop after each trial pair.

This brought us to the focus of the present chapter: how well can walkers follow these cues during realistic auditory multitasking, and what is the magnitude of the workload that VT cues impose on them?

6.2 Approach

Human walking, like many other movements (running, swimming, rowing), is repetitive, and its speed is defined by stride frequency (cadence) and length.
Typically, a walker controls both parameters, unconsciously, to achieve a desired speed; however, when one parameter (stride length or frequency) is constrained to increase or decrease – within the walker's ability – speed also changes proportionally [89]. We explore the potential to exploit this property of walking in three sequential steps.

1. Periodic Vibrotactile Guidance (PVG): Our guidance scheme is driven by periodic tactile cues, which render a desired frequency to the walker through the skin as a stepping target. This means of communicating rate information fits well with known capabilities of the haptic channel, and may be helpful to pedestrians and athletes who want to closely but efficiently manage the timing of repetitive movements. Direct-mapped, PVG should require minimal learning and have a lower steady-state impact on cognitive processing than symbolic cues [68, 100, 156]. Periodic cues are also simple enough to be combined with other haptic communication, such as navigational or higher-level activity information.

2. Evaluating PVG and Workload: In the study reported here, we found that most pedestrians can continue to synchronize their cadence with the VT cue tempo, even in the face of a variety of types of auditory tasks. As a result, PVG successfully affected the walking speed of pedestrians in the cued direction. Workload measured under various combinations of auditory stimuli and VT guidance further showed that workload due to tactile guidance is noticeable, but that cue frequency does not significantly change its amount. We found that the workload increase due to auditory input was small compared to that from PVG, and had only a small impact on the user's ability to follow stepping cues. On average, users required about 8 seconds to achieve a steady cadence from a stationary start.

3. Control of PVG: Ultimately, we plan to incorporate PVG into feedback control to maintain or adjust the walker's locomotion speed according to an application's changing specifications. Tempo (f) and its inverse – the cue interval (T = 1/f) – can be any positive real number, which gives PVG a continuous spectrum to operate on. This characteristic affords many linear and non-linear feedback control configurations to achieve fast, smooth, efficient, and/or error-free regulation of cadence and speed. While this manipulation is beyond our present scope, it has informed the design of the present study.
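To make the framing concrete, here is a minimal sketch of one possible configuration – a proportional-integral update of the cue tempo toward a speed target. This illustrates the idea only; it is not a controller implemented or validated in this work, and the gains and names are placeholders:

    class PvgSpeedController:
        """Proportional-integral adjustment of the PVG cue tempo (Hz) so that
        measured walking speed converges to a target speed."""
        def __init__(self, cue_hz, kp=0.4, ki=0.05):
            self.cue_hz, self.kp, self.ki, self.acc = cue_hz, kp, ki, 0.0

        def update(self, target_mps, measured_mps, dt_s):
            err = target_mps - measured_mps     # speed error (m/s)
            self.acc += err * dt_s              # integral term
            self.cue_hz += self.kp * err + self.ki * self.acc
            return self.cue_hz, 1.0 / self.cue_hz   # new tempo and cue interval

    ctl = PvgSpeedController(cue_hz=2.0)
    print(ctl.update(target_mps=1.5, measured_mps=1.3, dt_s=1.0))  # tempo nudged up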
6.2.1 Contributions

The present evaluation provides:

1. Data on the effect of PVG rate and auditory task on cadence, stride length, and walking speed.

2. Data on the effect of PVG and auditory task on workload during walking.

3. Analysis of walkers' ability to follow PVG cues during auditory multitasking, comparing guidance rates and auditory tasks in terms of performance and workload.

4. Experimental methodology for measuring walking performance and workload during auditory multitasking, for re-use in exploring other workload-reducing stratagems.

5. Recommendations on how to incorporate PVG so as to minimize its workload-related impact.

These findings will inform improved pedestrian guidance systems which, by reducing guidance-related mental effort, can be helpful without compromising safety. Our experimental methodology can be re-used in similar settings to better understand motor control and cognitive functions and their relationship to auditory and tactile stimuli, particularly for the development of tactile and/or guidance applications.

6.3 Related Work

6.3.1 Vibrotactile Guidance

As previously outlined in greater detail (Section 5.3.1), a secondary task that uses audiovisual channels – e.g., via a Global Positioning System (GPS) device – competes for resources required by an aurally or visually demanding primary task (e.g., driving or walking). This contributes to overload and inefficiency in visual and auditory perception [70, 101, 162], undermines primary task performance [163], and can thereby endanger safety and cause substantial stress.

The two obvious approaches to reducing visual and auditory, and ideally cognitive, load are (a) limiting the secondary task (which the user may find unacceptable), and (b) replacing audiovisual cues with a lower-effort alternative.

Guidance of movement in space is one activity where the tactile modality has exhibited promise as a replacement for, or augmentation of, the visual and auditory channels. Examples demonstrating its unloaded guidance potential include Ertan et al.'s embedded tactor array for rendering cardinal directions and stop signals [35], Bosman et al.'s wrist-mounted tactors for guidance in indoor places [11], and Tsukada and Yasumura's tactor belt capable of communicating the four cardinal and four intermediate directions through eight tactors around the waist [163]; for a full review, see Section 2.1.6.

Temporal (or spatiotemporal) guidance has the additional challenge of time-variant dynamics. Maruyama et al.'s P-Tour [104] and ten Hagen et al.'s Dynamic Tour Guide (DTG) [159] are examples of spatiotemporal guidance which schedule visits to tourist attractions based on the user's location. Both of these use graphical interfaces; this presents a potential sensory conflict with problematic results. Alternatively, the Haptic Notification System (HANS) by Tam et al. is an example of temporal guidance for time management during oral presentations, delivering interrupt-based cues to the presenter and the session chair at certain points in time during the presentation [156]. While this application was found to present minimal additional sensory load, by its nature it required cognitive processing to make use of the cues, which in turn required practice and training, and thus it is not directly comparable to our aims in pedestrian support.

We envision a system where time management, speed, and direction cues are combined in a navigation tool to help users achieve their goals with safety and efficiency.

6.3.2 Guidance of Human Locomotion

Walking is a repetitive task with a variable speed controlled by cadence (or stride frequency) and stride length [89], as shown in Equation 6.1.

walking speed = cadence × stride length   (6.1)

When unconstrained, we tend to walk at the speed most comfortable for us; generally one that minimizes energy expenditure per distance [130]. Increasing or decreasing walking speed is achieved by changing stride frequency and stride length [83]. It is possible to control walking speed by constraining stride length [89, 119, 170], stride frequency [89], speed, or both [10]. As these are all obviously related, one might expect to see a compensatory effect from stride length (or frequency) when altering stride frequency (or length). However, Laurent and Pailhous showed that these parameters are relatively independent, while each is instead strongly correlated with speed [89] – hence, an opportunity to control walking speed by constraining and controlling stride length or stride frequency.
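As a worked example of Eq. 6.1 (the stride length here is an assumed, illustrative value): at a cadence of 2.0 Hz and a stride length of 0.7 m, walking speed = 2.0 Hz × 0.7 m = 1.4 m/s; raising cadence by 15% at a fixed stride length (2.3 Hz × 0.7 m) yields 1.61 m/s.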
Stride length guidance has generally been achieved with visual cues such as tape markers [89, 119, 170]; stride frequency with auditory cues such as metronomic beats [10, 29, 89]. Haptically, Ferber et al. tried different methods for guiding workout speed on a stair climber. Their metronomic approach (taps on the user's feet at double the rate of the desired cadence) did not give promising results [37].

In Chapter 5 we evaluated the use of periodic vibrotactile cues to guide human cadence, and ultimately speed. In our design, we emphasized perceptibility, comprehensibility, and low cognitive processing effort. Feet are not ideal for mobile cueing (sensitivity is low in the feet and degrades with movement body-wide – Chapter 3), so we used wrist-worn tactors [156]. We found a basic ability to follow cues, as well as its limit: participants fell behind fast VT cues and walked faster than slow cues, relative to their typical cadence (Chapter 5). That is, rather than exactly matching the cued tempo, the cues appeared to exert upward or downward pressure on actual walking tempo.

6.3.3 Temporal Guidance and Auditory Task

Multiple Resource Theory (MRT) posits that the interference between two tasks depends on how much they share stages (cognitive vs. response), sensory modalities (auditory vs. visual), codes (visual vs. spatial), and channels (focal vs. ambient) [171]. In this regard, VT guidance (periodic or non-periodic) has little to no interference with a pedestrian's vision. However, PVG and any auditory task (e.g., listening to music or podcasts, or conversing) could interfere with each other in two areas: non-visual sensory perception and motor control.

Mammals (and humans in particular) may be subject to three temporal scales and/or mechanisms: the circadian clock involved in metabolic rhythms; an interval timer, flexible, cognitively controlled, and active at seconds to minutes; and a millisecond clock for speech, music, and motor control [17]. Ideally, PVG will engage the millisecond clock and impact motor control, eventually reducing mental workload via downgraded reliance on the "cognitively controlled" interval clock in daily tasks (e.g., deciding when to start or end a task based on temporal constraints). As a result, the sensory perception of both the vibrotactile cues and time-sensitive auditory tasks, such as listening to music, would share the millisecond clock. On the other hand, movement timing depends on the basal ganglia (involved in interval timing) and the cerebellum (millisecond timing). The latter is also heavily involved in rhythm synchronization and music perception [176].

This suggests that listening to music, especially the rhythmic variety, will interfere with / be most affected by PVG. To test this, we used auditory tasks with obvious and subtle rhythms, or with verbal content.

6.3.4 Performance and Workload

Methods employed to evaluate mobile and handheld systems include qualitative (interviews and observations) and quantitative (e.g., error rate and timing of events with the help of video recording [123]).
The active component of mobile use has been considered via heart rate and deviation from preferred walking speed [84], and via cognitive workload [84, 123].

Rubio et al. group tools for evaluating physical and mental workload into performance-based, physiological, and subjective-measure categories [135]. They note the frequent use of subjective procedures due to their ease of implementation, non-intrusiveness, and sensitivity to operator load. Of the subjective workload measures they compare – NASA Task Load Index (NASA-TLX), Subjective Workload Assessment Technique (SWAT), and Workload Profile – the first [65] has seen the most use in VT guidance research. Two of many research examples are Pielot & Boll's use of NASA-TLX to measure the workload of their tactile navigation system "Wayfinder" and compare it with a commercial pedestrian navigation system [122]; and Hoggan et al.'s investigation of the perception of mobile multi-actuator tactile displays that use rhythm and location [70].

In the present research, our primary concern is to observe performance and workload under our experimental conditions. Our quantitative performance metrics include cadence, cadence error % (i.e., divergence from the guidance cue), stride length, and speed; we used the full NASA-TLX to measure perceived workload, which includes mental demand, physical demand, temporal demand, performance, effort, frustration, and total workload. We did not collect other qualitative or subjective metrics at this stage (e.g., regarding participant preferences), to keep experimental sessions to a manageable length and because they will be more relevant in a setting where participants use PVG over longer periods of time.

6.3.5 Measuring Cadence

Accurately and usably guiding cadence will require Closed-loop Control (CLC), and concomitant accurate realtime measurement of the actual step rate. With only open-loop control (no system access to the resulting rate), the designer has little alternative to constant-level, ongoing cue output regardless of need, and this is bound to cause user irritation and stimulus adaptation. While discussion of possible CLC algorithms is beyond our present scope, the availability of adequate cadence measurement technology is enabling to our larger aims, as well as necessary to collect the data reported here.

Cadence can be measured using many different technologies, from traditional pedometers equipped with mechanical or piezoelectric sensors to accelerometer-based instruments [43, 106, 174] (see Chapter 4 for a full review of these and several other methods). In order to be used in a guidance system, a cadence measurement should be: (a) sufficiently accurate, (b) realtime, (c) robust to placement, orientation, and user differences, and (d) portable. We previously presented RRACE (Robust Realtime Algorithm for Cadence Estimation), which meets these requirements through a frequency-based approach, in Chapter 4. RRACE measures momentary cadence via frequency-domain analysis of accelerometer signals available in smartphones. We used RRACE in our experiment (Section 6.4.5); however, its development, and cadence measurement in general, are not key parts of the present evaluation.

6.4 Experiment

We conducted an experiment to assess the effect of auditory task on a user's performance, and the amount of workload PVG imposes on the user in conditions with and without several key types of auditory tasks.
6.4 Experiment

We conducted an experiment to assess the effect of auditory task on a user's performance, and the amount of workload PVG imposes on the user, in conditions with and without several key types of auditory tasks. We hypothesized that:

H1 PVG will influence the user's cadence, stride length, and walking speed in the cued direction.

H2 Auditory task will interfere – variously – with the effect of PVG on cadence, stride length, and walking speed, with greatest impact for highly rhythmic music.

H3 Users will be able to synchronize step cadence with the guidance cue within 5–10 seconds.

H4 Presence of PVG will increase workload, with greatest impact for faster cues.

H5 Auditory task will add to workload during walking, with the effect greatest for a verbal task (e.g., listening to a podcast).

6.4.1 Experiment Design

We used a within-subject repeated-measures design with two factors: guidance tempo and auditory task. Each trial lasted 25 s, and was part of a two-trial, out/return repeated pair; i.e., the subject executed a given condition once in each direction, finishing the return trial close to the starting point. An experiment session contained 24 trials: two instances of 3 guidance tempos × 4 auditory task conditions.

6.4.2 Guidance Conditions

We used three guidance conditions: fast, slow, and no guidance. Because each participant has a personal natural cadence, in an initial calibration step we measured the participant's typical cadence by timing ten steps and measuring the average step interval, then matched the fast and slow guidance rates to 1.15 and 1/1.15 times his/her typical cadence, respectively. This ratio value was decided based on the average range of participants' cadences, as informed by our previous study (Chapter 5). There, we set the fastest and slowest tempos to each participant's fastest and slowest cadence and distributed the other three rates between them, but found that variance in how participants chose their fastest and slowest rates (narrow vs broad range) reduced our data's consistency. Here, we used the same fast/slow ratio (i.e., 1.15²) for all subjects.

Our current focus was workload and performance, and the effect on them of auditory tasks. We therefore tested two rather than the five guidance rates of Chapter 5, allowing us to include a range of auditory conditions for a reasonable session length.

In Chapter 5 we used a baseline guided tempo near the participant's natural cadence, which turned out to be uninformative. Here we replaced this with a no-guidance baseline that would permit comparison with guided conditions for cadence, cadence error, overall movement speed, and subjective workload.
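As a concrete illustration of the calibration step above, the snippet below derives the two guidance tempos from footfall timestamps recorded during the baseline walk. The function and its input are hypothetical; the 1.15 ratio is the one used in the study.

```python
def guidance_tempos(step_times, ratio=1.15):
    """Derive (fast, slow) guidance tempos in Hz from one timed bout of
    walking. step_times: timestamps (s) of consecutive footfalls."""
    intervals = [b - a for a, b in zip(step_times, step_times[1:])]
    tau = sum(intervals) / len(intervals)        # average step interval (s)
    f_typical = 1.0 / tau                        # typical cadence (Hz)
    return f_typical * ratio, f_typical / ratio  # fast and slow cue rates
```

For example, a participant stepping at τ = 0.5 s (2 Hz) would receive fast cues at 2.30 Hz and slow cues at about 1.74 Hz.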
6.4.3 Auditory Tasks

We tested four auditory task conditions: podcast, techno, classical, and silence (Table 6.1).

Podcast examined the effect of verbal auditory tasks on participants' performance. Another option, an actual scripted phone conversation administered by a confederate, was infeasible due to low controllability.

We sampled the diverse space of music, a key pedestrian diversion, by varying rhythmic emphasis, on the premise that this will generate higher PVG interference than melodic variation. Factor levels were high (techno) and low (classical).

Table 6.1: Auditory task conditions used in evaluation.
  Verbal – Podcast: Engaging (but obscure) segment to fully hold attention: "What Caused the Sabre-Tooth Tiger Extinction", produced and broadcast by CBC's "As It Happens" [19]. All participants confirmed its novelty.
  High Rhythm – Techno: Non-vocal techno song called "Supa-Dupa-Fly" with a typical techno-trance structure, and a simple and distinctive rhythm. We produced several samples with varying tempos.
  Low Rhythm – Classical: Johann Sebastian Bach's "Air on G String" – consistent melodic elements devoid of strong or repetitive rhythm. One version (conventional tempo) was used.
  Baseline – Silence: No auditory stimuli.

Choice of auditory rate: Techno music slower than a pedestrian's typical cadence would sound strange, whereas a beat faster than the fast VT cues would reinforce, rather than conflict with, the VT cue – undermining experiment objectives. We resolved this by choosing a single auditory tempo near the geometric mean of the participant's typical cadence and the fast cue (f_techno = √1.15 × f_typical). Modifying the music on the fly to match participants' unique cadence and guidance rates was impractical, so we prepared 14 versions in advance and chose the best fit at run time. The average human typical cadence is 2 Hz (120 BPM), so we created a 120 BPM base version, plus eight faster and five slower versions. Each rate was 1.036 (= 1.15^(1/4)) times faster/slower than the adjacent ones, ranging from 1.679 Hz (100.8 BPM) to 2.645 Hz (158.7 BPM).
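The tempo ladder and best-fit selection can be expressed compactly; the sketch below reproduces the arithmetic above. Names are illustrative, and the actual tracks were rendered in advance rather than computed at run time.

```python
import numpy as np

BASE_HZ = 2.0            # 120 BPM, the average typical cadence
STEP = 1.15 ** 0.25      # ~1.036, a quarter of the guidance ratio

# Five slower and eight faster versions around the base (14 in total),
# spanning 1.679 Hz (100.8 BPM) to 2.645 Hz (158.7 BPM).
LADDER = BASE_HZ * STEP ** np.arange(-5, 9)

def best_fit_tempo(f_typical):
    """Pick the pre-rendered track nearest the target techno tempo, the
    geometric mean of the typical cadence and the fast cue."""
    target = np.sqrt(1.15) * f_typical
    return LADDER[np.argmin(np.abs(LADDER - target))]
```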
6.4.4 Metrics

The following metrics were computed for each sample, and an aggregate value compiled for each trial.

Cadence

Stride frequency was sampled at 1 s intervals using RRACE (Chapter 4) running on four Android phones.

Cadence Error %

The participant's measured error (divergence from guidance rate) divided by guidance rate at each sample. The sign of the error indicated whether the participant was behind or ahead of the tempo.

Cadence Ratio

Measured cadence divided by the walker's natural cadence, at each sample. Normalization was performed due to large individual differences in natural cadence.

Speed

The participant's average walking speed during a single trial. We placed two coloured flags, one about 2 m after the starting point and another ~17 m from the first. An experimenter (E2) timed participants as they passed the flags, going away and coming back. Speed was post-computed as distance between the flags divided by elapsed time.

Speed Ratio

Measured speed during a single trial divided by that participant's speed during the baseline condition (silence with no guidance). This parameter allows combination of speed measurements from all the participants.
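A small sketch of how the objective metrics above can be computed from raw measurements. Argument names are hypothetical, and the actual pipeline (Section 6.4.7) additionally fuses multiple phones and trims trial onsets; the subjective measure, described next, comes from questionnaires instead.

```python
def trial_metrics(cadence_hz, f_guidance, f_natural, speed_mps, v_baseline):
    """Per-sample and per-trial metrics as defined in Section 6.4.4."""
    cadence_error_pct = [100.0 * (c - f_guidance) / f_guidance
                         for c in cadence_hz]    # negative = behind the cue
    cadence_ratio = [c / f_natural for c in cadence_hz]
    speed_ratio = speed_mps / v_baseline         # baseline: silence, no guidance
    return cadence_error_pct, cadence_ratio, speed_ratio
```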
Subjective Workload

Participants reported workload using two-part NASA-TLX questionnaires after each trial pair (going away, coming back). In Part 1, the participant rates six subscales addressing mental demand, physical demand, temporal demand, performance, effort, and frustration on a single page, in a 100-point range in 5-point steps (20 grades). In Part 2, the participant adds weights to these subscales via pair comparisons (e.g., physical demand versus frustration), as 15 questions, one per page. Each NASA-TLX questionnaire took about 2 minutes to complete. We used the total workload index and the six subscales in our analysis.

Figure 6.2: Data flow throughout the experiment and during post-experiment data processing.

6.4.5 Apparatus and Context

Our setup consisted of a wrist-worn VT display, cadence sensing (four Android smartphones running a custom step-detection algorithm), a control laptop, an Android phone to play audio files, a stopwatch, and a questionnaire laptop.

Experiment information flow is shown in Figure 6.2 (yellow area). Two experimenters carried out the protocol. The control laptop managed study conditions by informing Experimenter 1 (E1) which pre-chosen audio track should be selected for each trial, and sending commands wirelessly to the VT display. The phones constantly measured walking frequency. E2 administered the NASA-TLX questionnaire after each trial and timed participants for speed measurement.

Figure 6.3: The Haptic Notifier (top) and the XBee USB radio (bottom).

VT Cues – Android Wrist Display

To deliver tactile cues to the participant's wrist, we used Tam et al.'s Haptic Notifier [156] (Figure 6.3). We used three types of vibrations, as detailed in Table 6.3.

Table 6.2: Elements of Android wrist display [156].
  Arduino Fio microcontroller [151] with XBee socket            ×1
  XBee series 2 radio to communicate with experimenter laptop   ×1
  Synchronized eccentric-mass tactors (~190 Hz – Chapter 3)     ×3
  Lithium polymer battery                                       ×1

The laptop and the Arduino were synchronized via timestamps at session start, then operated independently during trials to avoid communication delays. The Arduino logged the trial start/end, then communicated to the laptop at trial completion (Section 6.4.6).

Table 6.3: Vibrations used in the study (~190 Hz). T (turn) and S (stop) use similar vibrations; T ends an odd trial and begins an even trial, S ends an even trial (and the trial pair).
  C (count to 3) – start of odd trial: (0.5 s vibration + 0.5 s silence) ×2 + 1 s vibration; duration 3 s.
  G (guidance) – during trial: 100 ms vibration at an interval defined by the guidance tempo; duration 1/f.
  T (turn) – end (start) of odd (even) trial: 5 s of constant vibration.
  S (stop) – end of even trial: 5 s of constant vibration.

Overall Experiment Control: Base Laptop

The main control code ran on a server laptop, responsible for: (a) measuring the participant's fast and slow cadences, and deriving mid levels through the experimenter's keypad entries, which marked start, end, and number of strides; (b) logging synchronization times from the wrist-worn Arduino and the Android phones; (c) reading the trial order from a pre-generated table; (d) running the study step-by-step and sending commands such as "start the trial" to the Arduino; (e) sending a request to the Arduino for logs at the end of each trial, receiving them, and saving them to a file. This laptop remained in a stationary location while the participant walked out/back, within continuous wireless range.

Smartphones – RRACE Cadence Measurement

For redundancy, we used four RRACE-equipped Android phones to measure walking frequency (Chapter 4). We placed two phones in participants' front pockets and the other two in a small backpack: while RRACE is robust to orientation and body placement, here we used locations previously shown to provide the highest accuracy. These phones logged the 3-D acceleration of the user's thighs and torso and measured and recorded the user's cadence every 200 milliseconds. Duplication provided robustness to issues such as the Android operating system terminating RRACE due to perceived CPU over-usage, or inadvertent button presses. We used the median of all active cadence estimations (to discard outlier measurements) to improve measurement accuracy.

A fifth smartphone, used to play audio files, was mounted on the shoulder bag with its display accessible to E1 (Figure 6.1).

6.4.6 Procedures

We recruited participants through university mailing lists and posters around the campus. The experiment took about 60 minutes and participants were compensated $15. The actual experiment had the following steps, where P indicates the Participant and E1, E2 the experimenters.

1. Calibration and Instruction:

• Introduction and consent.
• Cadence baseline: While P walked at his/her typical walking speed, E1 measured the time required for twenty strides (t20) and computed the average inter-step interval (τ) and walking frequency (f = 1/τ). Guidance tempos were set 1.15× faster and slower than the typical cadence, and sent to the wrist-worn Arduino client.
• Synchronization: Arduino and smartphone clocks were synchronized with the control laptop.
• Instructions: E1 explained the task, wrist display, and trial format, then instructed P to execute fast and slow practice trials. P was instructed to try to walk at the cue tempo, and requested to practice until in full understanding of the protocol.
• Equipage: E1 placed two smartphones in the participant's front pockets, and three in or on a small shoulder bag.

2. Trials, Run in Pairs:

The 24 trials were performed in 12 pairs; paired trials shared conditions (auditory task and guidance tempo) but had different walking directions (odd-numbered trial: away from the starting point; even-numbered trial: towards the starting point).

• Preparation: P stood at the starting point near E1, who then started audio (except in the silent mode).

Figure 6.4: Flowchart of the experiment. Beginning in the upper left, a single loop is one trial pair, to be repeated 12 times (trial number i increments twice in each loop). Purple denotes presence of VT cues (except in no-guidance conditions), and rectangles data collection. The 100 ms vibration during trials is the same across all guided trials, but the vibration interval (and the silence) is defined by the tempo of the guidance cue; here, for a tempo of 2 Hz, the vibration interval is 500 ms and therefore the silence is 400 ms.

• Odd trials: Following a VT count to three, P paced away from E1 for 25 s.
• Turning around: When notified by a continuous 5 s vibration, P stopped and turned around.
• Even trials: Without pause, P stepped towards E1 for 25 s.
• End of trial pair: P received a continuous 5 s vibration and stopped. E1 stopped audio (if not silent mode).
• NASA-TLX questionnaire: P sat down and completed the NASA-TLX questionnaire on a laptop administered by E2, while E1 wirelessly downloaded the start and end timestamps from the Haptic Notifier to the computer.

6.4.7 Data Preparation

Cadence was measured every 200 ms on all of the phones, and each datapoint was timestamped with the phone clock. After converting phone timestamps to computer time, data were analyzed at 1 s intervals, as follows (a sketch of these steps appears after the list):

• Cadence: We grouped cadence measurements from all four data-collection phones at each timestep (i.e., four observations at t seconds after the start of trial, where t ∈ N and t ≤ 25) and used their median (to guard against outliers). In subsequent analysis, we removed the first four seconds of each trial, where the participant is transitioning from a stationary position to natural walking. This procedure produced one datapoint per second over 20 s of usable trial, yielding 21 datapoints per trial (i.e., t ∈ {5, 6, ..., 25}).
• Cadence error: We computed cadence error % from cadence measures and the guidance frequency for that sample.
• Speed and stride length: We added manually measured speed (Section 6.4.4) to cadence data, and computed stride length as speed divided by cadence (Equation 6.1).

Figure 6.2 illustrates data flow throughout the experiment and during data processing.
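The sketch below condenses these steps for a single trial, assuming cadence readings have already been binned to whole seconds; the array shapes and names are illustrative.

```python
import numpy as np

def prepare_trial(cadence_by_phone, speed_mps, warmup_s=4):
    """Fuse per-phone cadence for one 25 s trial (illustrative shapes).

    cadence_by_phone : array of shape (4, 26) – one reading per phone
                       for each whole second t = 0..25 of the trial
    speed_mps        : manually measured average speed for the trial
    """
    fused = np.median(cadence_by_phone, axis=0)  # guard against outlier phones
    usable = fused[warmup_s + 1:]                # keep t = 5..25 (21 datapoints)
    stride_length = speed_mps / usable           # Equation 6.1: length = v / f
    return usable, stride_length
```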
6.4.8 Analysis Technique

We used Generalized Linear Models (GLM) for statistical analysis of performance and workload data, followed by a Tukey post-hoc test for multiple pairwise comparisons.

When there was no interaction effect between factors, we conducted pairwise comparisons on every significant main effect. Otherwise, we analyzed the interaction in terms of simple effects (Rutherford [138], Section 3.2.1, p. 55 and Section 9.3, p. 169): we divided the dataset by one factor (n subsets for n levels of the factor), and analyzed the statistical significance of the other factor. We then conducted pairwise comparisons of its levels on each of those subsets separately, and repeated this with the two factors switched.

For example, for physical demand, guidance emerged as the only significant main effect. Thus, we only compared guidance conditions: fast vs slow guidance, slow vs no guidance, and no vs fast. In contrast, cadence error % had three significant main effects (guidance, auditory task, and time) and an interaction between guidance and auditory task. In this case, we first used pairwise comparison of time, because it did not interact with other factors; second, we split the dataset by guidance condition into two subsets, and conducted pairwise comparisons of the four auditory tasks on each of the two subsets; and third, we split the dataset by auditory task into four subsets and compared fast with slow guidance for each.
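Purely as an illustration of this simple-effects step, the sketch below splits a hypothetical long-format table by guidance condition and runs Tukey comparisons of the auditory task levels within each subset; the column names and file are illustrative, and the omnibus GLM step is omitted.

```python
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one row per sample, with columns
# 'cadence_error_pct', 'guidance', and 'auditory'. For cadence error %,
# only the fast and slow guidance subsets exist (Section 6.5.1).
df = pd.read_csv("trials_long.csv")

for level, subset in df.groupby("guidance"):
    result = pairwise_tukeyhsd(subset["cadence_error_pct"],
                               subset["auditory"], alpha=0.05)
    print(level)
    print(result.summary())
```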
Figure 6.5: All statistical effects, visualized with p = 0.05 as the significance level. The three factors (auditory task, guidance tempo, and time) and their interactions (auditory task × guidance tempo and guidance tempo × time) are colour-coded in red, violet, green, pink, and cyan respectively. The interaction between auditory task and time is omitted because it was never significant. Significant main effects and interactions are shown with three types of arrows: (a) significant effects that cannot be interpreted because of the presence of an interaction, (b) significant effects with no pairwise difference between any two of their levels, and (c) significant effects with significant differences between some of their levels, shown with dashed lines, thin solid lines, and thick solid lines, respectively.

6.5 Results

In this experiment 24 participants (11 female) took part, aged 19–58 (mean = 25.96, SD = 10.2), 157–190 cm tall (mean = 171.3, SD = 9.5), and weighing 43.5–100 kg (mean = 66.59, SD = 14.5). 11, 7, and 6 participants respectively had none, < 5 years, and ≥ 5 years of prior musical training. 14, 9, and 1 participants respectively had none, < 5 years, and ≥ 5 years of prior performing arts training, including dance, ballet, and theatre.

6.5.1 Presentation of Results

This study employed five guidance and seven workload metrics, with an analysis based on GLM and Tukey pairwise comparisons. To visualize this complexity, we focus on important common patterns and exceptions. Omnibus analysis results are available in Appendix D.4, and Figure 6.5 displays significant main and interaction effects.

Number of pairwise comparisons: Because cadence error % is only meaningful when there is a guidance cue, the no-guidance condition is omitted when analyzing cadence error % but presented for other metrics. Therefore, cadence error % has only the fast–slow comparison for guidance conditions. All other metrics have three guidance conditions and three pairwise comparisons: fast–slow, slow–no guidance, and no guidance–fast. Auditory conditions are the same across all metrics: four conditions and six pairwise comparisons. Because the number of pairwise comparisons for the time factor was significantly higher, we only report the time after which there is no significant difference between any two times. In summary: for a factor with n levels, there will be n(n−1)/2 pairwise comparisons (n choices for the first condition and n−1 choices for the second, divided by two to account for symmetry). Tukey's test subsequently compensates for the increase in the probability of making a Type I error caused by multiple comparisons.

6.5.2 Cadence Error %

PVG suggests a stride frequency to users; it is up to users to follow this frequency. Analyzing cadence error % shows us how successfully users of our system can follow the cue. Indeed, guidance condition affects cadence error % regardless of all other factors (Figure 6.6). Cadence error % is always negative under fast guidance (mean = −18.9%, i.e., users fall behind the cue tempo), and its magnitude is larger than the error under slow guidance (mean = −12.7%). It is largely skewed (the magnitude of the median is much smaller than the mean) by the poor performance of six users. Cadence error % is also affected by auditory task, but this effect is small relative to that of guidance condition. Podcast–techno (mean = −18.2% vs −12.5%), classical–silence (mean = −19.2% vs −13.3%), and techno–classical are significantly different regardless of guidance condition; the results of pairwise comparisons in each guidance subset are shown in Table 6.4.

Table 6.4: Pairwise comparisons of cadence error % of auditory task levels per guidance condition. P, T, C, and S are podcast, techno, classical, and silence respectively. Auditory tasks are sorted within each guidance subset by mean cadence error % in the last column (Order) from left to right; all means are negative, with the largest magnitude to the left.
  Guidance subset   P-S   P-C   P-T   T-C   T-S   C-S   Order
  Slow              No    Yes   Yes   Yes   Yes   Yes   CPST
  Fast              Yes   No    Yes   Yes   No    Yes   PCTS

Cadence error % only changes significantly during the first seven seconds after the start of a trial.

6.5.3 Cadence

Can PVG affect participants' cadence despite the error and in the presence of an auditory task? Cadence values do track guidance cue rate (mean = 1.79 Hz, 1.70 Hz, and 1.46 Hz for fast, no, and slow guidance respectively; Figure 6.7).
All guidance conditions are significantly different from each other in terms of cadence regardless of auditory task, with the exception of techno music (no significance for no–fast guidance).

Podcast–techno (mean = 1.59 Hz vs 1.72 Hz), classical–silence (mean = 1.56 Hz vs 1.69 Hz), and techno–classical are significantly different from each other regardless of guidance condition; Table 6.5 shows comparison results in each guidance subset. Cadence stops changing significantly at 7, 8, and 10 seconds after the start of the trial under the slow, fast, and no guidance conditions respectively.

Table 6.5: Pairwise comparisons of cadence of auditory task levels per guidance condition. Auditory tasks are sorted within each guidance subset by mean cadence in the last column (Order) from left to right.
  Guidance subset   P-S   P-C   P-T   T-C   T-S   C-S   Order
  None              Yes   No    Yes   Yes   Yes   Yes   PCST
  Slow              No    Yes   Yes   Yes   Yes   Yes   CPST
  Fast              Yes   No    Yes   Yes   No    Yes   PCTS

Figure 6.6: Cadence error % per guidance condition and auditory task (5–25 s). Guidance and auditory task (main effects) as well as their interaction are significant.

Figure 6.7: Cadence per guidance condition and auditory task (5–25 s). Guidance and auditory task (main effects) as well as their interaction are significant.

6.5.4 Speed, Stride Length, and Speed Ratio

Are speed and/or speed ratio affected by PVG in the presence of an auditory task? Does stride length play a role? Walking speed under slow guidance (mean = 1.20 m/s) is significantly different from no guidance (1.38 m/s) and fast guidance (1.44 m/s), regardless of auditory task. Speed under podcast (1.29 m/s) is significantly different from techno (1.38 m/s) and silence (1.36 m/s), and different between techno–classical (1.33 m/s), regardless of guidance condition. Stride length completely follows the pattern of speed. Speed ratio also follows speed, with one exception: no guidance is also different from fast guidance regardless of auditory task (Figure 6.8).

6.5.5 Workload

Patterns in NASA-TLX results are relatively simpler than cadence (Figure 6.5). No guidance differs from both slow and fast guidance across all seven NASA-TLX factors including the total workload index, regardless of auditory task. In every factor of the seven, fast guidance scores highest (most workload) and no guidance the lowest.

Auditory task is a significant main effect for five NASA-TLX factors (mental demand, performance, effort, frustration, and total workload), but the only significant difference between two auditory tasks is for mental demand and is between podcast and every other auditory task, regardless of guidance condition.

6.6 Discussion

6.6.1 Guidance Cue

H1: PVG will influence the user's cadence, stride length, and walking speed in the cued direction. All parts accepted.

Our results from this experiment confirm those of our previous experiment (Chapter 5) in showing that most people can synchronize their stride frequency with VT cues either very well (here, 9/24 have median absolute error < 5%, measured from 8 s after the cue starts to the end of the trial) or reasonably well (13/24: < 10%).
The two studies differ in that here, for consistency, we defined min/max walking tempos, resulting in more extreme (and more difficult) tempos to follow than when participants set their own; and we added sensory and cognitive competition in the form of auditory tasks. As before, participants generally walk faster than slow guidance, which produces a very small error, and walk slower than fast guidance, with a moderately larger error. When participants did not receive any cue they were inconsistent in their own typical walking frequency, suggesting that the VT cue is useful even at the user's typical cadence. Our analysis showed that PVG successfully affected participants' cadence and speed regardless of auditory task.

Figure 6.8: Speed (top), speed ratio (middle), and stride length (bottom) per guidance condition (left) and auditory task (right). Guidance and auditory task (main effects) are significant for all three metrics.

Figure 6.9: NASA-TLX results colour-coded by guidance condition (top) and auditory task (bottom).

Stride length is obviously also an important component of speed. Our analysis showed that slow VT cues cause participants to take significantly smaller strides relative to their typical stride length, but under fast guidance, stride length remains at a typical level. This could explain why fast cues impact speed relatively less effectively than slow cues, despite their effect on stride frequency. However, the effect of fast VT cues is still sufficiently large to increase speed ratio (the participant's speed relative to his/her own baseline speed). We employed VT cues 15% faster and 15% slower than participants' typical cadences and achieved a 20% change in speed from slowest to fastest.

6.6.2 Effect of Auditory Task on Performance

H2: Auditory task will interfere – variously – with the effect of PVG on cadence, stride length, and walking speed, with greatest impact for highly rhythmic music. First part accepted.

Although the auditory tasks seemed to affect cadence, analysis of each auditory task level revealed a strong interaction with guidance condition. It seems that highly rhythmic music that is faster in tempo than a user's typical cadence may indeed reinforce the fast guidance by encouraging the user to walk faster. In contrast, listening to a podcast or classical music seems to slow down the user; in the case of a faster-than-typical guidance cue, it slightly reduces the impact of guidance. However, the effect of auditory task on other metrics such as speed and stride length is independent of guidance condition. Participants take smaller strides and walk more slowly when listening to a podcast, and take longer strides and walk faster under techno music.
However, this difference in stride length and speed (7% in the case of speed, speed ratio, and stride length) is much smaller than the difference caused by the guidance cue (20–21%, from slow to fast).

6.6.3 The User's Response Time

H3: Users will be able to synchronize step cadence with the guidance cue within 5–10 seconds. Accepted.

Elapsed time since the start of the trial also significantly affected cadence error, but the result was predicted: because participants started each trial from a stationary mode, cadence increased during the first few seconds (7–8 s) and cadence error decreased. After that, cadence and cadence error did not change significantly.

Furthermore, by comparing that response time with the time it takes participants to start walking from a stationary position until reaching a steady cadence under no guidance (roughly 10 s), we can conclude that PVG can get pedestrians up to speed significantly faster than a single notification would (e.g., the start signal at the beginning of no-guidance trials).

6.6.4 Effect of Guidance on Workload

H4: Presence of PVG will increase workload, with greatest impact for faster cues. First part accepted, second rejected.

PVG adds to the total perceived workload measured by NASA-TLX by increasing all six basis scores. However, fast and slow guidance rates were not associated with significant changes in any workload score. This suggests that the workload caused by PVG is real, but it is likely that the tempo of the recurring VT cue does not change the amount of workload (Figure 6.9).

6.6.5 Effect of Auditory Task on Workload

H5: Auditory task will add to the workload during walking, with the effect greatest for a verbal task (e.g., listening to a podcast). Partially accepted.

Auditory task has no effect on physical and temporal demand and very little to no effect on the other NASA-TLX scores, including total workload. While listening to a podcast seems to cause the most workload for participants, it is only significantly different from silence and the two musical tasks in its effect on mental demand. This suggests that the additional workload of auditory tasks similar to these is small compared to the workload caused by guidance (Figure 6.9).

6.6.6 Interpreting Subjective Workload Measures

NASA-TLX scores in our experiment cannot be used to precisely compare auditory and guidance workload (or to scores reported in other works) because of differences in how we encouraged participants to focus on the two tasks throughout the protocol (not just at a NASA-TLX assessment time). It is possible that the participants were using different calibrations in their assessment of the two tasks.

In addition, there is a discretization aspect to NASA-TLX reports, in that if participants noticed a difference between two conditions at all (e.g., slow/no guidance) they would give a nonzero score simply for noticing it, even if the impact was extremely minor.

As noted by Hart, these subjective workload measurements are relative (e.g., fast vs no guidance, or listening to a podcast vs silence) and lack a "redline" indicator of when workload is too high [65]. Statistical significance of their difference does not necessarily translate to practical significance. If fast guidance caused less workload than the techno auditory task, then if we consider listening to techno music to be a low-workload task, we can easily argue that fast guidance is also a low-workload task.
However, a workload that is higher than that of listening to techno cannot be used for the opposite argument.

6.7 Conclusion

In this chapter we presented a workload evaluation of periodic vibrotactile guidance. PVG is a system that uses cadence synchronization to guide a pedestrian's walking speed without reliance on audiovisual channels, and it was important to evaluate the degree to which this may indeed be helpful. We also presented a framework for evaluating pedestrian cadence assistance in outdoor settings, with experimental control over cue rates, guidance mode, and a diversity of auditory tasks, and suitably accurate measurement of the resultant step rate. We measured performance metrics such as cadence, cadence error (i.e., divergence from the desired cadence), stride length, and walking speed, in addition to perceived workload measured through computerized NASA-TLX questionnaires.

We have proposed a series of successively more difficult goals addressed in this evaluation. The first is simply that most people can follow stepping cues to a useful accuracy (90% and above) in a reasonable amount of time (under 10 seconds). The results reported here support this, and are also consistent with a previous study that did not assess workload (Chapter 5).

Next, we confirmed an impact on speed. We knew it was possible that under increasing cue frequency, participants might take shorter strides. This would reduce the effect of faster cues, cancel them out, or even reduce walking speed. We observed, however, that under faster guidance stride length remained the same but cadence increased, and both were lower under slower cues; thus the resultant speed was guided in the right direction.

There will of course be a limit to this, and now we have also identified some evidence for where it might lie, in the imperfect and somewhat skewed responses we did see. It is reasonable to anticipate that when we increase cadence even further, participants may stop increasing step rate altogether, and/or their stride length could start to decline and eventually impact their speed management.

For PVG to really be useful, it needs some degree of robustness against auditory task interference, along with the baseline visual load involved in walking: processing auditory streams is something that pedestrians using this assistance will likely wish to do at the same time. PVG depends on the millisecond timing system of the brain [17] and affects motor control, and does not depend on speech or complex cognitive tasks. Further, rhythmic music might directly mask or compete with a tactile cue. We therefore anticipated that PVG performance would be most damaged by music, and particularly rhythmic music, and less by a verbal task such as listening to a conversation (a podcast in this experiment). Surprisingly, listening to a podcast interfered most severely with guided walking. The auditory task's effect does not have practical significance when compared with the workload effect of guidance. However, our study only addressed tasks that involved processing of imposed auditory stimuli.
It is possible that speech generation – e.g., during a conversation or in recall of memories – could cause considerably more workload.

Finally, while we cannot yet rate PVG workload in terms of its real-world implications, it is evidently noticeable at minimum, and requires further investigation.

6.8 Future Work

There are two immediate major directions in which this work supports expansion: a more in-depth understanding of inherent PVG merits, flaws, and limitations; and the design of integrated, practical guidance systems that incorporate the findings of this work and others that come ahead.

The perceived increment in workload due to vibrotactile guidance is statistically significant. Future work needs to assess the practical significance of this strain, which our methods could only register as perceptible relative to an absence of guidance. With simple variations in experiment design, we can also generate more comprehensive characterization data. For example, by increasing trial length we can study learning effects and long-term impact, and by considering generative auditory tasks, such as conversation or questions and answers over a phone call, our data will extend into other realistic scenarios. These were not possible within the scope of a single study but are important. It may also be productive to consider other cognitive impact metrics besides workload via the NASA-TLX; for example, measuring attention, perhaps via the Stroop test [102]. Administering Stroop during walking could be a challenge, and care must be taken not to introduce a confound. We have considered assessing a Stroop test immediately after a trial, based on the fact that the effect of guidance (and/or auditory tasks) on attention does not vanish right away.

We are also interested in extending our findings to the design of closed-loop PVG systems that consider additional contextual information such as time of events, geolocation of the user, traffic, and interruptions. Perhaps most interestingly, we see Closed-loop Control as a means of mitigating the small but important strain we have found that PVG can impose on the walker. A practical VT guidance system with context as well as highly resolved cadence and speed presents several potential advantages. It can choose a rate that is optimal based on the spatiotemporal constraints of the user (e.g., distance to destination and time of events) and task difficulty (e.g., slope of the street and weather conditions). It can selectively turn off the cue for a period of time, to both reduce workload and prevent physical stimulus adaptation [64]. Finally, now that we know that a fast cue does not necessarily cause more cognitive workload than a slow cue, a control system knowledgeable of the user's larger context could lower workload when most needed – e.g., at a street crossing, the system could turn off the cue and allow the user to go off course, then adjust the guidance rate upward to compensate. Designing such a system will be challenging but possible, by pairing increasingly available context-aware technology and algorithms with close observation of pedestrian needs and cognitive, sensory, and physical abilities.

6.9 Acknowledgment

This work was funded by the Natural Sciences and Engineering Research Council of Canada and the GRAND NCE.
User data were collected under the University of British Columbia's Research Ethics Board #H01-80470.

Chapter 7

Conclusion

All our knowledge begins with the senses,
proceeds then to the understanding, and ends with reason.
There is nothing higher than reason.
— Immanuel Kant, Critique of Pure Reason

In this dissertation we¹ introduced periodic guidance, which employs the tempo of periodic cues in a fine-grained control setting. We provided evidence that tactile sensation is a better fit than vision and audition for most applications of periodic guidance. Among different types of tactile displays, vibrotactile displays were more readily available and generally more powerful than others; therefore, we used vibrotactile displays and called our system Periodic Vibrotactile Guidance (PVG). We used PVG for guidance of human walking and studied the user's susceptibility to periodic cues, PVG's workload, and the effect of auditory multitasking on it. In this chapter we explain the primary contributions of this work, reflect on the research approach taken, and suggest some directions for future work.

¹ For a list of contributors and their level of involvement please refer to the Preface on page iv.

7.1 Primary Research Contributions

7.1.1 Study of Sensitivity to Vibrations in Mobile Contexts

In the first phase of this work (Chapter 3), we wanted to find the best locations on the human body for placement of vibrotactile displays, especially for mobile applications, including our own PVG system. A considerable amount of research has been done on sensitivity to vibrations [70, 73, 90] and the effect of movement on tactile sensitivity [3, 22, 23, 124]. On one hand, the research on relative vibrotactile sensitivity by site did not examine (a) movement and its interference with other factors and (b) expectations about stimulus locus; on the other hand, the research on the effect of movement on sensitivity did not compare relative vibrotactile sensitivity by site, nor for activities of interest here such as natural walking. Therefore, we had to fill this gap with experiments that would examine body locations of particular interest to wearable haptics and the effect of natural walking on sensitivity. We also included the effect of visual workload and expectation of stimulus locus in our experiments.

Results from our two experiments, each with 16 participants, supported the following findings:

1. Increasing vibration intensity improves Detection Rate (DR) and reduces Reaction Time (RT).
2. Wrists and spine are the most sensitive in detecting vibrotactile signals, whereas feet and thighs are least sensitive. However, response time is similar across the body.
3. Walking significantly decreases DR and increases RT, and it affects the DR of thighs and feet more than other body locations.
4. Visual workload does not have any apparent effect on DR, but it significantly impaired RT.
5. Expectation (i.e., a priori knowledge about the locus of stimulus), surprisingly, only reduced DR at the wrists. However, it did significantly reduce RT.
6. Male participants had higher DR than female participants on the chest, stomach, wrists, and spine, and females had better DR on thighs and feet. Also, male participants had faster RT on all body locations except the feet.
7. Participants preferred spine and wrists.

Based on these findings we concluded several design guidelines for creating wearable vibrotactile systems.
These include recommendations on the location of vibrotactile displays and the intensity of vibrotactile cues, as well as considerations about movement, visual workload, and unexpectedness of cues; they are targeted at interaction designers, and generally anybody who wants to build a wearable vibrotactile system. These guidelines help designers build systems that are more successful at getting the user's attention (i.e., increase Detection Rate) and eliciting faster responses (i.e., reduce Reaction Time), both of which are critical in the design of vibrotactile systems (see Section 3.7.1).

Since we published this work in 2011 [78], it has been used in several areas such as spatial [26, 75, 117, 142], temporal [156, 157], and spatiotemporal guidance [99], as well as research on tactile sensation [2, 103, 108, 128, 129] and the development of a new tactile display [179].

7.1.2 Development and Evaluation of Robust Realtime Algorithm for Cadence Estimation (RRACE)

In the second phase of this work (Chapter 4), we developed a cadence measurement algorithm that uses the 3-axis accelerometers available in today's smartphones and, through analysis of the accelerometer signals in the frequency domain, estimates the cadence of the user carrying the device (almost anywhere on his/her body). These are the main contributions from this phase:

1. We developed the RRACE algorithm, which is robust to user differences, orientation, and placement, and works out of the box with no a priori knowledge or calibration.
2. We evaluated the performance of RRACE with four different window sizes, on six body locations, and at five different walking speeds.
3. We showed that RRACE can provide 95% or more accuracy on 4 out of the 6 body locations.
4. We compared RRACE with the readily available state-of-the-art time-based cadence estimation method and showed comprehensive evidence for the superiority of our algorithm.

There are two challenges that activity-related mobile applications face: hardware unpredictability and user differences; we developed RRACE with those in mind. RRACE is a cadence estimation instrument that liberates software developers and interaction designers from low-level signal processing challenges and helps them focus on high-level problems. This two-year-old work still stands as the most successful basis for an extensible algorithm that can be used by researchers. In fact, researchers in our lab have already extended it to a realtime gait analysis library called GaitLib [173], which is publicly available². Many of the implementation issues are resolved in this library, which means that anybody with some programming background can skip the headaches associated with those challenges and easily build his/her creative ideas on top of the library. We believe RRACE and GaitLib can be used in a multitude of areas: cadence estimation and classification, guidance, activity monitoring, rehabilitation, and exercise games for kids and adults.

² https://github.com/m-wu/gaitlib

7.1.3 Study of Periodic Vibrotactile Guidance of Human Walking

In the last phase of this research (Chapters 5 and 6), we studied Periodic Vibrotactile Guidance (PVG) of human walking. First, we tested PVG in outdoor settings with five different rates and examined the effect of repetition on its performance.
We anticipated that auditory multitasking would be the major source of problems for users' performance, and we wanted to know how much workload PVG would impose on users; therefore, in the next experiment we added auditory task as a factor and used the NASA Task Load Index (NASA-TLX) as a new instrument to measure workload. The contributions of this phase are the following:

1. The PVG system, which uses the tempo/interval between vibrotactile cues to guide a user's cyclical movement (e.g., walking) to achieve a desired speed.
2. Our two experiments showed evidence that most people are able to follow Periodic Vibrotactile Cues with 90% or above accuracy.
3. Our results showed that PVG successfully affected stride length and walking speed in addition to cadence.
4. Our data showed that, within the time range of our experiment, repetition did not significantly change performance. This may mean that PVG is sufficiently simple for users, with a very gentle (or no) learning curve.
5. We measured the effect of three different auditory tasks and found that, surprisingly, the auditory task most damaging to the performance of PVG was the verbal one (podcast), not the rhythmic one (techno music). However, we also found that the effect of auditory multitasking was not comparable to that of the guidance rate, and therefore the guidance signal could override the effect of auditory multitasking.
6. We also measured workload through self-reports. Our findings suggest that PVG adds to the workload of walkers, but that the rate of guidance does not matter much. We also proposed a strategy to avoid harm to the user's safety based on our findings (see Section 6.7).

As far as we know, PVG of human walking is the first of its kind. Moreover, the ability of most users to follow the tempo of PVG when walking makes us hopeful that it can be extended to other periodic movements such as cycling, swimming, or rowing. In addition to spatiotemporal guidance of commuters, PVG's applications include athletic training and rehabilitation. Ultimately, PVG can become a medium for sensory augmentation or substitution [74]; continuous usage may create an autonomous sense of speed based on goals, or a feeling of space and time relative to future events. In fact, gadgets that create a sense of time have recently become available to consumers; e.g., Tikker by Tikker Technologies LLC (Wilmington, DE, USA), "a watch that counts down your life" with a graphical display [161], and Durr by Skrekstore (Oslo, Norway), "a shivering bracelet that demonstrates how time seems to speed up and down" with vibrations at 5-minute intervals [148].

7.2 Secondary Research Contributions

The work presented in this dissertation made other contributions that might be useful for the research community; we did not list these contributions as primary because they were by-products and not the main goal of the research. These contributions can be organized into two groups: (a) experimental design, methodology, and statistical analysis examples, and (b) the data.

7.2.1 Experimental Design and Methodology

This research is composed of six experiments. Most of these experiments had many factors and multiple levels in some factors. While we do not see complexity as a virtue, we hope that the methodologies we developed to deal with it here will be of use to others. Some of the challenges we faced are the following:

1. More levels per factor mean longer experiments, with results that are harder to interpret.
However, if having more than two levels is necessary, an appropriate method for comparing levels in pairs should be considered. In our experiments we employed different pairwise comparisons, such as the Tukey test (Section 6.4.8), the unpaired Z-test (Section 4.4.4), and post-hoc pairwise comparisons with Bonferroni adjustment (Section 5.4.6).

2. Counterbalancing the levels in repeated-measures experiments is very important. It is not a difficult task for simple experiments (e.g., 2×2); however, it can be challenging for complex experiments, particularly because we do not have access to a sufficiently large number of participants to test all the combinations. In the last experiment, for example, we had to use two Latin-square designs crossed by each other to counterbalance both the order of auditory tasks and the guidance conditions.

3. Analyzing the results of complex experiments is orders of magnitude harder than for simple experiments; the number of interaction effects grows exponentially with the number of factors (i.e., 2^n − n − 1 for n factors) and the number of pairwise comparisons grows quadratically with the number of levels (i.e., m(m−1)/2 for m levels). In addition, in the presence of interaction, additional steps must be taken in order to compare different conditions, as in the case of the last experiment in this work, where we analyzed interactions in terms of simple effects (see Section 6.4.8).

4. Simple statistical methods such as Analysis of Variance (ANOVA) and the t-test, which are widely taught and employed, are mostly not good matches for complex experiments. Data which do not satisfy the limiting assumptions of those tests (e.g., the binomial DR data of our first two experiments), and data which are missing not because of poor design but because of the nature of an experiment (e.g., missing RT measurements when participants did not detect stimuli in the first two experiments), are just two examples. Other methods that are less known by the community should be employed in these situations.

5. Presenting the results of complex experiments is also a delicate matter. When there are several dependent variables, main effects, interaction effects, and pairwise comparisons, using the conventional methods of presenting the results of statistical tests (e.g., reporting means and p values) makes the interpretation of the results and the detection of high-level patterns very hard. Each of the experiments we conducted faced this challenge in a unique way. For some examples of solutions see Figures 4.7 and 6.5 and Table 4.8.

7.2.2 Data

The six experiments presented in this dissertation involved vigorous data collection and preparation; e.g., the Force Sensing Resistor (FSR) footfall detection used in Phase 2 and the RRACE algorithm used in Phase 3, for measurements during experiments. On the other hand, because our data were collected from several sources (e.g., accelerometer/cadence data from multiple phones in Phases 2 and 3), synchronizing and fusing multiple data sources took significant effort. We believe the data we collected can be of value to the scientific community; we have made our anonymized datasets public, and have likewise developed ethics protocols for this purpose which can be shared. This practice is rare, which has contributed to our own challenges in examining our algorithm and, more importantly, comparing its performance with other algorithms. Therefore, we have tried to contribute to a solution to this problem by sharing carefully collected and measured datasets that others can test their own algorithms/ideas on.
Our datasets are available for download at: http://www.cs.ubc.ca/labs/spin/data/.

In Phase 1 we produced Detection Rate (DR) and Reaction Time (RT) datasets:

1. DR and RT data for different vibration intensities, under different visual workload and movement conditions.
2. DR and RT data for different vibration intensities, under different expectation and movement conditions.

These can be used to create models of sensitivity to vibrotactile stimuli, which would enable designers to choose the appropriate intensity of vibration per location and condition.

In Phase 2 we produced accelerometer and cadence datasets collected by smartphones placed on six body locations:

1. Accelerometer, cadence, and Error Ratio (ER) data from walking at different constrained speeds on a treadmill.
2. Accelerometer, cadence, ER, and speed data from walking at different speeds outdoors.

These can be used for improving existing cadence measurement algorithms or creating new ones.

In Phase 3 we produced two performance datasets and a workload dataset:

1. Cadence, speed, stride length, and ER data from walking under vibrotactile guidance at different tempos.
2. Cadence, speed, stride length, and ER data from walking under vibrotactile guidance and no guidance, during different auditory tasks.
3. NASA-TLX data for walking under vibrotactile guidance and no guidance, during different auditory tasks.

The performance data can be used for modeling human cadence during auditory multitasking and under vibrotactile guidance. The workload data can be used for modeling physical and cognitive workload under guidance or no guidance, in the presence or absence of an auditory task during walking.

7.3 Reflections on Research Approach

7.3.1 Visual Workload

In the first sensitivity-to-vibrations experiment of this dissertation, we needed a visual task for users during parts of the experiment to measure the effect of visual workload on Detection Rate (DR) and Reaction Time (RT). The task we designed was counting the number of times a highlighted block hit the walls of a three-dimensional room on a large-scale display (see Section 3.3.3). As shown in the results, the visual workload did not have any apparent effect on DR but significantly impaired RT.

We chose this task because (a) it was continuous, (b) it had constant difficulty, (c) it required attention and memory, and (d) it was not so distracting as to cause participants to stumble. We also faced the limitation of running the experiment indoors; therefore, we had to use a display screen.

At the higher level, we considered watching scenes that resembled walking in the real world (e.g., video of cars and commuters), but we did not choose them for two reasons: firstly, watching a video passively with no real consequences would be too easy and we would have no control over users' engagement; secondly, the level of demand on users' attention would change during the video and there would be no way of keeping it constant.
We even considered adding an extra task to watching a scene, such as counting certain types of cars, but decided against it because it would no longer resemble a real-world situation.

At the lower level, we used counting instead of immediate responses to visual stimuli (i.e., requiring participants to react to every collision of the highlighted box with the wall) because we already had a respond-to-stimuli scheme for the vibrotactile signals and it would add unnecessary confusion to the experiment. Also, counting had the added bonus of engaging participants' memory.

It could be argued that the task we chose was not hard enough; while making the visual task too hard would probably enable us to achieve a significant effect of visual workload on DR, it would harm the external validity of our experiment by being much harder than real-world situations. An alternative approach to our abstract visual task would be to create a full-fledged walking simulation for engaging participants visually. Such an environment would only work if the participant's walking on the treadmill was linked to his/her movement in the virtual world. Apart from the technical challenges of creating such a system, which could drag us from our main goal without any definite return on investment, it would make it almost impossible to decouple the effect of movement from visual workload; in other words, movement and visual workload would not be two separate factors, because movement would affect the difficulty of the visual task – very easy, passive watching of a scene during the stationary condition, and an active, relatively harder visual task during the walking condition.

7.3.2 Sensory Adaptation, Learning, and Fatigue

Sensory adaptation is the change of responsiveness to a continuous stimulus over time [169], and tactile sensation is not immune from it [64]. Four of our six experiments were designed on the principle of responding to vibrotactile stimuli. In the sensitivity-to-vibrations experiments we tried to capture sensory adaptation by analyzing the relationship between DR and trial number. Our results suggested that the odds of detecting a vibration decreased by 6% after 100 trials. It is possible that learning played a role by increasing the odds and canceling some of the effect of adaptation. Fatigue could also be a contributor. In the other two experiments, we found no significance for trial number, which could also mean the overall effect of sensory adaptation, learning, and fatigue resulted in a minimal effect on performance.

We did not try to separate sensory adaptation from learning and fatigue because they were out of the scope of this research; however, assuming that the length of the experiment contributes mostly to fatigue, the number of stimuli to learning, and the number of stimuli per locus to sensory adaptation, we can propose an experiment to decouple them with three subject groups, i.e., to solve the 3 unknowns with 3 equations. The three subject groups should be exposed to different levels of adaptation, learning, and fatigue:

Group A: 2×m×n stimuli on 2n body locations in time t.
Group B: 2×m×n stimuli on n body locations in time t.
Group C: 2×m×n stimuli on n body locations in time 2t.

Groups A and B have equal length and an equal number of stimuli but a different number of stimuli per site (A: m, B: 2m). Groups B and C have an equal number of stimuli and stimuli per site but different lengths. Comparing the effect of trial number on DR in A and B will reveal the effect of sensory adaptation relative to learning and fatigue, and the comparison between B and C will reveal the effect of fatigue relative to learning and sensory adaptation.
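Under the (purely illustrative) assumption that the per-group decline in detection odds is additive in the three effects, the three group means give three equations in three unknowns. The toy computation below shows the solve; all numbers are made up.

```python
import numpy as np

# Columns: stimuli per site, total stimuli, session length.
m, n, t = 10, 6, 30.0
design = np.array([[m,     2 * m * n, t],       # Group A: 2n sites
                   [2 * m, 2 * m * n, t],       # Group B: n sites
                   [2 * m, 2 * m * n, 2 * t]])  # Group C: n sites, double time
observed_decline = np.array([0.04, 0.06, 0.08])  # hypothetical group means

# Per-unit contributions of adaptation, learning, and fatigue.
adaptation, learning, fatigue = np.linalg.solve(design, observed_decline)
```

The A-B row difference isolates the adaptation coefficient and the B-C difference the fatigue coefficient, mirroring the comparisons described above.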
7.3.3 Step Detection

In order to validate RRACE we needed to test it with actual users to see how well it can estimate users' cadence. This is only possible when you have the ground truth for the cadence at each point in time. As explained in Chapter 4, we conducted a short indoor experiment on a treadmill and a full experiment outdoors, and one of the differences between the two was the cadence estimation used as the ground truth. In the first experiment, one experimenter visually detected footfalls on the treadmill and recorded them on the computer with the press of a button, which registered the time of footfalls. In the second experiment, we placed FSR sensors in the participant's shoes and connected them to a small Arduino board carried by the participant, which registered the timestamps of footfalls by comparing the force with a threshold determined during the calibration phase. In both of these methods, we measured the interval between two consecutive footfalls and inverted it to produce the gold-standard cadence measurement, which was then compared with the estimation from RRACE.

Each of these methods has advantages and disadvantages. Manual step detection requires the complete attentiveness of the experimenter and cannot be used when the participant gets too far from the experimenter; it is also prone to experimenter error. On the other hand, it is noninvasive and requires no setup for the participant. In contrast, the FSR system is immune to experimenter error and can be used outdoors, where the participant can get very far from the experimenter; however, the FSR system requires a setup procedure that includes calibration of the sensors for each user. Also, the FSR system (like any other invasive measurement tool) is prone to wearing out and breaking, which is why we had a spare system during the experiment; eventually we had to replace one of the sensors with it. Another problem that may happen with sensors in shoes is that, despite taping them inside the shoes, they may move, which means that the range of forces they measure may change and they may require a new calibration. Unfortunately, such incidents may not be discovered until after the experiment. In our post-experiment data processing, we compared the timestamps from both feet and, in cases where they did not match, we relied on the foot that seemed realistic and within a humanly possible range of cadence (i.e., we ignored the data from the foot which were too fast or too slow). It should be noted that errors in the ground-truth cadence measurements in our experiments only made the performance results of RRACE appear worse than they actually were.

Although at this point we have already achieved a relatively noninvasive, accurate, and robust method for measuring cadence – i.e., RRACE – we believe the FSR system is still of value in certain contexts, particularly when temporal parameters such as the times of footfalls are of interest and not just cadence; therefore, here we propose a few solutions for the improvement of FSR step detection (a sketch of the second proposal follows the list):

1. Employing multiple sensors in each shoe and registering a footfall when the majority of sensors detect a threshold crossing (e.g., 3 out of 5, or 2 out of 3).
2. Using the extreme pressure points in the near past (e.g., the last 10 seconds) to calibrate the threshold for detecting footfalls.
3. Creating an error-detection method which alerts the experimenter to a possible problem when the timing of footfalls (or the range of forces measured) seems out of the ordinary (e.g., footfalls too close or too far from each other).
4. Manual calibration of the sensors more frequently during the experiment.

We believe each of the above solutions, or a number of them combined, may improve the accuracy of the FSR footfall detection system.
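As a sketch of the second proposal, the closure below re-derives the detection threshold from the extreme readings of the recent past and reports a footfall on each rising edge. The names, the 10 s window, and the midpoint rule are illustrative, not the deployed design.

```python
from collections import deque

def make_footfall_detector(window_s=10.0, frac=0.5):
    """Adaptive-threshold footfall detection for one FSR channel."""
    history = deque()   # recent (timestamp, force) samples
    loaded = False      # whether the foot is currently planted

    def feed(t, force):
        nonlocal loaded
        history.append((t, force))
        while history[0][0] < t - window_s:     # keep only the near past
            history.popleft()
        forces = [f for _, f in history]
        # Threshold at the midpoint of the recent force range.
        threshold = min(forces) + frac * (max(forces) - min(forces))
        was_loaded, loaded = loaded, force > threshold
        return loaded and not was_loaded        # True only on a new footfall

    return feed
```

A majority vote over several such detectors per shoe (proposal 1) could be layered on top of this.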
7.3.4 Speed Measurement

Walking speed has been an important factor in all of our experiments. The first three experiments were conducted indoors on a treadmill, where participants' walking speed was constrained by the speed of the treadmill (although chosen by the participant at the beginning of the first two experiments); the rest of the experiments were conducted outdoors, where participants were given instructions or guidance cues but their speed was not physically constrained.

A treadmill solves the problem of speed measurement by displaying the speed, but measuring speed outdoors is not as trivial. Originally, we planned to use an external Global Positioning System (GPS) receiver (connected to a phone) for measuring speed. However, its accuracy was not sufficiently high for measuring walking speed. Therefore, we chose to measure speed manually, by placing flags along the side of the sidewalk and measuring the elapsed time between the participant's crossings of consecutive flags. We used the same method, with minor changes, in the last experiment of this work too.

Unfortunately, the speed measurement method we used gives the average speed over a trial and not the momentary speed at each point in time; as a result, the analysis of speed is based on the assumption that speed remained constant during a trial. The downside is that changes in speed, particularly at the beginning of a trial, are not included in the speed analysis. This was not an issue in the experiment for validation of RRACE, because we only needed to report the average speed of participants when instructed to walk at very slow, slow, typical, fast, and very fast speeds. The second experiment on vibrotactile guidance of human walking was mainly focused on evaluating the effect of guidance on cadence during auditory multitasking; speed was also analyzed, to show the success of the PVG system in affecting walking speed. We would argue that showing that PVG could affect the average speed over tens of seconds is sufficient for proving its success in real-world scenarios where users might go from point to point in tens or hundreds of minutes.
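For concreteness, the arithmetic behind the flag method is simply distance over elapsed time per segment. The sketch below shows it with hypothetical numbers (the flag spacing and crossing times are placeholders) and makes the constant-speed-per-segment assumption discussed above explicit.

```python
import numpy as np

flag_spacing_m = 25.0                                  # hypothetical spacing
crossing_times_s = np.array([0.0, 18.2, 36.9, 55.1])   # hypothetical timings

# Each segment's speed assumes constant speed between the two flags.
segment_speeds = flag_spacing_m / np.diff(crossing_times_s)
print("per-segment speeds (m/s):", np.round(segment_speeds, 2))
print("trial average speed (m/s):", round(float(segment_speeds.mean()), 2))
```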
7.3.5 Robust Realtime Algorithm for Cadence Estimation

In Chapter 4 we introduced the Robust Realtime Algorithm for Cadence Estimation (RRACE), our in-house algorithm for cadence measurement. RRACE owes its robustness to three design choices:

1. operating in the frequency domain,

2. using Fast Calculation of the Lomb-Scargle Periodogram (FASPER) for handling time-sampling irregularities,

3. feeding vector magnitude into the algorithm.

Frequency domain: We used the frequency domain instead of the time domain because we were only interested in cadence and not the timing of each footfall. In contrast with the time domain, the frequency domain is less concerned with the shape of the signal and more with the frequency at which the signal repeats itself; therefore, user differences and location on the body, which mainly affect the shape and magnitude of the accelerometer readings, do not affect the frequency domain as much.

FASPER instead of the Fast Fourier Transform (FFT): We were fortunate to find out at a very early stage that the accelerometer data provided by most smartphones are not sampled at a constant rate, and the irregularities in the sampling rate make it impossible to do spectral analysis with the FFT. Because trying to 'repair' the data – e.g., with interpolation – could introduce new sources of uncertainty, we decided to use FASPER to handle non-equispaced data.

Vector magnitude: We assumed that users of our system would orient their phones in different ways, that the orientation of the phone would even change with movement, and that the directions of the three accelerometer axes (x, y, z) would not have any sort of consistency. The magnitude (Euclidean or L-2 norm) of the accelerometer vector, on the other hand, is independent of the orientation of the phone, which is why we chose it for the spectral analysis instead of the individual axes.

The above design choices turned out to be successful in making RRACE work with acceptable accuracy on most body locations without requiring any calibration to account for user differences. Having said that, RRACE has some imperfections too.

Weaknesses and Recommendations for Improvement

As we showed in Section 4.4.6, RRACE consumed 10 times more power than Endomondo [33] and 5 times more than Runtastic Pedometer [137], the best activity measurement apps at the time. Although the computational power of smartphones continues to increase and this problem will become less of a concern than it is right now, we believe that by adjusting the window size and sampling frequency we can reduce the power consumption. For example, when the user is walking fast, the interval between the user's steps is shorter and therefore a smaller window size would be sufficient. On the other hand, when the user is walking slowly, the changes in acceleration are slower and therefore less frequent sampling would be sufficient. The downside of reducing window size and sampling frequency is the negative effect on accuracy; therefore, the parameters of RRACE should be optimized to meet the requirements for both accuracy and power consumption. We should note that window size also directly affects latency, so latency should also be considered in the trade-off between power consumption and accuracy.

When we developed RRACE we did not take advantage of any pre/post-processing methods such as filters. However, based on the typical range of step frequencies [88], we imagine that a 1-2.75 Hz band-pass filter (a filter that only allows frequencies within a certain range to pass through and blocks frequencies above and below that range) would reduce most of the noise that is responsible for the error.
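To make the three design choices concrete, the sketch below estimates cadence from irregularly sampled accelerometer data in the spirit of RRACE: an orientation-free vector magnitude, a Lomb-Scargle periodogram in place of the FFT, and a search restricted to the walking band suggested above. This is a simplified illustration, not the original implementation; SciPy's lombscargle stands in for FASPER, and all parameter values are assumptions.

```python
import numpy as np
from scipy.signal import lombscargle

def estimate_cadence(t, ax, ay, az, fmin=1.0, fmax=2.75, nfreq=200):
    """Estimate step frequency (Hz) from one window of accelerometer data.

    t:          (N,) sample times in seconds (may be unevenly spaced)
    ax, ay, az: (N,) accelerometer axes, in any phone orientation
    """
    mag = np.sqrt(ax**2 + ay**2 + az**2)   # orientation-independent signal
    mag = mag - mag.mean()                 # remove the gravity/DC component
    freqs_hz = np.linspace(fmin, fmax, nfreq)
    power = lombscargle(t, mag, 2 * np.pi * freqs_hz)  # angular frequencies
    return freqs_hz[np.argmax(power)]      # strongest periodicity = cadence
```

In a realtime setting this function would run repeatedly over a sliding window of the most recent samples, with the window size traded off against accuracy, latency, and power consumption as discussed above.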
7.3.6 Choosing the Range for Cadence and Speed

In experiments that require participants to walk, the experiment designer is faced with the question of how to choose the rate(s) in a way that reflects participant differences and meets the requirements for answering the research questions. In the six experiments we conducted, we used five different methods (see Table 7.1).

Phase 1, Experiments 1 and 2: Constrained but Flexible Speed

Both experiments in Phase 1 were conducted indoors, using a treadmill. We asked each participant to choose a comfortable speed on the treadmill, which was then used during the movement condition (i.e., it constrained their walking speed). As a result, some participants chose very slow speeds so as not to get too tired during the experiment. To avoid unrealistically slow walking speeds on the treadmill, we could ask participants to measure their typical walking speed prior to the experiment. Another option is setting a lower limit on the walking speed, which is less favourable because it is too artificial and may hurt the external validity of the experiment.

Phase 2, Experiment 1: Constrained and Inflexible Speeds

In the first experiment of Phase 2, we wanted to test the RRACE algorithm at several constant speeds in addition to transitions from one speed to another. To ensure consistency among the participants, we used the same selection of speeds for everyone. Using the same selection of speeds for all participants does not reflect the differences among them; the speed selection could be too slow for some and too fast for others. To avoid pushing participants beyond their physical abilities, we had to choose the maximum speed conservatively.

Phase 2, Experiment 2: Unconstrained Speeds

We conducted the second experiment of Phase 2 outdoors. The goal of the experiment was to examine our cadence estimation algorithm at a variety of speeds by a number of users. Without a treadmill, it was very hard to constrain participants' walking speed. We instructed participants to walk at five different speeds. By letting participants choose their walking speeds, we tested our algorithm at many more levels, which also reflected the differences among the participants.

Phase 3, Experiment 1: Cadence Guided with Unconstrained Range

To test walkers' ability to follow vibrotactile cues, we needed to use certain guidance tempos. In the first experiment of Phase 3, we first instructed participants to walk at their fastest and slowest speeds to measure the upper and lower bounds for the tempo of the guidance cues. Then we distributed the middle rates between those extremes. We believe this method reflects the diversity of participants better than any other method. However, the problem that may arise from this setup is that the guidance cues might end up very close to each other (when the participant's fastest and slowest speeds are not much different) or very far apart.

Phase 3, Experiment 2: Cadence Guided with Constrained Range

In the second experiment of Phase 3, we used only two guidance rates in addition to a no-guidance condition. In order to reflect the differences among participants to the extent possible, while keeping the same distance (on a logarithmic scale) between the fast and slow guidance cues for everyone, we decided to measure each participant's typical (medium) cadence and set the fast and slow guidance tempos at a fixed ratio above and below it (sketched after this section). This method allowed us to keep the same level of guidance difficulty for everyone in terms of divergence from typical cadence.

To summarize, we used many different methods for choosing speed(s) or cadence(s) in our experiments. Each of these methods focused on a different set of requirements; some leaned more towards consistency, some towards covering the differences among participants, and the rest tried to keep a balance between the two. If the length of an experiment, or added complexity, were not an issue, one could even combine the above methods (e.g., a two-part experiment with constrained and unconstrained speeds/cadences) to answer one's research questions with more confidence.
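The fixed-ratio scheme from Phase 3, Experiment 2 is easy to state precisely: placing the fast and slow tempos at the same multiplicative ratio around the typical cadence makes them equidistant on a logarithmic scale, since log(fast) − log(typical) = log(typical) − log(slow). A minimal sketch, with a hypothetical ratio and cadence:

```python
typical_hz = 1.9   # measured per participant (hypothetical value)
ratio = 1.15       # fixed multiplicative offset (hypothetical value)

slow_hz = typical_hz / ratio
fast_hz = typical_hz * ratio
print(f"slow {slow_hz:.2f} Hz, typical {typical_hz:.2f} Hz, fast {fast_hz:.2f} Hz")
```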
Table 7.1: Method of choosing speed or cadence rates in our experiments. V denotes a speed rate (velocity) and F, a cadence rate (frequency).

Phase  Experiment  Location   Parameter  Rates    Chosen   By
1      1           Treadmill  Speed      V1       V1       Participant
1      2           Treadmill  Speed      V1       V1       Participant
2      1           Treadmill  Speed      V1..V10  V1..V10  Experimenter
2      2           Sidewalk   Speed      V1..V5   V1..V5   Participant
3      1           Sidewalk   Cadence    F1..F5   F1,F5    Participant
3      2           Sidewalk   Cadence    F1..F3   F2       Participant

7.3.7 Workload Measurement

In the second experiment of Phase 3 (Chapter 6) we used NASA-TLX to measure workload during different guidance conditions and auditory tasks. NASA-TLX is a subjective assessment that consists of two parts. Part 1 consists of six subscales addressing mental demand, physical demand, temporal demand, performance, effort, and frustration; the participant rates each of these on a 100-point range with 5-point steps (discretized into 20 grades). Part 2 produces weightings for the above subscales by comparing them in pairs.

Our biggest challenge during that experiment was keeping it under one hour, ideally at 45 minutes. We had to administer NASA-TLX 12 times during the experiment. To be as efficient as possible, instead of the paper version we used a computerized version of NASA-TLX, which also made it easier for us to put the data from the whole experiment together. Using the computerized version especially makes the second part of the test easier. In fact, many researchers who administer the test on paper only use the first part. The second part, which consists of 15 comparisons, only produces one total score and generally takes longer than the first part. As shown in the results, the patterns seen in the total score are not very different from those in the subscales. Taking all these considerations into account, we believe the full NASA-TLX was too costly for our experiment.
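For reference, combining the two parts into the overall NASA-TLX score is a weighted average: each subscale's rating is multiplied by the number of pairwise comparisons it won (0 to 5 of the 15), summed, and divided by 15. A minimal sketch with hypothetical ratings and tallies:

```python
ratings = {"mental": 70, "physical": 40, "temporal": 55,
           "performance": 35, "effort": 60, "frustration": 25}  # hypothetical
wins = {"mental": 5, "physical": 1, "temporal": 3,
        "performance": 2, "effort": 3, "frustration": 1}        # sums to 15

overall = sum(ratings[s] * wins[s] for s in ratings) / 15.0
print(f"weighted NASA-TLX workload: {overall:.1f}")  # on a 0-100 scale
```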
7.4 Future Directions

The work described in this dissertation can be expanded in various areas, which are explained in this section.

7.4.1 Susceptibility to Periodic Guidance in Other Movements

Periodic guidance works on the premise of synchronizing a periodic movement with a repetitive cue to control the speed of the movement (for achieving a certain goal) through manipulation of the tempo of the cue. In that regard, periodic guidance does not necessarily rely on vibrotactile displays. As new tactile display technologies emerge, they can be used in periodic guidance too. As discussed before, in addition to walking, periodic guidance – PVG in particular – can also be used in other movements that are periodic; examples are cycling, rowing, swimming, and dancing. It would be interesting to see whether periodic guidance is as successful for those applications as it is for walking. While these applications are more complex than walking, their users (e.g., athletes or artists) would be more open to training themselves to improve their performance. Consequently, for studying periodic guidance (or PVG) in those movements, it is very important that we use longitudinal studies that allow us to provide sufficient training to the participants.

7.4.2 Study of PVG in the Medium and Long Term

Throughout this research we used several experiments to find answers to our research questions. Oftentimes we faced experimental design choices that were ultimately decided based on the priority of research questions and our limitations. One of these design decisions was the length of trials for the study of PVG, explained in Chapters 5 and 6. Because we had various factors (e.g., guidance rate, auditory distraction), each with several levels, and we could not allow each experiment to last longer than a certain amount of time, we had to limit trials to 60 or 25 seconds. As a result, we could not study PVG over longer periods of time. However, it is evident that in most applications PVG could be used for several minutes or even hours. We believe a new set of experiments, with as few factors and levels as possible and longer trials, would enable us to evaluate PVG in more natural settings, where we could also see the effects of fatigue and learning to some extent. Furthermore, we envision PVG being used several times a week or day; longitudinal studies of PVG, conducted over several consecutive sessions, could also help us analyze learning effects over time and give us a more realistic picture of how good users' performance can become as they get used to PVG over a longer period.

7.4.3 Effect of PVG on Attention

Another interesting expansion of our work is measuring the extent to which PVG taxes the user's attention. This can be done through three methods. The first is creating visual (or auditory) cues – e.g., a blinking light – that are hard to detect and asking participants to respond to them [114, 120] at the same time that they are being guided by PVG; Detection Rate and Reaction Time can then be used to measure participants' attention. The second method is using recall performance; this can be done by planting objects along the route [110] or embedding words in an audio track, and asking participants to count them. The third method is using a Stroop test [56, 102]; because both PVG and the Stroop test compete for the participant's attention, a lower score on the test under a certain condition indicates that PVG is more taxing on attention during that condition. It is worth noting that, since doing a Stroop test involves reading words (e.g., the name of a colour printed in the colour denoted or not denoted by the name), it cannot be used in its conventional form during walking; in order to use a Stroop test, we can give it to participants immediately after each trial (once they stop) or we can use an auditory Stroop test [57, 107].

7.4.4 Other Use Cases for RRACE

In Chapter 4 we introduced RRACE, our algorithm for cadence estimation, which uses the readily available accelerometer sensors in today's smartphones to measure a walker's stride frequency. RRACE works in the frequency domain; therefore, in principle it only cares about the frequency components of the signal, not its shape or phase. The downside of this characteristic is that RRACE, in its original form, cannot detect individual footsteps; however, as long as the signal has a major frequency component, RRACE can detect it. This means that RRACE can also be used for detecting the frequency of other periodic movements, such as pedaling or rowing. By conducting experiments similar to the ones explained in Chapter 4 but focused on cycling, rowing, or swimming, we might be able to verify this claim.

Another use case for RRACE is measuring speed and/or energy expenditure based on cadence. By measuring cadence, speed, and/or energy expenditure we can create mathematical models that estimate one factor based on one or two of the others. After we create such models, we can combine them with RRACE to create a new algorithm that estimates cadence from accelerometer signals and then computes speed and/or energy expenditure. A RRACE algorithm with the ability to measure speed could be used instead of, or in addition to, GPS for improved accuracy of speed measurement or increased usability (e.g., it could measure speed even when there is no GPS satellite reception); a RRACE algorithm with energy expenditure estimation could be used in activity measurement applications.
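As a minimal illustration of the modelling step suggested above, one could fit a simple linear model of speed as a function of cadence from paired measurements and then chain it with RRACE's cadence estimate; the data points and model form below are hypothetical.

```python
import numpy as np

# Hypothetical paired measurements from a calibration session.
cadence_hz = np.array([1.4, 1.6, 1.8, 2.0, 2.2])
speed_mps = np.array([0.9, 1.1, 1.35, 1.6, 1.9])

# Fit speed ~ a * cadence + b.
a, b = np.polyfit(cadence_hz, speed_mps, 1)

rrace_estimate_hz = 1.9  # e.g., the output of RRACE for the current window
print(f"predicted speed: {a * rrace_estimate_hz + b:.2f} m/s")
```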
7.4.5 PVG's Performance in Closed-loop Control Settings

In Chapter 1 we suggested two closed-loop settings that could be used to improve PVG's performance by reducing and compensating for the user's error. We also provided the results of those control systems in simulation settings to show how they differ in terms of dealing with error. The models we used for users were oversimplified. We know that users can be very unpredictable in how they react to cues, but we also know that they are much smarter than a simple mathematical model and will probably try to understand the guidance system in order to respond to it better. Examining PVG in a control setting is a very interesting topic, but it requires more than just one or two experiments. In order to explore PVG in a control setting, we need to design a controller that is stable and minimizes error. We can use past data on the user's cadence, speed, and response to cues to create a loosely defined model, then start with a very conservative controller (i.e., imperfect in terms of error minimization but very stable) and improve it iteratively. Alternatively, we can use fuzzy controllers [168] or other controllers [77] that do not rely on a perfect model of the system.
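The sketch below illustrates the kind of conservative starting point described above: a proportional controller that nudges the cue tempo toward a target cadence, simulated against a deliberately oversimplified first-order user model. All gains and values are hypothetical, and a real user would behave far less predictably than this.

```python
target = 2.0    # desired cadence, Hz (hypothetical)
cue = 1.8       # current guidance tempo, Hz
cadence = 1.6   # user's current cadence, Hz
k = 0.3         # small controller gain: stable but slow to correct error
follow = 0.5    # crude user model: fraction of the cue gap closed per step

for _ in range(30):
    cadence += follow * (cue - cadence)  # user partially follows the cue
    cue += k * (target - cadence)        # controller compensates for error

print(f"cadence {cadence:.3f} Hz, cue tempo {cue:.3f} Hz")  # both near target
```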
7.5 Closing Remarks

Today's smartphones and other handheld devices are equipped with powerful computers, many kinds of sensors, and connectivity to the Internet. Having all these abilities in one very small package that can be taken virtually anywhere has opened the doors to many new applications – and guidance systems in particular – whose goal is to make our lives easier. However, sometimes these applications become new causes of problems as a result of inappropriate usage or overloading of the audiovisual channels. In this dissertation, we proposed a new guidance method that employs periodic cues for fine-grained control of human movement through the tempo of the cues; it is very intuitive, does not abstract meanings, and works with minimal reliance on memory. The simplicity of periodic guidance enables it to use the tactile channel, which has advantages over the audiovisual channels in certain contexts. Our research examined the use of vibrotactile displays in the mobile contexts that are the focus of our guidance method, developed and verified a cadence estimation method that was required for periodic guidance of human walking, and analyzed the performance of PVG, the vibrotactile version of our guidance system.

Bibliography

[1] K. Altun and B. Barshan. Pedestrian dead reckoning employing simultaneous activity recognition cues. Measurement Science and Technology, 23(2):025103, Feb. 2012. → pages 85
[2] H. J. Andersen, A. Morrison, and L. Knudsen. Modeling vibrotactile detection by logistic regression. In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design, pages 500–503. ACM, 2012. → pages 165
[3] R. Angel and R. Malenka. Velocity-dependent suppression of cutaneous sensitivity during movement. Experimental Neurology, 77(2):266–274, Aug. 1982. → pages 54, 164
[4] APDM Inc. APDM Movement Monitors, 2012. URL http://apdm.com/products/movement-monitors/. → pages 86
[5] Z. Artstein. Discrete and continuous bang-bang and facial spaces or: look for the extreme points. SIAM Review, 22(2):172–185, 1980. → pages 119
[6] M. A. Baumann, K. E. MacLean, T. W. Hazelton, and A. McKay. Emulating human attention-getting practices with wearable haptics. In 2010 IEEE Haptics Symposium, pages 149–156. IEEE, Mar. 2010. → pages 32, 46
[7] M. Bergamasco, B. Allotta, L. Bosio, L. Ferretti, G. Parrini, G. M. Prisco, F. Salsedo, and G. Sartini. An arm exoskeleton system for teleoperation and virtual environments applications. In 1994 IEEE International Conference on Robotics and Automation, pages 1449–1454. IEEE, 1994. → pages 46
[8] A. Bettini, S. Lang, A. Okamura, and G. Hager. Vision assisted control for manipulation using virtual fixtures: experiments at macro and micro scales. In 2002 IEEE International Conference on Robotics and Automation, pages 3354–3361. IEEE, 2002. → pages 27, 37, 46
[9] A. Bettini, P. Marayong, S. Lang, A. Okamura, and G. Hager. Vision-assisted control for manipulation using virtual fixtures. IEEE Transactions on Robotics, 20(6):953–966, Dec. 2004. → pages 27
[10] M. Bonnard and J. Pailhous. Intentionality in human gait control: modifying the frequency-to-amplitude relationship. Journal of Experimental Psychology: Human Perception and Performance, 19(2):429–443, Apr. 1993. → pages 138
[11] S. Bosman, B. Groenendaal, J. Findlater, T. Visser, M. Graaf, and P. Markopoulos. GentleGuide: an exploration of haptic output for indoors pedestrian guidance. Human-Computer Interaction with Mobile Devices and Services, 2795:358–362, 2003. → pages 4, 32, 43, 49, 54, 117, 137
[12] M. Bouzit, G. Burdea, G. Popescu, and R. Boian. The Rutgers Master II – new design force-feedback glove. IEEE/ASME Transactions on Mechatronics, 7(2):256–263, 2002. → pages 46
[13] L. S. Brakmo, D. A. Wallach, and M. A. Viredaz. µSleep: a technique for reducing energy consumption in handheld devices. In Proc. Int. Conf. Mobile Systems, Applications, and Services, pages 12–22, 2004. → pages 102
[14] S. Brewster and L. M. Brown. Tactons: structured tactile messages for non-visual information display. In Proceedings of the Fifth Conference on Australasian User Interface, volume 28, pages 15–23, 2004. → pages 4, 46, 47
[15] S. Brewster and A. Walker. Non-visual interfaces for wearable computers. In Proceedings of the IEE Workshop on Wearable Computing (IEE, London), pages 8–11. IET, 2000. → pages 4
[16] G. Broström. glmmML: generalized linear models with clustering, 2009. URL http://cran.r-project.org/web/packages/glmmML/index.html. → pages 58
[17] C. V. Buhusi and W. H. Meck. What makes us tick? Functional and neural mechanisms of interval timing. Nature Reviews Neuroscience, 6(10):755–765, Oct. 2005. → pages 139, 160
[18] G. E. Burnett and K. Lee. The effect of vehicle navigation systems on the formation of cognitive maps. In International Conference of Traffic and Transport Psychology, pages 407–418, 2005. → pages 43
[19] CBC Radio. What caused the sabre-tooth tiger extinction, 2012. URL http://www.cbc.ca/asithappens/features/2012/12/27/what-caused-the-sabre-tooth-tiger-extinction/. → pages 143
[20] C. Chafe. Tactile audio feedback. In Proceedings of the International Computer Music Conference, page 76. International Computer Music Association, 1993. → pages 34
[21] A. Chan, K. MacLean, and J. McGrenere. Learning and identifying haptic icons under workload. In First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages 432–439. IEEE, 2005. → pages 36
[22] J. K. Chapin and D. J. Woodward. Modulation of sensory responsiveness of single somatosensory cortical cells during movement and arousal behaviors. Experimental Neurology, 72(1):164–178, Apr. 1981. → pages 54, 164
[23] C. Chapman, M. Bushnell, D. Miron, G. Duncan, and J. Lund. Sensory perception during movement in man. Experimental Brain Research, 68(3):516–524, Nov. 1987. → pages 42, 45, 54, 164
[24] W. S. Cleveland and S. J. Devlin. Locally weighted regression: an approach to regression analysis by local fitting. Journal of the American Statistical Association, 83(403):596–610, 1988. → pages 126
[25] S. Consolvo, P. Klasnja, D. W. McDonald, D. Avrahami, J. Froehlich, L. LeGrand, R. Libby, K. Mosher, and J. A. Landay. Flowers or a robot army?: encouraging awareness & activity with personal, mobile displays. In Proceedings of the 10th International Conference on Ubiquitous Computing, pages 54–63. ACM, 2008. → pages 84
[26] A. Cosgun, E. A. Sisbot, and H. I. Christensen. Evaluation of rotational and directional vibration patterns on a tactile belt for guiding visually impaired people. In Haptics Symposium (HAPTICS), 2014 IEEE, pages 367–370. IEEE, 2014. → pages 165
[27] E. Cuervo, A. Balasubramanian, D.-k. Cho, A. Wolman, S. Saroiu, R. Chandra, and P. Bahl. MAUI: making smartphones last longer with code offload. In Proceedings of the 8th International Conference on Mobile Systems, Applications, and Services, pages 49–62. ACM, 2010. → pages 102
[28] Y. Cui, J. Chipchase, and F. Ichikawa. A cross culture study on phone carrying and physical personalization. In Usability and Internationalization. HCI and Culture, pages 483–492. Springer, 2007. → pages 95
[29] F. Danion, E. Varraine, M. Bonnard, and J. Pailhous. Stride variability in human gait: the effect of stride frequency and stride length. Gait & Posture, 18(1):69–77, 2003. → pages 7, 118, 138
[30] S. Das, L. Green, B. Perez, M. Murphy, and A. Perring. Detecting user activities using the accelerometer on Android smartphones. The Team for Research in Ubiquitous Secure Technology, TRUST-REU, Carnegie Mellon University, 2010. → pages 90
[31] R. De Oliveira and N. Oliver. TripleBeat: enhancing exercise performance with persuasion. In Proceedings of the 10th International Conference on Human-Computer Interaction with Mobile Devices and Services, pages 255–264. ACM, 2008. → pages 85
[32] B. R. Donald, F. Henle, and B. Donaldt. Using haptic vector fields for animation motion control. In IEEE International Conference on Robotics and Automation (ICRA '00), volume 4, pages 3435–3442. IEEE, 2000. → pages 27, 38, 46
[33] Endomondo. Endomondo, 2013. URL http://www.endomondo.com/. → pages 84, 85, 101, 177
[34] M. Enriquez and K. MacLean. Impact of haptic warning signal reliability in a time-and-safety-critical task. In 12th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS '04), pages 407–415. IEEE, 2004. → pages 31, 37, 45
[35] S. Ertan, C. Lee, A. Willets, H. Tan, and A. Pentland. A wearable haptic navigation guidance system. In Digest of Papers, Second International Symposium on Wearable Computers, pages 164–165, 1998. → pages 4, 32, 42, 49, 117, 137
[36] K.-T. Feng, H.-S. Tan, M. Tomizuka, and W.-B. Zhang. Look-ahead human-machine interface for assistance of manual vehicle steering. In Proceedings of the 1999 American Control Conference, pages 1228–1232. IEEE, 1999. → pages 28
[37] A. R. Ferber, M. Peshkin, and J. E. Colgate. Using haptic communications with the leg to maintain exercise intensity. In RO-MAN 2007 – The 16th IEEE International Symposium on Robot and Human Interactive Communication, pages 292–297. IEEE, 2007. → pages 4, 138
[38] T. Ferris, S. Hameed, and N. Sarter. Tactile displays for multitask environments: the role of concurrent task processing code. In Third Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (World Haptics 2009), pages 160–165. IEEE, 2009. → pages 76
[39] T. K. Ferris and N. Sarter. Continuously informing vibrotactile displays in support of attention management and multitasking in anesthesiology. Human Factors, 53(6):600–611, Nov. 2011. → pages 44
[40] D. Feygin, M. Keehner, and F. Tendick. Haptic guidance: experimental evaluation of a haptic training method for a perceptual motor skill. In Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS 2002), pages 40–47, 2002. → pages 43
[41] FitBit Inc. FitBit, 2012. URL http://www.fitbit.com. → pages 85
[42] B. A. Forsyth and K. E. MacLean. Predictive haptic guidance: intelligent user assistance for the control of dynamic tasks. IEEE Transactions on Visualization and Computer Graphics, 12(1):103–113, 2006. → pages 28, 37, 45
[43] R. C. Foster, L. M. Lanningham-Foster, C. Manohar, S. K. McCrady, L. J. Nysse, K. R. Kaufman, D. J. Padgett, and J. A. Levine. Precision and accuracy of an ankle-worn accelerometer-based pedometer in step counting and energy expenditure. Preventive Medicine, 41(3-4):778–783, 2005. → pages 82, 87, 93, 98, 140
[44] A. Frisoli, F. Rocchi, S. Marcheschi, A. Dettori, F. Salsedo, and M. Bergamasco. A new force-feedback arm exoskeleton for haptic interaction in virtual environments. In First Joint Eurohaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (World Haptics 2005), pages 195–201. IEEE, 2005. → pages 46
[45] H. Fuchs, M. A. Livingston, R. Raskar, K. Keller, J. R. Crawford, P. Rademacher, S. H. Drake, A. A. Meyer, and others. Augmented reality visualization for laparoscopic surgery. Medical Image Computing and Computer-Assisted Intervention – MICCAI '98, 1496:934–943, 1998. → pages 26
[46] Y. Fujiki. iPhone as a physical activity measurement platform. In CHI '10 Extended Abstracts on Human Factors in Computing Systems, pages 4315–4320. ACM, 2010. → pages 84
[47] Y. Fujiki, K. Kazakos, C. Puri, I. Pavlidis, J. Starren, and J. Levine. NEAT-o-Games: ubiquitous activity-based gaming. In CHI '07 Extended Abstracts on Human Factors in Computing Systems, pages 2369–2374. ACM, 2007. → pages 82, 85
[48] Y. Fujiki, K. Kazakos, C. Puri, P. Buddharaju, I. Pavlidis, and J. Levine. NEAT-o-Games: blending physical activity and fun in the daily routine. Computers in Entertainment (CIE), 6(2):21, 2008. → pages 85
[49] Garmin International Inc. Garmin Forerunner 910XT, 2012. URL http://sites.garmin.com/forerunner910xt. → pages 85
[50] F. Gemperle, N. Ota, and D. Siewiorek. Design of a wearable tactile display. In Proceedings of the Fifth International Symposium on Wearable Computers, pages 5–12, 2001. → pages 46
[51] R. B. Gillespie, M. O'Modhrain, P. Tang, D. Zaretzky, and C. Pham. The virtual teacher. In Proceedings of the ASME Dynamic Systems and Control Division, volume 64, pages 171–178. American Society of Mechanical Engineers, 1998. → pages 29
[52] S. Glaser, S. Mammar, and J. Sainte-Marie. Lateral driving assistance using embedded driver-vehicle-road model. ESDA, Turkey, 2002. → pages 37
[53] R. G. Golledge, R. L. Klatzky, J. M. Loomis, J. Speigle, and J. Tietz. A geographical information system for a GPS based personal guidance system. International Journal of Geographical Information Science, 12(7):727–749, 1998. → pages 26, 37
[54] R. G. Golledge, R. L. Klatzky, J. M. Loomis, J. Speigle, and J. Tietz. Study of the information stress problem in operators. Human Physiology, 26(5):605–611, 2000. → pages 40
[55] Google Inc. Google Maps, 2014. URL http://maps.google.com/. → pages 6
[56] M. D. Grabiner and K. L. Troy. Attention demanding tasks during treadmill walking reduce step width variability in young adults. Journal of NeuroEngineering and Rehabilitation, 6:1–6, 2005. → pages 181
[57] E. J. Green and P. J. Barber. An auditory Stroop effect with judgments of speaker gender. Perception & Psychophysics, 30(5):459–466, 1981. → pages 182
[58] P. Griffiths and R. Gillespie. Shared control between human and machine: haptic display of automation during manual control of vehicle heading. In 12th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS '04), pages 358–366, 2004. → pages 28, 37, 39, 45
[59] E. Grimson, M. Leventon, G. Ettinger, A. Chabrerie, F. Ozlen, S. Nakajima, H. Atsumi, R. Kikinis, and P. Black. Clinical experience with a high precision image-guided neurosurgery system. In Medical Image Computing and Computer-Assisted Intervention – MICCAI '98, pages 63–73. Springer, 1998. → pages 26
[60] G. Grindlay. Haptic guidance benefits musical motor learning. In Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (Haptics 2008), pages 397–404. IEEE, 2008. → pages 34
[61] G. C. Grindlay. The impact of haptic guidance on musical motor learning. PhD thesis, Massachusetts Institute of Technology, 2006. → pages 43
[62] E. Gunther and S. O'Modhrain. Cutaneous grooves: composing for the sense of touch. Journal of New Music Research, 32(4):369–381, Dec. 2003. → pages 35, 47, 48
[63] A. Gupta, M. K. O'Malley, V. Patoglu, and C. Burgar. Design, control and performance of RiceWrist: a force feedback wrist exoskeleton for rehabilitation and training. The International Journal of Robotics Research, 27(2):233–251, 2008. → pages 46
[64] J. F. Hahn. Vibrotactile adaptation and recovery measured by two methods. Journal of Experimental Psychology, 71(5):655–658, 1966. → pages 116, 162, 172
[65] S. G. Hart. NASA-Task Load Index (NASA-TLX); 20 years later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(9):904–908, Oct. 2006. → pages 140, 159
[66] J. Hatfield and S. Murphy. The effects of mobile phone use on pedestrian crossing behaviour at signalized and unsignalized intersections. Accident Analysis and Prevention, 39(1):197–205, 2007. → pages 117
[67] S. Hirokawa. Normal gait characteristics under temporal and distance constraints. Journal of Biomedical Engineering, 11(6):449–456, 1989. → pages 7, 97
[68] C. Ho, H. Tan, and C. Spence. Using spatial vibrotactile cues to direct visual attention in driving scenes. Transportation Research Part F: Traffic Psychology and Behaviour, 8(6):397–412, 2005. → pages 30, 55, 135
[69] J. Ho. Using context-aware computing to reduce the perceived burden of interruptions from mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 909–918, 2005. → pages 85
[70] E. Hoggan, S. Anwar, and S. Brewster. Mobile multi-actuator tactile displays. In Proceedings of the 2nd International Conference on Haptic and Audio Interaction Design, pages 22–33. Springer-Verlag, 2007. → pages 4, 51, 54, 116, 137, 140, 164
[71] I. E. Hyman, S. M. Boss, B. M. Wise, K. E. McKenzie, and J. M. Caggiano. Did you see the unicycling clown? Inattentional blindness while walking and talking on a cell phone. Applied Cognitive Psychology, 24:597–607, 2010. → pages 117
[72] Interlink Electronics Inc. FSR 406, 2011. URL http://www.interlinkelectronics.com/FSR406.php. → pages 95, 96
[73] L. A. Jones and N. B. Sarter. Tactile displays: guidance for their design and application. Human Factors, 50(1):90–111, 2008. → pages 44, 51, 53, 164
[74] K. A. Kaczmarek. Sensory augmentation and substitution. CRC Handbook of Biomedical Engineering, pages 2100–2109, 1995. → pages 167
[75] S. Kammoun, C. Jouffrais, T. Guerreiro, H. Nicolau, and J. Jorge. Guiding blind people with haptic feedback. Frontiers in Accessibility for Pervasive Computing (Pervasive 2012), 2012. → pages 165
[76] I. Karuei and K. E. MacLean. Susceptibility to periodic vibrotactile guidance of human cadence. In Haptics Symposium (HAPTICS), 2014 IEEE, pages 141–146, 2014. → pages 113, 134
[77] I. Karuei, N. Meskin, and A. Aghdam. Multi-layer switching control. In Proceedings of the 2005 American Control Conference, pages 4772–4777. IEEE, 2005. → pages 183
[78] I. Karuei, K. E. MacLean, Z. Foley-Fisher, R. MacKenzie, S. Koch, and M. El-Zohairy. Detecting vibrations across the body in mobile contexts. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems – CHI '11, pages 3267–3276, 2011. → pages 50, 114, 133, 165
[79] I. Karuei, O. S. Schneider, B. Stern, M. Chuang, and K. E. MacLean. RRACE: robust realtime algorithm for cadence estimation. Pervasive and Mobile Computing, 13:52–66, 2014. → pages 81, 114, 134
[80] J. J. Kavanagh and H. B. Menz. Accelerometry: a technique for quantifying movement patterns during walking. Gait & Posture, 28(1):1–15, July 2008. → pages 84, 89
[81] Y. Kawahara and H. Kurasawa. Recognizing user context using mobile handsets with acceleration sensors. In IEEE International Conference on Portable Information Devices, pages 1–5, 2007. → pages 85
[82] N. Kern and B. Schiele. Context-aware notification for wearable computing. In Seventh IEEE International Symposium on Wearable Computers, pages 223–230, 2003. → pages 85
[83] C. Kirtley, M. W. Whittle, and R. Jefferson. Influence of walking speed on gait parameters. Journal of Biomedical Engineering, 7(4):282–288, 1985. → pages 138
[84] J. Kjeldskov and J. Stage. New techniques for usability evaluation of mobile systems. International Journal of Human-Computer Studies, 60(5-6):599–620, May 2004. → pages 139
[85] R. Knoblauch, M. Pietrucha, and M. Nitzburg. Field studies of pedestrian walking speed and start-up time. Transportation Research Record, 1538(1):27–38, Jan. 1996. → pages 93
[86] R. L. Koslover, B. T. Gleeson, J. T. de Bever, and W. R. Provancher. Mobile navigation using haptic, audio, and visual direction cues with a handheld test platform. IEEE Transactions on Haptics, 5(1):33–38, 2012. → pages 117
[87] R. Kramer, M. Modsching, K. Hagen, and U. Gretzel. Behavioural impacts of mobile tour guides. Information and Communication Technologies in Tourism 2007, pages 109–118, 2007. → pages 38
[88] M. D. Latt, H. B. Menz, V. S. Fung, and S. R. Lord. Walking speed, cadence and step length are selected to optimize the stability of head and pelvis accelerations. Experimental Brain Research, 184(2):201–209, 2008. → pages 177
[89] M. Laurent and J. Pailhous. A note on modulation of gait in man: effects of constraining stride length and frequency. Human Movement Science, 5(4):333–343, 1986. → pages 7, 115, 118, 135, 138
[90] S. Lederman and R. Klatzky. Haptic perception: a tutorial. Attention, Perception, & Psychophysics, 71(7):1439–1459, 2009. → pages 51, 53, 164
[91] Y.-C. Lee, J. D. Lee, and L. Ng Boyle. The interaction of cognitive load and attention-directing cues in driving. Human Factors, 51(3):271–280, July 2009. → pages 44
[92] G. Leshed, T. Velden, O. Rieger, B. Kot, and P. Sengers. In-car GPS navigation: engagement with and disengagement from the environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1675–1684. ACM, 2008. → pages 43
[93] J. Lester, B. Hannaford, and G. Borriello. "Are you with me?" Using accelerometers to determine if two devices are carried by the same person. Pervasive Computing, 3001/2004:33–50, 2004. → pages 83, 88, 91
[94] Y. Li, V. Patoglu, and M. K. O'Malley. Negative efficacy of fixed gain error reducing shared control for training in virtual environments. ACM Transactions on Applied Perception, 6(1):1–21, Feb. 2009. → pages 30
[95] J. J. Lin, L. Mamykina, S. Lindtner, G. Delajoux, and H. B. Strub. Fish'n'Steps: encouraging physical activity with an interactive computer game. In UbiComp 2006: Ubiquitous Computing, pages 261–278. Springer, 2006. → pages 85
[96] Logitech International S.A. and Immersion Corp. iFeel optical mice, 2014. URL http://www.logitech.com/en-us/press/press-releases/1448. → pages 36
[97] N. R. Lomb. Least-squares frequency analysis of unequally spaced data. Astrophysics and Space Science, 39:447–462, 1976. → pages 91
[98] J. Luk, J. Pasquero, S. Little, K. MacLean, V. Levesque, and V. Hayward. A role for haptics in mobile interaction: initial design using a handheld tactile display prototype. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 171–180. ACM, 2006. → pages 46
[99] J. Lylykangas, V. Surakka, J. Rantala, and R. Raisamo. Intuitiveness of vibrotactile speed regulation cues. ACM Transactions on Applied Perception (TAP), 10(4):24, 2013. → pages 165
[100] K. E. MacLean. Foundations of transparency in tactile information design. IEEE Transactions on Haptics, 1(2):84–95, 2008. → pages 41, 78, 115, 135
[101] K. E. MacLean. Putting haptics into the ambience. IEEE Transactions on Haptics, 2(3):123–135, 2009. → pages 116, 137
[102] C. C. MacLeod. The Stroop task: the gold standard of attentional measures. Journal of Experimental Psychology: General, 121:12–14, 1992. → pages 161, 181
[103] M. Martinsson. Detection of mobile phone vibrations during walking. Master's thesis, University of Lund, Sweden, 2011. → pages 165
[104] A. Maruyama, N. Shibata, Y. Murata, K. Yasumoto, and M. Ito. P-Tour: a personal navigation system with travel schedule planning and route guidance based on schedule. Journal of Information Processing Society of Japan, 45(12):2678–2687, 2004. → pages 33, 38, 137
[105] C. Matthews, Y. Ketema, D. Gebre-Egziabher, and M. Schwartz. In-situ step size estimation using a kinetic model of human gait. In Proceedings of the 23rd International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2010), pages 511–524, 2010. → pages 85
[106] E. L. Melanson, J. R. Knoll, M. L. Bell, W. T. Donahoo, J. O. Hill, L. J. Nysse, L. Lanningham-Foster, J. C. Peters, and J. A. Levine. Commercially available pedometers: considerations for accurate step counting. Preventive Medicine, 39(2):361–368, Aug. 2004. → pages 82, 87, 98, 103, 140
[107] A. L. Morgan and J. F. Brandt. An auditory Stroop effect for pitch, loudness, and time. Brain and Language, 36(4):592–603, May 1989. → pages 182
[108] A. Morrison, L. Knudsen, and H. J. Andersen. Urban vibrations: sensitivities in the field with a broad demographic. In 16th International Symposium on Wearable Computers (ISWC 2012), pages 76–79. IEEE, 2012. → pages 165
[109] F. Mourgues, T. Vieville, V. Falk, and E. Coste-Manière. Interactive guidance by image overlay in robot assisted coronary artery bypass. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2003, pages 173–181. Springer, 2003. → pages 26, 37
[110] J. Nasar, P. Hecht, and R. Wener. Mobile telephones, distracted attention, and pedestrian safety. Accident Analysis and Prevention, 40(1):69–75, Jan. 2008. → pages 181
[111] J. L. Nasar and D. Troyer. Pedestrian injuries due to mobile phone use in public places. Accident Analysis and Prevention, 57:91–95, 2013. → pages 117
[112] Nike Inc. Nike+ FuelBand, 2012. URL http://nikeplus.nike.com/plus/products/fuelband. → pages 85
[113] Nike Inc. Nike+iPod, 2012. URL http://www.apple.com/ipod/nike. → pages 84
[114] M. I. Nikolic and N. B. Sarter. Peripheral visual feedback: a powerful means of supporting effective attention allocation in event-driven, data-rich environments. Human Factors, 43(1):30–38, Jan. 2001. → pages 44, 181
[115] N. Oliver and F. Flores-Mangas. MPTrain: a mobile, music and physiology-based personal trainer. In Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, pages 21–28. ACM, 2006. → pages 84, 85, 88, 103, 108
[116] M. K. O'Malley, A. Gupta, M. Gen, and Y. Li. Shared control in haptic systems for performance enhancement and training. Journal of Dynamic Systems, Measurement, and Control, 128(1):75, 2006. → pages 29
[117] S. Panëels, L. Brunet, and S. Strachan. Strike a pose: directional cueing on the wrist and the effect of orientation. In Haptic and Audio Interaction Design, pages 117–126. Springer, 2013. → pages 165
[118] J. Pasquero. Survey on communication through touch. Center for Intelligent Machines, McGill University, Tech. Rep. TR-CIM, 6, Aug. 2006. → pages 46, 47
[119] A. E. Patla, C. Robinson, M. Samways, and C. J. Armstrong. Visual control of step length during overground locomotion: task-specific modulation of the locomotor synergy. Journal of Experimental Psychology: Human Perception and Performance, 15(3):603–617, 1989. → pages 138
[120] C. J. D. Patten, A. Kircher, J. Ostlund, and L. Nilsson. Using mobile telephones: cognitive workload and attention resource allocation. Accident Analysis and Prevention, 36(3):341–350, May 2004. → pages 181
[121] R. Pedrosa. Perception-based design, including haptic feedback in expressive music interfaces. Master's thesis, University of British Columbia, 2007. → pages 34
[122] M. Pielot and S. Boll. Tactile Wayfinder: comparison of tactile waypoint navigation with commercial pedestrian navigation systems. In Pervasive Computing, pages 76–93. Springer, 2010. → pages 140
[123] A. Pirhonen, S. Brewster, and C. Holguin. Gestural and audio metaphors as a means of control for mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 291–298. ACM, 2002. → pages 139
[124] L. Post, I. Zompa, and C. Chapman. Perception of vibrotactile stimuli during motor activity in human subjects. Experimental Brain Research, 100(1):107–120, July 1994. → pages 51, 54, 164
[125] PowerTutor: M. Gordon, L. Zhang, B. Tiwana, and R. Dick. PowerTutor, 2011. URL http://powertutor.org/. → pages 101
[126] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press, 2007. → pages 91, 92
[127] L. Pynn and M. Hager. ICBC wants to hike rates by 4.9%, blames soaring injury claims. Vancouver Sun, 2013. URL http://shar.es/KJhO2. → pages 117
[128] H. Qian, R. Kuber, and A. Sears. Tactile notifications for ambulatory users. In CHI '13 Extended Abstracts on Human Factors in Computing Systems, pages 1569–1574. ACM, 2013. → pages 165
[129] H. Qian, R. Kuber, A. Sears, and E. Stanwyck. Determining the efficacy of multi-parameter tactons in the presence of real-world and simulated audio distractors. Interacting with Computers, 2013. → pages 165
[130] H. Ralston. Energy-speed relation and optimal speed during level walking. European Journal of Applied Physiology and Occupational Physiology, 17(4):277–283, 1958. → pages 138
[131] L. B. Rosenberg. Virtual fixtures: perceptual tools for telerobotic manipulation. In Virtual Reality Annual International Symposium, 1993 IEEE, pages 76–82. IEEE, 1993. → pages 26, 37, 46
[132] E. Rossetter and J. Gerdes. A study of lateral vehicle control under a 'virtual' force framework. In Proc. International Symposium on Advanced Vehicle Control, 2002. → pages 28
[133] E. Rossetter, J. Switkes, and J. Gerdes. A gentle nudge towards safety: experimental validation of the potential field driver assistance system. In Proceedings of the 2003 American Control Conference, volume 5, pages 3744–3749, 2003. → pages 28
[134] M. Rothenberg, R. Verrillo, S. Zahorian, M. Brachman, and S. Bolanowski Jr. Vibrotactile frequency for encoding a speech parameter. The Journal of the Acoustical Society of America, 62(4):1003–1012, 1977. → pages 35, 48
[135] S. Rubio, E. Díaz, J. Martín, and J. Puente. Evaluation of subjective mental workload: a comparison of SWAT, NASA-TLX, and Workload Profile methods. Applied Psychology, 53(1):61–86, 2003. → pages 139
[136] E. Rukzio, M. Müller, and R. Hardy. Design, implementation and evaluation of a novel public display for pedestrian navigation: the Rotating Compass. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 113–122, 2009. → pages 54, 118
[137] Runtastic GmbH. Runtastic Pedometer, 2013. URL https://www.runtastic.com/en/apps/pedometer. → pages 84, 85, 88, 101, 177
[138] A. Rutherford. Introducing ANOVA and ANCOVA: a GLM approach. Sage, 2001. → pages 150
[139] N. Sarter. The need for multisensory interfaces in support of effective attention allocation in highly dynamic event-driven domains: the case of cockpit automation. The International Journal of Aviation Psychology, 10(3):231–245, 2000. → pages 44
[140] Y. Sato, M. Nakamoto, Y. Tamaki, T. Sasama, I. Sakita, Y. Nakajima, M. Monden, and S. Tamura. Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization. IEEE Transactions on Medical Imaging, 17(5):681–693, Oct. 1998. → pages 26
[141] J. D. Scargle. Studies in astronomical time series analysis. II – Statistical aspects of spectral analysis of unevenly spaced data. The Astrophysical Journal, 263(2):835–853, 1982. → pages 91
[142] S. Scheggi, M. Aggravi, F. Morbidi, D. Prattichizzo, and others. Cooperative human-robot haptic navigation. In IEEE International Conference on Robotics and Automation, pages 1–6, 2014. → pages 165
[143] O. S. Schneider, K. E. MacLean, K. Altun, I. Karuei, and M. M. Wu. Real-time gait classification for persuasive smartphone apps: structuring the literature and pushing the limits. In Proceedings of the 2013 International Conference on Intelligent User Interfaces, pages 161–171, Santa Monica, CA, USA, 2013. → pages 89, 112
[144] N. Sekiya, H. Nagasaki, H. Ito, and T. Furuna. Optimal walking in terms of variability in step length. The Journal of Orthopaedic and Sports Physical Therapy, 26(5):266–272, 1997. → pages 7
[145] SensAble Technologies Inc. Sensable Phantom Omni, 2014. URL http://geomagic.com/en/products-landing-pages/sensable. → pages 27, 46
[146] C. E. Sherrick. A scale for rate of tactual vibration. The Journal of the Acoustical Society of America, 78(1 Pt 1):78–83, July 1985. → pages 48
[147] A. E. Sklar and N. B. Sarter. Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains. Human Factors, 41(4):543–552, Dec. 1999. → pages 30
[148] Skrekstore. Durr, 2014. URL http://skreksto.re/products/durr. → pages 167
[149] Smart Projects Srl. Arduino Duemilanove schematic, 2011. URL http://arduino.cc/en/uploads/Main/arduino-duemilanove-schematic.pdf. → pages 55
[150] Smart Projects Srl. Arduino, 2012. URL http://www.arduino.cc/. → pages 95
[151] Smart Projects Srl. Arduino Fio, 2013. URL http://arduino.cc/en/Main/ArduinoBoardFio. → pages 120, 146
[152] Solarbotics Ltd. Solarbotics VPM2. URL http://www.solarbotics.com/assets/datasheets/solarbotics_vpm2.pdf. → pages 55
[153] M. Steele and R. Gillespie. Shared control between human and machine: using a haptic steering wheel to aid in land vehicle guidance. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pages 1671–1675, 2001. → pages 28, 37, 45
[154] S. M. Straughn, R. Gray, and H. Z. Tan. To go or not to go: stimulus-response compatibility for tactile and auditory pedestrian collision warnings. IEEE Transactions on Haptics, 2(2):111–117, 2009. → pages 55
[155] B. A. Swerdfeger, T. W. Hazelton, and K. E. MacLean. Exploring melodic variance in rhythmic haptic stimulus design. In Proceedings of Graphics Interface 2009, pages 133–140, 2009. → pages 4, 48
[156] D. Tam, K. E. MacLean, J. McGrenere, and K. J. Kuchenbecker. The design and field observation of a haptic notification system for timing awareness during oral presentations. In Proceedings of the 2013 ACM Annual Conference on Human Factors in Computing Systems, pages 1689–1698. ACM, 2013. → pages 38, 115, 120, 135, 137, 138, 146, 165
Master's thesis, University of British Columbia, 2012. → pages 165

[158] H. Z. Tan, R. Gray, J. J. Young, and R. Traylor. A Haptic Back Display for Attentional and Directional Cueing. Journal of Haptics Research, 3:1–20, 2003. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.9848. → pages 4, 44

[159] K. Ten Hagen, M. Modsching, and R. Kramer. A location aware mobile tourist guide selecting and interpreting sights and services by context matching. In The Second Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services (MobiQuitous 2005), pages 293–301. IEEE, 2005. → pages 38, 137

[160] D. Ternes and K. MacLean. Designing large sets of haptic icons with rhythm. Haptics: Perception, Devices and Scenarios, pages 199–208, 2008. URL http://www.springerlink.com/index/ATL5686222K242M4.pdf. → pages 36

[161] Tikker Technologies LLC. Tikker, 2014. URL http://mytikker.com. → pages 167

[162] R. Traylor and H. Tan. Development of a wearable haptic display for situation awareness in altered-gravity environment: Some initial findings. In Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages 159–164. Citeseer, 2002. URL http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.113.8808&rep=rep1&type=pdf. → pages 51, 116, 137

[163] K. Tsukada and M. Yasumura. ActiveBelt: Belt-type wearable tactile display for directional navigation. UbiComp 2004: Ubiquitous Computing, pages 384–399, 2004. URL http://www.springerlink.com/index/m62n21ptynyre66n.pdf. → pages 32, 43, 45, 49, 51, 55, 117, 137

[164] C. Tudor-Locke. Taking Steps toward Increased Physical Activity: Using Pedometers To Measure and Motivate. President's Council on Physical Fitness and Sports Research Digest, 3(17):10, 2002. URL http://eric.ed.gov/ERICWebPortal/recordDetail?accno=ED470689. → pages 84

[165] J. Van Erp and H. Van Veen. Vibro-tactile information presentation in automobiles. Proceedings of Eurohaptics, pages 99–104, 2001. URL http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.319&rep=rep1&type=pdf. → pages 43, 47

[166] J. Van Erp, C. Jansen, T. Dobbins, and H. Van Veen. Vibrotactile waypoint navigation at sea and in the air: two case studies. In Proceedings of EuroHaptics, pages 166–173, 2004. URL http://www.eurohaptics.vision.ee.ethz.ch/2004/15f.pdf. → pages 46

[167] J. B. F. Van Erp, H. A. H. C. Van Veen, C. Jansen, and T. Dobbins. Waypoint navigation with a vibrotactile waist belt. ACM Transactions on Applied Perception, 2(2):106–117, 2005. ISSN 15443558. doi:10.1145/1060581.1060585. URL http://portal.acm.org/citation.cfm?doid=1060581.1060585. → pages 32, 43, 49, 51, 117, 118

[168] L.-X. Wang. Adaptive fuzzy systems and control: design and stability analysis. Prentice-Hall, Inc., 1994. → pages 183

[169] B. Wark, B. N. Lundstrom, and A. Fairhall. Sensory adaptation. Current Opinion in Neurobiology, 17(4):423–429, 2007. → pages 172

[170] W. H. Warren, D. S. Young, and D. N. Lee. Visual control of step length during running over irregular terrain. Journal of Experimental Psychology: Human Perception and Performance, 12(3):259–66, Aug. 1986. ISSN 0096-1523. URL http://www.ncbi.nlm.nih.gov/pubmed/2943854. → pages 138

[171] C. D. Wickens. Multiple resources and performance prediction. Theoretical Issues in Ergonomics Science, 3(2):159–177, Jan. 2002. ISSN 1463-922X. doi:10.1080/14639220210123806. URL http://www.tandfonline.com/doi/abs/10.1080/14639220210123806. → pages 30, 76, 139

[172] M. L. Wolf. Thomas Jefferson, Abraham Lincoln, Louis Brandeis and the Mystery of the Universe.
Boston University Journal of Science & Technology Law, 1(May):10, 1995. → pages 84

[173] M. M.-A. Wu, O. S. Schneider, I. Karuei, L. Leong, and K. MacLean. Introducing GaitLib: a library for real-time gait analysis in smartphones. 2014. URL http://hdl.handle.net/2429/46848. → pages 166

[174] C.-C. Yang, Y.-L. Hsu, K.-S. Shih, and J.-M. Lu. Real-time gait cycle parameter recognition using a wearable accelerometry system. Sensors, 11(8):7314–7326, 2011. → pages 87, 108, 111, 140

[175] Y. Yokokohji, R. Hollis, T. Kanade, K. Henmi, and T. Yoshikawa. Toward machine mediated training of motor skills: Skill transfer from human to human via virtual environment. Proceedings 5th IEEE International Workshop on Robot and Human Communication, RO-MAN '96 TSUKUBA, pages 32–37, 1996. doi:10.1109/ROMAN.1996.568646. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=568646. → pages 29

[176] R. J. Zatorre, J. L. Chen, and V. B. Penhune. When the brain plays music: auditory-motor interactions in music perception and production. Nature Reviews Neuroscience, 8(7):547–558, July 2007. ISSN 1471-003X. doi:10.1038/nrn2152. URL http://www.ncbi.nlm.nih.gov/pubmed/17585307. → pages 139

[177] H. N. Zelaznik and D. Lantero. The role of vision in repetitive circle drawing. Acta Psychologica, 92(1):105–118, 1996. → pages 33

[178] X. Zhao, S. Saeedi, N. El-Sheimy, Z. Syed, and C. Goodall. Towards Arbitrary Placement of Multi-sensors Assisted Mobile Navigation System. In Proceedings of the 23rd International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS 2010), pages 556–564, Oct. 2010. URL http://www.ion.org/search/view_abstract.cfm?jp=p&idno=9181. → pages 83, 85, 88

[179] Y. Zheng, E. Su, and J. B. Morrell. Design and evaluation of pactors for managing attention capture. In World Haptics Conference (WHC), 2013, pages 497–502. IEEE, 2013. → pages 44, 165

[180] Y. J. Zheng and J. Morrell. A Vibrotactile Feedback Approach to Posture Guidance. In Haptics Symposium, 2010 IEEE, pages 351–358, 2010. ISBN 9781424468225. doi:10.1109/HAPTIC.2010.5444633. → pages 46

[181] W. Zijlstra and A. L. Hof. Assessment of spatio-temporal gait parameters from trunk accelerations during human walking. Gait & Posture, 18(2):1–10, Oct. 2003. ISSN 0966-6362. URL http://www.ncbi.nlm.nih.gov/pubmed/14654202. → pages 88, 89

Appendix A

Supporting Materials: Detecting Vibrations Across the Body in Mobile Contexts

This appendix contains the supporting materials regarding the experiments of Chapter 3.

A.1 Ethics Documents

Recruitment Email

Experiment 1 Consent Form, Page 1/3

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Computer Science
2366 Main Mall
Vancouver, B.C., V6T 1Z4

April 9, 2010

Consent Form (no videotaping)

Human-Computer Interaction Course Projects (CPSC 444/544/543)
UBC Ethics Approval B03-0490

Principal* and Co-Investigators
Dr. Kelly Booth, Prof., Dept. of Computer Science, UBC (604) 822-8193
Dr. Karon MacLean, Asst. Prof., Dept. of Computer Science, UBC (604) 822-8169
Dr. Joanna McGrenere*, Asst. Prof., Dept. of Computer Science, UBC (604) 827-5201
Dr. Steven Wolfman, Asst. Prof., Dept. of Computer Science, UBC (604) 822-0407

Student Investigators
Mohamed El-Zohairy, UBC [zohairy@cs.ubc.ca]
Zoltan Foley-Fisher, UBC [zoltan@ece.ubc.ca]
Idin Karuei, UBC [idin@cs.ubc.ca]
Sebastian Koch, UBC [skoch@cs.ubc.ca]
Russ MacKenzie, UBC [rmacken1@cs.ubc.ca]

Project Purpose and Procedures

This course project is designed to investigate how people interact with certain types of interactive technology.
Interactive technology includes applications that run on a standard desktop or laptop computer, such as a word processor, web browser, and email, as well as applications on handheld technology, such as the datebook on the Pocket PC, and also applications on more novel platforms such as a SmartBoard (electronic whiteboard) or a DiamondTouch tabletop display.

The purpose of this course project is to gather information that can help improve the design of interactive technology.

Experiment 1 Consent Form, Page 2/3

You will be asked to use one or more forms of interactive technology to perform a number of tasks. We will observe you performing those tasks and analyze how the technology is used. You may be asked to complete a number of questionnaires and we may ask to interview you to find out your impressions of the technology. You will be asked to participate in at most 3 sessions, each lasting no more than 1 hour.

Although only a course project in its current form, this project may, at a later date, be extended by one or more of the student investigators to form the basis of his/her thesis research.

Confidentiality

The identities of all people who participate will remain anonymous and will be kept confidential. Identifiable data will be stored securely in a locked metal filing cabinet or in a password protected computer account. All data from individual participants will be coded so that their anonymity will be protected in any project reports and presentations that result from this work.

Remuneration/Compensation

We are very grateful for your participation. However, you will not receive compensation of any kind for participating in this project.

Contact Information About the Project

If you have any questions or require further information about the project you may contact Professor Karon MacLean at (604) 822-8169.

Contact for information about the rights of research subjects

If you have any concerns about your treatment or rights as a research subject, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598.

Consent

We intend for your participation in this project to be pleasant and stress-free. Your participation is entirely voluntary and you may refuse to participate or withdraw from the study at any time.

Your signature below indicates that you have received a copy of this consent form for your own records. Your signature indicates that you consent to participate in this project. You do not waive any legal rights by signing this consent form.

I, ________________________________, agree to participate in the project as outlined above. My participation in this project is voluntary and I understand that I may withdraw at any time.

Experiment 1 Consent Form, Page 3/3

____________________________________________________
Participant's Signature                              Date

____________________________________________________
Student Investigator's Signature                     Date

Experiment 2 Consent Form - Participant's Copy

Experiment 2 Consent Form - Researcher's Copy

A.2 Questionnaires

We used a pre-study questionnaire to collect information about participants and a post-study questionnaire to get their opinions after the experiment.

Pre-study Questionnaire

1. In what age group are you?
• 19 and under
• 20-25
• 26-30
• 31-40
• 40 and above

2. Gender
• Male
• Female
3. How many times in the last year did you use devices with tactile feedback (vibrating devices)?
• Never
• Once a month
• Once a week
• Few times a week
• Once a day
• Few times a day

4. How many times in the last year did you use a treadmill?
• Never
• Once a month
• Once a week
• Few times a week
• Once a day

5. Which hand is your dominant hand?
• Left
• Right

Post-study Questionnaire

1. Which vibration location was the most uncomfortable?

2. Which vibration location was the most comfortable?

3. For each location of vibration please choose the comfort level on a scale from 1 to 5; 1 being the most uncomfortable and 5 being the most comfortable.

Body location     1 - very uncomfortable   2 - uncomfortable   3 - neutral   4 - comfortable   5 - very comfortable
left shoulder
right shoulder
chest
upper spine
upper left arm
upper right arm
left wrist
right wrist
lower spine
stomach
left thigh
right thigh
left foot
right foot

4. If those motors were embedded in clothing items, which clothing item would you prefer they are embedded in? and why?

5. Do you have any comments?

Appendix B

Supporting Materials: Cadence Measurement

This appendix contains the supporting materials regarding the experiments of Chapter 4.

B.1 Ethics Documents

Consent Form Version 1.0 - Participant's Copy
Consent Form Version 1.0 - Researcher's Copy
Consent Form Version 2.1 - Participant's Copy
Consent Form Version 2.1 - Researcher's Copy

Appendix C

Supporting Materials: Susceptibility to Periodic Vibrotactile Guidance of Human Cadence

This appendix contains the supporting materials regarding the experiment of Chapter 5.

C.1 Ethics Documents

Recruitment Email
Recruitment Poster
Consent Form - Participant's Copy
Consent Form - Researcher's Copy

Appendix D

Supporting Materials: Periodic Vibrotactile Guidance of Human Cadence, Performance during Auditory Multitasking

This appendix contains the supporting materials regarding the experiment of Chapter 6.

D.1 Ethics Documents

Recruitment Email
Recruitment Poster
Consent Form - Participant's Copy
Consent Form - Researcher's Copy

D.2 Experiment Setup

Table D.1: Tempos used for techno music conditions.

Tempo (Hz)   Tempo (BPM)   Index
2.645        158.7           8
2.554        153.3           7
2.466        148.0           6
2.382        142.9           5
2.300        138.0           4
2.221        133.3           3
2.145        128.7           2
2.071        124.3           1
2.000        120.0           0
1.931        115.9          -1
1.865        111.9          -2
1.801        108.1          -3
1.739        104.3          -4
1.679        100.8          -5
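The tempo ladder in Table D.1 is consistent with a constant multiplicative step of roughly 3.56% per index around the 2.000 Hz (120.0 BPM) base tempo. The following Python sketch only illustrates that inferred rule; the ratio of 1.0356 is fitted to the table rather than stated in the text, and all names are ours:

    # Regenerate the Table D.1 tempo ladder under an assumed constant-ratio rule.
    BASE_HZ = 2.0    # index 0: 2.000 Hz = 120.0 BPM (Table D.1)
    STEP = 1.0356    # assumed ratio per index step, inferred by fitting the table

    def techno_tempo(index):
        """Return (Hz, BPM) for a techno-condition index under the assumed rule."""
        hz = BASE_HZ * STEP ** index
        return round(hz, 3), round(hz * 60, 1)

    for i in range(8, -6, -1):  # indices 8 down to -5, as listed in Table D.1
        hz, bpm = techno_tempo(i)
        print(f"{i:>3}  {hz:.3f} Hz  {bpm:.1f} BPM")

Under this assumption the sketch reproduces the listed tempos to within about 0.001 Hz.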
D.3 NASA-TLX Screenshots

Figure D.1: NASA-TLX Screenshots - Part 1.

Figure D.2: NASA-TLX Screenshots - Part 2 - 1/4.

Figure D.3: NASA-TLX Screenshots - Part 2 - 2/4 (blank space cropped).

Figure D.4: NASA-TLX Screenshots - Part 2 - 3/4 (blank space cropped).

Figure D.5: NASA-TLX Screenshots - Part 2 - 4/4 (blank space cropped).

Figure D.6: NASA-TLX Screenshots - Results (blank space cropped).
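The screenshots above appear to follow the standard NASA-TLX administration: Part 1 collects the six subscale ratings, and Part 2 collects the fifteen pairwise "sources of workload" comparisons that weight them. As a reference only, here is a minimal sketch of the conventional weighted-TLX computation (Hart and Staveland's standard procedure, not code from the thesis; we assume the tool's default weighted scoring was used, and all names are ours):

    from itertools import combinations

    SUBSCALES = ["mental", "physical", "temporal",
                 "performance", "effort", "frustration"]

    def weighted_tlx(ratings, pair_winners):
        """Overall NASA-TLX workload.

        ratings: dict subscale -> rating on the 0-100 scale (Part 1).
        pair_winners: dict frozenset({a, b}) -> winning subscale, one entry
        for each of the 15 subscale pairs (Part 2).
        """
        weights = dict.fromkeys(SUBSCALES, 0)          # each ends up in 0..5
        for a, b in combinations(SUBSCALES, 2):
            weights[pair_winners[frozenset((a, b))]] += 1
        assert sum(weights.values()) == 15
        return sum(weights[s] * ratings[s] for s in SUBSCALES) / 15.0

The unweighted "raw TLX" variant simply averages the six ratings; the per-subscale rows in Tables D.5 and D.6 below are the ratings themselves.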
D.4 Descriptive Statistics

Table D.2: Descriptive statistics of all metrics.

                   mean      median    SD
age                25.958    24        10.187
weight (kg)        66.591    65.317    14.49
height (cm)        171.262   170.59    9.541
cadence error %    -15.792   -4.849    30.821
cadence            1.645     1.813     0.593
speed (mps)        1.339     1.32      0.246
speed ratio        0.952     0.962     0.128
stride length      0.699     0.687     0.117
mental demand      35.052    30        25.049
physical demand    29.688    20        23.387
temporal demand    33.524    25        24.47
performance        30.851    25        21.798
effort             37.153    30        24.282
frustration        23.906    15        20.536
tlx overall        35.281    30        21.556

Table D.3: Descriptive statistics of performance metrics by guidance condition.

                  guidance   mean      median    SD
cadence error %   slow       -12.674   -0.014    34.256
                  fast       -18.911   -11.516   26.592
cadence (Hz)      none       1.695     1.844     0.545
                  slow       1.455     1.626     0.584
                  fast       1.785     1.939     0.6
speed (mps)       none       1.38      1.349     0.205
                  slow       1.195     1.15      0.195
                  fast       1.44      1.402     0.263
speed ratio       none       0.98      0.99      0.074
                  slow       0.851     0.847     0.104
                  fast       1.024     1.038     0.129
stride length     none       0.721     0.712     0.096
                  slow       0.622     0.61      0.079
                  fast       0.752     0.738     0.127

Table D.4: Descriptive statistics of performance metrics by auditory task.

                  audio       mean      median   SD
cadence error %   podcast     -18.226   -5.802   32.369
                  techno      -12.468   -4.948   28.604
                  classical   -19.155   -7.332   33.335
                  silence     -13.32    -3.209   28.097
cadence (Hz)      podcast     1.593     1.753    0.614
                  techno      1.717     1.815    0.537
                  classical   1.579     1.752    0.65
                  silence     1.691     1.844    0.553
speed (mps)       podcast     1.292     1.256    0.241
                  techno      1.38      1.355    0.242
                  classical   1.325     1.319    0.252
                  silence     1.359     1.335    0.242
speed ratio       podcast     0.918     0.914    0.115
                  techno      0.984     0.994    0.126
                  classical   0.94      0.94     0.133
                  silence     0.967     0.986    0.126
stride length     podcast     0.675     0.677    0.117
                  techno      0.719     0.705    0.11
                  classical   0.692     0.678    0.123
                  silence     0.709     0.691    0.113

Table D.5: Descriptive statistics of workload scores by guidance condition.

                  guidance   mean     median   SD
mental demand     none       21.823   15       18.283
                  slow       39.531   30       25.926
                  fast       43.802   35       24.791
physical demand   none       20.052   15       15.853
                  slow       31.615   25       24.639
                  fast       37.396   30       25.225
temporal demand   none       15.729   10       13.86
                  slow       36.875   30       22.91
                  fast       47.969   47.5     23.447
performance       none       20.312   15       18.336
                  slow       32.5     25       19.841
                  fast       39.74    35       22.599
effort            none       21.719   15       16.333
                  slow       40.625   35       23.789
                  fast       49.115   50       23.474
frustration       none       14.427   10       13.877
                  slow       24.531   17.5     20.145
                  fast       32.76    25       22.477
overall           none       20.694   17.333   13.586
                  slow       38.361   32.5     21.402
                  fast       46.788   48.333   20.068

Table D.6: Descriptive statistics of workload scores by auditory task.

                  audio       mean     median   SD
mental demand     podcast     45.278   37.5     26.121
                  techno      34.097   27.5     24.413
                  classical   31.458   25       22.098
                  silence     29.375   20       24.837
physical demand   podcast     33.125   22.5     26.649
                  techno      29.097   22.5     21.169
                  classical   28.194   20       23.002
                  silence     28.333   20       22.518
temporal demand   podcast     37.014   27.5     27.162
                  techno      33.75    25       23.493
                  classical   33.056   30       23.204
                  silence     30.278   22.5     23.852
performance       podcast     34.861   25       23.376
                  techno      30.556   25       21.421
                  classical   31.597   25       22.515
                  silence     26.389   20       19.269
effort            podcast     43.125   40       25.83
                  techno      37.222   30       23.793
                  classical   33.472   25       23.053
                  silence     34.792   30       23.727
frustration       podcast     27.569   20       22.782
                  techno      24.583   20       20.824
                  classical   22.5     17.5     18.576
                  silence     20.972   15       19.548
overall           podcast     41.528   38.167   23.469
                  techno      34.972   29.5     21.102
                  classical   33.176   29.833   19.989
                  silence     31.449   24.5     20.581

Table D.7: Mean cadence error % per each level of auditory task and guidance condition.

            Slow Guidance   Fast Guidance
Podcast     -4.89%          -18.37%
Techno       0.27%          -14.83%
Classical   -7.96%          -17.00%
Silence     -4.36%          -13.37%

Table D.8: Mean cadence (Hz) per each guidance and auditory task condition.

            No Guidance   Slow Guidance   Fast Guidance
Podcast     1.72          1.60            1.81
Techno      1.85          1.67            1.88
Classical   1.74          1.54            1.83
Silence     1.81          1.60            1.91

Table D.9: Statistical significance of all performance metrics. Yes and No denote statistical significance and non-significance based on p < 0.05. GC and AT are short for guidance condition and auditory task.

                    Main Effects           Interaction Effects
Metric              GC    AT    Time       GC:AT   GC:Time   AT:Time
Cadence Error %     Yes   Yes   Yes        Yes     No        No
Cadence             Yes   Yes   Yes        Yes     Yes       No
Speed               Yes   Yes   N/A        No      N/A       N/A
Speed Ratio         Yes   Yes   N/A        No      N/A       N/A
Stride Length       Yes   Yes   N/A        No      N/A       N/A

Table D.10: Guidance condition's significant effect on cadence and pairwise comparisons of guidance conditions per each level of auditory task. Every pair of guidance conditions differs significantly under every auditory task, except no guidance and fast guidance under techno; cadence is fastest under fast guidance and slowest under slow guidance regardless of the auditory task.

Audio Subset   NG-SG   NG-FG   FG-SG   Order
Podcast        Yes     Yes     Yes     SG,NG,FG
Techno         Yes     No      Yes     SG,NG,FG
Classical      Yes     Yes     Yes     SG,NG,FG
Silence        Yes     Yes     Yes     SG,NG,FG

Table D.11: Statistical significance of all NASA-TLX scores and total workload. Significance is based on p < 0.05. NG, SG, and FG are short for No Guidance, Slow Guidance, and Fast Guidance respectively. P, S, C, and T are short for Podcast, Silence, Classical, and Techno. Significant differences between pairs of conditions are shown in columns 3 to 5 (guidance conditions) and 7 to 12 (auditory tasks).

                   Guidance Condition              Auditory Task
                   Sig.  NG-SG  NG-FG  FG-SG       Sig.  P-S  P-C  P-T  T-C  T-S  C-S
Mental Demand      Yes   Yes    Yes    No          Yes   Yes  Yes  Yes  No   No   No
Physical Demand    Yes   Yes    Yes    No          No    -    -    -    -    -    -
Temporal Demand    Yes   Yes    Yes    No          No    -    -    -    -    -    -
Performance        Yes   Yes    Yes    No          Yes   No   No   No   No   No   No
Effort             Yes   Yes    Yes    No          Yes   No   No   No   No   No   No
Frustration        Yes   Yes    Yes    No          Yes   No   No   No   No   No   No
Total Workload     Yes   Yes    Yes    No          Yes   No   No   No   No   No   No
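Tables D.9 to D.11 report main effects, interactions, and pairwise follow-ups at p < 0.05. Purely as an illustrative sketch (this appendix does not show the analysis code, and the file and column names below are assumptions), condition summaries and a two-factor repeated-measures ANOVA of this shape could be produced with pandas and statsmodels:

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format data: one row per participant x guidance x audio
    # cell mean; all column names are illustrative assumptions.
    cells = pd.read_csv("cell_means.csv")  # columns: pid, guidance, audio, cadence

    # Condition summaries in the style of Tables D.3 and D.4 (mean, median, SD):
    print(cells.groupby("guidance")["cadence"]
               .agg(["mean", "median", "std"]).round(3))

    # Main and interaction effects in the style of Table D.9:
    result = AnovaRM(cells, depvar="cadence", subject="pid",
                     within=["guidance", "audio"]).fit()
    print(result)  # F and p for guidance, audio, and guidance:audio

Pairwise comparisons like those in Tables D.10 and D.11 would then come from post-hoc tests on the same cell means, with a correction for multiple comparisons.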
