Magic Pen: A Versatile Digital Manipulative for Learning

by

Soheil Kianzad

B.Sc., K. N. Toosi University of Technology, 2012
M.Sc., The University of British Columbia (Vancouver), 2015

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in The Faculty of Graduate and Postdoctoral Studies (Computer Science)

The University of British Columbia (Vancouver)

December 2021

© Soheil Kianzad, 2021

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled:

Magic Pen: A Versatile Digital Manipulative for Learning

submitted by Soheil Kianzad in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science.

Examining Committee:

Karon MacLean, Professor, Department of Computer Science, UBC
Supervisor

Ivan Beschastnikh, Associate Professor, Department of Computer Science, UBC
Supervisory Committee Member

Michiel van de Panne, Professor, Department of Computer Science, UBC
Supervisory Committee Member

Jim Little, Professor, Department of Computer Science, UBC
University Examiner

Machiel Van der Loos, Associate Professor, Department of Mechanical Engineering, UBC
University Examiner

Additional Supervisory Committee Members:

Pierre Dillenbourg, Professor, School of Computer & Communication Sciences, EPFL
Supervisory Committee Member

Abstract

Digital manipulatives such as robots offer an opportunity for interactive and engaging learning activities. Adding haptic, and specifically force, feedback to digital manipulatives can enrich the learning of science-related concepts by building physical intuition: learners can design experiments and physically explore them to solve problems they have posed.

In my thesis, I present the evolution of the design and evaluation of a versatile digital manipulative, called MagicPen, in a human-centered design context. First, I investigate how force feedback can enable learners to fluidly express their ideas, and identify three core interactions as the basis for physically assisted sketching (phasking). I then show how using these interactions improves the accuracy of users' drawings as well as their authority over creative work. In the next phase, I demonstrate the potential benefits of using force feedback in a collaborative learning framework, in a manner that generalizes beyond the device we invented and lends insight into how haptics can empower digital manipulatives to express advanced concepts through the behaviour of a virtual avatar and the corresponding feeling of force feedback. This informs our device's capability for learning advanced concepts in classroom settings, and further considerations for the next iterations of the MagicPen.

Building on these findings about how haptic feedback can assist design and exploration in learning, in the last phase of my thesis I propose a framework for physically assisted learning (PAL) which links the expression and exploration of an idea. Furthermore, I explain how to instantiate the PAL framework using available technologies and discuss a path forward to a larger vision of physically assisted learning. PAL highlights the role of haptics in future "objects-to-think-with".

Lay Summary

Educational haptic platforms exploit various modalities to create effective interactive environments that can support embodied physical interactions.
These platforms have the potential to leverage a student's physical intuition to make abstract topics in physics, math, and other fields of science more concrete. This project aims to create a versatile educational robot that serves as an object-to-think-with. It can provide students with intuitive ways to experience various science, technology, engineering, and mathematics (STEM) concepts by making them more accessible or helping students approach them in a more tangible way. We explain the evolution of the design and evaluation of our proposed device in a human-centered design context.

Preface

All the research outputs are the result of collaborative efforts, as no creative work belongs to an individual alone. Here, I clarify my contributions to the published works.

Chapter 3: Device Design

S. Kianzad and K. E. MacLean, "Harold's purple crayon rendered in haptics: Large-stroke, handheld ballpoint force feedback," 2018 IEEE Haptics Symposium (HAPTICS), San Francisco, CA, 2018, pp. 106-111.

My supervisor (Dr. Karon MacLean) came up with the idea of drawing and then feeling it, as well as the initial approach for implementing it. I built upon my supervisor's idea and devised a different and more realizable approach for implementing the mechatronic system. I contributed all the engineering work, design iterations, and mechanical tests, with feedback and guidance from my supervisor. I wrote the first draft of the paper, which was fully edited and rewritten by my supervisor before submission.

S. Kianzad and K. E. MacLean, "Ballpoint Drive Haptic Force-Feedback Display", provisional patent disclosure.

I made all the drawings to convey our invention, with feedback and guidance from my supervisor. My supervisor and I collaboratively shaped the framework, and I added the technical details. She rewrote and revised the final patent draft.

Chapter 4: PHysically Assisted SKetching

Soheil Kianzad, Yuxiang Huang, Robert Xiao, and Karon E. MacLean. 2020. Phasking on Paper: Accessing a Continuum of PHysically Assisted SKetchING. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA.

I designed the mechatronic system, wrote the state model, and designed a closed-loop controller. I supervised Yuxiang Huang, an undergraduate volunteer who built a lightweight CAD platform for the RPi computer. Our group collaborator, Dr. Xiao, helped with implementing the hardware PWM to control the motors and with integrating a digital pen to achieve robust absolute tracking. I designed the user study with the help of Dr. Xiao and conducted the user tests. I designed the experimental setup and ran mechanical tests on the device. My supervisor framed the paper and led the writing.

Chapter 5: Benefits of Force Feedback for Grounding

Soheil Kianzad, Julia A. B. Lindsay, Wafa Johal, Unma Mayur Desai, Hala Khodr, Hsin Yun (Tiffany) Wu, Pierre Dillenbourg, Karon E. MacLean, "Dialectic Touch: Exploiting Force Feedback for Grounding in Collaborative Learning Tasks", IEEE Transactions on Haptics (in preparation).

I designed the experiments with three haptic/robotic platforms collaboratively with my supervisor and Julia Lindsay, an undergraduate student I was supervising. I programmed and implemented all of these experiments, ran pilot tests, and iterated on the design for each study. Julia designed the pre-tests and post-tests for the three studies. She also drafted an amendment for conducting our study during the pandemic.
Together we designed and ran the user evaluations for the two studies. I conducted the quantitative analysis of our user study results. I led the qualitative assessment, with help from Unma Desai (master's student) and Tiffany Wu (volunteer) to avoid potential biases. Unma, Tiffany, and I individually performed the qualitative coding, and we discussed the findings collaboratively. I received insightful feedback and comments from my supervisor, Dr. Wafa Johal, and Dr. Pierre Dillenbourg throughout the process, specifically on the design of the experiments and running the analysis. I wrote the original draft and my supervisor did the final edits.

S. Kianzad and K. E. MacLean, "Collaborating Through Magic Pens: Grounded Forces in Large, Overlappable Workspaces." International AsiaHaptics Conference. Springer, Singapore, 2018.

A single-user configuration of the Virtual Electrostatic Lab was initially developed by Lotus Hanzi Zhang in collaboration with Matthew Chun and myself for the Student Innovation Challenge at IEEE World Haptics 2017. Later, I extended this framework to enable multi-user interaction and added more functionality. I made the haptic devices, developed the virtual jigsaw puzzle, and drafted the demo paper [114], which was revised by my supervisor.

Chapter 6: The Physically Assisted Learning (PAL) Framework

Soheil Kianzad, Guanxiong Chan, Karon E. MacLean, "PAL: A Framework for Physically Assisted Learning through Design and Exploration with a Haptic Robot Buddy," Frontiers in Robotics and AI, pp. 228-250, Vol. 8, 2021 [117].

Based on my supervisor's initial vision and with her guidance, I proposed the PAL framework. I implemented the idea and came up with two examples, (1) handwriting and (2) mass-spring, to demonstrate the concept. Further, I designed the passivity controller and showed how to expand the idea to other domains of physics. I supervised Guanxiong Chan, an undergraduate volunteer who helped with the literature review and WiFi communication. My supervisor made a major contribution to the writing of the paper.

All research involving human participants was reviewed and approved by UBC's Behavioural Research Ethics Board, approval ID H14-01763. This specifically includes the studies reported in Chapters 4 and 5.

This project is the result of collaborative work between my supervisor and me, referred to as "we" throughout this document.

Table of Contents

Abstract . . . iii
Lay Summary . . . iv
Preface . . . v
Table of Contents . . . ix
List of Tables . . . xv
List of Figures . . . xvii
Acknowledgments . . . xxiv
Dedication . . . xxvi
1 Introduction . . . 1
1.1 Design Considerations . . . 4
1.1.1 Pedagogical Considerations . . . 5
1.1.2 School Logistics Considerations . . . 6
1.1.3 Challenges of Haptic Displays . . . 7
1.1.4 Creating Haptically Augmented Learning Environments . . . 8
1.2 Thesis Direction, Rationale and Scope . . .
91.2.1 Thesis Focus and Scope . . . . . . . . . . . . . . . . . . 91.2.2 Design Approach . . . . . . . . . . . . . . . . . . . . . . 91.2.3 Evaluation Approach . . . . . . . . . . . . . . . . . . . . 10ix1.3 Thesis Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . 111.3.1 Objective I – Chapter 3 . . . . . . . . . . . . . . . . . . . 111.3.2 Objective II – Chapter 4 . . . . . . . . . . . . . . . . . . 121.3.3 Objective III – Chapter 5 . . . . . . . . . . . . . . . . . . 121.3.4 Objective IV – Chapter 6 . . . . . . . . . . . . . . . . . . 131.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 Background on Educational Haptic Technology . . . . . . . . . . . . 152.1 Haptic Feedback Overview . . . . . . . . . . . . . . . . . . . . . 162.1.1 Grounding and Tethering . . . . . . . . . . . . . . . . . . 162.1.2 Workspace Scaling Factors . . . . . . . . . . . . . . . . . 172.1.3 Haptic Rendering . . . . . . . . . . . . . . . . . . . . . . 172.1.4 Open-Source Haptic Libraries . . . . . . . . . . . . . . . 182.1.5 Haptic Performance Metrics . . . . . . . . . . . . . . . . 182.1.6 Sharing Haptic Control Authority . . . . . . . . . . . . . 192.1.7 Dimensionality: One-, Two- and Three-Dimensional Ren-dering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202.1.8 Passive versus Active Haptic Feedback . . . . . . . . . . 202.2 2D Technologies with Energetically Passive (Non-Guiding) Mecha-nisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212.2.1 Passive Collaborative Robots (Cobots) . . . . . . . . . . 212.2.2 Brake-Based Haptic Styli . . . . . . . . . . . . . . . . . . 222.2.3 Other Haptically Enabled but Passive Styli . . . . . . . . 222.2.4 Active Force Feedback Technologies . . . . . . . . . . . 232.3 Devices Rendering Texture and Friction . . . . . . . . . . . . . . 242.4 Challenges of Haptic Displays . . . . . . . . . . . . . . . . . . . 262.4.1 The Challenge of Grounded Force Feedback . . . . . . . 262.4.2 The Challenge of Mobility and Large Workspace . . . . . 272.4.3 The Challenge of Accessibility . . . . . . . . . . . . . . . 282.5 Haptics for Designing and Exploring . . . . . . . . . . . . . . . . 282.5.1 Design Approaches: Input Methods, Feedback Modalitiesand CAD features) . . . . . . . . . . . . . . . . . . . . . 282.5.2 Pen-based Sketching Tools . . . . . . . . . . . . . . . . . 30x3 Device Design: Introducing the Ballpoint Drive Mechanism . . . . . 323.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343.3 Approach: Design Considerations . . . . . . . . . . . . . . . . . 353.3.1 Mechatronics and Amenities . . . . . . . . . . . . . . . . 353.3.2 User Experience and Ergonomics . . . . . . . . . . . . . 363.4 Prototype Design . . . . . . . . . . . . . . . . . . . . . . . . . . 363.4.1 Form factors . . . . . . . . . . . . . . . . . . . . . . . . 363.4.2 Ballpoint Drive Mechanism . . . . . . . . . . . . . . . . 383.4.3 Prototype Software Architecture . . . . . . . . . . . . . . 403.5 Force Feedback in a Ballpoint Drive . . . . . . . . . . . . . . . . 403.5.1 Guiding Along a Path: Position and Velocity Trajectory . 413.5.2 Rendering VEs: Constraints, Force Fields and Textures . . 423.6 Design Review . . . . . . . . . . . . . . . . . . . . . . . . . . . 434 PHysically Assisted SKetching . . . . . . . . . . . . . . . . . . . . . 454.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464.2 Introduction . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . 474.2.1 Physically Assisted SKetching with Variable Control . . . 484.2.2 Usage Scenarios . . . . . . . . . . . . . . . . . . . . . . 484.2.3 Contributions . . . . . . . . . . . . . . . . . . . . . . . . 494.3 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494.3.1 Non-Haptic Assistance of Digital and Manual Drawing . . 504.3.2 Haptic Support of Pen-and-Paper Sketching . . . . . . . . 514.3.3 Using Haptics to Facilitate User-System Control Sharing . 524.3.4 Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . 524.4 Phasking Framework . . . . . . . . . . . . . . . . . . . . . . . . 534.4.1 I. Conceptual Activities . . . . . . . . . . . . . . . . . . 534.4.2 II. Core Interaction Concepts . . . . . . . . . . . . . . . . 534.4.3 MagicPen Mechatronics . . . . . . . . . . . . . . . . . . 564.4.4 Implemented Drawing Support Features . . . . . . . . . . 594.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604.5.1 Performance Characterization . . . . . . . . . . . . . . . 61xi4.5.2 User Evaluation . . . . . . . . . . . . . . . . . . . . . . . 644.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694.7 Conclusions and Next Steps . . . . . . . . . . . . . . . . . . . . 705 Benefits of Force Feedback for Collaborative Grounding . . . . . . . 715.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725.2.1 Approach and Contributions . . . . . . . . . . . . . . . . 755.3 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775.3.1 Educational Haptics . . . . . . . . . . . . . . . . . . . . 775.3.2 Haptic Communication . . . . . . . . . . . . . . . . . . . 775.3.3 Collaborative Learning . . . . . . . . . . . . . . . . . . . 785.4 Framework and Tools – Haptic Devices and Learning Environments 795.4.1 Framework for Exploration of Haptic Grounding . . . . . 795.4.2 Haptic Devices . . . . . . . . . . . . . . . . . . . . . . . 805.4.3 Haptic Environments and Associated Learning Tasks . . . 825.4.4 Dyad Participants . . . . . . . . . . . . . . . . . . . . . . 845.4.5 Core Experiment Procedure: One Device/Environment Block 855.4.6 Full Procedure . . . . . . . . . . . . . . . . . . . . . . . 875.4.7 Collected Data . . . . . . . . . . . . . . . . . . . . . . . 875.5 Study 1: Confirming the use of haptics in grounding . . . . . . . . 885.5.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 885.5.2 Analytical Approach . . . . . . . . . . . . . . . . . . . . 895.5.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 915.6 Study 2: Haptic Critical Instance Analysis . . . . . . . . . . . . . 955.6.1 Full Study Design and Protocol Modifications from Study 1 965.6.2 Additional Data Collected . . . . . . . . . . . . . . . . . 975.6.3 Analysing Haptic Critical Instances (hCIs) . . . . . . . . 975.6.4 Quantitative Results . . . . . . . . . . . . . . . . . . . . 1005.6.5 Thematic Categories . . . . . . . . . . . . . . . . . . . . 1055.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085.7.1 Research Questions . . . . . . . . . . . . . . . . . . . . . 109xii5.7.2 Practical Considerations in Designing to Promote HapticGrounding . . . . . . . . . . . . . . . . . . . . . . . . . 1115.8 Conclusions and Future Work . . . . . . . . . . . . . . . . . . . . 1135.8.1 Limitations and Future Work . . . . . . . . . . . . . . . . 
1136 A Framework for Physically Assisted Learning (PAL) . . . . . . . . 1166.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176.2.1 Approach and Contributions . . . . . . . . . . . . . . . . 1206.3 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226.3.1 Adding Physicality to Digital Manipulatives (DMs) via Robots1226.3.2 Relevant Educational Theory and Design Guidelines . . . 1236.4 A Framework for Physically Assisted Learning (PAL) . . . . . . . 1256.4.1 Pedagogical Rationale and Components . . . . . . . . . . 1256.4.2 Principles for Creating Digital Manipulatives . . . . . . . 1276.4.3 Using the PAL Framework . . . . . . . . . . . . . . . . . 1306.4.4 First Step: Need for a Technical Proof-of-Concept . . . . 1316.5 Haptically Linking the Expression and Exploration of an Idea . . . 1326.5.1 Technical Proof-of-Concept Platform: Haply Display andDigital-Pen Stroke Capture . . . . . . . . . . . . . . . . . 1336.5.2 Level 1: Rendering Rigid Surfaces and Tunnels . . . . . . 1356.5.3 Level 2: Drawing and Feeling Dynamic Systems . . . . . 1386.5.4 Level 3: Expanding the Range of Parameter Explorationthrough Passivity Control . . . . . . . . . . . . . . . . . . 1426.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1476.6.1 The PAL Framework, Guidance and Exposed Needs . . . 1476.6.2 Technical Proof-of-Concept . . . . . . . . . . . . . . . . 1496.6.3 Generalizing to Other Physics Environments: a bond graph-Inspired Approach . . . . . . . . . . . . . . . . . . . . . 1506.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1517 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1527.1 Thesis Objectives and Contributions . . . . . . . . . . . . . . . . 152xiii7.1.1 Objective I: Design, Interaction Space, and Applications ofa Low-Cost and Large Workspace Haptic Display . . . . 1537.1.2 Objective II: Phasking and Computer-Aided Design . . . . 1557.1.3 Objective III: Intuitive Learning of STEM with Haptics . . 1577.1.4 Objective IV: Physically assisted learning (PAL) . . . . . 1587.2 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1597.2.1 Limitations in Assessing Quality of Drawing . . . . . . . 1597.2.2 Evaluations are Preliminary . . . . . . . . . . . . . . . . 1607.2.3 Small Library of Learning Activities . . . . . . . . . . . . 1607.3 Future Work: The Path Forward . . . . . . . . . . . . . . . . . . 1617.4 Final Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165A MagicPen Technical Demonstration . . . . . . . . . . . . . . . . . . 188B Phasking Experiment Results . . . . . . . . . . . . . . . . . . . . . . 192B.1 Draw bring constraints : a straight line . . . . . . . . . . . . . . . 193B.2 Draw bring constraints : a rectangle . . . . . . . . . . . . . . . . 194B.3 Draw bring constraints : a rectangle in perspective . . . . . . . . . 196B.4 Draw bring constraints : a circle . . . . . . . . . . . . . . . . . . 198B.5 Bound constraints:lines meeting an invisible line barrier . . . . . . 199B.6 Shared control : a sine wave/invisible line barrier at the center . . 200C Collaborative Grounding Pre-test/Post-test Questions . . . . . . . . 203xivList of TablesTable 3.1 Design Considerations: Ballpoint Drive V1.0 . . . . . . . . . . 44Table 4.1 Core conceptual activities of the phasking framework. . 
. . . . 54Table 4.2 Control steps. . . . . . . . . . . . . . . . . . . . . . . . . . . 58Table 4.3 Phasking primitive descriptions. Each operation begins by touch-ing the corresponding icon on the tool palette. . . . . . . . . . 60Table 4.4 Evaluation tasks, by execution and complexity. . . . . . . . . . 66Table 4.5 Precision of manual drawing vs. phasking. . . . . . . . . . . . 67Table 5.1 Summary of four-part experiment procedure for a single de-vice/environment combinations, for Study 1 (MagicPen) and 2(all three combinations). . . . . . . . . . . . . . . . . . . . . 85Table 5.2 Environments paired with haptic devices . . . . . . . . . . . . 86Table 5.3 Grounding Acts, repeated from "Grounding in Multi-ModalTask-Oriented Collaboration" [48]. 151 grounding acts wereidentified in Study 1. . . . . . . . . . . . . . . . . . . . . . . . 90Table 5.4 Haptic Gestures made with the MagicPen. 130 gestures wereidentified in Study 1. . . . . . . . . . . . . . . . . . . . . . . . 90Table 5.5 Presumed intention behind each haptic gesture (adapted fromCesareni’s "Global Conversational Functions" [30]). 174 pre-sumed intentions were identified in Study 1. Asterisks denotefunctions or gestures we have added to Cesareni’s table to coverall actions demonstrated by our participants. . . . . . . . . . . 91xvTable 5.6 Correlation between the named dimension (rater’s average valuefor all groups for that device) and the learning gain (measuredthrough pre/post test, of individuals). The dimensions in thefirst column are taken without modification from [151]. 1,942cases of collaboration from Study 2 were analyzed; % for eachdimension are listed in first column. . . . . . . . . . . . . . . . 102Table 5.7 Statistical mediation analysis. The impact of some collaborationdimensions (X) on Learning Gain (LG) appeared to be medi-ated by Haptic Critical Instances (hCI), specifically SustainingMutual understanding, Information Pooling and Reaching Con-sensus. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105Table 5.8 Touch in support of collaborative grounding through logicalreasoning and factual evidence . . . . . . . . . . . . . . . . . 114Table 6.1 Summary of research informing the use and benefits of hapticsin learning, organized by the PAL framework’s two activitycomponents. [+] indicates a positive benefit, or [-] no addedvalue was found. . . . . . . . . . . . . . . . . . . . . . . . . 128Table 6.2 Analogy between some conventional physical domains, repro-duced from Borutzky’s [24]. . . . . . . . . . . . . . . . . . . 150xviList of FiguresFigure 1.1 Harold can now feel (not just see) what he draws,from molecu-lar attractions (top) to roadside edges (bottom) . . . . . . . . 5Figure 1.2 Ph.D. big picture overview . . . . . . . . . . . . . . . . . . . 11Figure 2.1 Haptic interaction with virtual environment. Two most studiedapplications of haptics with potential benefit in education. Thefirst group assists users in their sketching and design while thesecond group is used for exploration. . . . . . . . . . . . . . 29Figure 3.1 MagicPen device form factor. . . . . . . . . . . . . . . . . . 33Figure 3.2 Ballpoint drive and sensing in 2D and 3D. Left: Motor-generatedtorques Tm are transmitted to the surface contact ball throughthe gears causing the ball to roll over the surface. The frictionforce Fr between the contact ball and the surface producesmotion in x-y direction. Right: The arrangement of four DC-motors in 3D shows how the Ballpoint drive generates forces in2 directions. . . . . . . . . . . . . . . 
. . . . . . . . . . . . 38Figure 3.3 Ballpoint rectilinear and rotational coordinate systems. . . . . 41xviiFigure 4.1 The steps of a phasking interaction. (a) The user selects the“circle” tool from the Phasking tool palette, (b) then selectsthe centre and one point on the circle to establish a circularbring constraint (dotted line). (c) The MagicPen actively bringsthe user along the circular path with its ball-drive motor, butthe user can modify control sharing by applying pressure tothe pen (Figure 4.9Right). This causes the system to scaledown its constraint force, allowing the user to diverge from thepath. (d) Phasking supports passive constraints as well as fullyunconstrained drawing, enabling the user to quickly sketch outa cartoon character. . . . . . . . . . . . . . . . . . . . . . . 46Figure 4.2 A one-dimensional model of sketching control authority. They-axis denotes strength of a given system’s constraint, active up/ passive down. The x-axis has no meaning. Existing types ofassistance are located approximately in this space, both generic(e.g., a physical ruler) or published works (denoted with *, seeRelated Work). Phasking can fluidly access all points on thisaxis, by bearing down on the pen-tip while drawing. . . . . . 50Figure 4.3 The framework’s (a) bound and (b) bring constraints, and (c)the concept of control sharing, where the user can diverge froma guiding line by bearing down on the pen. . . . . . . . . . . 54Figure 4.4 The MagicPen: an untethered ballpoint drive produces forcesby driving a contact ball over a surface. Friction between theball and surface prevent slip, and provide a “ground” back tothe user’s hand. . . . . . . . . . . . . . . . . . . . . . . . . 56Figure 4.5 MagicPen ballpoint drive design iterations. From left (earliest):(a) Early version with plastic gear between motors and rollers,and rubber connecting ball between roller and surface contactball. (b) Pulley belt connection between motors and rollers, andclock gears with micro cogs between roller and the contact ball.(c) Metal gears between motors and roller, for a lower gear ratioof 1:1. (d) Similar mechanism with a customized higher gearratio of 4:1. . . . . . . . . . . . . . . . . . . . . . . . . . . . 58xviiiFigure 4.6 Paper tool palette, which can be hand-drawn and customized,or printed on a full sheet or slip of paper. . . . . . . . . . . . 61Figure 4.7 Force generation performance. (a) BOSE test setup. (b) Maxoutput force response to PWM voltage pulses. (c) Force-trackingresponse to a slow sinusoid of PWM excitation. . . . . . . . 62Figure 4.8 Position and disturbance tracking. (a) Test setup. The Mag-icPen’s shaft is held in a vice, with roll and pitch disturbancesapplied at the top of the shaft and through the contact ball,respectively. The onboard processor controls the contact balltrajectory using orientation (roll/pitch/yaw) and internal ballmotion. An external trackball measures ball movement relativeto a global reference. To assess performance, we compare errorbetween command and externally measured trajectories. (b)Test 3 (Disturbance Rejection) results shows the system’s re-sponse to rapid yaw (0.5Hz, 35º peak-to-peak over 22 seconds),mimicking significant wrist rotation. . . . . . . . . . . . . . 63Figure 4.9 Human’s aided circle following. (Left) System control: withcontrol-sharing off, we fuse absolute and local position sensingto guide user P1 in drawing a circle. The deviation is a resultof the pen rotating in P1’s hand. 
(Right) With control-sharingon, a user violates a bring circle guide to draw a bear’s face bypressing down on the pen. The color scale indicates appliedpressure (yellow at 100% user control, blue when user hasrelaxed and is letting the system drive). . . . . . . . . . . . . 65Figure 4.10 Users’ Likert scale responses for the MagicPen(N=10, 7 novicesand 3 experts; 7 is positive). . . . . . . . . . . . . . . . . . . 67Figure 5.1 The Blind Men and the Elephant. People who have never seenan elephant try to conceptualize it via touch alone, either joiningor fractured by their different perspectives [101]. . . . . . . . 73Figure 5.2 Three educational haptic devices used in this work. From leftto right: Cellulo [171], Haply [67], and MagicPen[113]. . . . 76xixFigure 5.3 A modified framework based on Baker et al [13]. (Left) Asimplification of Baker’s original model for analysing the effectof a tool on grounding. (Right) In our work, the tool is Haptics,Agents are Learners, and we focus on two impacts (red arrows):a) Learners’ haptics-mediated mutual understanding of eachother; b) Learners’ haptics-mediated understanding of the goal. 79Figure 5.4 Three virtual environments paired with haptic devices. Fromleft to right: Pressure Lab with two Cellulos, Collision andMomentum Lab with One Haply, and Electrostatic Lab withtwo . Gloves and masks were used as part of our institutionallyapproved COVID-19 safety protocol. . . . . . . . . . . . . . 82Figure 5.5 Coding relationship analysis (Study 1). . . . . . . . . . . . . 94Figure 5.6 Coding haptic critical incidents: Left: Coder’s view of screenrecordings of the haptic environments when the learners areperforming the experiments Right: Real-time analytical viewof force behaviour during any given time in the screen recording.The background of the force amplitude plot (upper right) turnspink to indicate a critical instance, identified from the forcelogs during playback of the screen recording of a session. Thisnotifies the coders to make and code the critical moment andanalyse learner behaviour around this timestamp. . . . . . . . 98Figure 5.7 Summary of quantitative results for the three learning environ-ments of Study 2. . . . . . . . . . . . . . . . . . . . . . . . . 101Figure 5.8 Correlations among 9 collaboration dimensions as well as learn-ing gain (first row/column), by learning environment/devicecombination. We computed correlations on the average rat-ing of each dimension across three coders with the addition ofdyad’s learning gain. Cross-system variation seen in the matri-ces exposes how collaboration dynamics varied by system.**. Correlation is significant at the 0.01 level (2-tailed).*. Correlation is significant at the 0.05 level (2-tailed). Weselected a 0.05 threshold for reporting the p value and statisticalsignificance. . . . . . . . . . . . . . . . . . . . . . . . . . . 104xxFigure 6.1 The PAL Framework. Physically Assisted Learning inter-actions haptically adapt stages of experiential learning fromKolb’s [122] general framework, with some added featuresfrom Honey [93]. Hands-on Active Experimentation and Con-crete Experience are most amenable to haptic augmentation,enriching the more purely cognitive Reflective Observation andAbstract Conceptualization. . . . . . . . . . . . . . . . . . . 121Figure 6.2 Technical Setup. Our demonstration platform consists of aHaply force-feedback pantograph, a USB-connected digitalpen, and a host computer. 
The Haply communicates positioninformation to the host computer and receives motor commandsthrough a USB port. A digital pen captures and conveys thesuser’s stroke, along with data opressure, twist, tilt and yaw. . 134Figure 6.3 Technical implementation required to support Design (green)and Explore (blue) learning activities in response to ongoinguser input. Details are explained in Section 6.5.3.(The user’sgraphic from Can Stock Photo, with permission). . . . . . . . 134Figure 6.4 Rendering a haptic wall, using a virtual coupling to achieve bothhigh stiffness and stability. (A) Algorithm schematic. (Upper)In the simplest rendering method, force depends directly on thedistance between the virtual wall and the user’s hand (hapticdevice) as it penetrates the wall: F = K(Xwall−Xuser). (Lower)A virtual coupling establishes an avatar where Xuser would beif we could render a wall of infinite stiffness, and imposes avirtual damped-spring connection between Xuser and Xavatar.(B) Force-displacement behaviour when the wall is renderedas a direct stiffness or through a virtual coupling. The VCused here also uses the maximum K = 10N/cm, and achievesa similar stiffness as when this K value is used on its own.(C) Oscillatory behavior of the conditions from (B). In directrendering, instability increases with K, but with a VC, a highK is as stable as the softest direct-rendered wall. (B) and (C)show data sampled from a Haply device. . . . . . . . . . . . 138xxiFigure 6.5 A teacher prepares a handwriting activity by defining a lettershape m; the learner will then attempt to form the letter with as-sisting guidance. To create the m, the teacher can (A) laser-printa computer-generated graphic on paper, (B) draw it by hand, or(C) manually draw it with haptic assistance. For erasable media,e.g., pencil on paper or marker on whiteboard, the teacher can(D) erase and draw a new exercise. (E) Exploring the m withink marks rendered as virtual walls. . . . . . . . . . . . . . . 139Figure 6.6 Use case: comparing the dynamic behavior of different spring–mass system configurations by drawing then feeling. (A) Theuser sketches a pair of spring–mass systems using a system-readable notation. (B, E) Our system recognizes the user’sstrokes and incorporates them into virtual models. The usercan now “connect” to one of the drawn masses by movingover it and e.g., clicking a user interface button. (C) Behaviorwhen connected to the single-spring configuration (A). Thesystem implements the corresponding model (B) by pinningXavatar to that mass. The user can then feel the oscillatory forcebehaviour by “pulling the mass down,” extending and releasingthe spring. (D) The user connects to the two-parallel-springsconfiguration, and compares its behavior (model E) to the firstone. (F) compared to (C) shows a higher force for the samedisplacement, and a different oscillatory behavior. This systemis implemented using a passivity controller to allow a widerange of M and K values, which are modifiable by hand-writingnew values on the sketch. . . . . . . . . . . . . . . . . . . . 140Figure 6.7 Simulation model of a complete haptic interface system andpassivity controller, as implemented here. (Reproduced from[86], Figure 8. System blocks are (left to right): user, hapticdisplay, passivity controller α, and virtual environment. . . . 144xxiiFigure A.1 The force and velocity behaviour as the user runs the avatar intoa virtual wall. 
The blue line represents the velocity, the orangeline shows the force, and the black line represents the user’strajectory. The gray box represents the regions where the user’savatar is inside the virtual wall. . . . . . . . . . . . . . . . . 189Figure A.2 A user is moving the MagicPen around the corner of a box(gray). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190Figure A.3 A user is moving the MagicPen around a circular wall. . . . . 191Figure B.1 Line with NeoSmart pen. . . . . . . . . . . . . . . . . . . . 193Figure B.2 Line – bring. . . . . . . . . . . . . . . . . . . . . . . . . . . 193Figure B.3 Rectangle with NeoPen . . . . . . . . . . . . . . . . . . . . 194Figure B.4 Rectangle – bring . . . . . . . . . . . . . . . . . . . . . . . . 195Figure B.5 Rectangle in perspective with NeoSmart pen . . . . . . . . . 196Figure B.6 Rectangle in perspective – bring . . . . . . . . . . . . . . . . 197Figure B.7 Circle with NeoSmart pen . . . . . . . . . . . . . . . . . . . 198Figure B.8 Circle – bring . . . . . . . . . . . . . . . . . . . . . . . . . . 199Figure B.9 Line – bound . . . . . . . . . . . . . . . . . . . . . . . . . . 200Figure B.10 A sine wave with no shared control – bound . . . . . . . . . 200Figure B.11 A sine wave with shared control – bound . . . . . . . . . . . 201Figure B.12 A sine wave with no shared control – bring . . . . . . . . . . 201Figure B.13 A sine wave with shared control – bring . . . . . . . . . . . . 202Figure C.1 Electrostatic lab pre-test . . . . . . . . . . . . . . . . . . . . 204Figure C.2 Electrostatic lab post-test . . . . . . . . . . . . . . . . . . . . 205Figure C.3 Momentum and collision lab pre-test . . . . . . . . . . . . . . 206Figure C.4 Momentum and collision lab post-test . . . . . . . . . . . . . 207Figure C.5 Pressure lab pre-test . . . . . . . . . . . . . . . . . . . . . . 208Figure C.6 Pressure lab post-test . . . . . . . . . . . . . . . . . . . . . . 209xxiiiAcknowledgmentsFirst and foremost, I would like to thank my Ph.D. supervisor, Dr. Karon E.MacLean, for all the supports and research mentorship that I received during theseyears. I highly appreciate her patience and encouragement for research excellenceand challenging my work by urging better clarity in explaining my ideas. Herguidance was essential for shaping this work.I am very grateful to my supervisory committee, Dr. Ivan Beschastnikh, Dr.Pierre Dillenbourg, and Dr. Michiel van de Panne, for their valuable feedback. Iwould like to thank my external examiner Dr. J. Edward Colgate, as well as myinternal university examiners, Dr. Jim Little and Dr. Machiel Van der Loos for theirinsightful comments that led to several key improvements of this dissertation.I thank Dr. Wafa Johal for assisting me to receive the Swiss National Centre ofCompetence in Research (NCCR)’s robotics fellowship and spending 4 months atthe CHILI lab at EPFL. I learned many things from her.I like to give special thanks to my student collaborators. In particular, JuliaLindsay, Yuxiang Huang, Guanxiong Chen, and Unma Desai for all our productivediscussions and teamwork. My special thanks to my fellows at the SPIN lab(members and alums) for their supports and valuable input throughout this journey,Dr. Oliver Schnider, Dr. 
Hasti Seifi, Laura Cang, Paul Bucci, Matthew Chun, Salma Kashani, Dilan Ustek, Hanieh Shakeri, Preeti Vyas, Hannah Elbaggari, Rubia Guerra, Devyani McLaren, and Elizabeth Reid.

I thank the MUX group's faculty members and students for all their great feedback on my research and paper drafts. In particular, I thank the faculty members Dr. Robert Xiao, Dr. Joanna McGrenere, Dr. Dongwook Yoon, Dr. Tamara Munzner, and Dr. Kellogg Booth, and the MUX students Dr. Jessalyn Alvina, Dr. Francesco Vitale, Yelim Kim, Taslim Arefin, and Izabelle Janzen.

I would like to thank the CHILI lab members for their hospitality and for my great time with them. In particular, Sina Shahmoradi, Dr. Thibault Lucien Christian Asselborn, Hala Khodr, Dr. Barbara Bruno, Dr. Arzu Güneysu Özgür, Utku Norman, and Jauwairia Nasir.

I want to thank the Natural Sciences and Engineering Research Council of Canada (NSERC), UBC's Institute for Computing, Information and Cognitive Systems (ICICS), and the UBC Designing for People (DFP) Research Cluster for providing facilities and funding for this work. I thank the Swiss National Centre of Competence in Research Robotics for partially funding my visit to the CHILI Lab at EPFL, and in particular Florence Colomb for organizing this visit. I thank all the participants who took part in our experiments.

Lastly, I have immense appreciation and love for my parents, my wife, my siblings, and my friends.

Dedication

This work is dedicated to my loving family, and to all the motivated and curious children with big dreams.

Chapter 1

Introduction

According to Papert [174], education should be more about exploring and engaging, with less emphasis on explaining. As discussed in the pedagogical theory of Constructionism, pupils should be involved in their own process of learning, from designing and constructing meaningful projects and artifacts to exploring them and creating personal experiences. According to this theory, one of the most well-grounded in the domain of educational robotics, learners shape their knowledge based on what they already know, the experiences they gain, and the way they organize these experiences to construct knowledge [152]. In this chapter, we propose an approach for making a useful technology that promotes experiential learning based on Constructionist learning theory, and outline how this dissertation goes about its design and validation. Specifically, we focus on empowering digital manipulatives with the addition of haptics and study their potential benefits.

Physical manipulatives are objects that aid learners in perceiving physics and math concepts by manipulating them. Pattern blocks, coloured chips, and coins are examples of physical manipulatives used in early childhood education to engage learners in hands-on activities. Digital manipulatives are physical objects with computational and communication capabilities that can promote different types of thinking in children by engaging them in playing and building. The history of using digital manipulatives for education dates back to the early 1970s and 1980s in several works from MIT, most notably from the Media Lab's Tangible Media and Epistemology and Learning groups and the Artificial Intelligence Lab. Among them, projects such as Floor Turtle [4], Graphical Logo, LEGO Mindstorms [174], Crickets, and Curlybot [64] introduced engaging environments for developing new approaches to thinking about mathematical concepts, with encouraging results.

Robots are a class of digital manipulatives that use motion, along with other visual or audio cues, to express information.
Children can program robots and therefore observe and experience how defining a set of rules produces predictable robot behaviours. This also gives them the freedom to decide what the robot is, based on how the robot behaves. However, as children grow older, the lack of versatility reduces the effectiveness of these robots in conveying advanced topics. Consequently, as teaching shifts toward more formal methods in later years, they gradually disappear from classrooms.

We propose two exemplary difficulties of fitting this approach into conventional classroom settings, each pointing to a class of considerations.

• School logistics considerations: Teachers, the major stakeholders of education, have needs which are not met. For example, what would be a proper educative tool to help teachers deliver instruction and orchestrate the classroom more efficiently?

• Tool versatility: As more advanced topics are introduced to learners, software and hardware limitations make these robots less effective. We can name two main reasons behind this shortcoming: (a) the tool does not permit learners to actively design, make, and change their learning environment based on their hypotheses (the user's expressivity), and (b) it imposes a complicated level of interpretation to relate the robot's motion to the subject being taught (the tool's expressivity).

Even though these robots have not found their way into schools, that does not make them inferior digital manipulatives. In fact, there is clear evidence from school camps and workshops that children learn several advanced topics just by playing with them [162]. One method these robots use to improve the learning process is to help learners imagine themselves as the robot and then reason about the robot's motions based on their own commands. Accordingly, if these commands are given to solve a mathematical problem, learners gain an intuitive way of understanding the math concept by observing the robot's movements and reasoning about them [193]. Often, learners write code to communicate with robots; thus learners naturally get involved in systematic thinking and learn key principles of computer science [229].
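To make this concrete, consider the classic Logo-style activity of commanding a robot to trace shapes. The snippet below is only an illustrative sketch: the ToyRobot class and its forward/turn commands are hypothetical stand-ins for whatever motion API a given educational robot exposes, not the interface of any platform named above. The point is that a short, repeatable rule set encodes a geometric idea, from a square to a circle built from many small segments, which the learner can then watch the robot enact.

```python
class ToyRobot:
    """Minimal stand-in for an educational robot; the forward/turn
    command names here are hypothetical, not a real platform's API."""
    def __init__(self):
        self.log = []  # record of issued motion commands

    def forward(self, distance_cm):
        self.log.append(("forward", distance_cm))

    def turn(self, degrees):
        self.log.append(("turn", degrees))


def draw_square(robot, side_cm=10):
    # Four repetitions of one rule close the shape: the learner sees
    # how a repeated 90-degree turn embodies the square's geometry.
    for _ in range(4):
        robot.forward(side_cm)
        robot.turn(90)


def draw_circle(robot, circumference_cm=30, segments=36):
    # A circle emerges from many small identical steps, the same insight
    # the child in the Curlybot anecdote (Section 1.1.1) reached by hand.
    for _ in range(segments):
        robot.forward(circumference_cm / segments)
        robot.turn(360 / segments)


if __name__ == "__main__":
    bot = ToyRobot()
    draw_square(bot)
    draw_circle(bot)
    print(len(bot.log), "motion commands issued")
```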
Despite the possible advantages of using robots to teach computer literacy, there is little conclusive evidence of transferable learning from practicing computer programming with robots to learning other fields of science such as physics and math [73]. Moreover, robot movements can even be misleading in some learning scenarios, particularly when there is no direct link between the movements and the information that needs to be delivered. Both cases optimistically expect students either to apply the skills learned with robots to other scenarios (e.g., using problem-solving strategies such as divide-and-conquer in math problems) or to use their imagination to find the analogy between the robots' movements and physics or math concepts. In reality, this gap is often so large that learners cannot find the connection between the activity with the robots and what they are supposed to learn.

We posit that combining visual and haptic (sense of touch) cues can potentially address this deficit. Visual cues give us an environment that directly matches the learning concepts. With a haptically augmented manipulative, haptic cues build on the visual input available from a robot's motion as the user manipulates the virtual environment by moving and interacting with it through the handheld robot. This can provide complementary information about a represented concept, reduce the cognitive load of unimodal input, and generally make the manipulative more expressive than one which works through vision alone as it is used to explore advanced STEM concepts. The combination of these two modalities reduces the risk of misinterpretation.

In this dissertation, we make three observations, the joint addressing of which provides a unique opportunity in this as well as some other fields.

1. Assisted sketching can improve fluid expression of ideas. Proper assistive force feedback to a user's pen while sketching can help them manifest and communicate their ideas to other people and to a computer.

2. Exploring the role of force feedback in learning, in a human-centered design context, could lead to understanding the best strategies for employing it in learning advanced topics in physics and math.

3. The addition of haptic feedback to a digital manipulative (beyond, for example, the ability to program motion alone) can potentially support more compelling interpretations, so that learners can predict and reason about an outcome based on what they see and what they feel. As a result, we can exploit this design space to empower learners to actively design, make, and change their learning environment based on their own hypotheses.

Countering these opportunities, we also observe the difficulties of incorporating haptic feedback into a digital manipulative suitable for a classroom environment. First, we could not find an existing mechanism that supports low-cost, untethered haptic feedback for curricular activities; without one, such a device would be physically impossible to build. Second, supposing it existed, such a device would be difficult to adopt if it were special-purpose; school logistics and resource limits constrain classroom technology to tools that can support many learning scenarios. Consideration of these practical obstacles has informed many of the decisions in this dissertation.

Inspired in part by the vision portrayed in Harold and the Purple Crayon, Crockett Johnson's magical 1950s book about a small boy who can draw objects and scenes that come to life [103], we envision a tool that enables learners, like Harold, to diagram a physical system and then explore it (Figure 1.1).

1.1 Design Considerations

There exist multi-purpose commercial haptic displays such as the Phantom (now Geomagic [1]), the Novint Falcon (Novint Technologies, 2010; discontinued but currently produced by a different vendor, 2019 [87]), and the Force Dimension Omega displays (Force Dimension, 2017 [61]) that suit many applications; however, these devices are often expensive and not accessible in educational settings, even as educational toys. Knowing the design considerations tailored to a specific application enables a more targeted design of haptic displays and consequently reduces cost. Here, we present a list of design considerations collected from the literature and our group's previous experience in designing an educational haptic device.

Figure 1.1: Harold can now feel (not just see) what he draws, from molecular attractions (top) to roadside edges (bottom).

1.1.1 Pedagogical Considerations

As we sought to understand how to extend the benefits of digital manipulatives to the learning of more advanced topics, we found traction in two visions based on early works by Seymour Papert and his colleagues at the MIT Media Lab.
His two visions include:

• aiding learners to expeditiously express their thoughts by facilitating communication of ideas between human and computer and between human and human, and

• supporting exploration of different domains of knowledge, such as building a model of the world or of abstract concepts, by making them more accessible or approachable in new ways, thereby increasing the possibility that learners use this medium to perform interiorized actions on the object of knowledge [180].

Thus, we set out to implement these two visions in the form of a digital manipulative that functions as an "object-to-think-with" [174]. We clarify what we mean by an "object-to-think-with" with an example of a girl interacting with an educational robot called Curlybot [64]:

"... It usually was not possible to have children perform specific tasks given the informal environment of the study. However, there was one seven-year-old girl that played with Curlybot for an extended period of time and accepted our challenge to create a few geometric shapes out of their most basic elements. We found that she needed us to provide an example before being able to create the shapes on her own. We showed her how to create a square and let her try it on her own. When we asked her to create a circle, she started by designing it with large arcs. She needed additional help to understand that a circle could be created from a small segment. Later on, the same girl came back, and asked if she could try a shape she had been thinking about. We were pleased to see that she continued to process her new knowledge about shapes even outside the play area. Curlybot appears to have become an object-to-think-with for her."

This example highlights the learner's experience model, from becoming aware of how to use the robot, to taking action, to maintaining these actions even without access to the robot. What Curlybot does is open up new opportunities by giving learners the means to approach new problems based on their own experiences. We explain these two visions in detail in Chapter 6.

1.1.2 School Logistics Considerations

Ozgur et al. [171] propose a list of requirements for a useful educational platform in a class setting. From a practical standpoint, the platform should be low-cost, robust, and reliable. The limited class hour (typically around 45-50 minutes) requires an educational tool that supports uninterrupted learning scenarios and minimal calibration and initialization time, while showing enough learning gain to justify the effort of designing a learning activity with the tool. As such, we should carefully consider the user experience of both learners and the teacher.

Other considerations, drawn from some of our own experiences with classroom work, include extreme limits on teacher time (both for studies and later deployment) and on teachers' technological expertise or ability to prioritize such time as part of their job. On the technology side, we need to be concerned about justifying the expense of sole-use technology, deployment practicalities like batteries and power cords, and the sheer difficulty of students determining when a device is behaving correctly, all while delivering encouraging learning gains and engagement right out of the gate in order to justify the extra workload [170].

This highlights a larger issue: that of validating learning benefit in situations where there are countless variables and controlled studies are not possible.
For this reason, many studies take a qualitative approach and look for ways in which the haptic modality changes student strategies, collaboration style, engagement and interest, or the type of questions asked [44].

1.1.3 Challenges of Haptic Displays

Making meaningful haptic feedback is challenging. Vibrotactile feedback often falls short of producing the desired haptic percepts, so haptic designers turn to more sophisticated force-feedback displays. Force-feedback devices, however, have traditionally been costly and difficult to use, and therefore inaccessible for many applications; educational haptic devices are no exception. Below, we give an example of these challenges in practice.

An education researcher asked our group for a large-workspace haptic device to study the role of bodily gestures in understanding salient features of graphs. We could not offer them any solution at the time, as haptic devices were not ready to be employed in real classroom settings due to hardware and software limitations. Most haptic devices need to be anchored to a base in order to transfer reaction forces to the ground via mechanical chains of links (arms) or cables. This static grounding constrains mobility and restricts them to a small working area, and increasing the workspace requires expensive, powerful motors and sturdy linkages. Consequently, our options were either to invest in expensive devices or be limited by the need for grounding.

1.1.4 Creating Haptically Augmented Learning Environments

A multitude of haptic libraries exist to support designers in developing haptic interactions with a virtual environment for a given haptic technology. However, designing even a simple environment using a 2D haptic library requires some basic knowledge of programming as well as of physics. The teacher, the student, or both may not possess this knowledge or be confident enough to write programs in a classroom setting. Even when a teacher is a technology enthusiast, the uncertainty of predicting learning benefit relative to a large time investment is an understandable barrier. This underscores a broad need for more usable, accessible tools for haptic experience design, a need that goes well beyond accessible technology itself.
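To illustrate the kind of programming this entails, the sketch below shows roughly what a minimal 2D haptic environment looks like when written against a generic force-feedback API: even a single virtual wall requires the author to manage a high-rate loop, a device handle, and a spring-law force model. The `device` object and its `read_position`/`command_force` calls are hypothetical stand-ins, not the interface of any particular library or device discussed in this thesis.

```python
import time

STIFFNESS = 200.0   # N/m: spring constant of the virtual wall
WALL_Y = 0.05       # m: wall located 5 cm into the workspace
RATE_HZ = 1000      # haptic loops typically run near 1 kHz

def wall_force(position):
    # Penalty-based rendering: push back in proportion to penetration depth.
    x, y = position
    penetration = y - WALL_Y
    if penetration > 0:
        return (0.0, -STIFFNESS * penetration)
    return (0.0, 0.0)

def haptic_loop(device):
    # 'device' is a hypothetical handle exposing read_position()/command_force().
    period = 1.0 / RATE_HZ
    while device.is_running():
        pos = device.read_position()           # metres, workspace coordinates
        device.command_force(wall_force(pos))  # newtons
        time.sleep(period)                     # placeholder for real-time scheduling
```

Writing, tuning (for instance, choosing a stiffness that remains stable), and debugging even this much is a real barrier for a teacher preparing a single 45-minute lesson, which is the gap the accessible authoring tools discussed above would need to close.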
Roadmap: We will explain the challenges of haptic displays for education in Chapter 2, and our effort to design a new mechanism that addresses these challenges in Chapter 3. We will present our two studies on educational applications of the new device in Chapters 4 and 5. In Chapter 6, we will discuss our method of creating a haptic experience just by drawing it. Finally, in Chapter 7, we summarize the results and provide conclusions as well as directions for future research.

1.2 Thesis Direction, Rationale and Scope

1.2.1 Thesis Focus and Scope

The prior consideration, i.e., the lack of appropriate haptic devices discussed above, led this PhD research to focus on making a new device that can provide force feedback during a user's pen-and-paper interaction. We study and define a framework for this type of interaction and investigate how well it works in terms of the considerations above. We explore the best strategies for employing force feedback in learning activities using three educational haptic devices, including the one we invented. The use of multiple devices makes the results more generalizable and reduces the risk of failure due to each device's varying quality of haptic rendering. This dissertation does not cover studies of learning efficacy, nor does it evaluate whether this hardware approach will be more suitable in a classroom context.

1.2.2 Design Approach

From the wide range of digital manipulatives for education, we decided to explore the intersection of two main streams of research, (a) haptic devices and (b) pen-and-paper interactions, with a primary focus on STEM applications. Although several devices have been designed specifically for each of these domains, only a few support both haptic feedback and pen-and-paper interaction simultaneously.

Our options were either to invest in high-quality but expensive haptic displays on which to implement and test our ideas, knowing the drawbacks that their small workspace could impose on our research findings, or to design our own haptic display based on the design considerations mentioned previously. Because of my engineering background, we chose the second option. Therefore, a major activity throughout this dissertation research was to develop the hardware and improve its capabilities.

We decided to make 2D haptic interfaces that can support generic interaction with content found in typical curricula and conceptual learning, and that are ultimately amenable to commercial development. The ability to create and feel virtual constraints while generating smooth curves and straight lines on life-sized 2D surfaces could make them more efficient and natural to use. Or, like Harold, a learner could diagram a physical system and then explore it. We believe it is necessary that learners build the environments they will feel. Current educative haptic platforms must be programmed by hapticians rather than learners, which can result in a passive experimental environment as opposed to the creation-based premise of digital manipulatives.

1.2.3 Evaluation Approach

We evaluated how a user can express their ideas more fluidly. We established the core interactions through which physically assisted sketching helps users project their ideas. As part of our assessment, we measured how force feedback can improve drawing accuracy. Moreover, we investigated how users manage their authority over a drawing when given control over the amount of aid they receive from the computer.

To test the benefits of haptics for exploring the learning environments we developed, we used three different haptic devices (the one developed within this dissertation's scope as well as two others, together exhibiting a diverse set of characteristics), each paired with a learning environment. We studied the benefits of haptics in a collaborative learning context, which let us explore what strategies learners take to achieve mutual understanding with this tool and, in turn, shed light on the best ways to use haptics in learning scenarios.

Figure 1.2: Ph.D. big picture overview

1.3 Thesis Objectives

In this section, we define the thesis objectives and describe the specific mechanism and framework we used to reach each objective (see Figure 1.2).

1.3.1 Objective I – Chapter 3

We created a functional platform to explore the concept of drawing something and then feeling it.

• Guiding question: How can we create a low-cost, large-workspace force feedback device? What types of new interactions can we support with it?
Whatare the potential educational applications of this platform?• Approach: Explore the design considerations including the mechatronics,user experience, and ergonomics requirements to design a pen-based forcefeedback device and evolve it through a series of iterations.• Achievements: We designed two form factors of a handheld force feedback11device: “stylus-based” and “pen add-on”. We ran mechanical tests to ensurethat they meet the primitive haptic device requirements. We started withuser studies that required the stylus form factor due to the simplicity of thedesign compared to the add-on form factor. We assessed the quality of hapticrendering with the “stylus-based” form factor and the drawing with the “penadd-on”.1.3.2 Objective II – Chapter 4We investigated how we can help users to fluidly express their ideas. We formalizedour inquiry by establishing a conceptual framework for physically assisted sketching(Phasking) which eventually informed our evaluation.• Guiding question: What are the key interaction concepts for physicallyassisted sketching (phasking), and how can we support them with our forcefeedback pen?• Approach: Explore how physical assistance from a haptic display can im-prove the user’s expressivity in pre-defined drawing tasks. We establishedthe framework of physically assisted sketching by identifying a primitiveset of interaction concepts. We use this framework as a base for our userevaluation. A user explicitly sets the drawing commands (e.g., drawing acircle) and receives the proper force feedback. Later, we reflected on thedrawing performance and the unique opportunities, which our system bringsto the student’s physical drawing.• Achievements: We used an untethered force feedback pen to give real-timeforce feedback. A user could use the command palette to choose amongthe functions that he/she needs the assist for drawing. We used informativemetrics related to accuracy and shared control and authority to asses theperformance of the haptic device.1.3.3 Objective III – Chapter 5We studied the importance of sense of touch in experiential learning; and uncoveredwhich haptic strategies will be most useful in collaborative learning.12• Guiding question: How can we improve the versatility of the device throughforce feedback- the capacity to express information to users through theaddition of haptics?• Approach: Study how learners can utilize force feedback in the context of acollaborative learning framework, in a manner that is generalizable beyondthe device we invented and lends insight into the utility of different kindsof haptic capabilities. Within the collaboration, we specifically look at theprocess by which two learners achieve a mutual understanding of the concept.Studying with learner pairs allows us to leverage existing theories of stagesof collaborative learning, and more easily relate our observations to findingsusing other kinds of learning technology.• Achievements: We deployed a variety of force feedback devices for a varietyof science concepts, each paired with different interactive learning environ-ments. 
We gained insight into the different approaches and strategies thatstudents take and how the platforms supported or hindered them by means ofqualitative methods applied to student pairs.1.3.4 Objective IV – Chapter 6Here we propose an approach to connect the activities in design (Objective II) andexploration (Objective III) to achieve a smooth transition between these two stagesin learning.• Guiding question: What are the key haptic interactions that can supportlearners throughout different stages of experiential learning cycle?• Approach: Propose a theoretical framework to support physically assistedlearning in two stages of experiential learning namely Active Experimentationand Concrete Experience. Discuss the types of haptic interactions in eachstage and explore the need to link these stages together to create a smoothand natural transition between them.• Achievements: We implemented and technically evaluated the key hapticinteractions such as rendering a wall and Mass-Spring examples to validate13the feasibility and performance of the concept using available haptic anddigital pen technologies then proposed a path forward to more generalizablelearning interactions in other domains of Physics.1.4 ContributionsOur educational contribution consists of the design, implementation, and evalua-tion of a new haptic digital manipulative that serves learners in their design andexploration of learning environments. We achieve this by pursuing the mentionedobjectives (I-IV). Besides the educational contribution, we present three thesis-levelcontributions, which impact the research in areas other than education.1. We introduce the Ballpoint drive, a mechanism that can provide force feed-back through the friction of a rolling ball on a surface. Our approach cir-cumvents the current workspace limitation of 2D haptic displays by offeringan infinite 2D workspace. This contribution is directed towards the hapticsengineering community Contribution I.2. We demonstrate how haptics guides users in their physical drawing. Wecreate a platform that supports computer-assisted design in physical sketchesas well as digital twinning where it is possible to trace the physical content onthe screen and being able to apply changes and bring them back on the paperthrough guided drawing. Our platform also creates a medium for smoothcollaboration between human and computer. This contribution is towards thedesign community Contribution II.3. We study the added value of haptics in collaborative learning. We find valuesin assessing the use of haptics in a collaborative context and investigate howdifferent forms of force feedback impact the dynamics of collaboration. Weidentify successful strategies of using haptics in peers’ collaboration. Thiscontribution informs the research in both fields of educational haptics andcollaborative learning Contribution III.14Chapter 2Background on EducationalHaptic TechnologyPreface – In Chapter 1, we discussed the challenges of haptic displays as well asdifficulties on the way of designers who are creating haptically augmented learningenvironments. An overview of important topics in the field of haptics will helpus to better understand these challenges. In this chapter, we provide the relevanttechnical background for this dissertation. We begin with two classes of hapticfeedback and focus on the force feedback’s important terminologies as it is thepivotal point of this dissertation. We explain the haptic performance metrics as thebasis for the validation of any introduced haptic display. 
Next, we discuss haptic rendering techniques used to enhance the sensory experience, and the available haptic libraries. This background enables us to discuss the challenges of haptic displays in more depth, along with the related research that tries to solve them. We close this chapter with a survey of educational haptic displays for designing and exploring.

To avoid repetition, this overview covers the haptic technology related work common to Chapters 4-6. The related work sections of each of these chapters will focus on material relevant to the specific educational application or topic it covers.

2.1 Haptic Feedback Overview

At the most general level, haptic feedback can take two forms: tactile (skin sensations like vibration, pressure and temperature) and forces (variously called force feedback, kinesthetic display, proprioceptive feedback, etc.) [107]. Tactile feedback can competently render textures, temperatures and material properties [33, 120], but is less able to display interaction forces, e.g., compliance, inertia and mass, and mechanism dynamics [169]. Thus, when these properties are important, force feedback rendering is often more effective (informative, believable and functional in task completion) than tactile feedback can be.

2.1.1 Grounding and Tethering

Force feedback requires something to push against (i.e., a reactive ground), and thus is easiest to provide as a world-grounded robot arm (e.g., mounted on a desk) [6, 29, 150, 210]. This is known as grounded force feedback.

Another concept relevant to haptic feedback and to this work is tethering. Tethering is in essence a constraint on mobility, a consequence of either wires and cords (e.g., to supply power or data communication) or mechanical linkages (e.g., for the user's body to interact with a grounded device through a mechanism whose workspace is small relative to the desired environment space). Thus, tethering of a device to a restricted anchor point can be a direct consequence of grounding, but it can also arise from other practical constraints. Devices which are untethered are typically freed of one or more of these constraints by one or more of the following approaches: (a) being ungrounded (i.e., unable to provide force feedback); (b) utilizing a body-mounted force ground; (c) using a body-worn battery for power; (d) using either wireless communication for the data connection to a computer controller, or carrying the computer on the body (usually the computer is then also wirelessly connected to external infrastructure).

However, to be fully untethered, the device must in some manner be free of all tethers needed for power, data and force transmission.

2.1.2 Workspace Scaling Factors

As described above, all force feedback devices require a physical ground of some kind. Most world-grounded devices must be anchored to a base in order to transfer reaction forces to a ground other than the user's own body, generally via mechanical chains of links (arms) or cables. To achieve an arm-scale workspace, a force feedback device generally must use large motors, sturdy linkages and high-resolution sensors, all of which increase the weight, size and cost of the device.

Moreover, the device's minimum achievable impedance, an important haptic performance measure which relates to the kind of environments it is able to render, depends on mechanical properties of the device such as masses in the structure, friction, and the inertia of the gearing system and actuator.
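A common way to make this precise (notation ours, not a formula from the cited works) is to treat the device at its end-effector as a mechanical impedance, the ratio of the force it presents to the velocity it permits:

\[
Z(\omega) = \frac{F(\omega)}{v(\omega)} .
\]

A device is then bracketed by the lowest impedance it exhibits with its motors off (set by its own inertia and friction) and the highest impedance it can stably render (its stiffest convincing virtual wall).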
As workspace size goes up, impedance range (the difference between the minimum and maximum impedance that the device can render) becomes smaller for two reasons. First, the minimum renderable impedance increases due to the inertia of the links and actuators. Second, the maximum renderable impedance decreases as a result of larger position errors caused by longer, more compliant linkages [15, 239]. Thus, it is difficult to make large-workspace force feedback devices using the conventional method of world-grounding, the typical approach when systems must provide reactive forces to the user.

2.1.3 Haptic Rendering

Force feedback has long been shown to enhance sensory experience and convey cognitive information when used to provide corresponding force information during interaction with a virtual environment [158, 169, 214]. The basic approach for spatial force feedback in a virtual model (one way in which force feedback can be utilized) is for the user to move an end-effector of some kind (e.g., the tip of a held stylus, or a bare finger sliding on a surface) while its position is sensed. A reaction force consistent with that spatial point in the virtual model is computed and applied to the user's body part that is connected (directly or indirectly) with the virtual surface [169]. One important kind of virtual feature that is useful to render haptically is a virtual fixture: a line or constraint in a virtual model which the user should not be able to cross, but can use as a guide [194].

2.1.4 Open-Source Haptic Libraries

Haptic libraries cover at least two main categories of functions: rendering haptic behaviors, and connecting the haptic interface modality to other parts of the system and experience, be it an underlying virtual model, graphics and/or sound engines and displays, other forms of user input and control, and in some cases interaction over a network with other users and entities. While some have been associated with a specific product, most attempt to generalize support to at least a significant class of devices (e.g., CHAI3D [41], hAPI [66]).

Some haptics libraries support advanced rendering of complex deformations and collisions, both haptically and graphically, for sophisticated environments such as surgical training simulations. For educational contexts, we often do not need such complexity. In contrast, for student-oriented online physics learning materials it is common to see the physical behaviour of an object presented simply, via a free-body diagram and an illustration of applied forces (e.g., [178]).

On gaming platforms, developers use game engines to simulate the behaviour of rigid bodies in a virtual world as procedural animations which move realistically and can be interacted with inside the game world. Hapticians have exploited game engines for their virtual environment modeling, getting graphic display for free and driving haptic output from the VE simulation; this obviates the need to build or access another physics library for haptic rendering.
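To illustrate this pattern without reference to any particular engine, the sketch below couples the measured device position to a simulated proxy through a spring-damper and hands the proxy to a toy "physics step" (a 1-D point mass stopped by a rigid wall); the reaction of that coupling is what the device renders. Everything here, from the constants to the function names, is an illustrative assumption rather than the API of hAPI, Fisica, or any other library.

```python
import numpy as np

# Virtual-coupling sketch: a physics-simulated proxy follows the device
# through a spring-damper; the coupling's reaction force is what the user
# feels. All names and constants are illustrative.

DT = 0.001           # s, one haptic/physics tick (1 kHz)
MASS = 0.05          # kg, simulated proxy mass
K_COUPLE = 500.0     # N/m, coupling spring
B_COUPLE = 2.0       # N*s/m, coupling damper
WALL_X = 0.05        # m, a rigid virtual wall at x = 5 cm

proxy_x, proxy_v = 0.0, 0.0

def physics_step(x, v, f_ext):
    """One explicit-Euler step of the stand-in 'engine': integrate the proxy,
    then resolve collision by keeping it on the free side of the wall."""
    v += (f_ext / MASS) * DT
    x += v * DT
    if x > WALL_X:
        x, v = WALL_X, min(v, 0.0)
    return x, v

def haptic_update(device_x):
    """Advance the proxy one tick and return the force to command on the device."""
    global proxy_x, proxy_v
    f_couple = K_COUPLE * (device_x - proxy_x) - B_COUPLE * proxy_v
    proxy_x, proxy_v = physics_step(proxy_x, proxy_v, f_couple)
    return -f_couple  # the user feels the reaction of the coupling

# The hand sweeps 1 cm past the wall; the commanded force pushes back.
for device_x in np.linspace(0.0, 0.06, 61):
    f = haptic_update(device_x)
print(f"force commanded at x = 0.06 m: {f:+.1f} N")
```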
The hAPI library, for example, takes exactly this approach: it wraps the 2D physics simulation library Fisica and turns it into a haptic engine for educational purposes [66].

2.1.5 Haptic Performance Metrics

An early article on haptic displays for remote interaction with physical or virtual environments suggests three key qualitative requirements for a haptic display, which are still generally accepted as both important and challenging to achieve, particularly while meeting other requirements like low cost, increased workspace size or portability [150]. They can be expressed as primary performance metrics:

1. Free space must feel free. This freeness is obtained by minimizing the overall electromechanical impedance when the actuators are turned off, i.e., portraying free motion. Impedance can come from many sources, including actuator non-backdrivability, and friction or backlash throughout the drive train. For example, the easiest way to achieve high forces is through a large gear ratio, but this leads to high motor inertia when the device is backdriven, violating this requirement.

2. Solid objects must feel stiff. The rendering of walls and other solid objects requires both at least moderately large forces, and rendering stability at a point of nonlinearity (the crossing of a virtual line between free space and a hard virtual surface). The latter requires high temporal and spatial resolution relative to the speed of the end-effector's movement. Beyond this, to be considered high fidelity, a device must be able to faithfully render different levels of stiffness between none and high.

3. Virtual constraints should not be easily saturated. Apart from the portrayal of gradations of stiffness (previous item), the issue of saturation is about total force magnitude. When the user hits a virtual wall or slides along it with a reasonable interaction pressure, the device must be able to respond with sufficient force that the user cannot overcome it.

To compare haptic displays, we can consider quantified parameters of the above general statements (i.e., actual free-space impedance, rendered stiffness range and force saturation), but also secondary performance measures. The latter include, for example, peak force, resolution and precision, workspace size, degrees of freedom, bandwidth, rendering impedance bandwidth, power density, and peak acceleration [89, 201].

2.1.6 Sharing Haptic Control Authority

Early studies have suggested that haptic shared control can benefit the performance (speed and accuracy) of human-robot collaborative tasks, as well as lower the need for visual involvement and the level of control effort [2, 3]. This seamless collaboration lets users take full control authority or, conversely, shift it smoothly to the system.

In automation applications, haptic displays can be incorporated into manual control interfaces when machine-human shared control is needed. Users receive relevant information continuously through their sense of touch, and can decide to either conform or override to regain control [77].

2.1.7 Dimensionality: One-, Two- and Three-Dimensional Rendering

A great deal of effort has been devoted to the creation of force feedback devices that can render forces in three dimensions (3D); e.g., as catalogued at http://www.haptipedia.org. The need for grounding constricts mobility; devices with large workspaces are complex and costly. Wearable devices capable of rendering forces to, e.g., the hands and fingers in body-referent frames can be untethered, with a less constrained workspace.
However, force feedback variants are currently cumbersome, complexdevices that provide grounding by pushing against another part of the user’s body.Tactile (e.g., vibration) feedback can be easily provided in an ungrounded, untetheredform, but is limited in what it can render [213].Two-dimensional (2D) haptics typically provide feedback in a plane. This hasbeen done in many ways, with both force feedback and tactile display; and withboth a held or touched mechanism (e.g., a pantograph mechanism driven by twoactuators [186]) or with direct touch (bare finger sliding on a surface that is actuatedin some way). Examples of various approaches that have been taken to achieve 2Dhaptics are described below, together with brief examples.Single degree-of-freedom (1D) devices can be potentially low cost and simple[72, 146, 149], and are well suited to many interface specifications. However, theyare limited in the types of systems they can represent. When built at extremely lowcost, as with many other technologies, rendering fidelity as well as workspace sizemay suffer.2.1.8 Passive versus Active Haptic FeedbackEnergetically passive haptic devices cannot supply energy to a haptic interaction,but can only dissipate energy that is supplied by the user, e.g., through the user’sown movement. A brake that is under electronic control is an example of a passivehaptic display: it can stop or slow the user’s movement, but cannot actively guidethe user along a trajectory, or simulate storage and release of energy in an elasticmaterial. This characteristic limits the scope of what a passive display can render,20but its passivity confers advantages of guaranteed stability.Active haptic feedback devices, on the other hand, can render a broad range ofhaptic features, and can also supply active guidance. In guidance, the user’s handcan relax while guidance forces draws it along a trajectory or to a specific point ofinterest.2.2 2D Technologies with Energetically Passive(Non-Guiding) Mechanisms2.2.1 Passive Collaborative Robots (Cobots)In 1996, Colgate et al. introduced Collaborative robots (Cobots). Cobots areenergetically passive devices, meaning they are unable to apply forces to the user[40, 228]. They operate within a multi-wheel (e.g., tricycle) mechanism, which canbe moved around under operator control with no resistance (an actively steered,but passively driven single wheel design). Then, to render or enforce a constraint(e.g., a virtual wall), the appropriate wheel turns sideways and blocks movementin that direction while allowing continued motion in other directions. This class ofdevices has been used to safely provide constraints in material and parts handling inautomobile assembly. The Cobot mechanism is suitable for passively presentingsmooth, hard virtual surfaces, and can be integrated with path planning ability.However, it is less suitable for other haptic effects such as texture, friction andcompliance. Finally, as a passive device, is unable to provide active guidance forces.Twenty years afterwards in 2016, Price and Sup developed Haptic Robot [183] asa hand-size passive force feedback device with a mouse form-factor, which exploitsa Cobot unicycle mechanism (motor drives a caster to change the axis of rotation)to create haptic force feedback. In essence, this device takes the Cobot robot idea,which was designed for materials handling in manufacturing environments, andconfigures it as a haptic feedback instrument. 
In this mechanism, a wall collision isrendered when the motor puts the caster in a path that can only steer tangent to thewall. The device can present definitive constraints (resistive force of more than 10N; but like the Cobot, it cannot create active force feedback or guide users.212.2.2 Brake-Based Haptic StyliInvented in 1888, the ballpoint pen is still a ubiquitous writing implement due to itssimple but functional mechanism. A number of later efforts, illustrated in the patentliterature, have found ways to actuate or brake the rolling ball, albeit generally ata far larger scale and for different purposes. For example many following patentssuggest new ideas of how to control the speed of the rotating ball by means ofelectromagnetic or mechanical brakes. They are all dissipative in nature, i.e.,energetically passive.For instance, US Pat. No 7,508,382 B2 [128] presents a force feedback styluswith an electromagnetic actuator and rotating ball. US Pat. No 7,265,750 B2 [195]describes a configuration where a solenoid is embedded inside of a stylus whichmoves to change the pressure on the rotating ball, preventing it from free rotation;this is essentially a force feedback interface using a mechanical brake to hinder thefree rotation of the contact ball. The ball is placed between two supports, whilea coiled actuator moves the braking pad and housing to increase the resistance tothe motion of the ball under electronic command. A version of this system with ahaptuator (an actuator specially designed for haptic applications) was designed torender roughness on 3D objects [43].There exist other haptic styli with dynamic resistance which use different mech-anisms to control the friction and the speed of the rolling contact ball; e.g., US8,681,130 B2 [7] and US 8,619,064 B2 [121]. US Pat. No 9,116,560 B1 [81]describes a touch pen with a similar braking mechanism, but uses multiple hapticballs to improve haptic feedback and accuracy.None of these friction-based mechanisms are designed to provide active forcefeedback to the user: specifically, there is no mechanical driver to roll the contactball.2.2.3 Other Haptically Enabled but Passive StyliThere are examples of haptic styli that use friction modulation to provide a slightlyrough pen-on-paper sensation on glassy display surfaces. Advantages includegreater controllability and a lower pressure requirement while writing. The frictionbetween stylus and surface depends only on the user-applied pressure and can be22controlled externally [132].I-draw, 2014 [59] presents the idea of adding an attachment to any stylus-shapeobject to add force feedback. It suggests a Cobot-like mechanism using a smallcaster (controlled by a motor) and three rolling balls. This design has the usualCobot advantages and limitations, and at the same time, the challenge of dexterity.Using one wheel, I-draw can only resist movements perpendicular to its wheel.As a result, it only renders 1-D passive constraints at a time. Eventually, it facesdifficulties in rendering sharp corners and can’t stop the user’s movements in 2-D.2.2.4 Active Force Feedback TechnologiesActive Force Feedback through Robotic Arms: Serial and Parallel LinkagesRobotic arms are among the most known and studied active force feedback hapticdevices. 
The commercially available Phantom (now Geomagic [1]), Novint Falcon(Novint Technologies, 2010; discontinued but currently produced by a differentvendor, 2019 [87]); Force Dimension Omega displays (Force Dimension, 2017[61]) have been used in many different areas including medicine and education.Many studies have investigated ways in which haptic feedback can improve learningprocesses, for example in teaching visually impaired people how to write [164] orproviding more realistic training environments for medical practitioners [37]. Serialarms, often tendon-driven as in the Phantom, can potentially provide relativelylarge workspaces, but at the cost of precision, strength and cost. Parallel robotmechanisms such as the Novint Falcon can provide great dexterity in their entireworking space and tend to be more stiff (allowing higher fidelity rendering); however,their working space is often much smaller than serial robots.One of the most common 2D force feedback mechanisms is the Pantograph,1994 [186], a 5-bar linkage . These are capable of high-quality feedback in a planarworkspace, generally of fairly small size. Recently, a low-cost version was designedto be clipped on top of a tablet display to be used in co-location with the tablet’sgraphics [67]. Pantographs are appropriate as small-workspace desktop devices butnot suitable for mobile applications in currently-available configurations.Active force Feedback joystick-trackball: In the joystick (released commercially as23the Microsoft Sidewinder consumer device), two motors are connected to a 2DFstick through an advance belt gear mechanism, an approach which is fundamentallydifferent from that of the proposed invention. For the haptic trackball, the mechanicalcontrol over the rotating ball is not explained, but one example can be found here[54]. However, for these proposed configurations, the mechanism is sufficientlybulky that it requires the device to be held against a ground surface, such as a table.Active Force Feedback through Magnetic ForcesMagnetic force can be used to create directional force feedback. The (FingerFlux,2011 [224]) device provides attraction, repulsion, vibration haptic feedback on aplanar (2D) tabletop using an array of coil magnets embedded in the table and apermanent magnet mounted on the user’s fingertip. A permanent magnet attachedto the user’s finger responds to an actively controlled magnetic field generated byelectromagnetic actuators on or under a tabletop. The results showed reductionof drifting, finger guidance of the finger and physical constraints. However, thisapproach has practical limitations. Electromagnetic coils are bulky and heavy,presenting challenges to inclusion in planar displays. It is also difficult to achievea high-resolution magnetic field and control the direction and magnitude of theforce. Finally, resultant magnetic fields can be a major source of electrical noisethat impacts on other system components.2.3 Devices Rendering Texture and FrictionVibrotactile surface display: The most common and broadly commercialized surface-rendered haptic feedback is vibrotactile, produced by high-frequency vibration of atouched surface in response to finger motion. A variety of actuator technologies areused to produce these vibrations. No lateral resistance to motion is provided, i.e.,this is not a force feedback technology.Variable friction: Surface friction is felt in sliding contact of a bare finger with asurface. 
Some technologies vary the coefficient of friction as a function of measured finger position, and thereby control the linear and shear forces experienced by the finger during active sliding. Significantly, variable friction approaches are not able to generate any force on a stationary finger, but they can generate small amounts of resistance to a moving (sliding) touch. Applications enabled on touchscreen devices with this kind of feedback have been explored [134].

The (TPaD, 2011 [227]) device uses an air squeeze-film effect to lower friction: ultrasonic vibration creates a thin layer of air between finger and display, lowering friction at a designated point. The (ShiverPaD, 2010 [34]) employs a similar variable friction mechanism, with the addition of lateral vibration, to create an active forcing device. Despite its innovative mechanism, ShiverPaD's current design suffers from a small workspace (15 mm in diameter), which limits the finger exploration area; increasing the workspace size demands heavy engineering investment and expertise.

In a different approach to providing variable friction, TeslaTouch uses electrovibration principles to create dynamic friction between the sliding finger and the display [16]. Increasing the charge density on a conductive transparent electrode underneath the display creates an electric field between finger and electrode, and alternating this field generates periodic electrostatic forces. These forces are too weak to be perceived statically, but they can actuate the skin and create a rubbery sensation when the finger slides on the surface of the display. Practical levels of electrostatic friction display require a large electric field, produced by a high voltage difference between the user's finger and the touch display. This is potentially a safety hazard and requires mitigation measures to avoid electrical shocks, but the issue has been handled to some degree in products using this approach that are close to release today (e.g., https://www.tanvas.co). Still, more needs to be done to reduce the volume required to incorporate this technology into commercial tablets and surfaces.

Magnetic texture: Magnetic coil arrays have been investigated for presenting virtual surface roughness directly to a fingertip equipped with a magnet [20]. The user needs to wear a "haptic probe" on their fingertip or, in another configuration, hold a pen with coils embedded inside to receive force feedback from the screen.

Ferrofluid and magnetorheological (MR) fluids: Ferrofluids are colloidal liquids made of nanoparticles of ferromagnetic material that change their viscosity when exposed to a magnetic field. Changing this viscosity locally can create a labyrinth-like feeling as the finger passes over it. This phenomenon was first explored as a surface display in the early 1990s [160]. More recently, MudPad is a multi-touch screen based on MR fluid combined with an array of electromagnets that can change the surface stiffness and render changeable textures [102].

One problem with ferrofluid materials is the weak resistive force they generate, which improves as particle size increases. Another issue is due to the large size of the ferromagnetic particles in MR fluids: the ferroparticles will settle out if the device remains static for a period of time.
An approach currently under investigation is to increase sedimentation time by means of surfactants.

2.4 Challenges of Haptic Displays

2.4.1 The Challenge of Grounded Force Feedback

Traditional grounded force feedback: We will not attempt a comprehensive review of this large and diverse class, but examine two archetypal examples which capture the constraints we wish to circumvent. The 3D Phantom Omni (now Geomagic Touch, geomagic.com) desktop system has been used as a handwriting training tool and for rehabilitation [164]. In 2D, the best known is the pantograph (first in 1992 [141], recently updated in DIY form [67]).

The Omni has a serial cable-drive mechanism; the pantograph uses parallel links. While both mechanisms' workspaces can be scaled up, practical considerations limit them to inches per side (about 5" for the Omni); going beyond this requires significant cost to maintain performance.

Force feedback grounded within a surface: (dePENd, 2013 [231]) uses actuated magnets moving under a table to apply directional forces to pen-and-paper interaction, exploiting the ferromagnetic property of pen ball tips to assist sketching. While it scales to a larger workspace than some devices, it has a large, fixed footprint. It can provide position but not force control, and requires calibration when the user lifts the pen off the surface (a problem we must also creatively address with our ballpoint drive).

Ungrounded tactile feedback devices: At least two reported haptic styli provide tactile feedback related to position on a graphically rendered surface, e.g., using a friction brake [43, 91], which allows the device to render percepts like roughness. This dissipative approach, however, cannot generate directional force feedback or guidance forces, and hence cannot represent constructs such as springs or walls.

2.4.2 The Challenge of Mobility and Large Workspace

One approach to growing a workspace is to move the ground.

Movement without forces: (Ballbot, 2006 [129]) is a mobile robot based on an inverted mouse-ball drive. While not designed for force rendering, its locomotion mechanism is highly relevant to our ballpoint drive. Making this approach suitable for haptic rendering, however, requires adaptation of the force display mechanism, an entirely new family of rendering techniques and control strategies, and multimodal integration. We will return to Ballbot as we introduce our own design.

Movement with forces: There are two prior examples of mobile robots (i.e., robots which can propel themselves on a surface) being used as active force feedback displays. While perhaps suited to the applications for which they were designed, each has traits which make it unsuitable for highly mobile, nomadic applications.

MOTORE (MObile roboT for upper limb neurOrtho REhabilitation, 2011 [12]) is a rehabilitation platform, e.g., for re-learning motor skills. Designed to move across a table-like surface, its three wheels allow the device to generate spatial haptic feedback which a user experiences while moving the device around on a surface. It can control and constrain impedance, help patients track a trajectory, and simulate virtual environments with different viscosities. However, at 10 kg with a 145 mm radius, it is not comfortably lifted or held.

(Cellulo, 2017 [171]) is a mobile haptic robot meant for classroom learning, able to render virtual objects on a 2D plane. Cellulo is just 168 g, the weight of a pear (although physically larger), and it is low-cost at €125.
It uses a permanentmagnet ball in its omnidirectional driving mechanism to address known vibrationissues for this locomotion approach. Using permanent magnet coupling betweenmotors and the rolling balls provides Cellulo with partial backdrivability.Cellulo is the sole example we have found of a grounded, mobile surfaceforce display with handheld potential. Its puck-like form affords an enclosing gripand a mouse-like interaction style. However, while inspired by a “pen and paper”metaphor, it is awkward for drawing; and its size produces occlusion issues. Further27miniaturization might be problematic for stability reasons because its multiple-ballsurface contact relies on a flat orientation.2.4.3 The Challenge of AccessibilityThe “accessible haptics” movement: Large-workspace, low-cost devices will bevaluable in many applications; but even with prevalent DIY fabrication strategieslike 3D printing and laser cutting, requirements for structural strength, precision,motor quality, and power systems drive prices up exponentially with workspacearea. Promisingly, recent haptic prototypes suggest that affordable devices are infact becoming feasible [145] [149] [62] [67] [176].Applications needing access: We are engaged with the goal of using hapticfeedback to assist with conceptual understanding. While it is beyond our presentscope to share the theoretical underpinnings of this goal, some recent studies suggestboth its promise and its challenges [44, 155]. Meanwhile, there is evidence thatlarge movements better support embodiment in learning than small ones [69].2.5 Haptics for Designing and ExploringIn this survey of education-related haptics, we focus on the intersection of twoprimary haptic approaches: (1) haptically rendered virtual environments, and (2)pen-and paper-based interactions (Figure 2.1).2.5.1 Design Approaches: Input Methods, Feedback Modalities andCAD features)We identified a number of relevant works describing novel input methods andhaptic feedback outputs; we focus on systems that are well-suited for educationalapplications such as science, technology, engineering, and mathematics (STEM)learning and visual art.SketchPad [212] was a pioneer in the field of modern computer-aided design(CAD) as well as pen interactions with graphical displays. Sketchpad, for the firsttime, demonstrated the great potentials of combining computing power and digitaldesign in 1963.VoicePen, 2007 [88] is a digital stylus that takes non-linguistic vocal inputs28Figure 2.1: Haptic interaction with virtual environment. Two most studiedapplications of haptics with potential benefit in education. The first groupassists users in their sketching and design while the second group is usedfor exploration.in addition to position and pen pressure for tasks such as creative drawing andobject manipulation . VoicePen uses vowel sounds, variation of pitch, or control ofloudness to generate fluid continuous input to the user’s pen interactions. Similarly,WatchPen [96] captures the users’ drawing inputs form a combination of smartgadgets including a digital stylus, a smartwatch and a tablet. It employs vocal andtouch input to reduce workflow interruptions, such as tool selection. These sytems’reliance on the vocal modality makes them impractical for classroom settings, butthey deliver ideas for stylus interactions.TAKO-Pen, 2015 [124], is a haptic pen which provides pseudo-force feedbackby creating the sensation of sucking on users’ fingers through pressure chambersembedded on the handheld surface. 
RealPen, 2016 [32] is a digital stylus which recreates the sensation of writing on real paper with a pencil through auditory and tactile feedback. FlexStylus, 2017 [58] allows users to perform tool selection and to draw with better accuracy by bending the pen in various modes. Although these novel input methods and feedback modalities expand the interaction space between users and haptic devices or digital styli, they are designed for specific purposes, and thus do not serve as more broadly useful sketching tools.

In addition to devices, we looked for innovations in computer-aided design (CAD) features for generating engineering or artistic drawings. Specifically, we found existing work on parametric sketching, a CAD functionality that allows users to define geometric entities with parameters and to specify relationships between them as constraints. Examples of parametric sketching include defining a circle by its central position and radius, and defining two lines as collinear or of equal length. This functionality is useful to architects and engineers for creating complex architectural or mechanical sketches. Gürel et al. [80] studied the impact of applying parametric drawing with CAD tools to architectural design, and found that allowing designers to define parameters and constraints on geometric entities gave them greater flexibility in their creation process. Ullman et al. [218] emphasized the importance of geometric constraints as a CAD functionality that helps designers create mechanical sketches with better clarity.

Departing from CAD features and considering direct-sketching input, we highlight ChalkTalk, which recognizes users' strokes and translates them into meaningful interactions, using dynamic visualization and procedural animation to facilitate exploration and communication [179]. ChalkTalk is a purely visual medium at this time; we see potential for using its approach when extending the functionality of the haptically supported approach described in Chapter 6 toward more expansive sketch interpretation.

2.5.2 Pen-based Sketching Tools

Engineering Design and Educational Drawing

Aside from work on design methodologies, there are works that focus on sketching on 2D surfaces alone.

In engineering design, InSitu provides architects with a stroke-based sketching interface capable of augmenting sketches with sites' contextual information from sensor data and delivering the information via pop-ups [173].

Within educational drawing support for STEM subjects, most devices were built for sketching math or physics diagrams and equations. MathPad2 allows users to create animations representing processes (e.g., a mass block oscillating) in addition to static diagrams or math formulas [130]. Hands-on Math places more emphasis on recognizing handwritten math input from users and performing calculations such as solving for an unknown variable in an equation [237].

Data Visualization and Digital Annotation

In data visualization, DataToon utilizes touch and pen-based interactions to allow users to create data comic storyboards [119].

Within digital annotation, PapierCraft [138] is a haptic stylus that allows users to annotate documents via pen-based interactions. Moreover, it can take users' gestures as document manipulation commands.
A later refinement of PapierCraft employed more feedback modalities for distinct purposes: LED lights for notifying users of pen status, tactile feedback to provide warnings, and speech feedback for indicating action results [137].

In this chapter, we have reviewed several key topics in haptics and the challenges of haptic displays. It is apparent from this review that conventional approaches to producing grounded force feedback require mechanical linkages (e.g., arms or cables) to ground the user's hand to a fixed point in space, and thus have workspaces limited to the reach of the device's arms or cables. In the next chapter, we propose a novel haptic device that displays world-grounded forces to the user's hand in two dimensions (a 2D force feedback display), over a workspace of unlimited extent.

Chapter 3
Device Design: Introducing the Ballpoint Drive Mechanism

Preface – Our first step towards evaluating the concept of drawing and then feeling an idea is to make a functional platform that supports the required haptic interactions. This chapter highlights our approach to the need to combine grounded force feedback with a large drawing workspace, introducing a novel 2D haptic display featuring the ballpoint drive mechanism. It describes the evolution of the design of the two versions of MagicPen which will be utilized in later chapters, as well as their technical details supported by primary performance measurements. The first version of our prototype has a stylus form factor, and is used for our study on collaborative grounding in Chapter 5. The second version is modified to have an aligned pen-holder; we will describe how we used it in assisted sketching (Chapter 4). This chapter thus comprises the device design sections of two published papers (Kianzad et al. 2020¹ and Kianzad & MacLean 2018²). The remainder of each of these papers is covered in Chapters 4 and 5, respectively.

1. S. Kianzad, Y. Huang, R. Xiao, K. E. MacLean, "Phasking on Paper: Accessing a Continuum of PHysically Assisted SKetchING," Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, New York, NY, USA, 1-12.
2. S. Kianzad and K. E. MacLean, "Harold's purple crayon rendered in haptics: Large-stroke, handheld ballpoint force feedback," 2018 IEEE Haptics Symposium (HAPTICS), San Francisco, CA, 2018, pp. 106-111.

Figure 3.1: MagicPen device form factor. (a) A rolling, stylus form factor enables unlimited user-driven or guided motion on arbitrarily oriented and shaped surfaces; we used this in our collaborative learning study, Chapter 5. (b) For a haptically enabled pen, the embedded camera provides an accurate absolute position on watermarked paper; we exploited the inking-on-paper feature for our user study in Chapter 4.

3.1 Overview

An elusive and audacious vision in haptics is the rendering of forces "in the large": to feel sensations in large gestures, anywhere and anytime. The reality is that delivering force feedback in informal settings and diverse environments suffers from linked constraints. Producing force requires physical grounding, which limits workspace size; meanwhile, expansion impacts affordability and application versatility.

Inspired by needs for haptic support of large motions on a surface in applications like embodied conceptual learning, commercial design, and 2D virtual / augmented reality, we present the ballpoint drive.
This novel approach circumventsconventional constraints by imposing a new one: motion restricted to rolling on anarbitrary two dimensional surface, and grounding forces generated through friction.We analyze the ballpoint’s design considerations in the context of our framingapplications, describe a first prototype and its performance, and assess its potentialfor further development.333.2 IntroductionHaptic force feedback first appeared in the 1990s [159], and soon fired the imagina-tion of engineers, scientists and children with the possibilities of making a virtualworld tangible. Since that time, device designs have proliferated.However, the fundamental requirement of physical grounding to provide a solidreaction force still dictates how force feedback devices are configured, triggering acascade of interlocked challenges. Most devices achieve grounding by being fixedto the world, e.g., through a parallel or serial robotic mechanism. This imposesa finite workspace size – a serious detraction for many non-desktop applications,particularly those in virtual and augmented reality (VR/AR). Increasing workspaceusually means large linkages, more expensive components and complex engineering,and hence a sharply increased cost/performance ratio that limits accessibility formany applications. Finally, device bulk, tethering and handle restrict versatility inhow the end-effector can be grasped and moved. While a few applications can livewithin these four linked constraints, many more are precluded by one or more ofthem. The most common solution is to get by without grounded force, for a far lesscompelling result.A different tradeoff is possible. Restrict motion to a ball rolling along atwo-dimensional (2D) surface; then, grounding comes from pushing against thesurface, while the ball driver provides 2D constraints and active forces. Workspaceis unlimited, or at least as large as the surface. The device itself must be handheld,small and light.At least one mechanism can achieve this accessibly, with low-cost componentsand DIY (Do it Yourself) fabrication techniques. Variations of the basic design cansupport grip versatility with “swappable” handles and varied orientation.We have dubbed this concept MagicPen. It is inspired in part by Harold andthe Purple Crayon, Crockett Johnson’s magical 1950s vision of a small boy whocan draw objects and scenes that come to life [103]. Our surface might be a largewall-mounted or table graphical display, or a personal tablet. It could also be awhiteboard or an arbitrary large curved object, together with a projected imageand vision system to detect drawn graphical symbols as a virtual environment issketched on the surface.342D haptic interfaces have broad potential applicability, particularly when theycan be arbitrarily large. This includes generic interaction with common content,conceptual learning and commercial design. For example, automotive designers usetape drawing to create fast, 1:1 scale sketches, and while digital versions exist [78],the ability to create and feel virtual constraints while generating smooth curves andstraight lines on life-sized 2D surfaces could make them more efficient and naturalto use. Or, like Harold, a student could diagram a physical system and then exploreit (Figure 1.1). Handheld haptics are the most novel component of this vision, andthe one we address in this chapter. 
We contribute:A novel portable mechanism providing force feedback while rolling on a 2D sur-face, through interchangeable handle forms, whose cost and performance isinsensitive to workspace size;Associated measurement techniques for 2D position and angular orientation;Accessible designs for user community development.3.3 Approach: Design ConsiderationsWe describe initial balancing of perspectives of mechatronics, cost, computation,ergonomics, usability and application needs; specific engineering requirementscontinue to evolve.3.3.1 Mechatronics and AmenitiesAccepted mechatronic requirements for haptic devices include maximal backdriv-ability, minimal impedance in free motion, force responsiveness and sensing band-width [198] [90]. Other specifications are dictated by our approach and applications.Force magnitude: Forces must be clearly perceptible in guiding and rendering,but need not overpower the hand.Degrees of freedom: Force vector must be smoothly controllable in a 2 degreesof freedom (DF).Surface contact: The device must maintain rolling contact with a comfortablegrip; e.g., rendering a circle requires 2D force vectors without slip at the virtualobject’s circumference.35Interaction surface properties: The contact ball must be able to maintain fric-tional grip and optical sensing quality on surfaces that are to some degree rough,slippery or reflective.Size and weight: A handheld that works on both vertical and horizontal surfacesrequires minimal weight and girth, and suitability for an untethered, battery-poweredpackage.3.3.2 User Experience and ErgonomicsWe start with well-known features of everyday pen use. Pens should be comfortablylight to hold, but weight imparts stability, normal force and a sense of quality. Theirlong aspect ratio requires balance. Pen shape and surface properties influence graspand consequent fatigue.Ergonomic design for a haptically active pen is more nuanced. We observedpeople writing with a variety of everyday writing tools and with mouse motions, aswe pulled and pushed on their implement with a string or wire. We found that a penrequired less force (∼0.3N) to drive the user’s hand than did the mouse-shape formfactor (∼0.8N).Our vision includes being able to sketch environments for rendering, and thuswe examined integration of a writing implement with the ballpoint drive. It is clearlydesirable to minimize distance between the implement’s tip and the force feedbackdrive contact, to avoid ambiguity between sensing and contact representation. Someprovision can also be made through manipulation of visual-haptic co-location.Based on these considerations, we focused our design explorations on factors ofshape, weight, and grip.3.4 Prototype Design3.4.1 Form factorsWe developed two different form factors as shown in Figure 3.1. We exploit thefeatures of each form factor for our user studies in Chapters 4, 5. Here we elaboratemore on these features.36MagicPen with stylus form factorOur basic proof-of-concept MagicPen is stylus-based (Figure 3.1a). Through a pengrasp, users feel a directional force generated through rolling contact between thedriven mouse ball in the tip, and an arbitrarily shaped or sized 2D surface possessingnominal surface friction. This design can haptically render dynamic 2D virtualobjects: e.g., constraints, textures, detents, stiffness, viscosity and inertia.Size and form factor: The bulkiest element is the tip, which houses the balldrive. Its current 45mm diameter is significantly more compact than possible with amouse form [171]. 
The stylus grip is familiar and dexterous, facilitating drawingand stroking as well as nuanced perception of guiding forces.We found that the stylus requires substantively smaller drive forces than a mouseform for ergonomic reasons. The ball drive’s freedom to tilt (currently 25-30◦)means that the user’s applied force is generally not aligned with the surface normal.In a pen grip, users generally take responsibility for maintaining a comfortable tiltangle. This results in their contributing to the manual work of moving the device,with a tendency to follow the pen when it ‘pulls out from under’ their hand. Further,users tend to rest hand and forearm on a mouse, adding to the inertia that must beovercome with a driver; this happens far less with a stylus. These factors differ mostdramatically from a mouse on a horizontal surface, but are somewhat in play in avertical setting.MagicPen as Add-on to Arbitrary Pen ImplementOur device can be attached to a pencil, whiteboard marker or pointing finger, andwill apply force feedback through the implement while rolling-ball surface contactis maintained. The device barrel unscrews, to be replaced with a retractable penholder 3.1b.The substantial increase in application possibilities does bring mechanicalcomplexity. Added girth makes the grip slightly awkward, while the dual pointsof contact constrain stylus pitch angle. Meanwhile, the offset between drive andimplement contact points varies depending on grip pitch, requiring additionalsensing and correction.In this configuration, we have constrained pitch to 45◦, respecting users’ typical37Figure 3.2: Ballpoint drive and sensing in 2D and 3D. Left: Motor-generatedtorques Tm are transmitted to the surface contact ball through the gearscausing the ball to roll over the surface. The friction force Fr betweenthe contact ball and the surface produces motion in x-y direction. Right:The arrangement of four DC-motors in 3D shows how the Ballpoint drivegenerates forces in 2 directions.40-55◦ writing angle. In practice, users prefer slightly different writing angleson horizontal and vertical surfaces. This issue can be handled with a compliant,angle-sensed linkage with offset corrected in software.3.4.2 Ballpoint Drive MechanismThe key driving element of MagicPen is a novel mechanism that we called Ballpointdrive. Ballpoint drive (Figure 3.2) is inspired in part by a class of autonomous robotswhich self-balance on a sphere. Similarly to Ballbot [129], we use four motors (2opposing pairs) in an inverse mouse-ball drive to generate planar rolling movement(an alternative, three independent drives, leads to nonholonomic constraints). Ourinnovation is to use this mechanism to render model-based force feedback to a user,and begin to address the new challenges this raises.Each motor pair acts to rotate a 25.4 mm elastic surface-contact ball, whichis retained by a housing with four sprung ball bearings with stiffness tuned tomaximize rotary freedom while achieving ball retention.Components – We tested several motor/gear systems to optimize speed, power and38backdrivability. This version employs high power-density brushed DC motors, ratedat 2000 W/kg, 50,000 RPM, 3.7V and 100mA. Generally used in small hobbyaircraft, they are China-sourced at $2.67 each. In practice, we observe 200-500mAdraw and a maximum of 100 W/kg. 
For context, the high-fidelity haptic standardMaxon RE40 is rated at 300 W/kg for >$300.The mechanical transmission amplifies torque in two stages: 12:1 from contactball to rotating bar, and 2.5:1 from the plastic crown gear to the motor. We variedball material and size to ensure adequate friction on a variety of surfaces. Traditionalmouse balls have internal weight which adds excessive inertia. We found best resultswith spheres of solid neoprene rubber (mcmaster.com/rubber-balls), achievinggood contact with monitor screens, whiteboard, painted wall, and tabletop. Moreextensive research is required to precisely measure actual friction achieved.Sensing: position, orientation and pressure – The two primary sensing needs forphasking operations are (a) localization of device on the interaction surface, and (b)internal ball motion and stylus orientation for closed-loop control on position andvelocity, to generate desired forces.Position for external localization and internal control:There are many possible approaches to external localization of the contact pointon the interaction surface, depending on application setting, priorities and con-straints. We exploited existing technology of digital pens and watermarked papersto obtain absolute position sensing. An embedded camera (here, a Neo SmartPen[98]) decodes optical microdot patterns on watermarked paper to determine absoluteposition of the stylus to an accuracy of 0.1mm. This enables accurate sketches anduseful interactions such as the tool palette.Internally, a micro-trackball senses the motion of the larger surface contact ballin each rolling direction and sends the pulses to the device controller.Orientation: Stylus orientation corrects position estimates generated by the abovemethods. A twist of the user’s hand impacts the contact relation between rollingball and interaction surface: when non-vertical, the ball’s contact point does notcoincide with the stylus axis. The global localization signal can also suffer fromlow update rate, gaps or spatial inaccuracy.We used a low-cost orientation sensor (Bosch BNO055) which fuses accelerom-eter, magnetometer and gyroscope data to supply Euler roll, pitch and yaw [25].39Pressure: The Neo Smartpen supplies pen tip pressure (0-255), which we use forcontrol sharing. During early prototyping, we tested several locations for pressuresensing. We found out that users have most control over pressure application whenthe pressure sensor is located at the pen tip, rather than behind the trackball or underthe pen finger’s grip.Battery – The early version of the ballpoint drive (Figure 3.1a) was electricallytethered to external power and communications. The next prototype (Figure 3.1b)uses two 3.7V Ultrafire 14500 rechargeable batteries in parallel, capable of approx-imately one hour of free-roam performance. The most significant power draw isfrom the CPU due to I/O interrupts related to orientation and localization sensing.A future version using an embedded CPU will be significantly more efficient.3.4.3 Prototype Software ArchitectureOur initial compute structure optimized quick construction over optimal perfor-mance. 
The prototype has two computational configurations: for the stylus form factor (Figure 3.1a), an Arduino Mega handles sensing, output and kinematics computations and connects to a laptop-hosted custom Python API for the VE model; for the pen-holder form factor (Figure 3.1b), both are integrated in a Raspberry Pi Zero W (RPi).

The Arduino/RPi receives desired force/velocity from the VE model, transforms them to Vdx, Vdy based on trackball and orientation-sensor data, and generates PWM motor output (Ux, Uy). The laptop API/RPi receives kinematic data and both graphically displays and returns the VE output.

This system achieves 10 ms (100 Hz) round-trip updates, limited by the USB connection. An upgraded communication protocol will easily attain the haptic standard of 1 kHz.

3.5 Force Feedback in a Ballpoint Drive
We have framed initial control of our ballpoint haptic display, constrained to a 2D surface, in modes of guiding along a path (the user is led by the device), or rendering a virtual environment (the user drives while interacting with VE elements). We implemented two proof-of-concept rendering schemes to test the mechanism's viability in force feedback.

Figure 3.3: Ballpoint rectilinear and rotational coordinate systems. (a) Stylus orientation (dictated by user grip) is defined by roll, pitch and yaw from the surface normal; the ball contacts the surface at (xS, yS). (b) Surface contact diverges from the ball's "south pole" when the stylus yaws; Eqs. 3.3-3.4 address the resulting parallax.

3.5.1 Guiding Along a Path: Position and Velocity Trajectory
In this simplest-possible scenario, the device is driven as if a standalone ballbot [129] on a defined trajectory. The user is pulled by the handle without influence on the computed path.

Our ballpoint display can take any pose [xS, yS, ΘS, αd, βd], where xS, yS describe the contact point in the traversed, topologically contiguous 2D surface's coordinate frame S. The remainder are, respectively, device roll around the surface normal, and pitch and yaw relative to the device base d (Figure 3.3a).

We do not actuate Θ, α or β, although this is technically possible with added complexity. Guidance scenarios exist where actuated roll is valuable; for most uses, we believe these are best left passive, allowing the user to find a comfortable angle rather than imposing unnatural torque on the wrist.

The device is driven by commanded forces (Fdx, Fdy) applied to the ball, causing it to roll or resist rolling in xd, yd. Passed through our voltage-controlled, opposing-pair drivers, these are roughly proportional to the resulting output velocities (Vdx, Vdy) when the ballbot moves unresisted by a hand, which are in turn related to surface velocity through the bot's inverse kinematics as a function of roll, pitch and yaw:

\[ \begin{bmatrix} V_{d_x} \\ V_{d_y} \end{bmatrix} = R(-\Theta)\, Y(\alpha)^{-1} P(\beta)^{-1} \begin{bmatrix} V_{S_x} \\ V_{S_y} \end{bmatrix}, \tag{3.1} \]

where:

\[ R(\Theta) = \begin{bmatrix} \cos\Theta & -\sin\Theta \\ \sin\Theta & \cos\Theta \end{bmatrix} \tag{3.2} \]

\[ Y(\alpha) = \begin{bmatrix} \cos\alpha & 0 \\ 0 & 1 \end{bmatrix} \tag{3.3} \]

\[ P(\beta) = \begin{bmatrix} 1 & 0 \\ 0 & \cos\beta \end{bmatrix}. \tag{3.4} \]

R(ΘS) compensates for user wrist rotation (spin), while Y(αd) and P(βd) account for grip-generated stylus tilt in the yaw and pitch directions respectively.

Figure 3.3b shows that as the stylus tilts sideways in yaw, the contact point diverges from the rolling ball's south pole. Now, if the ball rolls toward us (out of the paper plane), it moves through a sub-equatorial circumference, path Pt.
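To make Eqs. 3.1-3.4 concrete, the following minimal sketch (our illustration, not the thesis firmware; it assumes NumPy and angles in radians) maps a desired surface-frame velocity into device-frame drive velocities by undoing wrist roll (Eq. 3.2) and dividing out the cos(α) and cos(β) factors of Eqs. 3.3-3.4:

import numpy as np

def surface_to_device_velocity(v_surface, roll, yaw, pitch):
    # Eq. 3.1: V_d = R(-Theta) Y(alpha)^-1 P(beta)^-1 V_S
    R = np.array([[np.cos(-roll), -np.sin(-roll)],
                  [np.sin(-roll),  np.cos(-roll)]])   # undo wrist spin, Eq. 3.2
    Y_inv = np.diag([1.0 / np.cos(yaw), 1.0])         # yaw parallax, Eq. 3.3
    P_inv = np.diag([1.0, 1.0 / np.cos(pitch)])       # pitch parallax, Eq. 3.4
    return R @ Y_inv @ P_inv @ np.asarray(v_surface, dtype=float)

# e.g., 10 mm/s in +x on the page with the stylus rolled 15 deg and yawed 20 deg:
v_d = surface_to_device_velocity([10.0, 0.0], roll=np.radians(15),
                                 yaw=np.radians(20), pitch=0.0)

In the prototype this transform runs on the Arduino/RPi inside the control loop; the sketch above only illustrates the algebra.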
At this yawed contact point, the smaller contact circle's diameter scales as cos(αd) in xd and cos(βd) in yd.

We measured stylus orientation with the BNO055 (blended accelerometer, magnetometer and gyroscope data fused into three-axis roll/pitch/yaw Euler orientation output; Adafruit.com) mounted on the handle near the ball, sampling stylus absolute orientation at 100 Hz.

Stylus endpoint velocity at point t on path Pt is:

\[ \begin{bmatrix} V_{S_x} \\ V_{S_y} \end{bmatrix} = \begin{bmatrix} g_{P_x} \\ g_{P_y} \end{bmatrix}, \tag{3.5} \]

where gPx, gPy define Pt's gradient. We scale velocity components to the maximum supported in either x or y alone (the device could move faster on the diagonal, where both drivers are in play). This relation enforces a uniform top velocity regardless of (x, y) heading:

\[ \sqrt{(V_{S_x})^2 + (V_{S_y})^2} = V_{S_x,\max}. \tag{3.6} \]

3.5.2 Rendering VEs: Constraints, Force Fields and Textures
We render a virtual environment in several steps, described in the context of Figure 1.1's example: Harold's handheld car travels along a drawn road. When the car passes over the road edge, this interference is simply computed as a normal spring force.

First, we define the VE in the computer using our Python API. Then, in a 10 ms loop (limited by transmission speed, not computation): (1) the surface-contact position is measured, amplified, filtered, and differentiated for velocity; and (2) sent through a forward kinematic model:

\[ \begin{bmatrix} V_{S_x} \\ V_{S_y} \end{bmatrix} = R(\Theta)\, Y(\alpha)\, P(\beta) \begin{bmatrix} V_{d_x} \\ V_{d_y} \end{bmatrix}. \tag{3.7} \]

(3) Interaction with the VE involves collision with VE elements, determining depth of interference, and computation of the resulting interference force. (4) The VE force is transformed into desired FS and VS by inverse kinematics (Eq. 3.7), then through a voltage constant into commanded force on the respective actuator pairs.

Braking and tactile display: The most effective braking occurs when the rollers both turn inward, driving the contact ball towards the trackball sensor. In this situation, the contact ball is actively jammed at four points (surface, trackball and two bearings). This locking effect can be leveraged for Cobot-like virtual constraints [40]. However, the same effect may occur inadvertently under excessive hand pressure on the device, which squishes the ball and prevents the drive from functioning. Users need to maintain a light normal force to achieve the intended force feedback.

When the rollers both turn outward, there is some braking but also vibration. More deliberately, rapidly alternating push/pull directions can produce up-down contact-ball vibration, which can be modulated for texture rendering.

3.6 Design Review
Table 3.1 summarizes the degree to which this first prototype, hereinafter referred to as "MagicPen", addresses our ballpoint drive design considerations.

We measured properties such as mechanical impedance and 2-axis forces (BOSE ElectroForce® TestBench). We defined velocity (Vmax) as how fast the pen can move with motors saturated and no force resistance, and measured it by holding the ballpoint device stationary over a second, inverted trackball. We found that this mechanical and electrical hardware design can convincingly support the basic virtual environment primitives described in Section 3.5.2, and that it can keep up with brisk hand motion (20 mm/s).

However, there is ample opportunity (and means) for improvement. The most critical mechatronic improvements are drive-train impedance and inter-processor communication speed (to reach 1 ms rather than the current 10 ms).

Others will require straightforward engineering iteration: e.g., eliminating data and power tethers and reducing drive system bulk while further reducing cost.
Parts cost is already comparable to other DIY examples [149][67], and custom electronics will drop overall parts cost by 20-40%. For more technical demonstrations and results, please see Appendix A.

Table 3.1: Design Considerations: Ballpoint Drive V1.0

Performance – Fmax: 1 N force (xd, yd); can move the device body and convincingly drive the user's hand. Vmax: 20 mm/s across 2D surfaces, unrestricted. Mechanical impedance: 35 Ns/m (measured with 1 cm displacement at 5 Hz); this is greater than ideal haptic backdrivability.

Accessible Cost – Device parts cost is ~$101 USD: $50 (electronics), $12 (drive mechanism), $34 (orientation sensor), $5 (trackball sensor). Electronic components currently dominate cost, and can be reduced 40% through integration and more optimized choices. The drive mechanism would most benefit from component upgrades, as well as investigation of alternatives (e.g., belt/pulley or metal gears to reduce noise).

Rendering Pipeline – Rate: we can currently render force commands at 1000 Hz, and communicate them at 100 Hz. VE: our drive mechanism can rapidly switch operation mode for specific rendering conditions, such as a virtual wall, elastic behaviour and textures.

Ergonomics – Device weight (sans battery or Arduino circuitry) is 50 grams. The device is borderline adequate and will benefit from further miniaturization.

Chapter 4
PHysically Assisted SKetching

Preface – In Chapter 3, we introduced MagicPen and its potential applications in learning. For our first application, we study the benefits of force feedback for assisted sketching. We investigate the opportunities in using different force-feedback interactions and propose a framework that covers related research and unveils the existing gap in the design space. We realize that control sharing is vital to maintaining creativity and authorship while collaborating with a computer in assisted sketching tasks. We use the pen-holder form factor that we introduced in Chapter 3, and instantiate our framework. We identify the conceptual activities and core interaction concepts that we need to support assisted sketching with our MagicPen. We validate our framework and our approach to implementing it by running a user study with both experts and novices.¹ We reflect on the results and the opportunities for future improvements. This chapter builds the foundation for the Design process which we will explain in Chapter 6. The material in this chapter is taken directly from the Introduction, Phasking framework, Evaluation, and Discussion sections of Kianzad et al. 2020,² while the System section is presented in Chapter 3 and the technical part of the Background section is in Chapter 2.

¹ The approval for this study was obtained from the University of British Columbia Behavioural Research Ethics Board, approval ID H14-01763.
² S. Kianzad, Y. Huang, R. Xiao, K. E. MacLean, "Phasking on Paper: Accessing a Continuum of PHysically Assisted SKetchING", Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, April 2020, pp. 1-12.

Figure 4.1: The steps of a phasking interaction. (a) The user selects the "circle" tool from the Phasking tool palette, (b) then selects the centre and one point on the circle to establish a circular bring constraint (dotted line). (c) The MagicPen actively brings the user along the circular path with its ball-drive motor, but the user can modify control sharing by applying pressure to the pen (Figure 4.9, right).
This causes the system to scaledown its constraint force, allowing the user to diverge from the path.(d) Phasking supports passive constraints as well as fully unconstraineddrawing, enabling the user to quickly sketch out a cartoon character.4.1 OverviewWhen sketching, we must choose between paper (expressive ease, ruler and eraser)and computational assistance (parametric support, a digital record). PHysically As-sisted SKetching provides both, with a pen that displays force constraints with whichthe sketchers interact as they draw on paper. Phasking provides passive, “bound”constraints (like a ruler); or actively “brings” the sketcher along a commanded path(e.g., a curve), which they can violate for creative variation. The sketcher modulates46constraint strength (control sharing) by bearing down on the pen-tip. Phaskingrequires untethered, graded force-feedback, achieved by modifying a ballpoint drivethat generates force through rolling surface contact. To understand phasking’sviability, we implemented its interaction concepts, related them to sketching tasksand measured device performance. We assessed the experience of 10 sketchers, whocould understand, use and delight in phasking, and who valued its control-sharingand digital twinning for productivity, creative control and learning to draw.4.2 IntroductionFrom scribbles to detailed, elaborated productions, sketching is both intellectualplay and can help us form, develop and communicate our thoughts, a key part ofconceptualization. Pen-and-paper sketching is direct, improvisational, expressive,resists distraction, and may promote deeper cognitive processing [163]. The freedomand functional control afforded by physical drawing is unmatched in electronicmedia; but paper sketching lacks digital enhancements, and it is laborious to movefluidly between paper and digital media.Meanwhile, freehand drawing is poorly supported in graphical and CAD (Com-puter Aided Design) environments, as evinced by many professionals’ preferencefor paper. Capturing the subtlety and nuances of physical drawing and paintingremains elusive [219].Active force feedback is an intriguing approach to supporting on-paper drawing,with different opportunities for expert and novice sketchers. But active force feed-back (rather than brakes, which cannot actively guide, or tactile vibrations, whichcannot even constrain) usually entails a grounded device [201] with a fixed andimpractically limited workspace, often costly. What if a user wants to roam withtheir physical drawing support, accessing guidance and constraints for everydaydrawing on arbitrary media, or to support big strokes on big surfaces?The goal of this chapter is explore the use of active force feedback in papersketching, to provide a user with the digital support they need in the moment whilemaintaining their originality, authorship, and continuity of expression.474.2.1 Physically Assisted SKetching with Variable ControlPhasking enables a user to access drawing supports in a continuum from full physicalguidance to expressive freehand sketching, on their media of choice (Figure 4.2).It is free-roaming (untethered and portable); and our demonstration prototype isconstructed of low-cost commodity components and DIY construction. It is basedon two key interaction concepts.Constraints: Phasking’s bring/bound system captures and extends the range ofassistance explored in previous work, by accessing an interactive, constructablevirtual environment (VE). 
The user can place structures in the VE to constraintheir movements both passively (bound) and actively (bring), and can interact withconstraints and the sketch itself.Control sharing: Users can fluidly move between levels of assistance, simply bybearing down on the pen tip. Phasking differs from adaptive force feedback [18],where the system chooses the degree of assistance based on user performance.Phasking gives this choice to the user.4.2.2 Usage ScenariosA wide spectrum of users sketch, in many contexts. We consider situations wherephasking will be valued, to prioritize feature development, and consider them in ourevaluation.Rapid technical sketching: Professional architects, engineers and other designerssketch copiously, rely on it for conceptualization and communication, and movebetween paper and digital media. Many experts value drawing assistance when onpaper, whether to construct a perfect circle or perspective, as evinced by their heavyuse of physical guides. Some find physical tools cumbersome and “in the way”, e.g.,wanting to draw on both sides of a ruler. Large-scale drawing (e.g., during publiccommunication), is a situation where even skilled sketchers may have difficultywith alignment and clean curves. Finally, professionals spend a long time learningto draw. Phasking’s large workspace, portability and digital-twinning capabilitiescould assist in all these contexts, both manually and by off-loading some cognitivedemand.[Re]Learning to sketch: Learners may be children, hobbyists or recovering stroke48patients. All may need assistance in drawing a straight line, getting proportions rightfor an animal or a face, or figuring out perspective. There is evidence that CADcan help creative self-efficacy [203]. Learners’ needs might range from manualcontrol (e.g., drawing a circle), to the cognitive challenge of using proportion.Learning is best accomplished on paper, with its friction, focus and room to spreadout, but would benefit from computational supports. Phasking’s constraints canassist learners and patients in reinforcing motor programs, and advanced skills likeperspective drawing. Once learning is achieved, they might continue to phask inmore expert ways, or no longer need it.Artistic 2D Sculpting: Creative expression is inspired by constraints [135]. Physicalconstraints that can be stretched and violated stimulate a resonant collaborationbetween user and system [135, 207]. Phasking is founded on collaborative control-sharing. Artistic sketchers can follow a basic shape, or creatively modify it, byaltering control authority. To draw a face instead of that circle, they press down totake control to form an ear or a sketchy line of hair. Or, they can pull and bounce offan active node to draw sweeping trapeze-like trajectories, bound to the node with aninvisible elastic string.4.2.3 Contributions1. A conceptual framework for phasking which highlights fluid transition ofcontrol sharing from assisted to freehand drawing, making use of both boundand bring constraints.2. A force-feedback digital manipulative capable of implementing this frame-work. 
We made major mechatronic, ergonomic and control extensions to a previous ballpoint drive display to enable screen-free, shared-control phasking.

4.3 Background
Our work is founded in manual and computer-aided drawing practices, virtual and augmented environment creation and manipulation, haptic force feedback, and that field's knowledge of control sharing and past frameworks for sketching support.

Figure 4.2 arrays the mediums and examples mentioned here on the spectrum between fully freehand and fully computer-aided drawing. Vertically, this figure highlights how control sharing is related to type of drawing. Most examples occupy only one point in this space, whereas physically assisted sketching, as presented here, can theoretically cover all of it.

Figure 4.2: A one-dimensional model of sketching control authority. The y-axis denotes the strength of a given system's constraint, active up / passive down. The x-axis has no meaning. Existing types of assistance are located approximately in this space, both generic (e.g., a physical ruler) and published works (denoted with *, see Related Work). Phasking can fluidly access all points on this axis, by bearing down on the pen-tip while drawing. (Systems located in the figure include a ruler, compass, friction-based stylus, drawing robots, dePENd*, Comp*Pass*, I-Draw*, Muscle-plotter*, DynaFrame* and DynaBase*, and the phasking framework, spanning fully passive to fully active constraints.)

4.3.1 Non-Haptic Assistance of Digital and Manual Drawing
Professional CAD tools are increasingly accessible, online and learnable: basic ideas of parametric drawing have high penetration for even minimal expertise. But despite many conveniences (pen type/color, copy/paste/undo), the experience is still fundamentally different from pen-and-paper drawing, due to tactility and limited canvas.

Graphical drawing systems can provide visual corrective feedback for drawing or give stroke suggestions, e.g., iCanDraw to sketch a human face from a source image [50], and ShadowDraw for high-level arbitrary objects [133]. These systems typically restrict users to graphical screens.

Paper-oriented digital styli, such as Anoto and Neo smartpens, feature real-time digital capture of handwriting and translation to digital form, requiring use of watermarked paper (e.g., pre-printed with microdots) [97, 98]. While capturing natural drawing, this approach cannot offer added support during sketching. Our conceptual prototype incorporates a Neo Smartpen with its pen-tip vision system and watermarked paper as a convenient way to mock up position localization.

For visual guidance, PenLight [208] combines an Anoto pen with a miniature projector to add information to pen-and-paper interaction, but faces a technical barrier of image stability. In virtual reality, Nomoto et al. present a "corrected" sketch which encourages users to configure their own hands appropriately for drawing a shape [167]. These are promising approaches but do not offer physical constraints. In phasking, real forces on the hand – in the real world, not a VR headset – convey drawing suggestions.

4.3.2 Haptic Support of Pen-and-Paper Sketching
Passive Constraints: A ballpoint stylus able to impose passive constraints by constricting a rolling contact ball by means of electromagnetic or mechanical brakes has some similarity to our ballpoint drive; it was used to render roughness on 3D objects, but cannot provide active forces [43].
Comp*Pass [166] offers a semi-activesolution using DC motors; however, a user is not actively involved in sketching.With I-Draw, a cobot-type drawing assist for passive constraints [59], the authors“explore the seamless switching between guided and freehand modes,” as do we.Active Guidance: Muscle-Plotter generates force feedback to the hand by electricallystimulating the user’s own muscles [142] While creatively satisfying the criteria offree-roaming, it has drawbacks of 1.5 DOF, and a potential for temporal adaptationby muscles [110][240].dePENd [231] exploits the ferromagnetic property of pen ball-tips to assist51sketching by providing directional force feedback on regular pen and paper interac-tion. The main drawback of magnet-based haptic assisted sketching devices [140] isthe tradeoff between backdriveability and perceptible force levels. Increasing mag-netic coupling provides higher forces, but draws the pen tightly to the interactionsurface (higher normal force) and makes it difficult to move freely.I-Draw, dePENd, and Muscle-Plotter each demonstrate a concept of guideddrawing (using actuation to turn a user’s hand into a computer-guided drawingimplement in a screen-free context), one passive and the other active. They are adeparture point for the contributions described here.4.3.3 Using Haptics to Facilitate User-System Control SharingControl-sharing with haptic systems has been utilized for physical therapy [136]and handwriting control [215]. In driving, haptic shared control can improve speedand accuracy of human / system collaborative tasks, and lower the need for visualinvolvement in control effort [3, 76] It has also been used to support expressivedrawing on a screen, e.g., Snibbe’s Dynasculpt and GridDraw [207]. While I-Draw[59] is framed in moving between freehand and supported drawing, its passivenature does not provide an ideal mechanism for doing so.We have drawn from these functional and expressive approaches to form ourown control-sharing, which prioritizes simplicity and intuitiveness in modulatingcontrol authority in instances where users need a collaboration rather than a binarychoice. The MagicPen’s capabilities support this.4.3.4 FrameworksSteimle et al.’s framework of non-sketching pen-and-paper interactions separatesconceptual activities (annotating, linking, tagging) from core interactions (inking,clicking, combining, associating) [211]. While its domain differs, we are inspiredby its approach in our own support framework.I-Draw presents an initial framework for passive guidance, of interaction primi-tives allocated between physical (guided and freehand drawing) and digital (digitalmanipulation) spaces [59]. We re-organize and extend it with the capacities affordedby active haptic guidance.524.4 Phasking FrameworkThe added capabilities of a fully force-performant but free-roam, screen-free devicehas several implications. First, the availability of active, omnidirectional forcesin a handheld format permit fundamental changes in interaction, notably activeguidance and control sharing. This physical support can work in both graphical andscreen-free contexts, widening scope and altering how digital-physical transitionscan occur. Finally, the active force’s scalability means that constraints and guidancecan be modulated, from hard to soft. 
Together, these necessitate a deep revision and extension beyond past conceptual framings (e.g., [59, 211]).

Like [211], our framework articulates conceptual activities that users need to do, elaborated in (I) below, leading to core interactions that support them (II): bound- and bring-type constraints and variable control authority (constraint hardness). Figure 4.2 shows how bound/bring constraints interact with shared control in the phasking framework.

4.4.1 I. Conceptual Activities
We articulate the foundational activities which our framework needed to support, as elements that mediate a dialogue between user and system, in Table 4.1 – a potentially extendable list. These emerged from our observation of users' expectations formed through interacting with conventional tools, as well as consideration of the basic operations of freehand paper drawing, CAD and virtual environments.

Table 4.1: Core conceptual activities of the phasking framework.

Free-draw marks – Create marks manually and at will, optionally within user-set and system-maintained boundaries.
Create objects – Form basic shapes on command (e.g., parametrically specified). Produced objects may conform perfectly to the digital guidance, or the user can overcome guidance to construct personalized or expressive variations.
Place & arrange elements – Receive assistance as to where, how large and at what angle; e.g., perspective drawing, or sizing different regions of a multi-part sketch.
Interact with active constraints – Set up constraints (e.g., attractive nodes, or lines and curves to push/pull against) for modulated creative control in variably-guided drawing.

4.4.2 II. Core Interaction Concepts
These concepts demonstrate how phasking's key conceptual activities are supported. We use a constraint-based virtual environment (VE) which a user constructs then sketches within, with tools created by drawing a palette on the paper's margin.

(a) Constraints – Bounding and Bringing
Constraints can be expressed as a gain on an error function (of position, velocity or other parameters): u = K(x_des − x_act). With active force assistance, phasking constraints can passively bound, or actively bring (Figure 4.3). They do both to varying degrees (Control Sharing concept, below).

Figure 4.3: The framework's (a) bound and (b) bring constraints, and (c) the concept of control sharing, where the user can diverge from a guiding line by bearing down on the pen.

Bound: Movement is free up to the boundary, then constrained. A binary boundary (no shared control) could be implemented with a passive force device, e.g., a brake, because it just prevents the user from going somewhere. Examples of bound constraints include one-sided walls, and path constraints which the user can traverse at will: the constraint blocks path departure, but allows free movement along it.

Bring: A force field draws the user in a particular direction or rate, and requires an active force-feedback device; it always entails active guidance.³ Examples of bring constraints include point magnetic attraction or repulsion (snap); and spatiotemporal and temporal trajectories, in which the user is guided to traverse a path in time and space respectively.

³ Bring contrasts with what is called guidance (but is passive) in some related work (e.g., I-Draw), in which a passive mechanism such as a Cobot [40] or brake restricts movement in some direction.

As with a ruler, passive VE elements are only felt when the drawing tool touches them. Active elements, bound or bring, can be felt at a distance, as a force field.
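To ground the two constraint types, the sketch below (our illustration only – the function names and gains are not the thesis implementation, which also handles stability more carefully) computes the restoring force u = K(x_des − x_act) for a one-sided bound wall and a bring attraction node, plus the pressure-scaled gain that the control-sharing concept in the next subsection relies on:

import numpy as np

def bound_wall_force(pos, wall_y, k=1.0):
    # One-sided bound: free on one side of the wall, spring push-back on penetration.
    penetration = wall_y - pos[1]
    return np.array([0.0, k * penetration]) if penetration > 0 else np.zeros(2)

def bring_node_force(pos, node, k=1.0):
    # Bring: an attraction field that pulls the pen toward a node, felt at a distance.
    return k * (np.asarray(node, dtype=float) - np.asarray(pos, dtype=float))

def shared_gain(k, pressure, p_max=255):
    # Control sharing: bearing down (higher Neo pen-tip pressure, 0-255) softens the
    # constraint so the user takes over; relaxing lets the system drive.
    return k * (1.0 - pressure / p_max)

Under this sketch, a hard ruler is simply a large k, while a "suggestion" is the same constraint with its gain scaled down by shared_gain.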
(b) Control Sharing
In phasking, what is shared is control authority ("who gets to drive"), under continuous user control (e.g., the pressure with which the drawing tool is squeezed or pushed into the drawing surface). The constraint can vary between absolute and soft – a suggestion or a jumping-off place, e.g., if one wishes to draw a wiggly line along a path (Figure 4.3c). This scaling is available for both bound and bring constraints.

Control sharing can be implemented simply by changing the control gain on the error between pen and constraint. A wall can be softened, and an actively guided geometry (such as an oval) can be sketched into something more expressive and detailed, like a face. More complex implementations are available to address system stability issues [3, 63].

(c) Tool Selection
Like other digitally assisted drawing systems, phasking is modal. We deliver the function of tool selection with a paper tool palette with hand-drawn (and extemporaneously creatable) icons that the pen's vision system can recognize (see System). Because interactions are brief, tool selection also supplies modal awareness, together with the device's physical response.

(d) Constructing Constraint Environments
In free-hand drawing, a user sets a passive boundary with a tool, like placing a physical ruler to help draw a straight line or setting a CAD drawing plane.

Phasking constructs and tracks constraints through a virtual environment: the user draws the environment within which they then operate. For example, the user places a virtual bound (e.g., a line, circle or channel) or an attraction point, for a bound or bring constraint respectively. The VE is portrayed to the user through both the visible marks on the drawing medium and what they feel.

Constraints support expressive drawing, and force feedback has been used for this [207]. But when working screen-free, constraint creation is required to access assistance. Here, our VE is a basic functional implementation, but the approach opens other design spaces as well.

Figure 4.4: The MagicPen: an untethered ballpoint drive produces forces by driving a contact ball over a surface. Friction between the ball and surface prevents slip, and provides a "ground" back to the user's hand. (Labelled components: localization / contact switch, orientation sensing, absolute position sensing, pressure sensing, ballpoint drive, batteries, motor drivers, processing core.)

Chapter 3 presents the MagicPen hardware, sensing and paper-based operation in detail; here, we overview the full system for the reader's benefit, but focus on novel or modified elements.

4.4.3 MagicPen Mechatronics
We implemented the MagicPen's mechatronics, system architecture and controls, and phasking primitives to assess the concept's feasibility and usefulness. The device reported here significantly extends a previous basic demonstration of the ballpoint drive mechanism as a 2D force-feedback display (Figure 3.1a), as required for this application. The detailed view of the system is presented in Figure 4.4.

Mechanism: Untethered 2D Forces Via Ballpoint Drive
In the ballpoint drive [113], pairs of opposing motors drive a surface-contact ball to create directional force feedback, generated between the ball and an arbitrary two-dimensional surface (Figure 3.2). We completely re-engineered the ballpoint drive mechanism to improve backdrivability (freedom of unpowered motion), reducing passive impedance by 51%.
We also reworked the gear drive to achieve higher force without slip, significantly reducing vibration and skidding. We customized the drive train with low-cost commodity micro gears, motors and a surface-contact ball. To achieve the required backdrivability and power transfer, we iterated component configuration, size and material properties (Figure 4.5).

Figure 4.5: MagicPen ballpoint drive design iterations. From left (earliest): (a) Early version with a plastic gear between motors and rollers, and a rubber connecting ball between roller and surface contact ball. (b) Pulley-belt connection between motors and rollers, and clock gears with micro cogs between roller and the contact ball. (c) Metal gears between motors and roller, for a lower gear ratio of 1:1. (d) Similar mechanism with a customized higher gear ratio of 4:1.

Gear drive design: Ensuring non-slip coupling between motors and the surface contact ball can add impedance to the drive train, which then degrades rolling freeness when the motors are not actuated. We found a solution by matching contact-ball material properties with cog size. A 1-inch diameter rubber ball with a tensile strength of 144 MPa, with metal gears with a diametral pitch of 187 teeth/inch [53], gave the best results. The contact ball is sandwiched between the metal gears; the gears' teeth penetrate the contact ball's rubber body just the right amount, giving the contact ball enough room for rotation while creating non-slip contacts with the gears.

Drive motor coordination: We can achieve optimal control over the surface contact ball with four motors (as opposed to two drivers and two passive castors, or a 3-point contact, which cannot reach the same control space). To generate a rolling movement, each opposing motor pair works together to apply a balanced torque to the ball. While it is possible to electrically connect the opposing pairs, we found that for fast movements when the motors are not in active mode, paired motors work as generators and produce back-EMF voltages which pass through the opposing pair. To avoid the consequent resistance, we drove each motor separately.

Computing processor, motor drive and communications: The primary processing unit (Raspberry Pi Zero W, or RPi) takes sensor data, updates a state model, then computes proportional-derivative (PD) control commands and sends them to two Pololu DRV8835 dual-motor driver carriers (one per axis).

We sample micro-trackball velocity, integrated to get position, at 5 kHz (RPi external interrupts), and the BNO055 for orientation sensing at 100 Hz. The Neo Smartpen sends its data (x, y position and pen-tip pressure) to the RPi controller via Bluetooth Low Energy at 100 Hz. We built a custom Linux driver for the Neo's BLE protocol to reduce latency and enable custom features (e.g., on-demand beeping).

State model and closed-loop control: A 1 kHz control loop checks for a command, samples internal position and orientation, receives x, y and pressure data from the Neo (100 Hz), and optionally sends it to the monitor. Control then branches (Table 4.2):

Table 4.2: Control steps.

Free mode – No command is running; wait for the next iteration.
Command mode – If a new command is registered, collect its parameters (position taps). Then:
  . Update the command target reference.
  . Adjust the absolute contact position estimate based on pen orientation, with internal position change.
  . Compute the motor command using a PD controller on the error signal.
  . Output the motor command to the motors via PWM.
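A minimal sketch of one iteration of this loop follows. It is illustrative only: the helper callables, state fields and gains are our own stand-ins rather than the thesis firmware, the tilt correction is simplified to the cos factors of Eqs. 3.3-3.4, and the sensor fusion is reduced to snapping to the Neo's absolute fix when one is available.

import numpy as np
from dataclasses import dataclass, field

@dataclass
class PenState:
    position: np.ndarray = field(default_factory=lambda: np.zeros(2))   # contact-point estimate (mm)
    prev_error: np.ndarray = field(default_factory=lambda: np.zeros(2))
    target: np.ndarray | None = None    # set while a palette command is active; None = free mode

def control_step(state, trackball_dxdy, rpy, neo_xy, set_pwm, kp=4.0, kd=0.05, dt=0.001):
    """One 1 kHz iteration of the Table 4.2 loop: fuse sensing, then free vs. command mode."""
    roll, pitch, yaw = rpy               # roll correction (Eq. 3.2) omitted for brevity
    dx, dy = trackball_dxdy
    # Dead-reckon from internal ball motion, shrunk by the tilt-dependent contact circle.
    state.position = state.position + np.array([dx * np.cos(yaw), dy * np.cos(pitch)])
    if neo_xy is not None:               # fresh absolute fix from the Neo pen (100 Hz)
        state.position = np.asarray(neo_xy, dtype=float)

    if state.target is None:             # free mode: no actuation
        set_pwm(0.0, 0.0)
        return
    # Command mode: PD control on the error to the current target reference.
    error = state.target - state.position
    u = kp * error + kd * (error - state.prev_error) / dt
    state.prev_error = error
    set_pwm(float(u[0]), float(u[1]))     # per-axis duty commands to the two motor drivers

In practice the loop would also clip u to the drivers' duty-cycle range and scale the gain for control sharing, as described in Section 4.4.2.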
4.4.4 Implemented Drawing Support Features
2D CAD software implementation: Our new untethered design required a custom, lightweight CAD platform for the RPi, and custom low-latency hardware drivers for the ballpoint drive and digital pen.

To implement the MagicPen's drawing functions, we developed a simple 2D CAD software system using Python and the PyQt4 library (a Python interface for Qt, a popular cross-platform graphics library) [139], which runs on the pen's RPi controller. With this software, the user can either free-draw, or access primitive CAD functions by clicking on a paper tool palette (described below).

The CAD software also implements a graphical user interface view (referred to as the GUI monitor), which can be displayed on a screen connected by HDMI cable to the onboard RPi for debugging. The GUI monitor view shows real-time updates of the user's drawing, and provides additional functions such as saving the digitized drawing locally and changing the color and thickness of the pen.

(a) Bound/Bring Constraints for 2D Geometry Construction: We implemented eight phasking primitives (Table 4.3): seven to construct basic shapes or constraints, and a perspective function for use with the other primitives. Each uses bound/bring constraints, with force guidance modulable through control sharing (below). Jointly, these primitives implement all of the core conceptual activities of Table 4.1.

(b) Sharing Control Authority: When drawing, the user can start with one of these primitives and deviate from the pen's guide by applying a small force, to sketch more complex shapes (Figure 4.10).

(c) Tool Switching: Paper-based tool palette: Phasking requires extendable access to the drawing primitives. We used a paper palette as a simple physical access point; the user can draw tools on the sheet, selects a command by tapping on a box, then taps on the drawing to define parameters (Table 4.3). Figure 4.6 shows the watermark-paper implementation. The tool palette has an added advantage of logging user commands, as one route to saving geometry, e.g., for copy/paste on paper or a screen-based reconstruction.

Table 4.3: Phasking primitive descriptions. Each operation begins by touching the corresponding icon on the tool palette.

Line, Ruler – (1) Touch end point. (2) Touch starting point. (3) Ruler: draw along an invisible barrier between the two points; Line: MagicPen brings to the endpoint.
Triangle, Arc – (1-3) Touch three points to define the Triangle or Arc. MagicPen brings across the triangle edges, or along the arc.
Circle, Rectangle – (1) Touch center. (2-3) Touch radius (circle) or top-left corner (rectangle). MagicPen brings to the endpoint.
Bezier spline – (1-4) Touch at least four control points; a cubic (4-point) Bezier curve is defined. (5) Touch the curve. MagicPen commands motor velocity according to the tangent line to the curve at each point.
Perspective function – (1) Touch the vanishing point. (2-n) Define any geometry, e.g., a Rectangle (center, corner). MagicPen draws the object (e.g., the Rectangle) in perspective.
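As a concrete illustration of how such a primitive can feed the control loop of Table 4.2, the sketch below (hypothetical; the function name, point spacing and sequencing are our assumptions, not the thesis code) turns the Circle primitive's two taps into a stream of bring-constraint targets:

import numpy as np

def circle_targets(center, rim_point, step_mm=0.5):
    """Yield successive target points along a circular bring constraint defined
    by two taps: the centre and one point on the circle (Table 4.3)."""
    center = np.asarray(center, dtype=float)
    rim = np.asarray(rim_point, dtype=float)
    radius = float(np.linalg.norm(rim - center))
    start = np.arctan2(rim[1] - center[1], rim[0] - center[0])
    n = max(int(2 * np.pi * radius / step_mm), 8)   # roughly step_mm spacing along the path
    for k in range(n + 1):
        a = start + 2 * np.pi * k / n
        yield center + radius * np.array([np.cos(a), np.sin(a)])

Each yielded point would become the loop's target reference in turn, pulling the pen around the circle; pen-tip pressure then scales how strongly that pull is enforced (control sharing).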
4.5 Evaluation
Our evaluation objectives focused on conceptual viability: we needed to know whether the ballpoint drive approach could perform well enough to support phasking operations, and to get insight into users' experience of phasking.

We tested force-feedback standards of force and position control and disturbance rejection. These address whether the novel drive can localize itself with its dual sensing system even as the user's hand rotates the pen's axis (pitch and yaw), follow a commanded path, provide a usefully large commanded impedance, and reject disturbances, all fast and smoothly enough for at least moderate-paced drawing. We sought performance that would let us try phasking out, an assessment to be rendered in part by our user study. These tests also set a benchmark for comparison with future progress.

4.5.1 Performance Characterization
Prior to involving participants, we evaluated the MagicPen's mechanical and control performance.

Test 1 – Force generation: We measured passive and active 2D forces with a BOSE ElectroForce TestBench®, which held the MagicPen with two arms (Figure 4.7a).

Passive force, step response: Low mechanical impedance is crucial for control-sharing interactions. We recorded the MagicPen's resistive force while unpowered, while one BOSE arm imposed an 8 mm sine-wave position displacement on the MagicPen at 0.5 Hz for 5 periods. We found an impedance of 37.5 Ns/m, computed by dividing the recorded mechanical resistance force by the speed. This is approximately half that measured for the previously reported version of the ballpoint drive (77 Ns/m) [113].

Active force, step response: We measured the force the ballpoint drive produced in response to pulsed drive input (~0.2 Hz), while held isometrically between two measuring load cells, one on each arm (Figure 4.7b). The pen generated up to 1 N in continuous force (close to peak output) without slip between contact ball and gears, after which the drive gears slipped and the ball started spinning.

Active force, sine response: As seen in Figure 4.7c, generated force closely followed a continuous sine pulse-width-modulated (PWM) voltage input. There was minor nonlinearity at higher voltages, as excitation approached motor saturation.

Figure 4.6: Paper tool palette, which can be hand-drawn and customized, or printed on a full sheet or slip of paper.

Figure 4.7: Force generation performance. (a) BOSE test setup. (b) Maximum output force response to PWM voltage pulses. (c) Force-tracking response to a slow sinusoid of PWM excitation.

Figure 4.8: Position and disturbance tracking. (a) Test setup. The MagicPen's shaft is held in a vice, with roll and pitch disturbances applied at the top of the shaft and through the contact ball, respectively. The onboard processor controls the contact ball trajectory using orientation (roll/pitch/yaw) and internal ball motion. An external trackball measures ball movement relative to a global reference; to assess performance, we compare error between commanded and externally measured trajectories. (b) Test 3 (Disturbance Rejection) results show the system's response to rapid yaw (0.5 Hz, 35° peak-to-peak over 22 seconds), mimicking significant wrist rotation.

Test 2 – Position control: We required the MagicPen to follow a sine-wave trajectory using only the internal micro-trackball (relative position sensing), measured with the setup of Figure 4.8. This test demonstrates the ballpoint drive's capacity to achieve agile, omnidirectional control in the absence of disturbances. We chose a sine-wave target to capture a full range of movement in a 2D plane, and used a steady roll offset of 10°. This means that for the surface contact ball to, e.g., move along an x-trajectory, it would need to adjust its x,y motor commands using orientation data rather than simply turn the x-motor on and the y-motor off. Here, the task is to follow an x-y sinusoid. Due to the low impedance of the second trackball, the contact ball rolls at full speed (20 mm/s), which slightly reduces the accuracy of the controller.

The MagicPen followed a 4.0 cm peak-to-peak sine trajectory with 20% initial error and 40.0 cm path length, using only the trackball (relative position sensing) corrected by orientation data for position control, with an error (mean squared distance to reference) of 6.78 mm (std 4.88 mm), or 1.4%.
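The tracking-error summary above could be computed from logged commanded and measured trajectories roughly as follows (an assumption on our part – the thesis does not list its analysis code, and whether the statistic is a mean or mean-squared distance is ambiguous in the text):

import numpy as np

def path_error(measured, reference, path_length_mm):
    # measured, reference: N x 2 arrays of time-aligned (x, y) samples in mm.
    d = np.linalg.norm(np.asarray(measured) - np.asarray(reference), axis=1)
    return d.mean(), d.std(), 100.0 * d.mean() / path_length_mm   # mm, mm, % of path

The same summary applies to the disturbance-rejection test reported next.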
Test 3 – Disturbance rejection: The controller needs to compensate when the stylus is twisted by movements of the user's hand, adhering to a straight x trajectory with no y deviation. To do this, it continually changes motor velocity to maintain contact-ball motion in the x direction alone.

Figure 4.8b shows the system following a line as we sweep through roll angles of 3-38° (a range observed for handheld usage), with a mean square error of 6.89 mm (std 2.29 mm), or 1.3%.

Introducing a pitch angle of 0-25° can reduce the speed in the x direction by up to 10%, as the surface contact ball rolls on a smaller circle. This can be compensated either by multiplying by a cos(θ) coefficient [113], or by using absolute position sensing.

Test 4 – Sensor fusion: By using the absolute position sensing from the Neo pen, the controller can compensate for the offset and achieve higher precision. Figure 4.9 shows (left) the performance of the device in drawing a circle with a hard bring constraint, while a user gently holds the MagicPen; and (right) with control sharing activated, while the user authors creative modifications to the basis circle.

Figure 4.9: Human-aided circle following. (Left) System control: with control-sharing off, we fuse absolute and local position sensing to guide user P1 in drawing a circle; the deviation is a result of the pen rotating in P1's hand. (Right) With control-sharing on, a user violates a bring circle guide to draw a bear's face by pressing down on the pen. The color scale indicates applied pressure (yellow at 100% user control, blue when the user has relaxed and is letting the system drive).

4.5.2 User Evaluation
We conducted two user studies: (a) a performance assessment with novices (N=7), to assess the usability and impact of core phasking interactions on typical users – we measured users' deviation from a predefined trajectory (error) and investigated their pressure profiles to better understand the shared control (SC) concept in practice; and (b) an interview evaluation with domain experts (N=3, architects who sketch in their professional work), to assess value, fit to needs and potential for an enriching user experience, based on a functional conceptual prototype.

We identified domain experts as those who had professional training in hand sketching and routinely use hand drawings in their day-to-day work. There was only one exception among our novice drawers, who had taken a preliminary drawing course around six years earlier but had not practiced drawing since.

Procedure, both studies: We collected profile information on how participants used sketching in their work, and their personal attitudes towards it. After a system familiarization session, we asked participants to perform several tasks (Table 4.4) with the Neo Smartpen on its own, and with the MagicPen. Neo/MagicPen order was counterbalanced by task and participant.
Tasks were performed in the same order for each participant, as a progression in complexity (see Appendix B for the user evaluation's results).

After the tasks, all participants were asked to complete a Likert scale (1:7) on: (1) How likely are you to use this system again in the future?; (2) Would you use this feature again (assuming a more refined version of the tool)? – asked for each of Line (Bring), Line (Ruler), Rectangle, and Shared Control; and (3) The movement speed and force is appropriate for me.

Table 4.4: Evaluation tasks, by execution and complexity.

Bring constraints – [1] Draw a straight line. [2] Draw a rectangle. [3] Draw a rectangle in perspective (novices). [4] Draw a circle (novices).
Bound constraints – [5] Draw diagonal lines meeting an invisible barrier (coarse cross-hatch on a line).
Shared control – [6] Draw a sine wave as MagicPen guides along a straight line (pulling towards the guide). [7] Draw a sine wave as MagicPen sets an invisible line barrier at the center of the sine wave (resisting their crossing of it).

This general procedure was the entirety of the novice evaluation (30 minutes/session). Seven participants (aged 21-30, 4 female) had backgrounds in Computer Science and Forestry.

Experts: Procedure for Qualitative Interview: Following the task-based interaction period, we carried out a 30-minute semi-structured interview with our domain experts, covering task experience, relevance to their professional work, and potential impact. A complete session for experts took 60 minutes.

Results
Quantitative Likert Responses (N=10): In total, participants performed 204 trials. Figure 4.10 shows the responses of all participants (novices and experts) to the survey questions. The Line function (ruler, or block) was the most popular feature, followed by SC, with Bring close behind and a strong interest in adoption. Rectangle was the only feature to receive any responses below neutral.

Figure 4.10: Users' Likert-scale responses for the MagicPen (N=10, 7 novices and 3 experts; 7 is positive).

Table 4.5 shows task precision of manual drawing (Neo pen unmounted and used as a normal pen) vs. phasking. Straight lines cover the basics; the circle shows the system's ability to generate force feedback over a full 0-360 degree range, and rectangle+perspective a more complex task and guidance. Our data show shared control reducing error, with both bound and bring constraints.

Table 4.5: Precision of manual drawing vs. phasking. SC = shared control; Neo = Neo pen alone (manual: single point contact, no guidance).

Task                 System   Mean / std abs error, mm
Line                 Neo      1.56 / 0.43, N=10
Line (Bring)         SC       1.38 / 0.33, N=10
Line (Bound)         SC       1.20 / 0.30, N=10
Circle               Neo      5.39 / 5.34, N=7
Circle (Bring)       SC       5.20 / 5.65, N=7
Rect                 Neo      5.91 / 3.87, N=7
Rect (Bring)         SC       3.38 / 3.21, N=7
Rect+persp           Neo      9.21 / 6.86, N=7
Rect+persp (Bring)   SC       2.22 / 3.21, N=7

Experts' Profiles and practice: The three participants (2 female, all right-handed) had practiced in the area of landscape architecture for periods ranging from 3-12 years; E1 in outdoor, E2 in urban and E3 in residential design. None had prior haptics experience. All confirmed that their process started with hand-drawn conceptual sketches, an important aspect of both ideation and client and collaborator communication, including in public meetings (E2).
While estimating hand drawing as10-20% of their entire process, after which continuity and precision required CAD,they wished for more:My personal preference is hand drawing and sketching but I also likeand appreciate the precision and the tools that other CAD basicallyprovides you. [E1]It’s not about preference. It’s about the tools that we currently have.[E2]E3 additionally mentioned the importance of color in hand-drawn work, for impactin public communication.Basic task performance, quality and experience: Participants rated appropriatenessof speed and force at [5.5,7,5] on a 7-point Likert scale; E1 requested more flexibilityin speed and force control. They could feel and understand the feedback, and controlit to degrees estimated at 60-100%; E2 (60%) described “an invisible barrier thatsort of holds you ... keeps your line nice and tidy”. Some noted initial awkwardness,with greater comfort by the end of the tasks, and that some CAD functions hadunfamiliar steps (e.g., sequence of marking).They found its use generally intuitive; E1 liked the ability to construct a perfectlystraight line, and E3 noted that “Line [ruler] part was really interesting because itreally helped me to draw a straight line and it was the most interesting ... aspect ofusing it.” [E3]All were enthusiastic of SC’s value and intuitiveness: “an amazing sort oftransition between a hundred percent computer drafting and hand drawing” [E1];“in a way the device starts reacting smartly to what I’m intending to do. ” [E2]. E3sometimes pressed too hard, then found SC less controllable. Precision (E1) andbulk (all) were identified as primary issues.Relevance to professional work: All valued the potential to cycle between paper anddigital work, in contrast to their present one-way transition. Of features evaluated(geometry, barriers, SC), all identified SC as most useful. Among specific widgets,they preferred Line (Likert responses [6.5,6.5,6]), with more mixed but still generallypositive reception for Rectangle [5.5,3,6] and Ruler [4,5,7]. For screen-free andlarge-surface potential, E1 noted the difficulty of maintaining control on large68surfaces (where MagicPen could help); E2 mentioned value in a public engagementprocess, and communicating extemporaneously with an audience. E3 wanted aphasking ruler for section and building elevations, now done by hand.Overall impact and interest in adoption: Experts responded to the adoption questionwith [6,7,6]. Presuming a slimmed-down and more precise device, participants werepositive on productivity (e.g., by integrating paper sketching later in process). E3indicated great interest in precise technical sketching, rather than conceptual workwhere roughness was fine, and liked the efficiency – “you put away the ruler andthen you have two things in one (pen and ruler)”. E1 predicted value in education,noting 10 years of training with constant practice.4.6 DiscussionWe have presented and implemented phasking, a form of computer-assisted drawingthat brings a virtual environment constraint system to pen and paper, and allowedusers to access a continuum of assistance (type and hardness of constraints) via fluidsharing of control authority.We created this framework out of the varied purposes that people bring tosketching. Phasking requires active force feedback, because it entails active andpassive constraints, and user-controlled gradation in, e.g., a restoring force uponviolating a helpful constraint. Paper sketching requires an implement that canoperate screen-free. 
For large, free movements, the device needs to be free-roaming(untethered).In creating the Phasking pen, a major extension of a previous ballpoint drivedevice, we focused on strength and backdrivability, attributes difficult to jointlyoptimize but crucial for feature rendering and unimpeded movement. Objective per-formance metrics provide a benchmark for future improvements. We implementedan essential set of framework primitives and a paper-based tool palette to accessthem, with which users can carry out a complete drawing task on paper.We shared phasking with novice and professional sketchers. They could feeland understand the forces, and found strength and speed adequate while wantingmore precision. They also told us of a strong desire to be able to hand-sketch more,which requires integration throughout their process, not just at the start. Because ofthe volume of their use, they valued physical supports for productivity and prized69fluid control sharing; and suggested that geometry construction would have beenvaluable when they were learning to sketch.Limitations: Rotation in the drawing direction (pitch) is constrained by two-pointcontact, addressable by delivering ink via the ball itself. Our evaluation revealednotable individual differences, particularly in magnitude and smoothness of forceprofile, signalling a need for training. While new users learn to deploy pressureto optimize SC use, graphical or auditory feedback will be valuable. A screen astraining-wheels could also assist with learning CAD features. Finally, a nonlinearrelationship between pressure and control share might work better, a topic of futurework.4.7 Conclusions and Next StepsDespite some usability issues, which are an inherent part of early prototypes, wewere able to assess phasking’s potential and collect feedback points to key improve-ments.With a full interactive experience in place, we have proved possible many otherinteresting functions within this framework, including modifying elements (e.g.,resize, rotate, amend) as well as copy, paste, undo; identifying free-drawn marks asparametric objects; combining objects into a virtual construct, and even simulatingdynamic virtual systems. It is a small straightforward step to full digital twinning:modify it onscreen then bring it back to paper with guided tracing.Phasking is too different from other digital tools to know its full potential. Beinguntethered, portable and self-contained, MagicPen can, with attainable modifica-tions, be used on arbitrary surfaces. This could lead to a new way of ‘drawing ona napkin’, support blind mobility by revealing maps on a corridor wall, and allowdrawing and playing with simulations on a whiteboard – an ‘object to think with’[174, 190].70Chapter 5Benefits of Force Feedback forCollaborative GroundingPreface – Whether touch sensory feedback can enhance a learning outcome isdifficult to ascertain; long-studied, there is no conclusive result to date. Manyresearchers use theoretical and empirical pieces of evidences to support theirresearch hypothesis. From the theoretical standpoint, embodied cognition and touchas an additional sensory input are the most popular theories to justify the benefitsof physicality and haptics in learning. On the other hand, the empirical researchwas often devised to seek significant differences between the learning gain of acontrolled group without haptics vs. a group with haptic feedback. 
In this chapterwe discuss the findings and obstacles identified in previous work and offer a newlens to study the benefits of haptics in collaborative learning by focusing on thestrategies that learners take to achieve mutual understanding and reflect on thelearner’s haptic experience model1. We used the stylus form factor (see Figure3.1(a)) in this study. The chapter’s findings have potential benefits in two fields ofresearch, namely educational haptics and collaborative learning. Further, we usethis chapter to ground the Explore process in Chapter 6.1The approval for this study was obtained from the University of British Columbia BehaviouralResearch Ethics Board, approval ID (H14-01763).715.1 OverviewWe investigate how the sense of touch can support the development of mutualunderstanding (grounding) between collaborating learners via dialectical discourse:a conversation in which differing points of view move to consensus through reason,here supported by evidence collected with a shared force-feedback environment.Using collaborative learning as a lens, we invited dyads to physically explorevirtualized phenomenon related to physics concepts, and examined their strategiesfor using (a) force cues and (b) mutual understanding in learning tasks. We comparedtheir behavior across three diverse low-cost haptic platforms, each paired with alearning environment and activity that exploited its best attributes.In Study 1 (n=8), we confirmed the occurrence of haptically mediated ground-ing and identified patterns between grounding acts, haptic gestures and learnerintentions. In Study 2 (n=24) we assessed collaboration dimensions through an ob-jective lens focused on haptic critical instances (hCIs). Qualitative and quantitativeevidence from these exploratory studies suggest correlations between collaborationdimensions, learning gain and haptic mediation; and promisingly, a statisticallysignificant relationship between the number of hCIs and learning gain in two envi-ronments. We close with a set of design considerations derived from our thematicanalysis.5.2 IntroductionFrom early childhood, haptic communication is one of the most intuitive ways thatwe perceive and interact with our environment, and communicate our experiencesand feelings to one another. Haptic information can contribute to rhetorical dis-course, in which participants use subjective interpretations of a cue. It can also beused for dialectical discourse, by providing partners with factual information fromwhich they can build an objective conversation.Recently, considerable attention has focused on social haptic touch, both directand technically mediated. Socially, haptic technologies are often used to conveyinterpersonal information such as emotion or reassurance [95, 216, 232], from afriendly pat on the arm [17, 27] to deeper affective communication [196, 200].When we consider touch and person-person collaboration more broadly, we72Figure 5.1: The Blind Men and the Elephant. People who have never seen anelephant try to conceptualize it via touch alone, either joining or fracturedby their different perspectives [101].see many cases where functional haptic information can be used to support usersin jointly carrying out many kinds of tasks. 
While purposes such as coordina-tion assistance are most common [31], in this work we are interested in how oursense of touch can engage useful collaboration as individuals attempt to perceivecharacteristics and properties of an object or environment [92, 131].The theory of collaborative grounding offers a base from which to regard amodality’s potentiality in the development of joint activity where both contentand the process of expressing it matter. Grounding refers to a process by whichindividuals maintain and/or develop some degree of mutual understanding, effec-tively finding joint “common ground” from which they can further communicateor ideation [13]. A key marker of collaboration, the process of grounding has beenmodeled with stages of monitoring, diagnosis and repair [48] (Table 5.3).Recognition of how touch can figure into collaborative grounding is ancient,73traced through Buddhist and Hindu texts to before 1500 BCE [226]. Figure 5.1illustrates the parable of a group of blind people physically appraising an unfamiliarcreature. Each person must express their own perception to help the group concep-tualize the elephant [74] (grounding success) – or, as the story is often told, theyare unable to resolve their apparently conflicting perceptions (a grounding failure)and confusion ensues. Despite this plus a large body of research on grounding viamediating visual and audio systems, touch is typically overlooked [209].Our goal in this paper is to study the challenges and the design space of usinghaptics to attain collaborative grounding through the physical sharing of objectiveexperience. We focus here on functional haptics through force feedback: the simu-lation of physical attributes of a virtual object. Force feedback devices can conveyinformation such as weight, velocity, collisions, vibrations and attraction/repulsion,through user movements known as exploratory procedures [131]. In comparisonwith direct interactions with physical systems, sensory display of these physicalparameters are often absent from graphical and auditory media, and from virtualsimulations of phenomena that are distant, large or small, or abstract.Haptics does bring technical challenges, with accessibility forefront. Whilevisual and auditory media require just a graphical screen or ever lower-cost virtualreality headsets, inexpensive yet high quality force-feedback displays are still rare.Some studies have used high quality haptic displays to assess the potential of hapticfeedback on learning [83, 156, 157] with encouraging, if narrowly focused, results;but high cost puts them out of reach for many contexts and for at-scale studies.‘Do-It-Yourself’ (DIY) haptic movement [144] suggests that more affordabledevices [67, 149] might be feasible for educational purposes, and studies have begunto assess their benefits in different learning activities [44, 155, 201].As with any collaboration or learning technology, results can be difficult toobjectively assess due to the many individual and situational factors in play. Hence,our approach at this stage is to focus on exposing and analyzing collaborationstrategies in response to different conditions and opportunities, rather than statisticaloutcomes.We thereby define two guiding research questions:1. RQ1: How can force feedback affect the process of grounding in collaborativelearning environments?742. 
RQ2: What haptically enabled strategies do learners use to create mutual understanding?

5.2.1 Approach and Contributions

While some studies have examined the use of haptics to assist embodied cognition or as an additional information channel, we are not aware of any that target the use of force cues in a grounding process. We set out to fill this void.

Context: We chose collaborative learning as our lens for investigating haptic grounding: specifically, comparing how a range of forms of low-cost haptic force feedback can support grounding as two individuals work jointly to understand a concept. Collaborative grounding is recognized as an insufficiently met need in learning technology [13], and we see its lack in recent experiences with widespread online learning. We posit that appropriate force feedback can positively alter the course of grounding with young learners.

We observe this by analysing how learners use a haptic tool to convince themselves and a partner of an idea, wherein they must provide evidence and consolidate arguments. Rather than statistical evidence of changes in topical understanding, we explore the evolution of learners’ mutual understanding of the topic and of one another’s beliefs.

This framing exposes the strategies and purposes by which learners share information with a partner via functional haptics. The resulting insights into haptic impact on collaborative strategies can also inform single-user learning environments, and help designers recognize opportunities to provide learners with new information and invite them to perform a particular activity or reflect on outcomes.

Platforms: Haptic displays differ in mechanical capabilities and fidelity of haptic rendering [201]; haptic feedback will play different roles depending on how the application deploys it [112, 147]. For a broad perspective on haptic features key to collaborative learning, and to reduce the risk of a single device’s limitations masking our findings, we utilized three different devices (Figure 5.2). For each, we designed a unique learning environment which leverages that device’s special capabilities.

Two Studies: Our first study used one device, the MagicPen, to confirm that learners were using haptic information during the process of achieving mutual understanding, as well as how and why (n=8; 4 dyads). Study 2 utilized three device/environment combinations (n=24; 12 dyads). We analyzed learners’ open dialogues around haptic critical instances, based on established dimensions of collaboration [151]. Through pre- and post-tests we looked at early trends in learners’ knowledge changes, and linkages to collaboration patterns. Through both studies, we identified conditions that encourage a pivot to haptic use for grounding, and observed strategies by which learners used haptics in the different environments.

Figure 5.2: Three educational haptic devices used in this work. From left to right: Cellulo [171], Haply [67], and MagicPen [113].
We organized findings into fourthematic categories.We contribute, encapsulated as reflections and design implications in our Discus-sion:• Insights into the utility potential of force feedback in collaborative grounding,as enabled by device capability;• Identification of strategies whereby learners were effective in using forcefeedback for grounding as well as knowledge confirmation and learning;• Articulation of the layered roles that users tend to assign to the haptic modal-ity as they collaborate, from communication mediator to exploration, designand learning tool.765.3 Related Work5.3.1 Educational HapticsEducational haptic platforms exploit both visual and haptic cues to create interactiveenvironments that teach abstract topics in physics, math, and other fields of science.They provide the tangibility of digital manipulatives, and can combine force feed-back with visual cues to create a more compelling experience in STEM (Science,Technology, Engineering, and Mathematics) learning scenarios. Previous workhas demonstrated the benefits of physical manipulatives in education, particularlyscience education. While some studies suggest physical manipulatives directlyimpact learning, resulting in higher test scores when students use these manipula-tives [199], other studies highlight indirect factors that can lead to learning gainssuch as increased collaboration [234] or new possibilities for classroom orchestra-tion offering new possibilities to teachers to better orchestrate their classroom, e.g.,remote teaching and learning [51].Haptic devices, like other physical manipulatives, add physicality to abstractconcepts [44]. As a result, in the learning environments, learners can reasonabout the movements and behaviour of different objects, which may be otherwiseinaccessible. Moreover, haptic devices exploit the sense of touch to provide anadditional channel of information for students. Through haptics, learners can feeldifferent mechanical properties as well as perceive the shape, volume and weightof objects. This non-verbal channel may help build intuitive understanding of bothsymbolic and iconic concepts in the learning scenarios [44, 112].5.3.2 Haptic CommunicationHaptic communication can fall into dialectical or rhetorical discourses, depending onthe type of information provided (objective/factual vs. subjective/emotional). Theapplication of educational haptics needs to be objective in order to meet intendedneeds, i.e., conveying specific information or creating a specific experience. Asa result we expect that teachers, students and hapticians who are designing theseinteractions have similar interpretations of their haptic perception during the learningactivities.77A large body of work explores the role of sensory feedback in science learning.The importance of the sense of touch while interacting with learning environmentsinvites two lines of research. The first investigates the significance of physicalityin physical manipulatives in comparison with learners’ interactions with virtualmanipulatives [45, 123, 236]. The second, and more aligned with the focus ofthis research, studies the educational benefits of embedding haptics in virtual labs(simulators). The majority of these empirical examinations test the added valueof force feedback to the virtual environment through a quasi-experiment with atleast two conditions (with and without presence of haptics) [85, 225]. 
A majorityof the activities, however, find a learning gain for both conditions with no signif-icant differences between them [112, 234]. Educational haptic literature tends tofocus on quantitative score changes to measure learning, but fails to establish aframework for What, How and Why force feedback should be employed in learningenvironments [234].Here, we study haptics in a collaborative context. We will investigate how haptic-enabled devices impact collaborative discourse in different learning scenarios. Thisresearch attempts to find a collaborative benefit to educational haptics rather than acausal relationship between haptic use and test scores.5.3.3 Collaborative LearningThe demonstrated benefits of collaborative learning (where individuals jointly try toconstruct, negotiate and share understanding) [28, 52]) include learners’ planningmore accurately and generating more solutions, as well as boosting individual per-formance in near-transfer problems [14]. Collaborative test-taking demonstrates asignificant increase in test score and students’ retention of learning materials [23].Supportive tools can create more opportunities for collaboration and significantlyimpacts collective knowledge building dynamics and consequent individual learn-ing [238].While many collaborative learning technologies exploit the auditory and visualchannels to mediate peer interactions between peers, the importance of the senseof touch as a medium for grounding is largely unexplored. In focusing on thepotential of deploying touch to support grounding, we first require a haptic platform78Figure 5.3: A modified framework based on Baker et al [13]. (Left) A simpli-fication of Baker’s original model for analysing the effect of a tool ongrounding. (Right) In our work, the tool is Haptics, Agents are Learners,and we focus on two impacts (red arrows): a) Learners’ haptics-mediatedmutual understanding of each other; b) Learners’ haptics-mediated un-derstanding of the goal.and accompanying learning environment that intrinsically encourages students toscaffold one other’s learning. However, in a chicken-and-egg quandary, we knowlittle of what tool and learning environment properties will best accomplish this.Baker et al present a simple framework (Figure 5.3, Left) for understandingand analyzing the role of tool-mediated grounding in mutual understanding in acollaborative learning task [13]. We build on this framework to conceptualize ourwork.5.4 Framework and Tools – Haptic Devices and LearningEnvironments5.4.1 Framework for Exploration of Haptic GroundingWe build on Baker et al.’s framework [13] to explore how introducing a hapticmodality (force feedback) can influence different aspects of grounding. Our primaryfocus here is on how haptics might help peers to achieve grounding and to sharetheir experiences and thoughts across different learning environments, rather thanwhether it can improve learning by an individual (Figure 5.3, Right; and detailedbelow).Agents: Our agents are two learners expected to have a meaningful mental state –79able to act and interact with other learners and the haptic tool – and the abilityto infer the state model held by their partner.Goals: We are studying three learning scenarios, each of which has several definedlearning goals. 
Learners must collaborate to perform each assigned task.Tools: We use a haptic device paired with a custom virtual-environment learningenvironment (described next), which together mediate learners’ collaborationas they jointly perform certain tasksWe expect to see: Among entities, we assume that goals and tools are static, andfocus on cognitive changes within our agents (learners). In terms of relations,we focus on the changes caused by haptics in the agents’ mutual understand-ing of the goals and the mutual understanding that develops between them.Changes in each entity or relation can impact others and could be bidirectionalbut are beyond our scope.5.4.2 Haptic DevicesWe chose three low-cost haptic devices that met requirements of being available(both as physical devices and in programmable access), sufficiently robust for thisstudy, and as a group representing an important and diverse range of forms amongthe broad category of manual, 2-D force-feedback devices (Figures 5.2 for devicecloseups, and 5.4 in situ).All three devices require no calibration (i.e., in an experimental context are‘walk up and use’) and have been used in published studies.We paired each device with its own custom collaborative learning environmentand activity, to best exploit its capabilities (Table 5.2). While theoretically possi-ble to implement a full factorial comparison of all environment/activities with alldevices, we were dissuaded by concerns of study size, participant fatigue, poor out-come targeting, and inconclusive results due to poor haptic quality from inherentlyunsuitable pairings. Conversely, devising a single environment that was “fair” on allthree devices would mean sinking to a lowest common denominator of engagementand conceptual breadth.The device-specific attributes we sought to exploit are as follows.Cellulo [171] is an untethered puck that moves relatively slowly but autonomously,80localizing on watermarked paper, and pushes gently against the user’s hand.Its accurate localization and independent robot movement enable collocatedinteractions with visual cues on a printed poster paper in the Pressure Lab.Haply [67] is a DIY version of a pantograph, one of the most common planarmechanisms, with a compact constrained workspace and reasonable respon-siveness. It has probably the least familiar form for students. Its fast renderingand high quality/strong motors gave the best fidelity of the 3 systems, andwere crucial for rendering object impacts in the Collision Lab.MagicPen [113] is an untethered stylus which provides forces generated through africtional rolling contact on an arbitrary surface; it needs assistant to standup, but can actively drive a user’s hand around and display a variety of forceenvironments [116] in a theoretically unlimited workspace. MagicPen hasthe lowest expected learning effort, since it mimics pens and styluses, whichmost students in North American schools are familiar with. Its fast renderingand response speed help learners to feel the immediate changes in the forcefields needed for the Electrostatic Lab.Haply and MagicPen’s environments are developed in Processing (Java) andparticipants interact with them with a laptop and a graphical screen. 
The Cellulo environment is implemented in QT (C++/JavaScript), a cross-platform toolkit that connects the tangible robots and printed environment wirelessly to a tablet.

Participant dyads each had their own Cellulo and MagicPen, but shared a Haply: in the Collision & Momentum Lab, “throwing” objects (so a partner could feel the impact) worked better on a trackpad than with Haply’s constrained workspace.

One might question the reason behind using these three haptic displays instead of high-quality haptic devices such as the Phantom Omni. To answer this question we should highlight the special needs and unique features of each haptic device besides the affordability constraints. We needed an overlappable, untethered workspace for the Electrostatic Lab run with MagicPen. Similarly, the Collision & Momentum Lab demands a 2D haptic device, since forces in 3D could make the experiment very confusing. We also needed a large workspace for the Electrostatic Lab, which was beyond the workspace of the Phantom Omni haptic display.

Figure 5.4: Three virtual environments paired with haptic devices. From left to right: Pressure Lab with two Cellulos, Collision and Momentum Lab with one Haply, and Electrostatic Lab with two MagicPens. Gloves and masks were used as part of our institutionally approved COVID-19 safety protocol.

5.4.3 Haptic Environments and Associated Learning Tasks

Our haptic environments are shown in Figure 5.4 (screen or poster shots and in-situ views). We developed learning goals in reference to the local region’s high school science curriculum, for relevance to high school students (the year depending on curriculum component). We designed each learning activity with attention to minimizing the interface’s cognitive load during learning [13].

Pressure (Fluids and Hydraulics) – Paired with Cellulo

Learning Goals (Grade 11): We address natural and constructed fluid systems, specifically how compression of fluids can be used to power these systems. Upon completion, students should be able to observe evidence of pressure, and measure how fluids react to pressure and model fluid systems [168].

Haptics Role: We focus on conceptual understanding and applications of the fluid properties of compressibility, flow and hydraulic systems. These can be difficult to illustrate in the classroom, but Cellulo can bring these concepts to life, and allow the learner to interactively explore and feel the fluid’s behaviour. Through its movement, Cellulo demonstrates the behaviour of the liquid coming out of a bucket of water or of the pistons on a hydraulic jack, so that students are able to feel the pressure in each scenario.

Learning Task: Learners first perceive the compressibility of a gas (Familiarization); then, in pairs, the effect of pressure on flow and of hydraulic multiplication (Accumulation). Finally, they must parameterize a hydraulic jack that can lift up a car (Design), deciding whether to put the car on the larger or smaller surface area of the jack and whether to fill the jack with air or liquid.

Collision and Momentum – Paired with Haply

Learning Goals (Grade 8): In Newtonian mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object. It is a vector quantity, possessing a magnitude and a direction in three-dimensional space. Upon completion, students should be able to analyze, evaluate and apply information about the conservation of momentum in closed systems and collisions.

Haptics Role: In an isolated environment, the total momentum of the objects before and after a collision is the same.
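Stated here for reference (our notation, not taken from the learning environment's interface): for two bodies with masses $m_1, m_2$, pre-interaction velocities $u_1, u_2$ and post-interaction velocities $v_1, v_2$,

\[
m_1 u_1 + m_2 u_2 = m_1 v_1 + m_2 v_2, \qquad e = \frac{v_2 - v_1}{u_1 - u_2},
\]

where the coefficient of restitution $e$ distinguishes the cases learners manipulate: $e > 0$ for collisions in which the objects rebound, and $e = 0$ for collisions in which they stick together.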
The Haply renders the law of conservation of momentum in three different linear interactions: collisions where the objects do not stick together; collisions where the objects stick together; and explosions. The Haply renders an impulse force on the avatar object "held" by the user when it is "thrown" by the partner using the trackpad.

Learning Task: If m is an object’s mass and v is its velocity (also a vector), then the momentum is P = mv. We ask learners to change mass, velocity and coefficient of restitution and feel the impact on the momentum (Familiarization and Accumulation). They also have to set the parameters in an environment where two particles collide and stick together (Design).

Electrostatic Lab – Paired with MagicPen

Learning Goals (Grade 8): We address Coulomb’s law, which states that the force between two charged objects is proportional to the product of the magnitudes of the charges and inversely proportional to the square of the distance between them. Upon completion, students should be able to predict electrostatic dynamics between charged objects in one dimension. They should also be able to explain the relationship between electrostatic force and the distance between charges.

Haptics Role: MagicPen renders electrostatic forces as linearly proportional to the amount of charge, following the inverse-square relation to the distance [114]. Forces generated by point charges are small in real life, and need to be scaled up to be perceptible [148]. MagicPen demonstrates the difference between attractive and repulsive electrostatic forces by drawing the two partners’ devices together in the former case and pushing them apart in the latter.

Learning Task: Learners are asked to construct a physics example, in order to experience Coulomb’s law and the governing attractions between point charges (Familiarization). They can construct the model and move the placed charges using the graphical interface (Design), and can control the main charge with visual information about the force to test their hypotheses (Accumulation).

We describe the data collection processes associated with both of the studies outlined in Section 5.2.1 and further detailed in Sections 5.5-5.6 respectively.

5.4.4 Dyad Participants

Because collaborative learning requires communication with a peer, both of our studies were built around dyad participation, as have others [44]. In contrast to a solo think-aloud approach [112, 148, 184], pair work is intrinsic to collaborative learning, and induces pairs to speak to one another, providing a window into their thinking as well as verbal and nonverbal cues regarding their orientation and emotions towards their partner.

Demographic: To balance representation of high-school-age learners with ethics and safety constraints during the pandemic, we designed both studies for early-stage university undergraduate students who had never taken a university-level Physics class, chosen to simulate target levels of Physics knowledge.

Recruitment: Due to the COVID-19 conditions in place for both studies, we asked each recruited participant to bring a partner so that each dyad was composed of individuals who were within the same social "bubble" – partners could then sit closely as they worked. This type of pair recruitment simulates secondary and

Table 5.1: Summary of four-part experiment procedure for a single device/environment combination, for Study 1 (MagicPen) and 2 (all three combinations).
Stage Description Example
1.
Pre-Test Brief assessment of existing con-ceptual understandingWhat is an electrostatic force?What interactions exist betweenpositive, negative, and neutrallycharged objects?2. Activity (a):Familiarization Ex-ercisesDyads answer questions on pa-per relating to physics conceptWill these two same charge ballsswing apart or together?2. Activity (b):Accumulation Exer-cisesPredicting and reasoning aboutthe answers. Using the hapticdevice to do the experiment andcompare it to their prediction.Consider the following chargeformation. If the negativecharges are held fixed, explainthe behaviour of the positivecharge in terms of the forces be-tween the charges.2. Activity (c):Design TaskIndividual, then as dyad, designa scenario that could be testedusing the learning environmentArrange charged particles sothat net force on a target pointis neutral “How did you con-vince your partner that your de-sign worked?”3. Post-Test Re-assessment of conceptual un-derstandingSimilar to pre-test4. Usability & Per-ception SurveyDetermine enjoyment of activ-ity and perceived usability of de-viceLikert scale for enjoyment, us-ability, and usefulnessprimary school experiences in which students typically know each other prior tocollaboration, and may be important in terms of comfort level in having frankconversational exchanges.5.4.5 Core Experiment Procedure: One Device/Environment BlockFor each device/learning activity block (executed once in Study 1 and three timesin Study 2) we conducted a four-stage data collection procedure, following [148]:(1) pre-test to measure prior learner understanding and knowledge; (2) primarycollaborative activities to observe the interaction between learners and possibleeffect of force feedback on the quality of grounding; (3) post-test activity to measure85Table 5.2: Environments paired with haptic devicesDevice Environment Mechanism Key Device Features SoftwareCellulo [171] PressureLabA pocket-sized handheld mo-bile robot with the abilityto generate force feedback(2D+roll). It uses and embed-ded camera for absolute globallocalizationAccurate localization,large workspace, holo-nomic motion, ForceAmp: 1.75NQT(C++)Haply [67] Collision &MomentumLabUses a 2-degree-of-freedompantograph mechanism withparallel joints and optimizeddynamic performanceFast rendering >1kHz,High quality motors,Force Amp: 4NProcessing(Java)MagicPen [113]ElectrostaticLabFriction of a rolling ball over asurface creates 2D force feed-back to the user’s handFast rendering >1kHz,fast response speed,large workspace, ForceAmp: 1NProcessing(Java)learning gain; and (4) usability and perception survey to reflect on the learners’interaction experience with each device. Table 5.1 summarizes the experimentprocedure.Stages 1 and 3: Pre- and Post-Tests Before and After Learning ActivityParticipants completed pre- and post-tests individually and without access to thehaptic devices, to detangle individual understanding of the physics concepts, anddetermine whether they were able to apply mutual concept understanding to individ-ual work. Pre- and post-tests consisted of similar 5 questions, designed to captureintuitive conceptual understanding (3 questions), and knowledge applications (nocalculations) (2 questions). Questions were multiple choice and short answers andadministered on a laptop.Stage 2: Three-Phase Learning ActivityLearner common ground and knowledge accumulation are both important for collab-orative discourse [35]. 
We designed our learning activity to ensure that participantscould achieve a common ground (Familiarize), and then add to it (Accumulate),both steps essential for our analysis. We added a third phase (Design) so we couldassess quality of collaboration along the scale of minimal to optimal effort [48].(a) Familiarization: Researchers familiarize learners with the haptic device and86environment, then assess learners’ mutual understanding of the basic conceptscovered in the present lesson, as a pair.(b) Accumulation: Learners add to their common ground by working togetherto predict answers and solve problems through experimentation within theenvironment. Our study goal here is to understand how learners use hapticinformation to fill in gaps and add to mutual understanding; to do so, we mustmotivate them to construct examples that their partner can experience.(c) Design: Working on paper, learners design a scenario that could be tested usingthe learning environment. Then, they collaboratively discuss and test each oftheir designs, and decide which meets the requested criteria best (and why).The design phase offers learners creative control. Differences in design oftenlead to constructive debate between partners which furthers their grounding.Ideally, learners then repair those differences [48].Stage 4: Usability and Perception SurveyWe conducted a survey after each activity asking learners about the quality of hapticperception and ease of interaction with environment and device [202], consisting offour Likert questions (1-strongly disagree to 7-strongly agree) on (1) enjoyment oflearning physics, (2) ease of interaction with the device, (3) ease of interpreting theforce feedback, and (4) usefulness of device in learning physics.5.4.6 Full ProcedureThe experimental setup consisted of a laptop and haptic devices (Figure 5.4. En-vironments – one (Study 1) or three (Study 2) – were arranged in nearby stations.Following a welcome and consent finalization process, learners were seated togetherand shared a monitor at the current station, then commenced the environment/deviceblocks.We required learners to wear masks and gloves for COVID-19 safety, and tookother sanitizing precautions before and after experiments. Approval for both studieswas obtained from the University Behavioural Research Ethics Board.5.4.7 Collected DataThe above procedure resulted in the following data:87• Individual demographics, including physics expertise and intra-dyad relation-ships• Pre-test and post-test scores• Learning activity written responses• Voice and screen recordings of dyads completing the learning activity• Videos of the participants hands operating the haptic device during the learn-ing activity• Log of force, velocity, and displacement of the haptic device (Only study 2)• Usability questionnaire5.5 Study 1: Confirming the use of haptics in groundingIn our first study (based on Electrostatic Lab paired with MagicPen), we lookedfor instances of and opportunities for using haptics in collaboration. How andthe degree to which learners incorporated haptics into explanations and groundingstrategies would direct our further inquiries. Specifically, we considered:1. At what stage of grounding did learners tend to incorporate haptics?2. What haptic gestures were made with the device?3. 
What were learners’ intentions with haptic information (e.g., introducing or revisiting content, evaluation)?

5.5.1 Method

Procedure: For data collection, we followed the general procedure of Table 5.1.

Participants: We recruited four participant dyads (n=8) through word-of-mouth². Two dyads described their relationship as “close friends”, one as “family members” and one as “casual friends”. Seven participants were female; all were within 16-24 years of age, with the majority between 17-19. Each participant was compensated with $15 (actual session duration 45-60 minutes, M=52).

²Study 1 occurred days before the first university closure due to COVID-19 in March 2020.

5.5.2 Analytical Approach

We developed a three-layer qualitative coding approach to find collaborative actions based on screen-capture videos and audio recordings, then successively unpacked them for haptic import.

First, we identified what (collaborative grounding acts were executed through any modality), and, as in the conversational analysis method of Porcheron et al. [182], we used these boundaries to fragment the data into collaborative learning episodes, with timestamp-maintained interaction structure. This fragmentation was necessary to discover and report co-occurrence of actions, intentions and strategies for use of the MagicPen.

We then looked for how (gestures made with the haptic devices and associated with those grounding acts, hereafter referred to as haptic gestures); and finally why (the intention behind the haptic gestures). For these, we developed a haptic gesture coding system inspired by Yohanan et al. [232]: we used grounding acts identified in the data logs to segment our qualitative data, then categorized the haptic gestures used in each grounding act, and the intentions behind them.

Identifying Grounding Acts (What): We first had to identify all grounding acts and stages (monitoring, diagnosis, and repair) (Table 5.3 [48]) as they occurred.

Collaborators monitor each other when they determine the information and beliefs that their partner has. This can be done actively by expressing one’s own opinion, for example: [P8] ”So these arrows are going like this.” Diagnosis is explicitly acknowledging a difference in belief or information access between collaborators, or a discrepancy between their beliefs and new information: [P7] ”No, no, no, no. I think we’re wrong here.” A grounding act is complete once collaborators have gone through the stage of repair, adjusting their beliefs to match so that they become part of the common ground: [P8] ”It will be pulled down.” — P7: ”Ok.”

Table 5.3: Grounding Acts, repeated from "Grounding in Multi-Modal Task-Oriented Collaboration" [48].
151grounding acts were identifiedin Study 1.Acts ExamplesMonitoring(53%)A infers that B accesses XA infers that B notices XA infers that B understands XA infers that B (dis)agrees with XDiagnosing(25%)A joins B to initiate co-presenceA asks B to acknowledge XA asks B a question about XA asks B to agree about XRepairing(22%)A makes X accessible to BB communicates X to AA repeats-rephrases-explains XA argues about XTable 5.4: Haptic Gesturesmade with the MagicPen.130 gestures were identifiedin Study 1.Gestures Haptic interaction examplesConstruction(23%)Adding, deleting, moving pointsof interest (POIs)PhysicalCollaboration(23%)Switching pens, switching posi-tions, moving location requestedby partners, asking partner tomove their avatarExploration(42%)Moving towards/away from POI,varying speed, exploring envi-ronment edge, scanning environ-ment, detailed inspection, bump-ing into POIPlay(12%)Chasing partner’s avatar, bump-ing into POI repeatedly, circle aPOIFor each dyad, we identified grounding acts based on verbal statements andintonations from voice recordings. The above quotes, all drawn from the same40-second period, illustrate a common sequence of monitor, diagnosis, repair.Identifying Haptic Gestures (How):To determine whether the MagicPen was being used for collaborative acts, welooked at how learners were using the device. We first identified all gestures andaction occurrences that participants made with the MagicPen (a total of 130 events).Then, we classified these occurrences into four categories: physical collaboration,construction, exploration and play (Table 5.4). This categorization was based solelyon hand movements and the resulting on-screen avatar movement, not on other cluessuch as utterances or the task at hand.Identifying Intention (Why):Finally, we surmised intent behind each haptic gesture using recordings of partic-ipants’ onscreen avatar movements, voice and hands. Unlike the gesture coding90Table 5.5: Presumed intention behind each haptic gesture (adapted from Ce-sareni’s "Global Conversational Functions" [30]). 174 presumed inten-tions were identified in Study 1. Asterisks denote functions or gestureswe have added to Cesareni’s table to cover all actions demonstrated byour participants.Intention ExampleIntroduce newinformation(32%)Introduce personal ideas, Introduce information from a reliable source, Intro-duce examples drawn from experience, Pose research question or problemRevise previousinformation(19%)Elaborate own ideas, Elaborate other’s ideas, Synthesize, Repeat other’s ideas,Repeat own contribution, Respond to other’s ideas*, Resolve an ambiguity*,Understand partner’s perspective*Evaluate or re-flect(33%)Express meta-cognitive reflection, Comment, Evaluate (Reason*), Test theirhypothesis about the environment*, Determine magnitude/limitation of hapticperception*, Share experience with partner*, Take turns with roles/positions*Maintain rela-tionships(3%)Express agreement/disagreement, Maintain social relations, Convince partnerof a hypothesisFun*(8%)For fun, To enjoy the sessionLogistics*(5%)Request an action from a partner, Confirm completion of a task, Determinethe next taskwhich focused on how users relied on the device, here we unpacked the goal thatparticipants were working towards with each haptic gesture, which often dependedon the activity task or their partner interactions. 
Although this process is imprecise,Cesareni [30] identified four broad types of utterances that support knowledgebuilding: introducing new content, revisiting content, evaluation, and maintaininginterpersonal relationships. Some haptic gestures did not seem to build knowl-edge, so we added two more categories: play and logistical coordination. Thesesix categories covered the likely intent of all the MagicPen gestures we observed(Table 5.5).5.5.3 ResultsTwo coders collaboratively discussed and coded the script, using MAXQDA soft-ware [126]. Grounding acts and haptic gestures were relatively straightforward toobserve; intentions coding benefited from inter-coder discussion and calibration.We found a total of 455 codes (M=114 per group, and SD=47) including: 15191grounding acts, 130 haptic gestures, and 174 presumed intentions.In studying co-occurrence, we first compared each code with others outside itscoding family. For example, for each intention behind haptic use (why), we lookedfor frequent co-occurrence with all haptic gestures (how) and collaborative act (what)codes. These co-occurrence counts were based on frequency within collaborativelearning episodes. As a result, co-occurring codes sometimes came from bothparticipants, which we deemed admissible since both people were contributing tothe problem solution, the grounding process and the haptic experience (by interactingwith each other’s avatars).The result (Figure 5.5) suggests that learners tended towards recurring x-y-zpatterns – accomplishing a specific grounding act x by using the device in mannery with the intention of expressing z. We elaborate on patterns with highest co-occurrence (large circles).Grounding act (Monitoring) + Haptic gesture (Exploration of environment)(31 co-occurrences). Learners often used the MagicPen to determine somethingabout a layout of point charges that they had previously created. During the subse-quent exchange, they used force feedback from a fixed point charge to form a beliefabout the strength of a force they experienced – for example:[P7] Oh, he doesn’t want to stay close to this blue buddy.[p8] Oh but maybe my guy needs to stay close. You just stay-[p7] No, see he’s trying to move away. [P7 moving the pen towardsthe blue][p8] Other repulsive charges are stronger than your attraction to me.Haptic gesture (Exploration) + Intention (Evaluate/Reflect)(32 co-occurrences). Perception of changes in forces’ amplitudes and direction en-abled participants to evaluate their hypothesis and reflect on their conclusions. Theysearched device behaviour for evidence that confirmed or denied their hypothesesabout point charges. This pattern, which appeared across all the dyads, suggeststhat they were employing haptic information in their learning process, to test theirintuitive understanding of point-charges.92Grounding act (Monitoring) + Intention (Introducing new information) +Haptic gesture (Construction)(26 co-occurrences). Participants often shared new information during the Moni-toring stage. In most cases this included using the MagicPen to create new pointcharge layouts while verbally sharing a hypothesis, sometimes then relying on theforce feedback to confirm or reject their hypothesis (16 of the co-occurrences).[P5] Okay. First let’s line them up like this one. Like this one and now,now what we do... We just let it go.[P6] Yeah.[P5] Seems like they... 
we’ll swing away from each other due to the repulsive electrostatic force between them.

Taken together, these two processes of introducing information and exploration allowed learners to first create a point-charge system and then use it to test their hypothesis.

Sequence of coded events

We used a code map to plot code overlaps by including a transcript paragraph before and after each episode. In Figure 5.5b, each circle denotes a coding subcategory; the subcategories cluster into three groups. We used the MAXQDA software’s clustering tool and increased the number of clusters (K) until we arrived at a plateau. From K=4 onward, we observed the appearance of clusters with a single member.

Figure 5.5: Coding relationship analysis (Study 1). (a) Co-occurrence of codes; circle size denotes frequency of co-occurrence. (b) A code map plots codes according to similarity. Circle size denotes frequency of code assignments; intercode distance represents overlap on the same script. Connecting lines indicate which codes overlap or co-occur (within a paragraph before and after); line thickness denotes code co-occurrence.

Cluster 1: Yellow: We see strong overlap between Construction, Physical Collaboration and Introduction codes, which manifested as participants using their devices to jointly build an environment. This was often to rapidly test an environment they were asked to explore, or a hypothetical environment. For example, learners might ask their partner to move their avatar to experience the effect of this movement on their own avatar. Introducing new information was often followed by monitoring acts where participants tried to ensure that their peer understood the new topic.

Cluster 2: Pink: Learners usually employed their haptic devices to explore the environment and then evaluate and reflect on their hypothesis. While we expected that users would talk about their haptic experience to ensure comparable experiences, we saw that some preferred to first try and reflect on the environment by themselves. Hypothesis evaluation often led to a repair (consensus) act, in which they might rely on the haptic experience to convince their partner. We saw that haptic perception could be personal, with individuals preferring to discuss higher-level information; however, when this higher level failed, they would restart by agreeing that they were feeling the same behaviour. This shared haptic experience would eventually create a base from which they could build to reach a common conclusion.

This can be seen in Figure 5.5, where code closeness implies overlap in use, but also sequence. Monitoring is closer to Exploration than to Revising, which suggests that monitoring is often followed or preceded by exploration. The thick connection indicates a high frequency of this co-occurrence.

Cluster 3: Purple: In Monitoring, learners tended to discuss their ideas and revise or repair them. This often occurred through conversation rather than using the haptic device, potentially because the information required to repair a rift between two beliefs was more conceptual than experimental. As such, learners may have found it easier to express this information verbally than haptically.

Learning Gain and Usability: We measured learning gain by subtracting the individual’s pre-test score from their post-test score (the assessment questions are presented in Appendix C). We observed a positive learning gain (M=1.125, SD=1.36, out of 5 points). The results of the user experience survey showed reasonably good scores (at least 5 out of 7 Likert scale points in all dimensions).
These were: Enjoyed learning (M=5.62, SD=1.30), Easyinteraction with device (M=5.37, SD=0.74), Understood the applied force (M=5.38,SD=1.06), and Useful for learning physics (M=6.25, SD=0.89).5.6 Study 2: Haptic Critical Instance AnalysisStudy 1 suggests the utility of haptic interaction during collaboration (learninggain), revealed examples of how participants naturally use haptic gestures in theirconversation (RQ1), and provided insights on the strategies of (when, what and how)to use haptics for grounding (RQ2). However, we could not tell when participants95were relying on haptic or visual cues, and could not search for correlations betweenlearning gain and haptic use.In Study 2, we utilize haptic critical incidents to understand how different formsof force feedback influence collaborative learning, based on three environment-device pairs.5.6.1 Full Study Design and Protocol Modifications from Study 1Design and Sample Size:We ran a within-subject dyad study with 2-hour sessions that produced comparabletranscript and videos across three haptic educational devices. We chose the single-session format to minimize participant transitions under COVID-19 safety protocols.In choosing sample size, we considered our chief goal of digging deeper intomoments where haptics plays a key role in the conversation or collaborative actions,and sought sufficient number and diversity of these moments to study rather thanstatistical significance [106]. Each dyad-session produces many. Thus, we sized thestudy based on two counterbalanced repeats of the three device-environment pairs:2 repeats x 6 orderings, leading to n=24 (12 dyad-sessions).Procedure:The data collection procedure was similar to that of Study 1, except that the basic4-step elements (Table 5.1) were repeated three times per session, once for eachdevice/environment combination, order of combinations counterbalanced acrossdyad-sessions. After finishing the activity in each environments, learners took a 5minutes break and then we invited them to start the next activity.Participants:We recruited 24 participants through online advertising among first or second yearnon-STEM university students. Recruiting was done by pairs, as for Study 1. Wecompensated each participant $40 for their participation in a 2-hour session (actualduration 95-120 minutes, mean 102).965.6.2 Additional Data CollectedTo Study 1 data (5.4.7) we added a debrief interview after each activity. We askedstudents to reflect on advances made by the dyad and any disagreement. Thisinterview was similar to the debrief interview after Study 1, with the addition ofasking participants to identify moments when their understanding of the conceptchanged. Although participants were not typically able to pinpoint their shift inunderstanding, this question invited discussion about changes in understanding.We also created a log of force, velocity, and displacement for our three learningenvironments. We expected learners to experience different behaviour of the forcefeedback and perceived changes in magnitude and direction of the force feedback.5.6.3 Analysing Haptic Critical Instances (hCIs)Study 2 focus was on places in the activities where haptic use was taking place. Asin Flanagan et al [60], we used the force feedback data stream to identify momentswhen students are actively using the haptic device to communicate, play or explore:when device actuators were active, we inferred that the holder was using it insome way. 
At these points, which we term haptic critical instances, we examined all of our synchronized data records, particularly voice recordings and video of participants’ hands.

Definition of an hCI: We defined an hCI as any noticeable change in magnitude and direction of force feedback which could be the result of, co-occur with, or lead to an utterance, action or movement, such that all relate closely to the aim of the learning activity. We looked at a window of 1 minute before and after the triggering force change. Auto-detected hCIs which did not meet these criteria of connection to other behaviour were rejected.

Computational process for finding hCIs: We needed to identify hCIs based on an estimate of force variability (FV). Previous works used cosine similarity to capture variability of the force vector [170, 205]; however, this does not offer the temporal localization that we required. Instead, we computed FV as the interquartile range of the measured force amplitude and direction, within a rolling window on the force data stream. This statistic captures the dispersion of force during a haptic interaction gesture.

Figure 5.6: Coding haptic critical incidents. Left: coder’s view of screen recordings of the haptic environments while the learners are performing the experiments. Right: real-time analytical view of force behaviour at any given time in the screen recording. The background of the force amplitude plot (upper right) turns pink to indicate a critical instance, identified from the force logs during playback of the screen recording of a session. This notifies the coders to mark and code the critical moment and analyse learner behaviour around this timestamp.

However, we anticipated that the appropriate window size for FV computation would vary by environment. For example, for the Collision Lab, relevant changes occur in milliseconds, but in the Pressure Lab we would need a window of at least a full second. To identify an optimal window duration so as to segment useful chunks of hCI-type activity, we referenced frequency-domain indicators of typical activity. Specifically, we ran a fast Fourier transform on samples of force logs for each haptic interaction to determine the key haptic frequencies (Figure 5.7c). It should be noted that these frequencies indicate the changes of forces due to the user’s behaviour; the force rendering frequency is at least 10 times higher. Frequencies thus found for each device informed FV window duration, by device/environment.

Automation and data stream fusion: We wrote a script to automatically detect hCIs as an objective indicator of haptic use by each dyad. The algorithm used a 1-D peak-finder filter to search for local maxima and minima of FV, with window length set as described above, and was applied to the study force logs to generate hCI time stamps (a sketch of this detection step is given below).

From this point, we combined the audio, video and force log data to create a single video-format resource for the analyst. Text transcripts were auto-generated, manually verified, and later added to MAXQDA to facilitate coding.

Manual data annotation: A human analyst team reviewed the compiled resource at auto-marked hCI locations for all dyad records, to understand the context of each hCI (Figure 5.6), look for events of interest and apply annotations accordingly. These annotations were independent of the 9-dimension collaboration rating described below. The analysis team consisted of three coders. All reviewed and annotated all 12 dyad records.
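For concreteness, the automated detection step described above amounts to the following kind of computation. This is a minimal sketch only: it assumes per-sample x/y force logs and uses hypothetical parameter names, and the way amplitude and direction dispersion are combined, as well as the use of SciPy’s peak finder, are our assumptions rather than the thesis’s actual script.

```python
import numpy as np
from scipy.signal import find_peaks

def force_variability(fx, fy, window):
    """Rolling interquartile range (IQR) of force amplitude and direction.
    fx, fy: per-sample force components from a device's log (assumed).
    window: samples per rolling window, chosen per device/environment
            from the dominant frequencies in an FFT of its force logs."""
    amp = np.hypot(fx, fy)             # force magnitude per sample
    ang = np.arctan2(fy, fx)           # force direction per sample
    fv = np.empty(len(amp))
    for i in range(len(amp)):
        lo = max(0, i - window + 1)
        iqr_amp = np.subtract(*np.percentile(amp[lo:i + 1], [75, 25]))
        iqr_ang = np.subtract(*np.percentile(ang[lo:i + 1], [75, 25]))
        fv[i] = iqr_amp + iqr_ang      # combining the two dispersions this
                                       # way is an assumption of this sketch
    return fv

def candidate_hcis(fx, fy, window, min_gap):
    """Sample indices of candidate hCIs: local maxima and minima of force
    variability, later screened manually against a +/- 1 minute window of
    the synchronized audio/video records."""
    fv = force_variability(fx, fy, window)
    peaks, _ = find_peaks(fv, distance=min_gap)    # local maxima
    dips, _ = find_peaks(-fv, distance=min_gap)    # local minima
    return np.sort(np.concatenate([peaks, dips]))
```

In such a pipeline, `window` would be set per device/environment from the FFT step, and `min_gap` keeps adjacent detections from fragmenting a single haptic gesture.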
Annotations were generated across the first 4 records, then sub-sequently calibrated and refined through discussion during check-ins after each 4records.Thematic analysis: We performed thematic analyses for the full set of videos inregions around auto-identified hCIs, using the process detailed by Braun et al [26].We used the 9 collaboration dimensions (column 1 of Table 5.6) as codes [151] byrating the full activity sequence (on a scale of -3 to 3) by collectively considering allthe hCIs for that device/environment block. Raters also considered the annotationsmade in the previous step by themselves.Specifically, each rater (of three) produced one rating/dyad for each dimensionfor each of the three study blocks, for a total of 3x12=36 ratings per dimensionfor each device/environment combination. Throughout, we tried to establish thefollowing information around the hCIs:• Describe the situation around the haptic usage• What did the participant/s do with the haptic device or say?• Why this usage was particularly effective (or ineffective)?The coders continued this line of questioning as they perused the data, docu-mented supporting details. We continued collecting data until new incidents beinganalyzed provide few or no additional critical grounding acts. For more complicatedactivities we checked to examine a greater number of incidents.We obtained inter-coder reliability of Cronbach’s Alpha 0.802, suggesting ‘good’agreement.995.6.4 Quantitative ResultsUser Experience:We report general observations on particular affordances that the device/environmentpairings seemed to reveal, with no intention of relative ranking; the systems are toodifferent to thus compare.The results of user experience for the devices/environment combinations areshown in Figure 5.7a. While all devices are rated relatively highly on four dimen-sions, some outstanding features of each device lead to a slight differences in theratings. Participant enjoyed learning with Cellulo; as a sometimes-autonomousrobot, it was especially engaging and fun to play with. They found Haply easyto interact with, possibly due to the passive nature of the haptic experience in theCollision environment. They found the haptic information from the MagicPeneasiest to understand and useful for learning, as they could actively manipulate andexplore the Electrostatic environment.Learning Gains:We measured learning through pre-tests and post-tests. Electrostatic Lab (MagicPen)has the highest learning gain (M=1.13, SD=1.25 out of 5 questions) followed byPressure Lab (Cellulo) (M=0.30, SD=1.40) and then Collision/Momentum (Haply)(M=0.02, SD=1.05). We categorized dyads based on their learning gain into threebaskets as follow (see Figure 5.7b: High [2-5], Low [0-2] and No learning gain[=<0].Dimensional Ratings and Correlations:In total, our automated process identified 2361 hCIs over the 12 dyads’ records, ofwhich our team analyzed 1246. Collision & Momentum Lab has the highest averagehCI per dyads (M=56,SD=17) then comes Electrostatic Lab with (M=31, SD=10)and finally the Pressure Lab had the lowest with (M=22, SD=7).The quantitative outcome of this process was a set of average ratings by de-vice/environment pair for the 9 collaborative dimensions. Table 5.6 shows thecollaboration dimensions and potential correlation with learning gain across the100(a) User experience scores(b) Learning gain v.s. 
frequency of individuals(c) Frequency analysis of force behaviourFigure 5.7: Summary of quantitative results for the three learning environ-ments of Study 2.101Table 5.6: Correlation between the named dimension (rater’s average valuefor all groups for that device) and the learning gain (measured throughpre/post test, of individuals). The dimensions in the first column are takenwithout modification from [151]. 1,942 cases of collaboration from Study2 were analyzed; % for each dimension are listed in first column.CollaborationDimensionExample of High Collabo-rationLearning GainPressure (Cel-lulo)LearningGainCollision(Haply)Learning GainElectrostatics(MagicPen)Sustainingmutualunderstanding(SM)(36%)Confirmation, Asking ideas,Asking for knowledge, Con-tinuing conversationSig (Partialmediation:Table 5.7)Non-sig SigDialogue man-agement (DM)(7%)Smooth flow, Right signal-ing for having a smooth con-versation and turn takingNon-Sig Sig SigInformationpooling (IP)(24%)Presenting new information,Expressing new thoughtsSig (Partialmediation:Table 5.7)Non-sig Non-sigReaching con-sensus (RC)(13%)Explicit agreement/dis-agreement with rationalreasoningSig (Partialmediation:Table 5.7)Sig SigTask division(TD)(3%)Dividing the task into sub-tasks, Moving towards thesolution step by stepSig Non-sig Non-sigTime manage-ment (TM)(0.1%)Stay focused on the maintopic, Evaluate time, Finishirrelevant conversationNon-sig Non-sig Non-sigTechnical coor-dination (TC)(12%)Knowledge of tools, Tech-nical skills, Using the toolproperlyNon-sig Non-sig Non-sigReciprocalinteraction (RI)(4%)Respect, Symmetrical rela-tionship, Polite, Invite toconverse, Avoid aggressionNon-Sig Non-sig SigIndividual taskorientation (IT)(1%)Engagement, Perform thetask correctly, Encouragingattitude, InvolvementNon-Sig Non-sig Sig102three device / learning environment combinations. The examples of high collabora-tion in Table 5.6 elaborate on how coders scored dyad collaboration. While each ofthese dimensions can individually contribute to higher learning gain, they can alsoindirectly impact and support other dimensions.We continued to search for any correlation between the ranked dimension andthe learning gain (measured through pre/post test, of individuals). The dimensionalcorrelation matrices shown by environment in Figure 5.8 suggests varying dynamicsof collaboration. These differences could also be caused by different backgroundknowledge, the types of activities and the nature of haptic experiences.Statistical Mediation Analysis:Mediation analysis quantifies the extent to which a variable participates in thetransmittance of change from a cause to its effect. We argue that participants canlearn from (a) collaboration (e.g., learning directly from a partner), (b) hapticsdirectly (by feeling), and/or from (c) collaboration mediated through haptics. Assuch, we view hCIs as windows into potential mediation as in (c).Here, we specifically use mediation in the sense that we observed instances ofcertain dimensions during the period of an hCI, and hence infer that their impact ismediated by the hCI. 
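Operationally, a mediation claim of this kind relates a collaboration dimension rating, the hCI count, and learning gain through a product-of-coefficients analysis. The sketch below is illustrative only: variable names are hypothetical, and the confidence intervals reported in Table 5.7 suggest the actual analysis also involved interval estimation (e.g., bootstrapping) beyond what is shown here.

```python
import numpy as np
import statsmodels.api as sm

def ols(y, x):
    """Ordinary least squares with an intercept term."""
    return sm.OLS(y, sm.add_constant(x)).fit()

def simple_mediation(dim, hci, gain):
    """Product-of-coefficients mediation check.
    dim:  per-participant rating of one collaboration dimension (X)
    hci:  number of hCIs in that participant's session (mediator)
    gain: learning gain from pre/post tests (outcome)
    Returns the total, direct and indirect effects of X on learning gain."""
    total = ols(gain, dim).params[1]                 # X -> LG
    a = ols(hci, dim).params[1]                      # X -> hCI
    both = ols(gain, np.column_stack([dim, hci]))    # X + hCI -> LG
    direct = both.params[1]                          # X, controlling for hCI
    indirect = a * both.params[2]                    # effect mediated via hCI
    return total, direct, indirect
```

The total, direct and indirect effects returned here correspond to the columns reported in Table 5.7.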
For example, we saw many dyads Information Pooling (IP)during an hCI, and subsequently reaching a final answer, thus contributing to alearning gain mediated by the hCI.Using linear regression we found a significant relationship between the numberof hCIs in a session and the learning gains registered by each participant in that ses-sion, for two environments (Table 5.7) – namely Cellulo and MagicPen (Coef=0.4,SE=0.01, F-Value= 6.06, p-value= 0.033).Our statistical results show that three of collaboration dimensions in the Pressureenvironment were partially mediated through the use of the haptic device, whichmeans the effect of collaboration still exists on the learning gain. We found noevidence of mediation with MagicPen, which suggests that both collaboration andhaptic feedback contributed in learning gain.Based on direct observation, this outcome is likely explained by the Pressurelearning activity inviting greater collaboration than the Electrostatic environment103Figure 5.8: Correlations among 9 collaboration dimensions as well as learninggain (first row/column), by learning environment/device combination.We computed correlations on the average rating of each dimension acrossthree coders with the addition of dyad’s learning gain. Cross-systemvariation seen in the matrices exposes how collaboration dynamics variedby system.**. Correlation is significant at the 0.01 level (2-tailed).*. Correlation is significant at the 0.05 level (2-tailed). We selected a0.05 threshold for reporting the p value and statistical significance.104Table 5.7: Statistical mediation analysis. The impact of some collaborationdimensions (X) on Learning Gain (LG) appeared to be mediated by HapticCritical Instances (hCI), specifically Sustaining Mutual understanding,Information Pooling and Reaching Consensus.Cellulo Coef SE F-valuep-valueSig. Total(X+HCI −→LG)Direct IndirecthCI −→LG0.08 0.02 7.32 0.02 Yes - - -SM −→hCI0.53 0.16 11.43 0.006 YesCoef=0.535% hCI=0.2995% hCI=1.12p-value=0.01Coef=0.465% hCI=0.1895% hCI=1.35p-value=0.12Coef=0.075% hCI=0.43495% hCI=0.57p-value=0.80IP −→hCI3.86 1.0 15.0 0.003 YesCoef=0.555% hCI=0.3695% hCI=0.73p-value=<001Coef=0.605% hCI=0.2695% hCI=0.92p-value=0.004Coef=-0.065% hCI=-0.3295% hCI=0.18p-value=0.50RC −→hCI6.34 1.62 15.28 0.003 YesCoef=0.565% hCI=0.1995% hCI=1.32p-value=0.03Coef=0.245% hCI=-0.22595% hCI=1.12p-value=0.3Coef=-0.365% hCI=-0.2295% hCI=0.76p-value=0.14rather than a factor of relative device affordances.5.6.5 Thematic CategoriesTo avoid bias, coders evaluated the collaboration dimensions without knowledgeof learning gains. After calculating learning gains (Figure 5.7b), we retrieved ourcodes to reflect on successful or less effective strategies. At this point we focused onthe role of haptics on dimensions more highly correlated with learning gains. Basedon these results as well as Study 1 insights, we identified four thematic categorieswhich together capture most of our observations (quotes are from Study 2).1. Communication and sustaining mutual understandingIn several scenarios, participants asked their peers to perform specific tasks. Thiswas to set up the environment to test a haptic experience, or to confirm a hypothesis.Sometimes, it led to discussion where participants invited opinions or asked for105more knowledge.After the haptic experience, participants would seek to explain the devicebehavior, e.g., by asking questions or clarifications of their partner. 
At this point, iftheir background knowledge was sufficient they could continue the discussion andrelate it to their final answer. When it was insufficient, they were unable to proceedand chose wrong answers.We observed participants were able to communicate their ideas conceptually totheir partners, when unable to describe it in technical terms. For instance: [P21] Iknow it in my head, but I can’t explain physics like on like a basal level, but like,you know what I mean? This leads us to wonder if there needs to be a bridge tothe mathematical definitions of these concepts so that participants can conceptuallyunderstand and also technically define such phenomena.We did not see any evidence to support the idea that haptic experience turns tobody-syntonic reasoning as proposed by Papert [174]: a line of reasoning wherechildren imagine themselves to be the manipulatives (e.g., turtle robots) and reasonabout their own behaviour (how they would move if they were the turtle).2. Joint information processing and reaching consensusWe observed two levels of reaching consensus. The first was to agree on theperception. The second requires a higher level of understanding and reasoning toanswer the activity questions. [P2] "Not linear." [P1] "That’s for sure. So that’sprobably the first." [P1] "I think it’s the second one where the force goes down asyou increase this sense." [P2] "Yeah, that’s true. That’s true."However, as participants proceeded, they pass the need to reach consensus onperception level until a point in which one doesn’t feel the need to test an activitywith the haptic device and reason about the answer based on the previous example.Participants co-constructed a haptic "measure" to assess their designs, even ifit did not completely fulfill the requirements. [P5] "Cause yours have a slowervelocity. So yours works better." This joint measure helped participants to collaborateto make their design better: [P5] "Let’s just go with yours. You don’t want to makeit better". [P6] "Yeah". [P5] "The best is going to be the one we are most confidentwith."Participants recalled what they have learned in the past to either predict orexplain the behaviour of the Cellulo robots. Few referred to the robots as they tried106to use mathematical formula or express scientific terms. [P2] "in fluid dynamics.We learned that the, the atmospheric pressure and the pressure of all this waterit’s potential energy and one at the bottom is going to again, be turned into kineticenergy."Finally, we observed situations where it was difficult for participants to trusttheir feelings, complicating consensus. This occurred most often when participantswere confident with their background knowledge and found it contradictory towhat they perceived. This situation makes it difficult to find a common ground andeventually ends with participants writing individual answers.3. Interpersonal relationship and engagementBecause it was quick to set up a haptic environment and test an idea, participantsbecame more engaged in the design stage. 
This engagement and increased commu-nication was reinforced when a partner asked about their ideas, discussing them andconceding to a better design, a process promoted by requiring participation fromboth learners to operate the device.Engagement, a sought-after attribute of any learning method, is related to play;our Study 2 analysis specifically found Play as one of the 4 gesture classes weobserved (Table 5.4 even though play was not part of the assigned task.We noticed haptic interactions generating interest and exploration in someparticipants, which could mean that utilizing haptics for learning could be helpfulin making more students interested in the field and more willing to explore ideasoutside of the normal syllabus. For instance: [P13] Oh, so it doesn’t work. Yeah.Okay. Let’s try the other one. Just for fun!We also noticed certain participants came in with a prior lack of interest inphysics. Some of them were subsequently not motivated to explore the hapticenvironment or inclined towards finding optimal solutions. It is possible that usinghaptics at an early stage to prevent an aversion to the subject could be helpful insubsequent learning experiences, and that use of different representations might notnecessarily be helpful in scenarios where participants are already biased against thesubject. For example, [P16] Yeah. It just made me remember in the high schooldays. I don’t want to remember that.We speculate that the kinds of engagement and conversation-promotion cuesidentified above would be particularly beneficial in relationship development for107partners who do not previously know one another.4. Uses of haptics in learningWhen learning-focused, participants mainly used the haptic devices for two purposes– learning and revising. Many participants used the device to confirm concepts theywere already familiar with, for example: [[p13] ... , it’s more like a review or view.I mean, I think I knew this already. It was just the fact that remembering how itapplied sort of like seeing it kind of practically in your life was good refresher.However, participants learnt new information as well: [P20] I learned somethingnew.This would be inter-connected to the information polling theme when partici-pants with background knowledge, as those with a thorough understanding of thephysics concept behind the experiment would be able to use the haptics environmentfor confirming their existing knowledge, while participants with no or minimalbackground in physics would probably learn new information.Participants used a variety of learning approaches to derive answers as wellas communicate explanations to their partner. Some chose to explain certain phe-nomena in physics using conventional learning methods like physics formulae andreasoning, whereas others explained using new approaches like real-world exam-ples and trial-and-error in the haptics environment. While some could relate thebehaviour to the mathematical relationships in the physics formula, we observedthat many participants could not remember the exact formula or used a wrongformula to explain some relationships between the parameters which often failed toconvince their partner. Real life examples were helpful, e.g., lever or bike gears weremore successful in explaining the behaviour of the hydraulic jack in the Pressureenvironment. 
We also observed haptic experimentation with quick trial-and-error when participants had some guesses and were not sure about them.

5.7 Discussion

We can now reflect on what the quantitative and qualitative results of these two studies yield towards our research questions and design guidance. Our analysis – from broad qualitative coding in Study 1 to a focus on critical haptic instances (hCIs) in Study 2 – went deeply into the what, why and how of haptic use towards collaborative grounding and learning.

5.7.1 Research Questions

RQ1: How can force feedback affect the process of grounding in collaborative learning environments?

Haptic grounding happens: First and foremost, we saw that haptic grounding did occur and was utilized by participants when the context made it available. In Study 1, in the ∼30 minutes of analyzed activity per dyad we found an average of 114 codes (for grounding events, haptic gestures and intentions). We broke down these codes per subcategory and report them as percentages. The haptic gestures (Table 5.4) were connected to grounding events and interpreted by intention, with rich examples in the qualitative part of the study.

Platform affordances make a difference: In Study 2, the hCI analysis together with the multiple device/environment combinations gave a qualitative look at what specifically was happening, and how the different device capabilities were supporting those developments.

We detail rich insights into these differences with respect to haptic strategies (RQ2 below) but here identify higher-level takeaways about what a specific force-feedback platform can enable:

• Fast realtime response (MagicPen) facilitates shared simultaneous experience.
• A shared rather than personal device can drive increased collaboration, but precludes simultaneous perception.
• A small workspace limits fast, ballistic activities (Haply) which are more possible with large ones, given sufficient update rates (MagicPen).
• Autonomy and the ability to stand up are fun and good for many kinds of hypothesis testing (Cellulo).
• Higher fidelity enables perceptual precision and refinement in exploration and experiments; environments need to take device fidelity into account.

Roles and stages: We see evidence for two stage-related roles that haptics enabled for users. Early on, it was helpful in simply mediating their collaboration, as an alternate communication tool that could convey, often nonverbally, ideas or confusions otherwise hard to express. It helped to advance the conversation; and we speculate that for participants who do not know one another well, this could be even more important. At a more advanced stage, and ideally with the benefit of a minimum of scaffolding background knowledge, learners were able to use it as a tool to achieve a learning goal.

RQ2: What haptically enabled strategies do learners use to create mutual understanding?

We got a hint of strategies from Study 1, in terms of the intentions behind their haptic gestures (Table 5.5), but greater depth in Study 2's hCI analysis. Participants used haptics for exploration during monitoring, and to evaluate and reflect on their ideas in order to reach consensus or to resolve disagreement.

Overlapping purposes: Strategies often merged and combined. Study 1 identified co-occurrence of haptic intentions (Table 5.5), with nonuniform linkages and sequencing among gestures carried out with particular intentions (Figure 5.5).
Study2 identified recurring behavior patterns in another way, using 9 collaboration di-mensions (Table 5.6) which we examined for co-occurrence through correlation(Figure 5.8), including a preliminary look at trends associated with learning gains.Primary takeaways are that both gestural intentions and collaboration purposesworked in tandem; and that platform impacted collaboration patterns, reinforcing ahaptic role in the collaboration dynamics.Platform and strategy: We also found that the environment/device combinationspromoted different strategies. For example, in the Pressure Lab participants coordi-nated their actions to set-up then observe and compare different (semi-autonomous)behaviours of the Cellulo robots. Conversely, in Collision & Momentum, Haplylearners responded more to the perceptual responses to the environment’s behavior,i.e., reporting and discussing their perception of the impact when their partner threwa virtual object (using the laptop trackpad) towards the Haply avatar. Such sensationswere not forthcoming from the slower-moving Cellulo, nor did MagicPen’s structureand activity really invite it. But in the Electrostatic Lab with MagicPen, learnerscould mutually perceive changes in forces and perceive the effects of their peersmanipulation’s of the environment in real time, and we saw that this combination ofshared perception and fast-realtime responsiveness seemed to enable them to reachconsensus more easily than in the other environments.A similar effect should be available with the Haply, if each collaborator had110their own and bandwidth constraints were met by the activity components (they didnot permit it in this case). While not the reason we used just one Haply, it did givean opportunity to see what happened when the haptic device had to be shared. Thesharing may have amped up their collaboration, since they had to negotiate their useof the single device.5.7.2 Practical Considerations in Designing to Promote HapticGroundingDesigners and technologists seeking to employ a haptic channel to support collab-oration in dyad work should take encouragement from these results, given theirdemonstration of participant readiness to employ this channel, their versatile andcreative use of it and the evident impact it had on joint interaction strategies. Wehave tried to shape our findings into some practical, if early, thoughts on things tokeep in mind.Acknowledging and supporting grounding stages in an application’s design mayempower a modality’s use.We saw active evidence of learners moving between grounding stages – monitoring,diagnosing and repair. Awareness that these stages have different requirements canenrich a design: e.g., providing for private to shared perception; opportunities toconfront ambiguities, to trigger discussions and idea-testing; and always, the needfor each individual to be able to show or ask about what they or the other is feeling,for comparison with their own.Activities have a natural rhythm which need to be accommodatedThe three stages we developed and followed for our learning activity (Familiarization/ Accumulation / Design) highlight a sequence that could be exploited in activitydesign not only for learning, but for other creative contexts, particularly whenmerged with grounding stages. While the first and last stage are typically included,we do not always allow enough time on Accumulation - predicting and reasoning forthe purpose of understanding, even before we have a design objective. 
Environments should provide rich opportunities for this step.

To bridge from private reflections to shared understandings, there must be space in the interaction to do both.
Haptics offers a private modality from which to build perceptual bases for grounding; from there, users can identify and resolve misunderstandings when they occur at a higher conceptual level. It enables users to explore, interact and communicate their experiences.

Haptics can become a voice when verbalization of ideas is a struggle.
Without dealing with semantics and technical terms, haptics offers its own vocabulary to help users achieve mutual understanding. Our observations suggest that activities with haptic environments can come prior to formal theory classes and help learners obtain an intuitive understanding of the learning concepts. A lack of technical terms would not burden haptic collaboration between peers.

A little situational knowledge goes a long way; without enough, what one feels may not make sense.
We saw that participants with at least some background knowledge used the device differently, and more purposefully, than those with little or none. There is a threshold of cognitive scaffolding below which sensemaking of the physical sensations is difficult; one cannot accurately interpret behaviour and movements. It is likely that a lesson's design can be made adaptable to counter this challenge, using augmenting visuals at the earliest stages to assist novices in forming a usable grasp of the representation. This kind of introductory experience has been suggested for augmented reality in education [184].

To reach consensus through haptic sharing, start low and don't lose track of it.
We observed two levels of reaching consensus. We realized that agreeing on basic perceptions is important to focus on at the beginning of the activity, when learners are building trust in the device and the information that they receive through it. Maintaining this common ground can build to consensus on increasingly abstract concepts, without the need to return to basic perception.

Haptics can be fun, but it has to be fully accessible.
Haptics can be a fun and stimulating way to engage, learn and interact; we saw it drawing people in. However, it needs to be easy, not frustrating. As for many technologies, the fun/frustrating line will be at a different point for different people.

5.8 Conclusions and Future Work

In this chapter we studied the role of force feedback on grounding, examining haptic strategies that partners employ in collaborative learning contexts. Our two studies sought patterns in participants' haptic interactions, their grounding actions and the intentions behind them, with qualitative and quantitative results revealing how participants naturally use haptics to set up the haptic environment and test their ideas. We found that participants predominantly employed haptics to explore the environment, communicate their hypotheses and repair possible misunderstandings in an effort to reach consensus. But our analysis also exposed rich, complicated patterns in collaboration dynamics, enabled by environment features and device capabilities. These patterns involved intentions such as curiosity, relation-building and fun, and naturally facilitated a non-verbal language.
The analysis alsoexposed the pitfalls of inadequate threshold knowledge, technical frustration, andprior adverse experiences.In our second study, critical haptic incident analysis provided an objectivemeasure to evaluate different dimensions of collaboration and their correlation withlearning gain. Qualitative and quantitative results suggested that use of hapticsimpacted the collaboration dynamic and strategy differently according to the typeof haptic interaction and the learning tasks. When the haptic activity was morecollaborative, haptics partially mediated the impact of collaboration on learning.Even when the nature of the task did not required haptic collaboration, we foundthat haptics and collaboration could separately improve learning.We reflected on the lessons we learned through our two studies to help hapticdesigners and educational specialists deliver their haptic information successfully tothe users, either to sustain mutual understanding or to create collaborative learningactivities.5.8.1 Limitations and Future WorkFrom this foundation we anticipate several next steps.Co-located to remote collaborationThese studies were designed pre-pandemic and executed during lockdown, even asresearch team members watched young family members struggle with connection-113Table 5.8: Touch in support of collaborative grounding through logical reason-ing and factual evidenceKey Haptic cues How it is perceived –through an objective lensExample of purpose –information that is soughtThermal Differences between the object temperatureand the body temperatureFinding the thermal conductivity inmaterials [104]Identifying texturesand material surfacepropertyTactile acuity /roughness/ friction/ findingprimitives, and symmetryIdentifying different materials basedon their surfaces [156]Shape and sizerecognitionIdentifying edges/ spatial information andgeometriesStudents’ conceptions of the animalcell [156]Compliance Resistance to applied force Tissue stiffness for surgicalrobots [115]Force behaviour Perceiving changes in force magnitude anddirectionCoulomb’s law (attractive-repulsiveforces)building in online learning. We are eager to see how our findings translate tosituations where remote partners face various reductions in the quality and easewith which they see and hear one another, both faces and what they are doing withtheir hands. We speculate that the haptic channel grounding benefits that we seein co-located scenarios may be even more valuable in remote ones, and this is anexcellent time to find out.From novelty to skill - longitudinal examinationFew of our participants had any prior experiences with force feedback. Noveltycreates engagement, while inexperience obstructs working with the haptics deviceand interpreting force cues. A longitudinal study can explore how users who becomemore literate over time can become more effective communicators, explorers andcollaborators.Learning outcomesOur focus here was to qualitatively understand the role of haptics in grounding, and24 participants gave more data than we could use. However, more is needed forreliable statistical insights. Ultimately, for our learning use case the prize is impacton learning outcomes, which will require a large sample to determine given themany sources of variation in individual’s learning situation.Other haptic and multimodal cues114Here we only studied the changes in force behaviour; however, many more could beemployed in the grounding process. 
In Table 5.8, we categorize a set of examplesto demonstrate the use of haptics in learning activities and how they are perceivedthrough an objective haptic lens. Future studies can investigate grounding with otherand combinations of haptics cues, as well as myriad multisensory combinaions withother senses.115Chapter 6A Framework for PhysicallyAssisted Learning (PAL)Preface – Inspired by Harold’s purple crayon, in previous chapters we investigatethe technical challenges of creating a platform that allows us to investigate theimportance of haptics in learning. We proposed a technology (MagicPen) that canhaptically render the virtual world similar to Harold’s purple crayon. Further, westudied the two core haptic interactions for regenerating Harold’s experience ofdesigning and then exploring the hypothetical imaginary world.Building on the previous discussions and outcomes, in this chapter we revisitdifferent categories of interaction that are enabled by the force feedback support.We then complete Harold’s journey using haptics, by offering a seamless, hapticallysupported continuum between the activities of constructing an environment (design)and exploring it. We present a Physically Assisted Learning (PAL) framework toachieve a better understanding of how to build a model by drawing it and thenexploring the model through the sense of touch. The material in this chapter is takendirectly from (Kianzad et al. 2021)1.1Soheil Kianzad, Guanxiong Chan, Karon E. MacLean “PAL: A Framework for PhysicallyAssisted Learning through Design and Exploration with a Haptic Robot Buddy,” Frontiers in Roboticsand AI, pp 228-250, Vol 8, 2021.1166.1 OverviewRobots are an opportunity for interactive and engaging learning activities. In thischapter we consider the premise that haptic force feedback delivered through a heldrobot can enrich learning of science-related concepts by building physical intuitionas learners design experiments and physically explore them to solve problems theyhave posed. Further, we conjecture that combining this rich feedback with pen-and-paper interactions, e.g., to sketch experiments they want to try, could lead to fluidinteractions and benefit focus. However, a number of technical barriers interferewith testing this approach, and making it accessible to learners and their teachers.In this chapter, we propose a framework for Physically Assisted Learning basedon stages of experiential learning which can guide designers in developing andevaluating effective technology, and which directs focus on how haptic feedbackcould assist with design and explore learning stages. To this end, we demonstrated apossible technical pathway to support the full experience of designing an experimentby drawing a physical system on paper, then interacting with it physically afterthe system recognizes the sketch, interprets as a model and renders it haptically.Our proposed framework is rooted in theoretical needs and current advances forexperiential learning, pen-paper interaction and haptic technology. We furtherexplain how to instantiate the PAL framework using available technologies anddiscuss a path forward to a larger vision of physically assisted learning.6.2 IntroductionThe learning of topics once delivered in physical formats, like physics and chem-istry labs, has moved into digital modalities for reasons from pragmatics (cost,maintenance of setups, accessibility, remote delivery) to pedagogy (topic versa-tility, personalized learning, expanded parameter space including the physicallyimpossible). 
Much is thereby gained. However, typically accessed as graphicaluser interfaces with mouse/keyboard input, these environments have lost physicalinteractivity: learners must grasp physical concepts in science and math throughdisembodied abstractions which do little to help develop physical intuition.Physically interactive robots coupled with an interactive virtual environment(VE) offer an alternative way for students to encounter, explore and collaboratively117share and build on knowledge. While contemporary technology and learningtheories have not yet delivered a robot system sufficiently versatile to supporta wide range of learning needs and environments, we can nevertheless proposeand separately evaluate design dimensions that a haptic robot and accompanyinginteractive VE enables. The objective of this chapter is to facilitate the design andassessment of this new class of learning technology by articulating its requirementsvia a framework.Experiential learning theorist – (Kolb ,1984) [122] – posits a four-phase cyclethat learners ideally repeat iteratively: concrete experience (CE), reflective observa-tion (RO), abstract conceptualization (AC), and active experimentation (AE). In thischapter we focus on how a haptic robot might be engaged in the stages of this cyclewhich naturally lend themselves to physical manipulation: active experimentation,through designing a virtual experimentation environment suitable for a question theyhave, and concrete experience, through exploring the environment they configured.A Vision for Physically Assisted Learning: A Sketch-Based Design-Explore Cy-cleThe ability to draw a model, then feel it (active experimentation around anidea, then associated concrete experience of it – forming and testing a hypothe-sis) may be key to elevating interactive sketching to experiential learning. Whenexploring, learners can extend their understanding of a domain of knowledge byphysically interacting with a virtual model – making abstract concepts more ac-cessible, and approachable in new ways. When they are designing, physicalizeddigital constraints combined with sketch-recognition intelligence can help them toexpeditiously express their thoughts by sketching to the system, with the addedbenefit of representing the resulting model to a co-learner. Finally, exploring one’sown designs now becomes a holistic cycle: the learner challenges their knowledgeby dynamically posing their own questions and mini-experiments as well as others’by designing models, then reflecting on the outcome of interacting with it.As a concrete example: to “play with” the dynamics of a physical system (e.g.,a mass-spring oscillation), a learner is assisted by a force-feedback-enabled drawingstylus to sketch the system on an arbitrary surface. The system recognizes thedrawn ink as, say, a mass connected to a ground through a spring. Using the samestylus, the learner can then “grab” the drawn mass and pull on it. To test a question118about parallel versus series spring networks, they can mentally predict then quicklydraw the two cases and physically compare the result. Similarly, they could testrelative oscillatory frequencies by extending the spring then “releasing” it. Bywriting in a new spring constant (“K = 2”) they can modify the spring constant. Thesame process can be applied in other domains, such as in designing-to-explore anelectronic or fluid circuit, and to improvisationally testing equations defining systemproperties. 
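As a rough computational sketch of the mass-spring case above – assuming a generic force-feedback pen API with read_position()/set_force() calls, which are placeholders rather than an actual device library – the haptic loop could integrate the drawn mass and push back on the stylus through the drawn spring; rewriting "K = 2" simply updates the corresponding constant below.

    # Minimal sketch of haptically rendering a drawn mass-spring system (illustrative
    # constants; the device API names are assumptions, not a specific library).
    K = 2.0       # spring constant the learner wrote next to the sketch
    M = 0.5       # mass recognized from the drawing (kg)
    B = 0.2       # light damping to keep the rendering stable
    DT = 0.001    # ~1 kHz haptic update

    x_mass, v_mass = 0.0, 0.0          # state of the virtual (drawn) mass

    def step(x_stylus):
        """One haptic tick: advance the virtual mass, return the force on the stylus."""
        global x_mass, v_mass
        stretch = x_stylus - x_mass    # how far the learner has pulled the spring
        f_spring = K * stretch         # Hooke's law: force on the mass...
        v_mass += (f_spring - B * v_mass) / M * DT
        x_mass += v_mass * DT
        return -f_spring               # ...and its reaction on the learner's hand

    # On the device this would run continuously:
    #     device.set_force(step(device.read_position()))
    if __name__ == "__main__":         # offline check: hold the stylus at 5 cm
        for _ in range(5000):
            f = step(0.05)
        print("reaction force after 5 s:", round(f, 4), "N")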
This use case (Figure 6.6) and others are implemented and elaboratedlater in this chapter.Technical Challenges and Ways Around ThemAspects of the AE and CE experiential learning stages have been studied andvalidated in isolation using tangible user interfaces, robots and haptic devices,and the results underscore the general promise of this approach [148, 185, 236].However, few systems support physicalized interaction in both stages, far less fluidtransition between them.This is at least partially due to the technical difficulties of working with present-day versions of these technologies. For example, conventional grounded force-feedback haptic systems can theoretically support VE creation and interaction, butin practice, they require extensive time and expertise not just to create but even toapply small variants in learning purpose, which often is unavailable in a schoolsetting. Their expense, limited-size and desk-tethered workspaces and single-usernature preclude mobility and collaboration and tend to be too high-cost and requiresignificant technical support. Other robot technologies are mobile and collaboration-friendly, but do not convey physical forces – e.g. a robot puck with which a usercan control tokens on a graphical screen.However, a handheld force-feedback tool that combines a spectrum of autonomywith physical interaction can potentially overcome these technical limitations: e.g.,a robotic pen which can assist a learner in navigating concepts of physics andmath by conveying physical forces modeled by an environment drawn by its holder.Technically, this system must read and understand the user’s sketches and notations,translate them into a VE and associated parameterized physical models, then animatethis environment mechanically with a physics engine rendered through a suitableforce-feedback display – ideally with the same handheld tool with which they drewthe environment. A haptic device in the general form of a handheld, self-propelled119and high-bandwidth robot can generate untethered, screen-free experiences thatencourage collaboration.This concept is technically feasible today without any intrinsically high-costelements, with the haptic pen itself fully demonstrated [113, 116], but significantengineering remains to translate innovations in sketch recognition from other tech-nical domains and integrate them into a full-functioned, low-latency robotic system.Our purpose in this chapter is to consider the potential of this approach based onrelated technology elements as a proxy for a future integrated system which weknow is possible to build if proven worthwhile.6.2.1 Approach and ContributionsWe have designed support based on a theory of activities that has been shown to leadto effective learning, and require this support to meet usability principles suggestedby the theory. For example, the cyclical nature of Kolb et al.’s [122] learning cycledirects us to minimize cognitive and procedural friction in performing and movingbetween important cycle activities. Unfettered designing and exploring impliescomfortable workspace size and natural command-and-control functions that trans-fer easily from a student’s existing experience – e.g., pen-and-paper diagramming,nomenclature consistent with how they are taught, direct application of parameters,etc. They should not have to switch tools when they switch stages. 
Meanwhile, their work should be easily visible in a way that teachers and co-learners can see what they are doing and effectively collaborate in their experience [10, 112, 185].

Getting To Confidence that it Could Work

The scope of this chapter is to identify and solve technical obstacles to the instantiation of the theoretically based PAL framework, focusing on the gap in previous work: the connection between physically supported design and explore learning activity, in the form of theoretical rationale and technical proof-of-concept. We need to ensure that the concept's non-trivial realization is feasible, given obstacles ranging from stroke recognition to haptic rendering algorithms and the availability of a haptic display with suitable capability and performance. Only with this evidence will it be ready to (beyond our present scope) optimize for usability; and thence to evaluate for the pedagogical value of adding physical expression and fluidity to the explore-design-explore cycle. Given the complex and individual process of learning, this will require a sequence of user studies to convincingly validate the framework and its impact on learning gain, as well as generalizability across multiple platforms.

Figure 6.1: The PAL Framework. Physically Assisted Learning interactions haptically adapt stages of experiential learning from Kolb's [122] general framework, with some added features from Honey [93]. Hands-on Active Experimentation and Concrete Experience are most amenable to haptic augmentation, enriching the more purely cognitive Reflective Observation and Abstract Conceptualization. (The figure lists design-stage interactions – shared control, passive constraints, active constraints – explore-stage interactions – repulsive/attractive forces, compliance, virtual walls, bead on a string, impact/collision, surface/texture, shapes and geometries – and cross-cutting values: learner/teacher accessibility, support of collaboration, support of mobility, transparent user-system communication, and smooth movement between learning stages.)

Guiding Support and Assessing Potentials with an Experiential Learning Framework

We propose a Physically-Assisted Learning (PAL) framework through which we can systematically compare different candidate technologies' potentials in unlocking key activities and values (Figure 6.1). Through the PAL lens, we view learning via the physically supported activities of designing (AE) and exploring (CE); and assess platforms against key cross-cutting values of learner/teacher accessibility [171], support of collaboration, untethered [114], screen-free mobility, transparent user-system communication [192], and seamless transitioning between learning stages.

We are using PAL as a tool to understand the impact of device attributes on learning strategies and outcomes, as well as collaborative effectiveness, self-efficacy, creativity, and performance in drawing and design.
Throughout the chapter, we will121relate needs, technical challenges and approaches to this framework, and considerhow the candidate technologies stack up on its values under the two activities offocus.We contribute:(1) The Physically Assisted Learning (PAL) framework which can (a) concep-tually and constructively guide the design of haptic science-learning support;and (b) lead directly to articulation of special requirements for explore-typecontexts like learning, including fluid access to large ranges of model structureand parameterization.(2) Demonstrations of (a) means of addressing these needs, for designing withinnovative application of hand-stroke recognition, and for exploring throughhaptic rendering with a control approach not available in open libraries(namely passivity control); and (b) a technical proof-of-concept system inwhich designing and exploring are haptically linked: a user can draw andthen feel a virtual environment.(3) A path forward: An account of technical considerations, challenges and pos-sible approaches to fully realize this paradigm.6.3 BackgroundWe introduce past work related to the idea of physicalizing digital manipulatives,relevant classes of haptic force feedback technology, challenges in bringing thiskind of technology into education environments, and ways in which haptics havebeen used for related activities of designing and exploring.6.3.1 Adding Physicality to Digital Manipulatives (DMs) via RobotsRobots are a class of DMs that use motion along with other visual or audio cuesto express information. Children can program robots and therefore observe andexperience how defining a set of rules results in intentional behaviours in them. Thisalso gives them the freedom to decide what the robot is, based on how the robotbehaves. This flexibility potentially helps learners to use the robot as a probe toexplore many learning concepts in different contexts [189].122Haptics can empower digital manipulatives by expanding the imagination be-yond the motion of a physical robot, in the behaviour of the virtual avatar andrespective feeling of force feedback. While users can manipulate the environment,we posit that the visual and haptic cues can reduce the cognitive load of interpretingthe abstract concepts and make the haptic digital manipulative more expressive.Returning to our mass-spring illustration: a physical mass connected to a realspring is a manipulative that can demonstrate the concepts of elasticity, inertia,vibrations and resonance. A programmable robot can visibly implement the mass-spring behaviour through its reactive motion. With physical user interactivity, thisrobot becomes a haptic digital manipulative. Combined with a graphical display,it could tangibly render the system with learner-specified parameters – shape, size,spring and mass constants – and expose learners to the reaction forces and dynamicsof pulling and bouncing it [154] as well as new combinations of springs, and varyingviscosity and gravitational force. Such a system can simulate many other physicalsystems, e.g., gas, fluid or electronic circuits.6.3.2 Relevant Educational Theory and Design GuidelinesLearning Through ExperienceIn Piaget’s Constructivism [181], knowledge is seen as deriving from indi-viduals’ experiences, rather than as a transferable commodity. Learners activelyconstruct and re-construct knowledge by interacting with the world [9]. 
Accordingto Piaget’s cognitive development theory, to know an object means to act on it.Operation as an essence of knowledge requires the learner to modify and transforman object, and also understand the process of transformation; leading to knowledgeof how the object is constructed [180]. Several schools of educators [46, 161, 174]have emphasized physicality in educational learning tools and direct manipulationof objects. These theories underlie a goal of providing tools that enable learners tooperate on multiple instances of knowledge construction. Papert based his Construc-tionism on his supervisor’s Constructivist [175] learning theory. Constructionism, inaddition, takes into account the social and situational aspects of learning. Accordingto Papert, learners will be more involved in learning when they are constructingsomething tangible (schema) that is shareable and justifiable when other learners123can observe and criticize or even use it.Extending Experience With ReflectionMeanwhile, Cornu [42] propose three iterative steps of externalization, sense-making of meaning, and internalization, through which reflection links experience tolearning. Often discussed in social constructionism literature, these steps have beenapplied to a wide range of human actions in the world and society, including the useof feedback (from people, or the results of physical “experiments”) to develop themeaning of the self.Haptic Digital Manipulatives as Vehicles for Experience and ReflectionThe theories above have been applied to a wide range of tangible user interfacesand digital manipulatives. Through educational robots, experiential learning canbe tangible and digitally supported, and specifically invite reflection. Resnick’sprocess of reflection with robots [190] starts with the construction of a robot-basedenvironment, in which learners make their own “microworld” by programming it,followed by feedback from robots to help them shape and validate their ideas. Sucha reflection cycle can be repeated multiple times, deepening the experience [64].Within early edurobot work, we sought visions for digital manipulatives suitablefor more advanced educational topics. We found examples using robots to aidlearners in mindful integration or materialization of ideas through the practice ofdesign [5]; and to support exploration of different domains of knowledge or ofabstract concepts by making them more accessible or approachable in new ways[172].Instantiating these principles in a digital manipulative could help them to workas an object-to-think-with, wherein learners instantiate their ideas into a physicalmodel through the object, and can debug or extend their thinking model regardingthe outcome. The process of analyzing the validity of execution motivates learners tothink about their own thinking, developing their metacognitive ability. This resultsin (a) gaining higher-level thinking skills, (b) generating more representationsand metaphors for their understanding, (c) improving social communication anda collaborative atmosphere, and (d) forming deeper understanding of the conceptamong learners [11, 22].1246.4 A Framework for Physically Assisted Learning (PAL)The motivation for the PAL framework is to exploit benefits postulated above for ahaptic digital manipulative, in learning and in pen-and-paper interaction, and turnthem into a versatile and effective digital manipulative. We previously introducedKolb’s four-stage framework for experiential learning [122], on which we havebased PAL (Figure 6.1). 
Here, we lay out PAL’s theoretical basis, then elaborate onits components and explain how we expect learners and designers to use it.6.4.1 Pedagogical Rationale and ComponentsLearning is iterative: one builds a mental model of a concept by repeatedly inter-rogating and manipulating a system, forming then testing successive ideas of howit works in a cycle such as Kolb’s. Manipulatives are often designed in a way thatwill support just one part of this cycle – e.g., to create a microworld or to directlyinteract with one. Our premise is that supporting fluid movement throughout theexperiential learning cycle will facilitate more resilient mental model formation.Supporting Kolb’s Learning Stages with a Haptic Digital ManipluativeMost of the visions in related work, and the idea of robot-supported reflection morebroadly, would support at least one out of Kolb’s two “acting in the world” phases:Concrete Experience (CE; having an experience) and Active Experimentation (AE;putting a theory into practice). Here, there is an opportunity for intervention, andalso for researchers to observe and try to understand what is happening basedon the part of the cycle that is visible. The more internal stages of ReflectiveObservation (RO; reflecting on an experience) and Abstract Conceptualization (AC;theorizing) are crucial, but can be influenced or inferred only through what happensin the other phases, or through post-hoc assessment, e.g., of changes in conceptualunderstanding.The PAL framework’s mandate is therefore to help educators focus on physicalinstruments and strategies that will support learners in CE and AE, and eventuallyto help us insightfully observe them as they do so.Early works on edurobots have claimed that robots could be beneficial in allfour stages. For example, for Reflective Observation (RO), Resnick suggested that125through its processing power, the robot could speed the reflection cycle [190] –externalizing/internalizing from hypothesis to result; modifying parameters, con-ditions and even time. For Abstract Conceptualization (AC), Papert uses gears asan example where learners can use mechanical objects for conceptualizing physicsconcepts [174].Kolb argues that the interaction and manipulation of tangible objects is anindivisible part of epistemic (knowledge-seeking) exploration, where the learnerpurposefully changes the learning environment to see its effect and thereby tounderstand relationships. When suitably framed through availability of multipleperspectives, parameters and factors, manipulation thus might provide at leastindirect support for Kolb’s Reflecting Observation (RO) stage [9, 57].However, these claims are as yet unsupported. Limited to findings that havebeen validated in controlled studies, we conjecture that a DM approach’s influenceon RO and AC will be indirect.PAL ComponentsA useful (that is, versatile) manipulative should be able to provide the basis forproductive subsequent reflection and theorizing during both Active Experimentation(AE) and Concrete Experience (CE). Therefore, we identified explore (CE) anddesign (AE) as PAL’s key components: activities which a haptic DM mustenrich.Further support for centering a PAL framework on these two components, aswell as clues towards means of implementing them, emerge from other studies ofhow haptic feedback can support designing and exploring. Summarizing these,Table 6.1 has two features of particular interest. 
First, we populated it with just twoof Kolb’s four learning activities, because we found few examples of attempts touse haptics or other PDMs to directly support reflection or theorizing. Those we didfind (e.g., [83, 187, 191]) proposed systems or studies whose results either showedno benefit or were inconclusive.Secondly, none of the cited studies examined both designing and exploring, buttreated them as isolated activities. This may have been influenced by the naturalaffordances of the devices used. For instance, a Haply (in its unmodified state) can126be used readily to Explore; but to facilitate creation of micro-worlds (Design), wefelt we needed to hack it – and chose addition of a drawing utensil. In other words,meeting the principles expressed by PAL triggered specific, targeted technologyinnovation. More is needed to reach the full PAL vision; the framework provides ablueprint to get there.We believe that PAL framework fits best in Constructionism learning theory.PAL emphasizes on two main aspects of Constructionism. It demands the learner toconstruct the learning environment, which later they can create an experience byexploring it.PAL potentially can enhance other schools of learning. From the physiologicalstandpoint, it is possible to find some added values of using the exploration compo-nent of PAL for learning by observation. The learner watches how the system doesit and then tries to repeat the action. We can also define a reward system so that thehaptic feedback attracts and motivates learners towards a certain learning directionwith specific objectives (reward-based learning).We foresee some limitations in using PAL for Cognitivism learning theory. AsCognitivism focuses on information transformation, perhaps the haptic channelis not the most efficient method of communicating information to the learners asopposed to using visual or auditory channels.6.4.2 Principles for Creating Digital ManipulativesWe assert two overriding principles that guide us in creating versatile digital manipu-latives, based on learning theory discussed in Section 6.3.2 as well as observations oflearners’ interactions both with conventional pen and paper and with haptic/roboticdevices, across a range of learning scenarios.A digital manipulative needs to serve learners in expressing their thoughts (De-sign)According to Ackermann [5], “To design is to give form or expression, to innerfeelings and ideas, thus projecting them outwards and making them tangible”.Design enables individual interactions with and through human made artifacts andinvolves them in the “world-making” process [75]. The purpose of design goesbeyond representing just what exists, by bringing imagination into this existence127Table 6.1: Summary of research informing the use and benefits of haptics inlearning, organized by the PAL framework’s two activity components. 
[+]indicates a positive benefit, or [-] no added value was found.HapticBenefitsDesign (Active Exploration) Explore (Concrete Experience)Understandingand manip-ulatinggeometry[+] [231] Drawing accurate geomet-ric shapes.[+] [166] Computer assist collabora-tive drawing of different shapes.[+] [140] Increasing the passive sty-lus affordance through haptic guid-ance.[+] [172] Identifying differentshapes and number of edges.[+] [156] Understanding thestructure and function of the cellmembrane transform[+] [105] Learning morphology anddimensionality of viruses; diagnosemysterious viruses by pushing,cutting and poking.Improvingaccuracy andspeed[+] [116] Improving accuracy ofdrawing objects through force feed-back assistance[+] [222] Using haptic feedbackin a calligraphy simulation reduceswriting errors and improves writingspeed.[+] [231] Drawing accurate geomet-ric shapes.[+] [165] Enhancing completiontime and interactivity of bimanualtasks.[-] [55] Users were unable to sculptforms to produce acceptable curvedsurfaces using haptic feedback.[-] [19] Haptic human–human inter-action does not improve individualvisuomotor adaptation.Engagement [+] [84] Significant increase in stu-dents’ engagement during the learn-ing activity.[+] [233] Increasing engagement inword-writing activities.[+] [127] Increasing confidence andachieving more realistic drawings.[+] [220] Enhancing interactionswith objects in Augmented Reality[+] [217] Providing realistic sensa-tion of physical interaction in a vir-tual environment[+] [112] More engagement in edu-cational robotic activities.Accessibility(e.g., in faceof disability)[+] [164]) Re-learning to write aftera stroke.[+] [100]) Haptics improves taskperformance of children with physi-cal disabilities (review paper.[+] [221] Allowing visually im-paired users to perceive data withgreater speed and efficiency.Understandingof underly-ing concepts[+] [142] Designing an optimumsystem/model by receiving on-the-go force feedback.[+] [148] Conceptualizing electro-static concepts through the sense oftouch.[+] [235] Building electrical circuitswith one or two bulbs.[-] [188] Haptics did not add tolearners’ ability to understand pen-dulum principles.[+] [236] Understanding mass-beambalance.128[5].For example, we often use pen and paper to write down fast-travelling ideasin our minds. Our immediate drawings can reflect our thoughts, experiences andemotions. Particularly for children, drawings reveal the hidden transcripts of theirinterpretation of the world. From scribbles to detailed, elaborated productions,sketching is both intellectual play and can help us form, develop and communicateour thoughts, a key part of a conceptual process. Sketching is direct, improvisa-tional, expressive, resists distraction, and may promote deeper cognitive processing.Projecting our ideas onto paper makes our thoughts more tangible, shareable, andjustifiable; This enhances our communications with others. A versatile manipulativeshould work as a medium to exchange information between a user and a computerinteractively.These prior findings and observations support the premise that aid from asuitably configured and supported physical digital manipulative can directly impactthe active experimentation phase: specifically, when learners are hypothesizing andplanning small tests. 
The environment altogether should encourage the learner tohypothesize, construct a experimental micro-world and set the conditions for theenvironment, anticipate the result and test it; and iterate to improve their hypothesis.A digital manipulative needs to support exploration of domains of knowledge(Explore)Two classes of manipulative proposed by [190] include Frobel Manipulatives(FiMs) to model the world, i.e., provide an intuitive way to experience manyconcepts in physics by making them more accessible (wooden sphere and cube tofeel the natural differences between shapes), and Montessori manipulative (MiMs)to model abstract structure – e.g., form an approachable way to make math, andgeometry concepts more tangible (golden bead materials used for representingnumber). Haptics researchers show that even a 1D haptic device can support bothof these classes when it works as haptic mirror [154], to mimic physical experience,or as a haptic bridge, connecting a dynamic visualization of a mathematical conceptwith a haptic representation [44]. A versatile manipulative should support bothclasses using physical interaction with the virtual world through force feedback.Perhaps the most studied aspect of digital and physical manipulative is the roleof physicality in simulation learning for concrete experience (CE) stage. Here,129learners try out the action and have a new experience. Through physicality, learnerscan obtain more embodied experiences and perceive information through touch.6.4.3 Using the PAL FrameworkLearner’s UseSome examples illustrate PAL’s two conceptual activities, wherein a learner con-structs a microworld then explores it.Design: The learner must be able to fluidly express rich information to the system.Assistive force feedback to users’ pens while sketching can help them manifest andcommunicate their ideas to other people and to a computer: it might be more efficientand natural if they can feel virtual constraints that support them in generating smoothcurves and straight lines as they draw – on a computer screen, paper, whiteboard orother surface. In the future, we can exploit this design space to empower learnersto actively design, make, and change their learning environment based on theirhypothesis.Explore: The tool must provide rich sensory information to the learner. Theaddition of haptics to a digital manipulative (beyond motion alone) potentiallysupports a more compelling interpretation so that learners can predict and reasonabout outcomes based on what they feel as well as see.In this project we explore these two PAL activities – requisite attributes for anobject to think with – along with the connection between them. Although sucha device could also be seen as an object to promote computational thinking [99]we saw it differently. A DM exploits the computational power of the computer tospeed up the learner’s reflection cycle, which leads to more constructive failures[36]. Throughout this process, learners can explore a variety of representations andsolution methods. If followed by a consolidation and knowledge assembly stage,together they can create a productive failure process [108].Education Technology Designer’s UseIdeation of Form and Prediction of Haptic Value130Designing technology solutions for learning requires ideating innovative con-cepts and ideas, but also evaluating and prioritizing them. PAL can help inspireeducational technology designers with new ideas, and to understand the potential ofadding haptics to a particular domain or context. 
In addition, our implementation shows a technical example of how to use emerging technological capabilities to solve particular problems.

Setting Requirements and Evaluating the Result

PAL can help designers identify requirements via the experiences that their technology needs to support. Based on Figure 6.1, a designer can create an opportunity map by examining connections between the stages of learning and the activity type. For example, to support collaboration in learning about electrostatic forces, a learner can construct the environment (design) by placing point charges, then invite their partner to experience them (explore). A designer can then focus on finding the haptic controls and feedback which will allow the learner to place the point charges in the correct places (e.g., equidistant), and on how to render the force behaviour as learners move relative to each other.

Based on these requirements, in evaluation an ed-tech designer simply needs (at a first pass) to verify that the requirements are being met when learners interact with the system. Are they able to construct the environment, and then place the charges correctly? Can a partner experience this? Is the whole experience engaging and usable enough to invite this kind of collaboration? With the assurance provided by intermediate-goal and usability evaluation derived from theory-based guidelines, they will be in a better position to proceed to assess how such a system influences learning outcomes.

6.4.4 First Step: Need for a Technical Proof-of-Concept

In past research supporting haptic design and explore activities (Table 6.1), what is missing is the connection between them. This requires a technical means by which to understand the user's imagination and dialogue in design, and then bring it into existence by defining its physical, haptic behaviour for exploring. For example, if a user draws a microworld consisting of a set of point charges, we need to define the force behaviour of the point charges and make it interactive so that users can feel the forces as they move in the environment.

Once such a system exists, it can misfire for purely technical reasons. For example, expanding the user's available possibilities during design – e.g., allowing them to cover a greater variety of concepts in more ways – often introduces new issues, such as triggering the vibrational instabilities which naturally accompany haptic rendering of dynamic environments with large uncertainties.

In summary, the challenges here are to (1) make an intelligent system that can take unconstrained drawing as input, and (2) robustly render a wide range of haptic environments with high quality. For the first, advances in artificial intelligence go far in allowing us to infer and display interpretations of users' drawings [21, 47]. For the second, the field of haptic rendering can contribute advanced control methods which, when carefully applied, should be able to describe and, within bounds, address the environments that may arise when a user is permitted to create ad hoc environments [49, 82].

Putting these elements together is, however, a substantial systems-type contribution, and its initial appropriate validation is in technical performance assessment with respect to force and stability outputs relative to known human psychometric capabilities, rather than a user study of either usability or learning efficacy.
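Before turning to that proof-of-concept, the point-charge microworld mentioned above can be made concrete with a minimal force-law sketch. The scaling constant and the charge list are placeholders for whatever a sketch recognizer might extract, and a deployed renderer would add the stability safeguards just discussed; here only a crude distance clamp is shown.

    # Sketch: net Coulomb-style force to render at the pen tip from drawn point charges.
    # Constants are illustrative and scaled to device force limits, not physical units.
    import math

    K_SCALE = 0.002      # force scaling tuned to the display's output range
    MIN_DIST = 0.005     # clamp to avoid unbounded forces (and instability) at a charge

    charges = [((0.02, 0.00), +1.0),   # ((x, y) in metres, signed charge)
               ((0.08, 0.03), -1.0)]

    def coulomb_force(pen_pos, pen_charge=+1.0):
        fx = fy = 0.0
        for (cx, cy), q in charges:
            dx, dy = pen_pos[0] - cx, pen_pos[1] - cy
            r = max(math.hypot(dx, dy), MIN_DIST)
            f = K_SCALE * pen_charge * q / (r * r)   # repulsive for like signs
            fx += f * dx / r
            fy += f * dy / r
        return fx, fy

    print(coulomb_force((0.05, 0.01)))   # force the device would render at this position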
In the fol-lowing, we will describe and assess performance of our technical-proof-of-conceptsystem which implements this missing, connective aspect of our proposed PALframework.6.5 Haptically Linking the Expression and Explorationof an IdeaCurrently available processes for generating and modifying content for hapticinteraction impose logistic and cognitive friction between ideation in the form ofsketching a problem, idea or experiment the learner would like to understand, andtesting that idea in the form of a physicalized model. We aim to reduce this friction.After describing the technical setup we will use to demonstrate our ideas, we willwork through a series of technical instantiations which support increasingly powerfuland wide-ranging cases. Each begins with an education use case illustrating how thislevel of haptics could be useful. Readers may find the first (rendering a haptic wall)132distant from our final goal; we have included it as a means of gradually exposinglayers of haptic technology needed to understand more complex implementations.While all of the haptic rendering algorithms described here are well known, weshow how they can be combined in new ways with other technical features (e.g.,stroke recognition) to meet technical challenges that arise from the requirements ofa versatile, unrestricted learning environment.6.5.1 Technical Proof-of-Concept Platform: Haply Display andDigital-Pen Stroke CaptureThe demonstrations described here use the Haply Robotics’ pantograph system(Figure 6.2, https://haply.co/, [65]) and its hAPI software library [66]. The Haply isa low-cost pantograph, relying on 3D-printed parts which together with good-qualitymotors and fast communication can offer convincing haptic rendering with respect toaccuracy, force levels, responsiveness and uniformity across its 14x10cm workspace(https://haptipedia.org/?device=Haply2DOF2016). It communicates sensor andactuator data via USB to a VE running on a host computer, typically using theProcessing computer language. The hAPI library renders haptic interactions byreading the pantograph’s end-effector position (moved by the user’s hand) andcomputing output forces sent to two driver motors.To capture users’ sketch strokes, we used a watermarked paper and a digitalpen (Neo Smart Pen, [98]) connected to the Haply end-effector. The digital pencaptures detailed information about the user’s stroke: absolute position, pressure,twist, tilt and yaw. The Neo pen requires watermarked paper, creatable with astandard laserprinter by printing encoded dot files. For erasability and re-usabilityof sheets, we laminated the watermarked paper and positioned it under the Haplyworkspace. We calibrated the digital pen’s position data with the Haply’s encoders.With this system, the user can draw on the laminated paper and the strokes arecaptured, sent to the host computer and imported to the Processing application thatinteracts with the Haply.133Figure 6.2: Technical Setup. Our demonstration platform consists of a Haplyforce-feedback pantograph, a USB-connected digital pen, and a hostcomputer. The Haply communicates position information to the hostcomputer and receives motor commands through a USB port. 
A digital pen captures and conveys the user's stroke, along with data on pressure, twist, tilt and yaw.

Figure 6.3: Technical implementation required to support Design (green) and Explore (blue) learning activities in response to ongoing user input. Details are explained in Section 6.5.3. (The user's graphic from CanStock Photo, with permission.)

6.5.2 Level 1: Rendering Rigid Surfaces and Tunnels

We begin by illustrating how haptics could potentially support learning (in motor coordination) with basic haptic rendering techniques.

Use Case: Handwriting Training Guided by Virtual Walls

Past research on motor training, e.g., post-injury rehabilitation, has elucidated effective strategies for utilizing physical guidance, whether from a human trainer or a programmed haptic appliance. Full guidance of a desired movement does not typically produce good transfer to the unguided case; some studies suggest better results by physically obstructing the desired movement in the face of visual feedback, causing exaggerated motor unit recruitment [94, 143]. Learning and improving handwriting similarly involves training numerous haptic sensorimotor activities; these employ both fine (fingers) and larger (arms) motor units. It entails significant mental and motor-control practice, particularly for individuals working against special challenges, such as dysgraphia, which can impact 25% of the school-aged population [79, 206].

However, learning and improving handwriting is also a cognitive practice, and is often practiced by the young, for whom engagement is also important. Rather than learners comparing their results to a standardized specified outcome, an expert may be able to conceive of better individualized support (more specific, or advancing at a different rate) but requires a means to convey it to the learner as they practice on their own [68].

The priority may thus be easing an expert's customization of exercises, to support repeated self-managed practice [79]. The expert might want to modify details of visual cue presentation and the level and form of haptic guidance [125, 215]; or temporally adapt by reducing force feedback aid over time through control-sharing [116]. Effective feedback must convey correct movements, notify a learner when something goes wrong, and show them how to correct their movement [8, 10]. Haptic guidance could potentially provide these needed cues when the teacher is not present, without demanding a high cognitive load.

In the PAL framework, the teacher would use the design stage, then explore to ensure the force feedback works correctly. The learner would access this resource in the explore stage.

Here, we show in a basic example targeting elementary school students how a teacher can define a channel within which the learner needs to stay as they trace a letter. This channel will be rendered as a pair of enclosing and guiding haptic walls. This simple demonstration does not attempt best practices for handwriting training, or demonstrate many customization possibilities; it primarily introduces an important building block of haptic rendering, but is also a placeholder for the advanced ways listed above that haptic feedback could be used to customize handwriting support.

Defining a Wall

There are many ways to define a boundary to a computer program. We require a means that is convenient for a teacher or therapist. Working in a pen-and-paper context, we let the teacher sketch the path which they wish the learner to follow. Their strokes are captured as a time-based set of point coordinates. These can be used either directly, if the stroke sample density is adequate, or with a smoothed line fit to them. We collect the user's strokes as a two-dimensional array, then re-sample it with spatial uniformity and present the result as a one-sided wall. A user can move freely on one side of the wall; if they penetrate the wall from the free direction, they will feel resistance. A teacher can draw a set of one-sided walls as a letter-shaped tunnel to guide a learner in their handwriting practice.
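To make the resampling step concrete, the following plain-Java sketch (illustrative only – our implementation runs as a Processing sketch, and the class and constant names here are our own stand-ins) shows one way to convert a time-ordered stroke into spatially uniform wall vertices:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch: resample a pen stroke, captured as time-ordered (x, y)
 * samples, into spatially uniform points suitable for use as a one-sided wall.
 */
public class StrokeResampler {

    /** Resample so consecutive points are approximately 'spacing' apart (in cm). */
    static List<double[]> resample(List<double[]> stroke, double spacing) {
        List<double[]> out = new ArrayList<>();
        if (stroke.isEmpty()) return out;
        out.add(stroke.get(0).clone());
        double carried = 0.0;                        // distance accumulated since the last output point
        for (int i = 1; i < stroke.size(); i++) {
            double[] a = stroke.get(i - 1), b = stroke.get(i);
            double d = Math.hypot(b[0] - a[0], b[1] - a[1]);
            while (carried + d >= spacing) {         // drop one (or more) resampled points inside this segment
                double t = (spacing - carried) / d;
                double[] p = { a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]) };
                out.add(p);
                a = p;                               // continue from the new point
                d = Math.hypot(b[0] - a[0], b[1] - a[1]);
                carried = 0.0;
            }
            carried += d;
        }
        return out;
    }

    public static void main(String[] args) {
        // A toy stroke: dense where the pen moved slowly, sparse where it moved fast.
        List<double[]> stroke = List.of(
                new double[]{0, 0}, new double[]{0.1, 0}, new double[]{0.15, 0},
                new double[]{2.0, 0}, new double[]{4.0, 1.0});
        for (double[] p : resample(new ArrayList<>(stroke), 0.5))
            System.out.printf("(%.2f, %.2f)%n", p[0], p[1]);
    }
}
```

The resampled polyline can then be treated as a chain of one-sided wall segments by the rendering loop described next.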
Feeling the Wall: Virtual Coupling

The simplest way to haptically render a wall is to sense the position of the user or haptic device handle, hereafter Xuser, and compare it with the wall boundary Xwall. If Xuser has penetrated Xwall, the penetration distance is multiplied by a stiffness K defining the force that pushes the user out of the wall (Figure 6.4(A, upper)). However, we typically want to render stiff walls, while limitations of haptic device force output and sampling rate create a result which is both squishy and unstable [71]. As shown in Figure 6.4(B-C), increasing K makes a more rigid wall, but at the cost of unstable oscillations.

Virtual Coupling (VC) for Stiff Yet Stable Walls

An accepted technique for stably rendering stiff walls, virtual coupling connects the haptic end-effector position Xuser to a point representing it in the virtual world which we define as its avatar (Xavatar, [197]). A VC links Xuser to Xavatar through a virtual damped spring, as shown in Figure 6.4(A, lower). A stiff VC spring connects the operator more tightly to the virtual model; they can feel more detail, but it can lead to instabilities.

Thus, a VC's parameters (stiffness and damping) need to be tuned to model properties, such as virtual mass and spring magnitudes, device force limits and anticipated interaction velocities. When these are known and constrained to a limited range, a VC can work well. The VC implementation in the hAPI interface library enables users to change VC parameters [66].

A virtual coupling is closely related to a proportional-derivative (PD) controller, perhaps the most basic form of automatic control structure. The key goals in tuning either system are (a) to set damping to the minimum needed for stability, limiting energy dissipation and the consequent loss of responsiveness, balanced with (b) sufficient stiffness to achieve a satisfactorily tight connection to the user's motion. System stability is also challenged when the mass of the virtual entity to which the avatar is bound, or which it is touching, is too small, or when the system's update (sampling) rate is slow compared to the dynamics of the system (either the virtual system or the user's movement) [204].

Wall Performance in Letter-Drawing Use Case

In Figure 6.5, we show the various mechanisms by which a teacher can define and revise a shape which they want a learner to trace (A-D). In (E), we show an example of a learner exploring the tunnel defined by the letter outline, including the haptic rendering performance of the virtual coupling as a learner practices writing an m. The spring-damper VC filters high-frequency force variations and creates smooth guidance as the user slides between and along the walls; the forces keep them within the tunnel. The user's actual position sometimes goes outside the wall, but their avatar remains within it and the learner feels restoring forces pulling them back inside. Depending on velocity, the user position and avatar may be slightly displaced even while within the wall, as the user "pulls" the avatar along through the damped-spring coupling.
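The following sketch contrasts the two rendering schemes in one dimension. It is a minimal illustration, not the hAPI implementation: device I/O is replaced by a simulated hand trajectory, and the gains are example values that would need tuning on real hardware.

```java
/**
 * Illustrative 1-D sketch of the two wall-rendering schemes in Figure 6.4.
 * Not the hAPI implementation; gains and the hand trajectory are examples.
 */
public class WallRendering {
    static final double K_WALL = 10.0;    // direct wall stiffness, N/cm
    static final double K_VC   = 10.0;    // virtual-coupling stiffness, N/cm
    static final double B_VC   = 0.05;    // virtual-coupling damping, N.s/cm
    static final double DT     = 0.001;   // 1 kHz servo period, s
    static final double X_WALL = 0.0;     // wall at x = 0; free space is x > 0

    /** Direct rendering: push back proportionally to penetration depth. */
    static double directWallForce(double xUser) {
        double penetration = X_WALL - xUser;          // > 0 once the user is inside the wall
        return (penetration > 0) ? K_WALL * penetration : 0.0;
    }

    // Avatar state for the virtual coupling. xUserPrev is initialized to the
    // trajectory start below to avoid a spurious velocity spike on the first tick.
    static double xAvatar = 0.5, xAvatarPrev = 0.5, xUserPrev = 0.5;

    /** Virtual coupling: the avatar never enters the wall; the user feels a
     *  damped spring stretched between their hand and the avatar. */
    static double vcWallForce(double xUser) {
        xAvatarPrev = xAvatar;
        xAvatar = Math.max(xUser, X_WALL);            // an "infinitely stiff" wall for the avatar
        double vAvatar = (xAvatar - xAvatarPrev) / DT;
        double vUser   = (xUser   - xUserPrev)   / DT;
        xUserPrev = xUser;
        return K_VC * (xAvatar - xUser) + B_VC * (vAvatar - vUser);
    }

    public static void main(String[] args) {
        // Simulated hand pressing 0.5 cm into the wall and withdrawing again.
        for (int i = 0; i <= 200; i++) {
            double t = i * DT;
            double xUser = 0.5 - 10.0 * Math.min(t, 0.2 - t);
            double fDirect = directWallForce(xUser), fVc = vcWallForce(xUser);
            if (i % 20 == 0)
                System.out.printf("t=%.3f  direct=%5.2f N  vc=%5.2f N%n", t, fDirect, fVc);
        }
    }
}
```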
Figure 6.4: Rendering a haptic wall, using a virtual coupling to achieve both high stiffness and stability. (A) Algorithm schematic. (Upper) In the simplest rendering method, force depends directly on the distance between the virtual wall and the user's hand (haptic device) as it penetrates the wall: F = K(Xwall − Xuser). (Lower) A virtual coupling establishes an avatar where Xuser would be if we could render a wall of infinite stiffness, and imposes a virtual damped-spring connection between Xuser and Xavatar. (B) Force-displacement behaviour when the wall is rendered as a direct stiffness or through a virtual coupling. The VC used here also uses the maximum K = 10 N/cm, and achieves a similar stiffness as when this K value is used on its own. (C) Oscillatory behavior of the conditions from (B). In direct rendering, instability increases with K, but with a VC, a high K is as stable as the softest direct-rendered wall. (B) and (C) show data sampled from a Haply device.

To extend this example, a teacher could adjust the tunnel width (a step amenable to parameterization) to customize the experience for the learner. The activity can optionally be visualized graphically, or be done entirely on paper. Learner progress can be quantified through statistics of position error (distance between the pen's physical position and its virtual avatar) and the force magnitude generated in response to this error.

6.5.3 Level 2: Drawing and Feeling Dynamic Systems

Our second example implements more challenging stroke recognition, and addresses the situation where a virtual coupling is inadequate because of the range of properties that the user may need to access in their design and exploration.

Figure 6.5: A teacher prepares a handwriting activity by defining a letter shape m; the learner will then attempt to form the letter with assisting guidance. To create the m, the teacher can (A) laser-print a computer-generated graphic on paper, (B) draw it by hand, or (C) manually draw it with haptic assistance. For erasable media, e.g., pencil on paper or marker on whiteboard, the teacher can (D) erase and draw a new exercise. (E) Exploring the m with ink marks rendered as virtual walls.

Use Case: a Mass-Spring System

Hooke's Law is a linchpin topic in high school physics: along with gravity and friction, students learn about the relation between applied force and the amount of displacement in springs and other stretchable materials. They further must be able to define what a spring constant is, how to compute a net constant for springs assembled in parallel and in series, and, with support from their teacher, conduct experiments to verify spring-stiffness hypotheses [70]. Here, we use a dynamic system consisting of coupled masses and springs to demonstrate the construction of and interaction with a physical system model based on the PAL framework (Figure 6.6).

System Interprets the User's Stroke

We used a 2D recognition library implemented in Processing (the $1 Unistroke Recognizer [230]) to translate user sketches into a virtual model. $1 is an instance-based nearest-neighbor classifier with a 2-D Euclidean distance function. It can accurately identify 16 simple gesture types, e.g., zigzag, circle, rectangle. To improve performance and customize it to shapes relevant to the models our system supports, we created a database to which learners can add their own labeled strokes.
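The sketch below illustrates the instance-based nearest-neighbour idea behind $1, reduced to its core: normalize a stroke for position and size, then return the label of the closest stored template. It is not the published $1 code – rotation normalization and the golden-section search over candidate angles are omitted – and it assumes strokes have already been resampled to a fixed number of points (e.g., with the arc-length resampler sketched earlier).

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal nearest-neighbour stroke classifier in the spirit of $1 (illustrative only). */
public class StrokeClassifier {
    static final int N = 64;                        // points per (already resampled) stroke
    final List<String> labels = new ArrayList<>();
    final List<double[][]> templates = new ArrayList<>();

    /** Learners add their own labeled examples ("spring", "mass", ...) here. */
    void addTemplate(String label, double[][] stroke) {
        labels.add(label);
        templates.add(normalize(stroke));
    }

    /** Return the label of the stored template closest to the candidate stroke. */
    String classify(double[][] stroke) {
        double[][] c = normalize(stroke);
        String best = "unknown";
        double bestDist = Double.MAX_VALUE;
        for (int t = 0; t < templates.size(); t++) {
            double d = 0;
            for (int i = 0; i < N; i++)             // mean point-to-point Euclidean distance
                d += Math.hypot(c[i][0] - templates.get(t)[i][0],
                                c[i][1] - templates.get(t)[i][1]) / N;
            if (d < bestDist) { bestDist = d; best = labels.get(t); }
        }
        return best;
    }

    /** Translate the centroid to the origin and scale the bounding box to unit size,
     *  so that drawings of different sizes and positions can match. */
    static double[][] normalize(double[][] s) {
        double cx = 0, cy = 0, minX = 1e9, maxX = -1e9, minY = 1e9, maxY = -1e9;
        for (double[] p : s) {
            cx += p[0] / N;  cy += p[1] / N;
            minX = Math.min(minX, p[0]);  maxX = Math.max(maxX, p[0]);
            minY = Math.min(minY, p[1]);  maxY = Math.max(maxY, p[1]);
        }
        double scale = Math.max(maxX - minX, maxY - minY);
        if (scale == 0) scale = 1;
        double[][] out = new double[N][2];
        for (int i = 0; i < N; i++) {
            out[i][0] = (s[i][0] - cx) / scale;
            out[i][1] = (s[i][1] - cy) / scale;
        }
        return out;
    }

    /** Toy strokes generated from parametric curves, standing in for pen input. */
    static double[][] sample(java.util.function.DoubleUnaryOperator fy) {
        double[][] s = new double[N][2];
        for (int i = 0; i < N; i++) {
            double x = i / (double) (N - 1);
            s[i][0] = x;
            s[i][1] = fy.applyAsDouble(x);
        }
        return s;
    }

    public static void main(String[] args) {
        StrokeClassifier sc = new StrokeClassifier();
        sc.addTemplate("spring (zigzag)", sample(x -> Math.abs((x * 6) % 2 - 1))); // triangle wave
        sc.addTemplate("wire (line)",     sample(x -> 0.0));
        // A slightly sloppy zigzag should still be recognized as a spring.
        System.out.println(sc.classify(sample(x -> 0.9 * Math.abs((x * 6) % 2 - 1) + 0.05)));
    }
}
```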
Figure 6.6: Use case: comparing the dynamic behavior of different spring–mass system configurations by drawing then feeling. (A) The user sketches a pair of spring–mass systems using a system-readable notation. (B, E) Our system recognizes the user's strokes and incorporates them into virtual models. The user can now "connect" to one of the drawn masses by moving over it and, e.g., clicking a user interface button. (C) Behavior when connected to the single-spring configuration (A). The system implements the corresponding model (B) by pinning Xavatar to that mass. The user can then feel the oscillatory force behaviour by "pulling the mass down," extending and releasing the spring. (D) The user connects to the two-parallel-springs configuration, and compares its behavior (model E) to the first one. (F) compared to (C) shows a higher force for the same displacement, and a different oscillatory behavior. This system is implemented using a passivity controller to allow a wide range of M and K values, which are modifiable by hand-writing new values on the sketch.

In the current implementation, the system starts in a training mode where users draw, then type to label their sample; they then exit training mode and start designing their experiment.

Our current implementation is modal: it needs to know what kind of a system a user is sketching in order to recognize their marks. A zig-zag could represent a spring in a mechanical system, or a resistor in an electrical circuit. Selecting the system type can be done by manually writing its name on the paper with the digital pen, as shown by [142] – e.g., "Hydraulic lab" triggers a hydraulic simulation. The Tesseract optical character recognizer (OCR) system is one of many robust solutions [111]. For simplicity, we selected environments using a graphical user interface.

Reliance on a set notation for sketching has potential as either a usability feature or a pitfall. If the notation is well known (e.g., taught in the curriculum), it gives the learner a pre-existing language; if it is unfamiliar, unmemorable or uncued (e.g., no "tool-tips"), it becomes a barrier. We did not focus on usability refinement at this stage; ensuring it will be an important future step.

System Interprets User Strokes for Model Construction and Parameter Assignment

Ease of environment specification and modification is an important PAL principle. One way that users can specify environment parameters is in the way they draw them. For a mechanical system, a box indicates a mass; mass magnitude is interpreted as the area within the box. Spring stiffness is assigned based on the zigzag's aspect ratio. Haptics can provide assistive guidance to create more accurate drawings. Here, haptic constraints help the user follow implicit geometrical relationships such as relative locations and sizes, through "snapping"; thus the user can perceive when they reach and move beyond the width or length of the previously drawn spring.

Some parameters are harder to indicate graphically, or the user may want to modify an initial value. This could be handled by writing an equation: e.g., set the value of gravitational acceleration with g = 9.8 m/s², or change a spring constant with K1 = 10 N/cm. As before, recognition can be done with an OCR like Tesseract, a possibility already demonstrated by at least one other system [142].
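A sketch of this parameter-assignment step is below. The mapping rules and scale factors are illustrative stand-ins (our calibrated values differ), and the parseConstant() helper simply pulls a numeric value out of an already-OCR'd string such as "K1 = 10N/cm".

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative stroke-to-parameter mapping; constants are example values only. */
public class ParameterAssignment {

    /** Mass from a recognized box: proportional to the enclosed area. */
    static double massFromBox(double widthCm, double heightCm) {
        final double GRAMS_PER_CM2 = 10.0;            // illustrative scale factor
        return widthCm * heightCm * GRAMS_PER_CM2;
    }

    /** Spring stiffness from a recognized zigzag: here a short, wide zigzag reads
     *  as a stiff spring and a long, narrow one as a soft spring (example rule). */
    static double stiffnessFromZigzag(double lengthCm, double amplitudeCm) {
        final double BASE_K = 5.0;                     // N/cm at aspect ratio 1
        return BASE_K * (amplitudeCm / lengthCm);
    }

    /** Override a parameter from a handwritten, OCR'd equation such as "K1 = 10N/cm". */
    static double parseConstant(String ocrText) {
        Matcher m = Pattern.compile("=\\s*([0-9]+(?:\\.[0-9]+)?)").matcher(ocrText);
        if (!m.find()) throw new IllegalArgumentException("no numeric value in: " + ocrText);
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        System.out.println("mass = " + massFromBox(2.0, 1.5) + " g");
        System.out.println("k    = " + stiffnessFromZigzag(3.0, 0.5) + " N/cm");
        System.out.println("K1   = " + parseConstant("K1 = 10N/cm") + " N/cm");
    }
}
```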
Unconstrained Experimentation Requires Stepping Up the Control Law

Fluid exploration means that a learner should be able to observe and feel an object's behaviour and reason about it. This requires changing object properties, comparing behaviour between versions of a model and reflecting on the differences.

Above, we introduced the concept of an avatar as key to rendering a wall through a virtual coupling. The avatar's existence was transparent to the user, implicit in their movement. But when we advance to interacting with multiple dynamic systems – to compare them – users must engage more explicitly with their avatar. To "hold on" to and interact with a part of a virtual model, such as a tool or a probed part of a dynamic system, they must hitch or pin their avatar to that model element, just as they might when selecting a character in a virtual game.

The combined functionality of (a) pinning and unpinning one's avatar to arbitrary system elements, and (b) allowing unconstrained parameter assignment, is a major departure from how a model intended for haptic rendering is typically constructed. Normally, we design an environment with particular components, set its parameters to a pre-known range, and expect the user to interact with it in a particular set of ways – always connecting through a particular avatar linkage. For example, in a surgical simulation, we might have a defined set of tools, and known tissue parameters. Bone and liver have different properties, and rendering them might be highly complex and computationally expensive, but their properties are known in advance. We can tune a controller (such as a VC) to work with those constrained conditions.

This is no longer the case if parameters can be changed arbitrarily and on the fly, and, as usual, the result will be instability. Several factors commonly cause instability, such as quantization, delays, and virtual object properties like stiffness and mass. We address this next with the passivity controller.

6.5.4 Level 3: Expanding the Range of Parameter Exploration through Passivity Control

To move beyond the simple tuning heuristics above, we reference the notion of passivity. A real-world, nonvirtual system like a wood tabletop or a mechanical button or doorknob is energetically passive – it will never vibrate unstably when we touch, tap or wiggle it, because such oscillations require additional energy which it cannot access. The only energy flowing into the interaction comes from our own hand. At best, we can excite a mechanical resonance (e.g., by bouncing a rubber ball, or pumping our legs on a swingset), but this cannot grow in an unlimited way because of the lack of an external energy source.

In contrast, a haptic display is energetically active: it accesses an external energy source through its controller. This is necessary for the system to physically simulate a VE's dynamics. However, instability – often manifested as vibrations that grow without bound, or an unnatural 'buzz' upon operator contact – occurs when the total energy entering the system from the human operator and the controller's commands is greater than the energy leaving it.

Passivity theory underlies a type of controller which can be designed so as to guarantee stability in systems interacting with humans [38, 39, 153].
In essence, passivity controllers bound system movements based on the energy flow through the system: they guarantee overall system passivity by ensuring that the energy leaving the system never exceeds the energy put into it. Passivity can also be used to achieve global stability through local passivity in subsystems treated separately. As a result, if we know that other parts of the virtual model and physical device are operating in a passive range, we can focus on the subsystem that the (less predictable) user is interacting with.

Passivity Controller Overview and Design

We designed our passivity controller (PCr) with the method described by [86]. Our contribution was mainly to implement the PCr on MagicPen and to evaluate its performance; we made no changes to its design. In overview (Figure 6.7), the PCr is interposed in series between the haptic interface and the VE. This location is similar to the virtual coupling controller, and like the VC, the PCr works by acting as a virtual dissipative element; the PCr differs from a VC through its more targeted energetic accounting.

The human operator interacts physically with the haptic device in continuous time; however, since the control system is digitally sampled, the VE is computed with a time delay, typically specified at 1/10 of the fastest dynamics in the system. The human operator is conceptualized as an admittance – a source of flows (i.e., movement) and a sink of efforts (i.e., forces) – and the VE as an impedance – a source of efforts and a sink of flows.

Figure 6.7: Simulation model of a complete haptic interface system and passivity controller, as implemented here. (Reproduced from [86], Figure 8.) System blocks are (left to right): user, haptic display, passivity controller α, and virtual environment.

At the heart of the passivity controller is α, which is in turn based on the Passivity Observer (PO). The PO, expressed by the parameter Eobsv, computes the total energy observed in the system at a given moment n as the summation of energy from the initial time until that moment:

    Eobsv(n) = ΔT ∑_{k=0}^{n} f(k) v(k)                                  (6.1)

where ΔT is the sampling time, and f and v are the effort and flow (force and velocity) of the one-port network at time step k. Specifically, f1 and v1 are the effort and flow for the haptic display, while f2 and v2 are those computed from the VE. When Eobsv(n) is positive, the system is losing energy; for negative values it is generating energy. The role of the passivity controller is to create a dissipative element based on the energy generated by the system (the mathematical proofs are presented by Hannaford et al. [86]). We compute α as:

    α(n) = −Eobsv(n) / (ΔT · v2(n)²)    if Eobsv(n) < 0
    α(n) = 0                            otherwise                        (6.2)

After the VE model is updated, its subsystem forces are recalculated, then passed through α before being sent as commands to the haptic display's actuators. f1, the haptic display command force, is computed as the VE force plus the passivity control component (which acts to siphon excess energy out of the system). For each time step n we have:

    f1(n) = f2(n) + α(n) v2(n)                                            (6.3)
    v1(n) = v2(n)                                                         (6.4)

(A minimal code sketch of this update appears after the implementation notes below.)

In this implementation:

• If the commanded force exceeds the motor force saturation, we subtract the excess amount and add it to the next time step.

• If the user spends significant time in a mode where the PCr is active (dissipating considerable energy to maintain stability), energy will accumulate and the PCr will not transmit actuation forces until the user has backed away from the dissipation-requiring usage, allowing the PCr to discharge. In practice, we reset the PO's energy accumulation to zero every 5 seconds; this interval could be tuned per scenario or adapted automatically.
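The following minimal sketch shows how Eqs. 6.1–6.4 translate into a per-tick update (after [86]; variable names mirror the text). The servo-loop scaffolding, the exact sign conventions of the port variables, and the saturation carry-over described above are assumed or simplified here rather than reproduced exactly.

```java
/**
 * Illustrative passivity observer / controller following Eqs. 6.1-6.4.
 * Simplified: formulations in [86] differ in detail (e.g., how the observer
 * accounts for the dissipated term); this sketch folds it into f1*v2.
 */
public class PassivityController {
    private final double dt;      // sampling period, ΔT (s)
    private double eObsv = 0.0;   // accumulated energy, Eq. 6.1 (positive = dissipating)

    PassivityController(double dt) { this.dt = dt; }

    /**
     * Called once per servo tick with the VE output force f2 and the
     * end-effector velocity v2; returns the force f1 actually commanded
     * to the motors (Eq. 6.3). Velocity is passed through unchanged (Eq. 6.4).
     */
    double command(double f2, double v2) {
        // Eq. 6.2: dissipate only when the observer sees generated (negative) energy.
        double alpha = 0.0;
        if (eObsv < 0.0 && v2 != 0.0) {
            alpha = -eObsv / (dt * v2 * v2);
        }
        double f1 = f2 + alpha * v2;      // Eq. 6.3

        // Eq. 6.1: update the observer with this tick's energy flow at the port.
        eObsv += dt * f1 * v2;

        return f1;
    }

    /** Periodic reset (every few seconds in our implementation) so that
     *  accumulated dissipation does not permanently mute the actuators. */
    void resetObserver() { eObsv = 0.0; }
}
```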
Passivity Controller Performance

Example 1: Large-Load Coupling

In our first assessment, we examine the performance of our passivity controller for a simple scenario in which the user's position (Xavatar) is "pinned" to a virtual mass as if holding it in their hand. We evaluate performance with two load levels and show how the PCr performs on a large-load coupling.

Virtual coupling (M = 1X): Figure 6.8(A) shows the displacement (upper) and energy output (lower) of the virtual coupling system of Section 6.5.2, i.e., without the PCr. The VC parameters are optimized for this system. Thus, when the user (Xuser) moves 2 cm, Xavatar follows smoothly with no overshoot, achieving steady state by 150 ms. The PCr, being a conservative approach, can reduce performance in a normal case like this where it is not needed, since it may siphon off system energy even when that is not necessary. Therefore, for cases close to the system parameters for which the VC was originally tuned, we switch it off.

Figure 6.8: Abrupt movements of varying loads. The position, i.e., Xavatar as it tracks Xuser (upper), and kinetic energy (lower) of the load for (A) the original 25-gram avatar; (B) an avatar with 20 times more mass than the original, without the passivity controller; and (C) the same heavy avatar with the passivity controller.

Virtual coupling (M = 20X): To understand the effect of changing the virtual avatar's properties, we investigate a scenario in which the mass of the virtual free body being interacted with is increased by a factor of 20. Figure 6.8(B) shows how the system oscillates following the same user movement. Although the oscillation is bounded by physical damping from the user's hand, it can become unstable if the user releases the handle. The system kinetic energy peaks at 4.5 N·cm then gradually decreases.

Passivity control (M = 20X): In Figure 6.8(C), with the PCr active with a large mass, the system overshoots by 44% but converges within 200 ms to the desired displacement. System energy peaks at 2.7 N·cm and decreases more quickly than in the VC case for the same mass (B).

Example 2: User Interacts with a Virtual Mass-Spring System

The previous example showed how the PCr can handle a large change in the system's virtual mass; how does it do with comparable changes in rendered stiffness as well as the same 20X mass range? We implement the system as illustrated in Figure 6.6, where a user draws a mass attached to a spring. Here, Figure 6.6(B) shows a graphical representation of the recognized model. Our system recognizes a zigzag stroke as a spring and a rectangle as a mass, and their connection on the sketch as a kinematic connection between them. The experience is similar to pulling on a real spring: force increases as one pulls further. Figure 6.6(C) shows the interaction result: as the user pulls down on the spring (change in Xuser) by around 3 cm and then "drops" the force – i.e., stops resisting the haptic display's applied force – the system applies up to 1.27 N of force to restore Xuser to its starting position. The system exhibits a damped oscillation with two sources: (a) the user's hand and (b) friction in the haptic display. Here, this is desired behavior, faithful to the virtual system dynamics in interaction with the user's hand damping, not a controller instability.

The graphical representation could optionally be displayed to the user to confirm recognition, and animated as they interact. Drawing and animating could be implemented on a co-located tablet screen under the haptic display. In future we plan to investigate the impacts of employing the user's original strokes versus replacing them with a cleaner graphical representation, and of animating the diagrams.
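For reference, the sketch below shows the virtual-environment side of this use case: the same integration step serves both the single-spring case above and the parallel-spring configuration discussed next, differing only in the effective stiffness (parallel springs add). Masses, stiffnesses and the lumped damping term standing in for hand and device friction are example values, not those used in our study.

```java
/**
 * Illustrative mass-spring virtual environment. Parallel springs are modeled
 * simply by summing their stiffnesses; damping here lumps hand/device friction
 * so the toy simulation settles (the VE in the thesis carries no damper itself).
 */
public class MassSpringVE {
    double m;                 // virtual mass, kg
    double k;                 // effective spring stiffness, N/m
    double b = 10.0;          // lumped hand/device damping, N.s/m (example value)
    double x = 0, v = 0;      // mass position (m) and velocity (m/s)

    MassSpringVE(double mass, double... springs) {
        m = mass;
        for (double ki : springs) k += ki;   // parallel springs: k_eff = k1 + k2 + ...
    }

    /** Advance the model by dt while the user's hand applies fUser to the mass,
     *  and return the spring tension the haptic display should render. */
    double step(double fUser, double dt) {
        double fSpring = -k * x;                       // Hooke's law restoring force
        v += (fSpring - b * v + fUser) / m * dt;       // semi-implicit Euler update
        x += v * dt;
        return k * x;                                  // tension the user works against
    }

    public static void main(String[] args) {
        MassSpringVE single   = new MassSpringVE(0.5, 100.0);        // one spring
        MassSpringVE parallel = new MassSpringVE(0.5, 100.0, 100.0); // two springs in parallel
        double dt = 0.001, pull = 2.0;                 // constant 2 N pull, 1 kHz updates
        for (int i = 0; i < 1000; i++) { single.step(pull, dt); parallel.step(pull, dt); }
        // For the same pull, the stiffer parallel pair stretches roughly half as far:
        // the user perceives twice the force per unit displacement.
        System.out.printf("single:   x = %.4f m%n", single.x);    // ~0.02 m
        System.out.printf("parallel: x = %.4f m%n", parallel.x);  // ~0.01 m
    }
}
```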
The second row in Figure 6.6 shows the user placing two springs in parallel. The learning concept is that springs in parallel sum to a greater combined stiffness than in series, and the operator should feel a tighter (stiffer) connection. In comparison to the previous example, the user should perceive a difference in force for the same displacement: the system supplies up to 2.3 N of force to the user's hand for a displacement similar to the single-spring case. As these results show, this system remains stable under passivity control for a doubling of total stiffness in combination with an already-large virtual mass. This mass-spring example can trivially be extended to include a damper (dissipative element). This is energetically less demanding – a virtual damper does not store energy. In general, increasing virtual damping (assuming adequate sampling) reduces susceptibility to large impedance variation [82].

6.6 Discussion

We examine this work's contributions, and discuss how the PAL approach can be validated and extended.

6.6.1 The PAL Framework, Guidance and Exposed Needs

We drew on general theories of experiential learning to propose a framework to help haptic and educational experts work together to leverage physical intuition in an effective learning process. This endeavor needs support: learning technology is notoriously hard to evaluate for efficacy, or to get feedback on what is helpful. Despite evidence for the role of physical intuition and embodiment in effective learning, we know far less about how to saliently recreate it in digital contexts. Thus, rather than trying to show directly that haptic feedback helps learning, we built on a proven approach in first (a) accepting that designing and exploring are powerful supports to learning, then (b) seeing how haptic environments can make these activities more powerful than they would be without haptics.

Metrics

While we have not yet evaluated our technical demonstrations with students, we will in future choose metrics (as per PAL-inspired goals) to highlight how the activities can be more fluid, engaging, focused, intuitive and insightful than without haptics.

Guidelines for PAL-Inspired Systems

In applying PAL principles we exposed some key requirements. We made progress in translating these to technical challenges, some of which can be addressed with current state-of-the-art techniques, and others where we need to innovate further. Here we summarize these, noting that while we have identified one pathway to implement them here (Section 6.6.2), we hope that others will find more.

1. Let learners design their own worlds: PAL (and experiential learning theory generally) indicates that we should lower friction in letting learners (or in some cases their teachers) build their environments. This is an old idea – Scratch and its ilk have borne rich fruit – but we need it for environments amenable to haptic display, for the purpose of accessing physical intuition.

2. Let learners explore, iterate and compare those worlds with physical feedback: Exploration should be informative, flexible and fun.
Haptic feedback needs to be clear enough to support insights; it must be possible to jump around easily within an environment and try different things; and the whole process should flow, show insights that might not otherwise be available, and surprise and delight. This entails a certain quality of haptic display, and curation of environments (e.g., mechanical systems, electrical, hydraulic, chemistry) that, while offering broad scope, also guide the learner on a rewarding path.

3. Moving between designing and exploring and back should be fluid: When experiential learning is working as it should, learners will generate more questions as they explore, and want to go back, re-design, compare and ask again. If they have to change modalities or undergo a laborious process to alter the environment or compare different examples, this cycle will be inhibited. We wonder if it is worth trying to stay (graphically) on paper while the digital world plays out through the haptic device, for immersion, focus and the intuitiveness of physical drawing, instead of fussing with a GUI.

4. Support a broad space for experimentation: Instability is a continual risk for haptic force feedback systems, and could quickly turn anyone off as well as obscure recognizable physical insights. Tightly restricting the explorable parameter space is an unacceptable solution, since it likewise limits the kinds of experiments that can be conducted. Passivity control is one approach to achieving a broader range than the methods currently available to novice hapticians via libraries.

6.6.2 Technical Proof-of-Concept

In the scope of this chapter, we have demonstrated at least one full technical pathway for a system that allows a user to design a haptically enabled system by sketching it on paper while adhering to some basic conventions, then interact with that system haptically – and stably – without changing mode or context, across a parameter range which is larger than typically supported in haptic environments. Its stroke recognition supports low-friction designing, so users can informally sketch ideas, and even alter them. For exploring, we identified the inadequacy of the conventional rendering method of virtual coupling given the range of system parameters we need to support, and showed how a more specialized controller (based on passivity theory) could take it to this needed level. We encourage curators of haptic libraries to include passivity control support.

6.6.3 Generalizing to Other Physics Environments: a Bond-Graph-Inspired Approach

Our examples demonstrate the ability of a passivity controller to bound a system's energy and prevent instability across a broad range of simulated system parameters. We did this based on a basic mechanical dynamic system, a mass oscillating with different spring combinations. This step can be translated with relative ease to other systems of interest in science learning.

Bond graph theory [109, 177] relates physical domains (e.g., mechanics, electronics, hydraulics) based on energetic concepts of efforts and flows. This commonality is a means to connect domains, and it also allows the translation of ideas between them. For our purposes, a physical model developed to represent a mechanical system can be translated with relative ease to an electrical domain.

Bond graphs hold threefold value here. First, technically, we can exploit their analogies and representation to translate models and their support to other physical domains; comparable properties will be relevant. In bond graphs, springs (mechanical) and capacitors (electrical) are analogs, both idealized to store energy in the same way, as are mass and inductance, and dampers and resistors. Table 6.2, drawn from [24], includes a full list of bond-domain analogies.
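A small numerical sketch makes the analogy tangible: with force↔voltage, velocity↔current, mass↔inductance, damper↔resistor and spring compliance↔capacitance, a mass–spring–damper and a series RLC circuit become literally the same update rule with relabeled variables (parameter values below are arbitrary examples):

```java
/**
 * Illustrative bond-graph analogy: the same generic effort/flow integrator
 * serves a mechanical mass-spring-damper and an electrical series RLC circuit.
 */
public class DomainAnalogy {

    /** One semi-implicit Euler step of:
     *  inertance * d(flow)/dt = effortIn - resistance*flow - displacement/compliance. */
    static double[] step(double inertance, double resistance, double compliance,
                         double effortIn, double flow, double displacement, double dt) {
        double flowDot = (effortIn - resistance * flow - displacement / compliance) / inertance;
        flow += flowDot * dt;             // velocity (m/s) or current (A)
        displacement += flow * dt;        // position (m) or charge (C)
        return new double[]{flow, displacement};
    }

    public static void main(String[] args) {
        // Mechanical: m = 0.5 kg, b = 1 N.s/m, k = 100 N/m (compliance = 1/k), 1 N step force.
        // Electrical: L = 0.5 H,  R = 1 ohm,  C = 0.01 F,                     1 V step voltage.
        double[] mech = {0, 0}, elec = {0, 0};
        double dt = 1e-4;
        for (int i = 0; i < 20000; i++) {                       // 2 s of simulated time
            mech = step(0.5, 1.0, 1.0 / 100.0, 1.0, mech[0], mech[1], dt);
            elec = step(0.5, 1.0, 0.01,        1.0, elec[0], elec[1], dt);
        }
        // Identical parameters under the analogy give identical trajectories:
        System.out.printf("position x = %.5f m,  charge q = %.5f C%n", mech[1], elec[1]);
    }
}
```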
Second, these analogs provide a language and convention by which to render physical properties haptically: e.g., effort, flow, resistance and inertance can be developed once and re-used in their relation to one another. This simplifies implementation in new domains.

Table 6.2: Analogy between some conventional physical domains, reproduced from Borutzky [24].

Domain               Flow              Effort    Compliance        Resistance  Inertance
Electric             Current           Voltage   Capacitor         Resistor    Inductor
Kinetic translation  Velocity          Force     Spring            Damper      Mass
Kinetic rotational   Angular velocity  Torque    Torsional spring  Damper      Inertia
Hydraulic            Flow rate         Pressure  Chamber           Valve       Fluid inertia

Thirdly, and most interesting pedagogically, these analogs are a powerful way to grasp and generalize fundamental relationships in physical systems. The haptic representation will reinforce this bond-centered generalization, helping learners to transfer their growing knowledge across domains: once they have mastered how the relations between current, voltage, compliance and resistance work in the electrical domain, they should be able to quickly apply them to kinetic or hydraulic systems. It is often the case that a learner feels more comfortable in one domain; they can use this 'home' grounding to support their understanding elsewhere.

6.7 Conclusion

A long-awaited promise of ubiquitous computing [223] is natural access to computational power where and when we need it. Yet, for the most part we remain tied to a small screen and a keyboard or tablet, with constrained space to work, keystroke input, a single viewport with many distractions, and interaction generally on the terms of the device.

In this chapter we proposed an approach to support multimodal learning with potential benefits to embodied learning and thinking. It includes a framework drawn from validated theories of experiential learning, translated to the physical domain, to guide system designers in creating educational systems focused on designing and exploring; underscoring of the importance of fluid, same-modality movement between these learning phases; demonstrations of the technical feasibility of implementing both idea capture and physical rendering in a pen-and-paper environment; and guidelines and assessment of how to move such a vision forward. We demonstrated these ideas on a fixed small-workspace device, but untethered, infinite-workspace grounded force feedback has been prototyped and could be commercially viable given demand.

The present work points to a path away from tethered, disembodied interaction, examining ways to harness the natural fluidity and ease of pen-and-paper interactions and connect them to powerful digital simulation for the purpose of gaining physical, embodied insight, problem solving and thinking with our sense of touch as well as our heads and eyes. A graphical viewport is not always needed when we have our imagination, a sketchpad and hands to feel.

Chapter 7

Conclusions

We envision that a versatile digital manipulative should be able to support design and explore learning activities in response to continuous interactions with the learner.
Below we discuss our contributions towards this vision.7.1 Thesis Objectives and ContributionsIn contrast to other excellent research in educational haptics, the work in thisdissertation is shaped by an aspiration to develop and validate a versatile toolthat empowers learners to perform operations on the object of knowledge. Basedon Piaget’s cognitive development theory, to know an object means to act on it.Operation as an essence of knowledge requires the learner to modify and transforman object, and also understand the process of transformation which leads to theknowledge of how the object is constructed [180]. We further evaluate whetherdoing specific operations with our tool can improve the way that users learn andthink.When designing an educational manipulative, whether a tangible interface ora robot, we observed that several researchers focused only on exploration. Theyoften study learning impact by placing learners inside a simulation. The hypothesisin these studies is usually that learning will improve in a multi-modal learningenvironment as compared to using a single modality (often visual). Of course,some learners could bridge the gap between their previous knowledge to the new152knowledge and learn the new concept. But we believe we can increase the chancesfor successful assimilation/accommodation by empowering learners to constructthis bridge based on their hypothesis.Design is the key element of Constructivism. Through design, learners activelyconstruct their own knowledge. As opposed to expressing the results to learnersand ask them to accept them, design allows learners to build upon their previousknowledge, make assumptions, come up with a hypothesis, and then evaluate theresults. We did not find an educational tool that could aid learners during the processof designing and exploring; therefore, we sought a new digital manipulative thatcan help in both construction and exploration of new knowledge by exploitingphysicality and ubiquitous computing.A long-awaited promise of ubiquitous computing [223] is natural access tocomputational power where and when we need it. Yet, for the most part, we remaintied to a small screen and a keyboard or tablet, with constrained space to work,keystroke input, a single viewport with many distractions, and interaction generallyon the terms of the device.In this dissertation, we presented the design evolution and two applications (as-sisted sketching design and haptic exploration) for a novel digital manipulative in aneducational context. It documents technical feasibility for a basic implementationof a pen-and-paper interaction approach to interactive, self-driven, exploration-centered physical simulation for the sake of learning and gaining physical insightabout ideas. Much work remains before we can claim that the concept is ready forroll-out to students and teachers. In this chapter, we will summarize our contri-butions as well as our research findings, then outline the future steps towards ourvision of MagicPen for learning.7.1.1 Objective I: Design, Interaction Space, and Applications of aLow-Cost and Large Workspace Haptic DisplayHow can we create a low-cost, large workspace force feedback device? Whattype of new interactions can we support with it? What are the potential edu-cational applications of this platform?We made a low-cost, robust, and highly portable haptic stylus that can supporttwo types of interaction: (a) force feedback assisted drawing, (b) haptic rendering153of virtual environments. 
In Chapter 3, we introduce a novel, low-cost grounded force feedback device with an unlimited 2D workspace.

Our first contribution (Contribution I) was significant given the inaccessibility of current haptic displays due to their cost and workspace limitations. For example, Haply [67] – a commercialized pantograph – costs about $300, despite its small workspace, which is insufficient for most school-environment applications. We expect the retail cost for MagicPen to be around $150–$200.¹

¹ Electronic components currently dominate cost, and can be reduced 40% through integration and more optimized choices, leading to a reduction in parts cost from the current prototype's $100 to $60. For 500 units we have:
• Device parts: $60 USD
• Assembly labour: $15 USD per device
• Marketing + website + etc.: $5000 USD
• Benefits: $10–$30 USD per device
• Injection moulding: $10000–$15000 USD
Total cost = $115–$155 USD per device = $148–$200 CAD.

As we proceeded to optimize our design, we had to choose priorities. We focused on large strokes and fast communication, as we thought they were more necessary for the type of interactions needed for designing and exploring. Future studies can optimize the MagicPen for better accuracy, and higher amplitude and resolution of the force feedback.

There are also opportunities to optimize our system in both cost and ergonomics. For instance, it is feasible to employ only two motors in the ballpoint drive, which would potentially reduce the cost even further and improve the ergonomics of the stylus-based haptic device's design. We therefore believe that the introduction of MagicPen lowers the barriers of entry for haptics in educational settings. We identified three primary types of haptic feedback that MagicPen is able to support, namely 2D spatial guidance, 2D virtual fixtures, and vibrations. We only explored the interactions that require 2D force feedback, e.g., navigational guidance and virtual walls. The interactions that fully or partially rely on vibrotactile feedback, and its combination with force feedback, remain for future work.

Creating a functional platform was critical to the success of this dissertation and, at the same time, had the highest risk among the objectives. We did several iterations to ascertain the saliency and consistency of the generated force feedback. The risk of this objective was high because the results of the other objectives depended heavily on the performance of our system. For instance, in case of failure, we needed to be able to differentiate between possible causes: no effect of haptics, or low haptic quality.

MagicPen opens up new opportunities for a variety of applications including education, gaming, and assistive technologies. In this dissertation, we focused only on the educational class of applications. Instead of focusing on a particular educational problem or need, we studied how our device can impact the ubiquitous pen-and-paper interactions.
Therefore, any positive result not only justifies theusefulness in a specific task but also pertains to a large impact size by takinginto account the number of times it is being used throughout the learning process.Further investigations can check the usefulness and efficacy of MagicPen on specificlearning disorders, e.g., dyslexia.7.1.2 Objective II: Phasking and Computer-Aided DesignWhat are the core interaction concepts for physically assisted sketching (phask-ing), and how we can support them with our force feedback pen?While a major objective of the development of this technology was to supportlearning, usually by children or youth, we did not design MagicPen to be limited toa certain age range or be used just by children. If children see that a tool is beingused by adults, they will not consider it as a toy and be more motivated to learn howto use it. Our Phasking application is an example that covers the full spectrum ofusers ranging from novices to skilled drawers. MagicPen not only helps designerswho are less proficient at drawing to enhance their rapid sketches on paper but alsoaids experts to exploit their drawing skills in their CAD designs.In our second contribution (Contribution II), we introduced Bring, Bound, Con-trol sharing, Tool selection, and Constructing constraint environment as core inter-action concepts in our Phasking framework. The details of how to support usershighly depend on the application and require further in-depth investigation. Wereview the benefits of each of these proposed core interaction concepts in differentsketching scenarios.Constraints (Bounding and Bringing) – Using a traditional (familiar) set of tools155(e.g., ruler, compass and protractor) can aid both novice and experienced designersin creating more consistent and comprehensible sketches. However, tools (or thelack of the right one) can also slow them down. Our device offers both bringand bound constraints covering the whole variety of drawing assistants which thedrawing assist tool-sets can bring to the table. Our results suggest how our systemimproves the accuracy of users’ drawings in different tasks even when the userswere putting the objects in perspective.Tool Selection – MagicPen enhances communication by empowering a user’s rapidsketching and drawing ability. We show how a user can select a specific functionalityfrom a paper tool palette and accordingly the MagicPen helps them to draw theselected option. For instance, a user can select basic geometries and draw themwith the help of MagicPen and then use them as the foundation for more complexdrawings. In the current design, a user needs to explicitly select a tool from a papertool palette. Another possibility is to implement more implicit physical assistantsby predicting the user’s intentions and help the user on the go.Control Sharing – Our shared control drawing concept tries to preserve the cre-ativity in assisted drawing by bringing the authority control to the hand of a user;therefore, the user can decide on the amount of assistant they receive. We hope thisapproach leads to more free collaboration between a user and the computer. Furtherstudies are needed to elaborate on the use of control sharing in the creation of newart and assess the novelty and creativity aspects of it.Constructing constraint environment – A key to expressive drawing is to constructand track the constraints. A user can define and set the constraint within theenvironment they operate. 
MagicPen supports this construction and the required force feedback assistance in both digital and manual drawing mediums. To date, manual drawing and CAD drawing are performed in separate worlds, with few conversion options to move from one to the other.

We identified a gap between digital and manual drawing design spaces, and we sought a new solution to close it. Our MagicPen uses built-in CAD software and is the first step towards a medium to interactively exchange pen-and-paper drawing information between a user and a computer – a process known as digital twinning.

Our work did not evaluate digital twinning, and neither did we study how bringing ubiquitous computing directly to hand could enhance a designer's effectiveness and productivity. Future studies can investigate how these core interactions, along with ubiquitous computing, can impact the analysis, modification, or optimization of a design.

7.1.3 Objective III: Intuitive Learning of STEM with Haptics

How can we improve the versatility of the device through force feedback – the capacity to express information to users through the addition of haptics?

We strove for a versatile device suitable for several learning scenarios. In the third objective, we tried to understand the importance of physicality in STEM and uncover useful strategies for employing haptics in learning activities. We tried to create more conclusive results through the lens of a collaborative learning framework. Specifically, we searched for how individuals try to construct, negotiate and share meanings using force feedback during the grounding process.

We learned that haptic feedback works as an unobtrusive channel to provide essential physical information to the user, to reinforce visual cues by creating a multi-modal experience, and to increase awareness of partners' presence and actions. It can also center one user's or a group's attention around a certain manipulation location, and lower cognitive effort through unobtrusive GUI-provided constraints, particularly as the GUI changes due to partner activity.

In our third contribution (Contribution III), we sought strategies for when and how force feedback should be offered to students to enhance their science learning. Our analysis reveals a high frequency of concurrence between the exploration and evaluation of a hypothesis during the monitoring and reflection grounding acts. We also observed a similar trend of co-occurrence, which suggests a potential correlation between physical collaboration and the construction of a new environment to introduce a new topic.

Our closer look at the critical haptic events for three different environments paired with different haptic devices demonstrates how haptics can impact different dimensions of collaboration. Our results show a significant correlation between using haptics and reaching consensus across the three environments. They also suggest a strong relationship between the use of haptics and sustaining mutual understanding as well as dialogue management.
We observed different collaboration dynamics157for the same dyad in three different environments suggesting that types of forcefeedback, the learning scenarios, and background knowledge, could impact theeffectiveness of using haptics for learning STEM.We shared the lessons we learned to help haptic designers and educationalspecialists for future research is touch/haptic sensory feedback for education toimprove the chances of delivering a more successful haptic learning experience.7.1.4 Objective IV: Physically assisted learning (PAL)What are the key haptic interactions that can support learners throughoutdifferent stages of experiential learning cycle?We connected the activities in design (Objective II) and exploration (ObjectiveIII) to achieve a smooth transition between these two stages in learning. Weconsidered Kolb’s [122] four stages of experiential learning and identify the keyhaptic interaction in each stage by reflecting on our findings from Chapter 4 andChapter 5.We introduced the Physically Assisted Learning (PAL) framework by focusingon the useful haptic interactions in two out of four stages of experiential learning i.e.,Active Experimentation & Concrete Experience. More specifically, we focused oncreating a smooth transition between these two stages through haptic augmentationby drawing the haptic experience and then feeling it.We proposed an approach to instantiate this framework for two learning scenar-ios: a) handwriting assistant and b) mass-spring experiment. In both scenarios, theuser creates the haptic experiences just by drawing them and then starts exploring.Accordingly, we unveil hidden technical difficulties as the haptic experiences levelsup to more advanced renderings. For example, we addressed different challengesin stroke recognition or maintaining control stability in a highly dynamic situation.Finally, we offered a path forward to extend our approach to other domains such asphysics and math.As a part of Contribution III, we presented the theoretically-grounded PAL frame-work and demonstrate the technical feasibility of it. We found several pieces ofevidence in related work to support our proposed framework; however, we still lacka user evaluation to confirm the manner and degree to which it actually supports158learning. Validating this framework will require a series of focused studies thatempirically evaluate the added value of physicality in design and explore learningphases as well as fluidity in the transition between them. These studies also needto consider factors such as engagement, ownership of knowledge, self-efficacy,self-confidence, and self-paced learning.We see an immense value in studying design and explore together. It is crucial toknow the impact of haptics in each part individually, but at the same time, these twocomponents complement and reinforce each other. Conducting design activities willhelp learners to set the assumptions and define important parameters that eventuallyimprove the learners’ awareness of the explored environment. On the other hand,the experiences in explore can enhance the following iterations in design. Thereforea summative assessment is needed to articulate the larger question of how PAL cansupport operation on the object of knowledge.7.2 LimitationsWe mentioned some specific limitations above in the context of each objective. 
Herewe discuss the higher level limitations that we found in the path towards evaluationand wide-spread use of MagicPen in education.7.2.1 Limitations in Assessing Quality of DrawingWe used accuracy as an objective measure to assess the benefits of phasking on thequality of drawing. Accuracy matters when the user has a clear image of the drawingfrom the beginning. This is often not the case as the user’s sketch extensivelyevolves over the course of drawing. The lack of clear expected outcomes inhibitscareful evaluation of any drawing assist tools, especially when we need to maintainauthorship and originality of the artwork.In order to support this application, implicit assistance offers another classof phasking’s supports that we did not study in this dissertation. As the drawingprogresses, the computer agent tries to predict the user’s intended drawing andaccordingly apply proper force feedback assistance to the user’s hand. The implicitassistance together with control sharing create a unique Human-AI collaborationthat potentially preserve creativity while enhancing the quality of drawing.159We identified two objective measures to assess the quality of drawing for implicitphasking support with the goal of communication [133][56]. We can improveexpressivity by assisting users to create:• (a) more realistically proportioned drawings,• (b) drawings with high-level perceptual information.Even with these studies, we will still be far from making any general claimabout the impact of phasking on a more artistic drawing or improving the drawingskills of the users.7.2.2 Evaluations are PreliminaryOur qualitative approach towards understanding the importance of haptics in learn-ing had a limited number of participants. It is often challenging to find the rightsample size to represent a given learner population. Several factors and covariancescontribute when we try to assess the impact of haptics and physicality in learning(e.g., cultural background, socioeconomic status, and mental ability). One way toaddress this problem is to use a large sample size from a variety of schools. Largesample size can increase the statistical power and uncover individual differences.This becomes important when if the educational tool can only benefit some per-centages of learners. We can still call this tool effective as long as we identify thatspecific group of learners who benefit from this tool the most.We did not investigate learner’s prior tactile experience and how it can impactthe efficacy of MagicPen comprehensively. This was partially due to the fact thatour participants did not have previous experiences with haptic displays (there werecases where learners were familiar with physical manipulatives). A longitudinalstudy could explore the long-term educational impact of haptics, as well as shedlights on how learners gradually develop skills that enhance using the sense of touchfor learning.7.2.3 Small Library of Learning ActivitiesWe used some specific learning activities that could highlight the strengths ofour PAL framework. However, similarly to other educational robots, we need tohave a variety of concrete lesson plans and a set of objectives to promote learning160activities with MagicPen. This requires a substantial investment and more resourcesbeyond what exists in our lab. 
Focusing on teachers as the main orchestrators in classrooms, we tried to simplify this process and lower the technical barriers, inviting teachers to design these learning activities, set learning objectives, and eventually guide learners towards them. However, more research is required to give teachers strong incentives to get involved and adopt this technology in their classrooms.

In addition, we lack a community platform on which teachers and learners can share their results and findings with their peers. On such a platform, teachers could present their learning activities and share the lessons learned, while learners could discuss their problems, learn from each other, and tackle new problems together. There is no need to impose any age or location constraint on this platform; we can create a learning society where learning occurs across learners' lifespans.

7.3 Future Work: The Path Forward

We lay out some foreseeable next steps.

Basic Access and Versatility

First and foremost, building haptically augmented worlds and even accessing them interactively requires considerable expertise and infrastructure. Haptic technology is anything but accessible, and this barrier will need to be breached. As for any educational technology, the principal barriers will be cost, robustness, versatility, and usability or expertise.

The vision which will eventually spur the needed technical refinement is versatility: myriad ways to use physical interaction in a form factor that one can carry around, perhaps first like an engineering calculator and then becoming more ubiquitously useful as a haptically augmented smartphone stylus. Versatility is needed first in use-case development, and after that in device form factor. To enhance MagicPen's versatility, we can incorporate other modalities to communicate richer information to the user: it could embed an E-ink display to graphically present information based on its location, or use a microphone to receive the user's commands, understand their utterances, and behave accordingly.

Usability and Expertise: A theme in this dissertation has been lowering friction and barriers to entry for both teachers and learners. This also needs to become true for system designers – e.g., education experts – allowing them to participate in system development from their home discipline, even without engineering expertise. Input methods, library construction, support groups, and other aspects of a development ecosystem will move us in this direction.

Eventually, Logistical Deployment with Kids: Classrooms are challenging environments; batteries, power cords, and updating host computers alone present almost insurmountable obstacles. The first point of contact may be science centers and tutoring centers, potentially moving on to personal devices (like student calculators) rather than school-supplied technology. A key to classroom adoption is teachers' awareness of the assistive learning technology in their classroom, for example by monitoring the status of the MagicPens on a dashboard; this lets teachers better orchestrate their classroom and troubleshoot problems immediately.

Enhanced Usability, Fluidity and Function

Throughout this dissertation, we have described many possible variations and augmentations to our basic implementation, all of which can be explored to discover optimality from logistic and pedagogical standpoints, and to inform the direction of further technology development. To name a few (and going beyond innovation in the haptic technology itself):
• CAD-type sketching support at the design stage
• More advanced sketch-recognition functions, e.g., setting and modifying simulation parameter values by [re]writing them on paper (illustrated in the sketch after this list)
• Generating more extensive simulation environments, in multiple domains (e.g., bond graph extensions)
• Utilizing more sophisticated haptic rendering algorithms as we encounter limits
• Finding good haptic representations for abstract fields such as maths
• Libraries to support educators setting up "design sandboxes".
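As an illustration of the parameter-rewriting idea above, the sketch below shows one plausible way to read a handwritten value off the page and push it into a running simulation. It assumes the pen (or a camera) can provide an image of the page and that the sketch-recognition layer knows where the parameter was rewritten; the image path, region coordinates, parameter name, and `update_simulation` hook are all hypothetical, and the `pytesseract` wrapper around Tesseract OCR is used only as an example recognizer.

```python
import re
from PIL import Image
import pytesseract  # wrapper around the Tesseract OCR engine

def read_handwritten_value(page_image_path, box):
    """Crop the region where the user [re]wrote a parameter and OCR it.
    `box` is a (left, top, right, bottom) pixel rectangle, assumed to come
    from the sketch-recognition layer."""
    region = Image.open(page_image_path).crop(box)
    # Restrict recognition to digits, sign, and decimal point; treat the
    # crop as a single text line (--psm 7).
    text = pytesseract.image_to_string(
        region, config="--psm 7 -c tessedit_char_whitelist=0123456789.-")
    match = re.search(r"-?\d+(\.\d+)?", text)
    return float(match.group()) if match else None

def update_simulation(sim_params, name, value):
    """Hypothetical hook: swap the new value into the running simulation."""
    if value is not None:
        sim_params[name] = value
    return sim_params

# Example: the user crosses out the spring stiffness and writes "120" next to it.
params = {"stiffness": 80.0, "damping": 0.6}
new_k = read_handwritten_value("page_scan.png", box=(410, 220, 560, 270))
params = update_simulation(params, "stiffness", new_k)
```

In practice, Tesseract is tuned for printed text, so a handwriting-oriented recognizer (or stroke-level classification along the lines of the $1 recognizer [230]) would likely take its place, but the surrounding flow – locate the rewritten value, parse it, and push it into the running simulation – would look much the same.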
7.4 Final Remarks

In this dissertation, we tried to make a vehicle to deliver Piaget's theory of Constructivism, and used this lens to both innovate and study alternative forms of haptic technology. We learned that note-taking on paper is not just for information retention but is one of the main places where the construction of knowledge occurs. Paper interactions have already become smart through years of innovation in touch-enabled LCD technology on tablets and laptops. So instead of paper, we focused on the pen. My supervisor's and my efforts can be summarized as adding force feedback to pens and evaluating the potential educational benefits. We tried to keep the device low-cost so that it is accessible to every motivated learner. The outcome is a fully standalone device that can be a powerful tool with minimal requirements: a power source and a piece of paper. Beyond these requirements, the pick-and-play nature of MagicPen can potentially lower the logistical barriers to entering classrooms.

The virtual world will most likely become a big part of our lives, and education will not be an exception. The field of haptics offers realism, immersion, and expressivity [118] to extend physical experiences into the virtual world. This gives MagicPen a unique opportunity to push us toward mixed-reality learning, where learning does not stop in one world. Removing XR goggles puts an end to the user's interaction with the virtual world; unlike XR goggles, MagicPen will be with the user throughout the whole experience. We call it a "haptic ecotone" between the physical and the virtual world – a learning companion that carries the experiences forward.

Our final note is essentially a reiteration of this dissertation's objectives. Beyond the creation of MagicPen itself, we learned that we should let learners be in charge of constructing and exploring their learning environments. As a result, learners are able to hypothesize and then critique their own thinking, as opposed to traditional methods which expose learners to a pre-designed phenomenon in an explanatory manner. We should not confine learners to the designer's imagination or mental model of the world. For example, learners cannot try out the effect of negative gravity unless this condition has been foreseen and implemented by the designer. Arguably, this contradiction arises from the Post-positivist approach we take when simulating the world in a virtual environment versus the Constructivist requirements for creating a hypothetical world for education. For instance, one of the key aspects of a Constructivist learning environment is the freedom to make mistakes and learn from them.
However, this freedom comesat a cost, when learners do not know what exactly to do and therefore need guidance.The challenge is to provide just enough guidance to lead the learner in the rightdirection without limiting them to the designer/instructor’s imagination.164Bibliography[1] 3Dsystems. Touch haptic device, accessed February 15, 2019. URLhttps://www.3dsystems.com/haptics-devices/touch. → pages 4, 23[2] D. Abbink and M. Mulder. Neuromuscular analysis as a guideline indesigning shared control. In M. H. Zadeh, editor, Advances in Haptics,pages 23–35. IntechOpen, 2010. → pages 19[3] D. A. Abbink, M. Mulder, and E. R. Boer. Haptic shared control: smoothlyshifting control authority. Journal of Cognition, Technology and Work, pages19–28, 2012. → pages 19, 52, 55[4] H. Abelson and A. A. DiSessa. Turtle geometry: The computer as a mediumfor exploring mathematics, volume 8. MIT press, 1986. → pages 2[5] E. K. Ackermann. 26 experiences of artifacts. Designing ConstructionistFutures: The Art, Theory, and Practice of Learning Designs, page 275, 2020.→ pages 124, 127, 129[6] B. D. Adelstein and M. J. Rosen. Design and implementation of a forcereflecting manipulandum for manual control research. Advances in Robotics,pages 1–12, 1992. → pages 16[7] S. Adhikari. Stylus based haptic peripheral for touch screen and tabletdevices, 2011. URL https://patents.google.com/patent/US8681130. USPatent US8681130. → pages 22[8] M. M. Amin, H. B. Zaman, and A. Ahmad. Visual haptic approachcomplements learning process of jawi handwriting skills. In 2013 5thInternational Conf on Information and Communication Technology for theMuslim World (ICT4M), pages 1–6, 2013. → pages 135[9] A. N. Antle and A. F. Wise. Getting Down to Details: Using Theories ofCognition and Learning to Inform Tangible User Interface Design.Interacting with Computers, pages 1–20, 2013. → pages 123, 126165[10] T. Asselborn, A. Guneysu, K. Mrini, E. Yadollahi, A. Ozgur, W. Johal, andP. Dillenbourg. Bringing letters to life: Handwriting with haptic-enabledtangible robots. In Proceedings of the 17th ACM Conf on Interaction Designand Children (IDC), pages 219–230, 2018. → pages 120, 135[11] S. Atmatzidou, S. Demetriadis, and P. Nika. How does the degree ofguidance support students’ metacognitive and problem solving skills ineducational robotics? Journal of Science Education and Technology, pages70–85, 2018. → pages 124[12] C. A. Avizzano, M. Satler, G. Cappiello, A. Scoglio, E. Ruffaldi, andM. Bergamasco. Motore: A mobile haptic interface for neuro-rehabilitation.In Robot and Human Interactive Communication (RO-MAN), pages383–388, 2011. → pages 27[13] M. Baker, T. Hansen, R. Joiner, and D. Traum. The role of grounding incollaborative learning tasks. Journal of Collaborative Learning: Cognitiveand Computational Approaches, 0:63–75, 1999. → pages xx, 73, 75, 79, 82[14] B. Barron. Problem solving in video-based microworlds: Collaborative andindividual outcomes of high-achieving sixth-grade students. Journal ofEducational Psychology, pages 391–405, 2000. → pages 78[15] A. L. Barrow and W. S. Harwin. High bandwidth, large workspace hapticinteraction: Flying phantoms. In 2008 Symp on Haptic Interfaces for VirtualEnvironment and Teleoperator Systems, pages 295–302, 2008. → pages 17[16] O. Bau, I. Poupyrev, A. Israr, and C. Harrison. Teslatouch: Electrovibrationfor touch surfaces. Proceedings of the 23nd annual ACM on User interfacesoftware and technology, page 283–292, 2010. → pages 25[17] M. Baumann, K. E. MacLean, T. W. Hazelton, and A. McKay. 
Emulatinghuman attention-getting practices with wearable haptics. In IEEE HapticSymp (HAPTICS), pages 149–156, 2010. → pages 72[18] B. Bayart, A. Pocheville, and A. Kheddar. An adaptive haptic guidancesoftware module for i-touch: example through a handwriting teachingsimulation and a 3d maze. In IEEE International Workshop on Haptic AudioVisual Environments and their Applications, page 6, 2005. → pages 48[19] N. Beckers, E. H. van Asseldonk, and H. van der Kooij. Haptichuman–human interaction does not improve individual visuomotoradaptation. Scientific reports, pages 1–11, 2020. → pages 128166[20] P. Berkelman, B. Tix, and H. Abdul-Ghani. Electromagnetic positionsensing and force feedback for a magnetic stylus with an interactive display.IEEE Magnetics Letters, pages 1–5, 2019. → pages 25[21] A. K. Bhunia, A. Das, U. R. Muhammad, Y. Yang, T. M. Hospedales,T. Xiang, Y. Gryaditskaya, and Y.-Z. Song. Pixelor: A competitive sketchingai agent. so you think you can sketch? ACM Trans. Graph., pages 166–181,2020. → pages 132[22] S. Blanchard, V. Freiman, and N. Lirrete-Pitre. Strategies used byelementary school children solving robotics-based complex tasks: Innovativepotential of technology. Procedia-Social and Behavioral Sciences, pages2851–2857, 2010. → pages 124[23] D. Bloom. Collaborative test taking: Benefits for learning and retention.College Teaching, pages 216–220, 2009. → pages 78[24] W. Borutzky. Bond graph modelling of engineering systems. SpringerScience Business Media, 2011. → pages xvi, 150[25] BOSCH. Absolute orientation sensor. 2015. URLhttps://www.bosch-sensortec.com/bst/products/all_products/bno055. →pages 39[26] V. Braun and V. Clarke. Using thematic analysis in psychology. QualitativeResearch in Psychology, pages 77–101, 2006. → pages 99[27] S. Brave and A. Dahley. intouch: A medium for haptic interpersonalcommunication. In Conf. on Human Factors in Computing Systems (CHI),pages 363–364. ACM Press, 1997. → pages 72[28] K. A. Bruffee. Collaborative learning and the conversation of mankind".College English, pages 635–652, 1984. → pages 78[29] G. Burdea, J. Zhuang, E. Roskos, D. Silver, and N. Langrana. A portabledextrous master with force feedback. Presence: Teleoperators and VirtualEnvironments, pages 18–28, 1992. → pages 16[30] D. Cesareni, S. Cacciamani, and N. Fujita. Role taking and knowledgebuilding in a blended university course. International Journal ofComputer-Supported Collaborative Learning, pages 9–39, 2016. → pagesxv, 91167[31] A. Chan, K. E. MacLean, and J. McGrenere. Designing haptic icons tosupport collaborative turn-taking. Int’l J Human Computer Studies, 66:333–355, 2008. → pages 73[32] Y. Cho, A. Bianchi, N. Marquardt, and N. Bianchi-Berthouze. Realpen.Proceedings of the 29th Annual Symp on User Interface Software andTechnology, pages 30–43, 2016. → pages 29[33] S. Choi and K. J. Kuchenbecker. Vibrotactile display: Perception,technology, and applications. Proceedings of the IEEE, Special Issue onPerception-Based Media Processing, pages 2093–2104, 2013. → pages 16[34] E. C. Chubb, J. E. Colgate, and M. A. Peshkin. Shiverpad: A glass hapticsurface that produces shear force on a bare finger. IEEE Trans on Haptics,page 189–198, 2010. → pages 25[35] H. H. Clark and E. F. Schaefer. Contributing to discourse. Cognitive science,pages 259–294, 1989. → pages 86[36] M. M. Clifford. Thoughts on a theory of constructive failure. EducationalPsychologist, pages 108–120, 1984. → pages 130[37] T. R. Coles, D. Meglan, and N. W. John. 
The role of haptics in medicaltraining simulators: A survey of the state of the art. IEEE Trans on Haptics,pages 51–66, 2011. → pages 23[38] J. E. Colgate and J. M. Brown. Factors affecting the z-width of a hapticdisplay. In IEEE International Conf on Robotics Automation, pages3205–3210, 1994. → pages 143[39] J. E. Colgate and G. G. Schenkel. Passivity of a class of sampled-datasystems: Application to haptic interfaces. Journal of Robotic Systems J.Robotic Syst., page 37–47, 1997. → pages 143[40] J. E. Colgate, W. Wannasuphoprasit, and M. Peshkin. Cobots: Robots forcollaboration with human operators. In the 5th Ann. Symp. on HapticInterfaces for Virtual Environment and Teleoperator Systems, ASME/IMECE,volume DSC, pages 443–439, 1996. → pages 21, 43, 55[41] F. Conti, F. Barbagli, R. Balaniuk, M. Halg, C. Lu, D. Morris, L. Sentis,J. Warren, O. Khatib, and J. K. Salisbury. The CHAI libraries. pages496–500, 2003. → pages 18168[42] A. L. Cornu. Meaning, internalization, and externalization: Toward a fullerunderstanding of the process of reflection and its role in the construction ofthe self. Adult Education Quarterly, pages 279–297, 2009. → pages 124[43] H. Culbertson and K. Kuchenbecker. Ungrounded haptic augmented realitysystem for displaying roughness and friction. IEEE/ASME TransMechatronics, pages 1839–1849, 2017. → pages 22, 26, 51[44] R. Davis, M. Martina-Ortiz, O. Schneider, K. E. MacLean, A. Okamura, andP. Blikstein. The haptic bridge: Towards a theory of haptic-supportedlearning. In Interaction Design and Children (IDC), pages 45–53, 2017. →pages 7, 28, 74, 77, 84, 129[45] T. de Jong, M. C. Linn, and Z. C. Zacharia. Physical and virtual laboratoriesin science and engineering education. Science, pages 305–308, 2013. →pages 78[46] J. Dewey. Logic: The theory of inquiry. 1938. → pages 123[47] S. Dey, P. Riba, A. Dutta, J. Llados, and Y.-Z. Song. Doodle to search:Practical zero-shot sketch-based image retrieval. In Proceedings of theIEEE/CVF Conf on Computer Vision and Pattern Recognition, pages2179–2188, 2019. → pages 132[48] P. Dillenbourg, D. Traum, and D. Schneider. Grounding in multi-modaltask-oriented collaboration. In Proceedings of the European Conf on AI inEducation, pages 401–407, 1996. → pages xv, 73, 86, 87, 89, 90[49] N. Diolaiti, G. Niemeyer, F. Barbagli, and J. K. Salisbury. Stability of hapticrendering: Discretization, quantization, time delay, and coulomb effects.IEEE Trans on Robotics, pages 256–268, 2006. → pages 132[50] D. Dixon, M. Prasad, and T. Hammond. icandraw: Using sketch recognitionand corrective feedback to assist a user in drawing human faces. InProceedings of the SIGCHI Conf on Human Factors in Computing Systems,pages 897–906. ACM, 2010. → pages 51[51] S. Do-Lenh, P. Jermann, A. Legge, G. Zufferey, and P. Dillenbourg.Tinkerlamp 2.0: Designing and evaluating orchestration technologies for theclassroom. In 21st Century Learning for 21st Century Skills, pages 65–78.Springer Berlin Heidelberg, 2012. → pages 77[52] W. Doise, G. Mugny, A. S. James, N. Emler, and D. Mackie. The socialdevelopment of the intellect. Elsevier, 2013. → pages 78169[53] P. R. . Electronics. Micro metal gearmotors. 2019. URLhttps://www.pololu.com/category/60/micro-metal-gearmotors. → pages 57[54] F. L. Engel, P. Goossens, and R. Haakma. Improved efficiency through i- ande-feedback: a trackball with contextual force feedback. InternationalJournal of Human-Computer Studies, pages 949 – 974, 1994. → pages 24[55] M. A. Evans. 
Rapid prototyping and industrial design practice: can hapticfeedback modelling provide the missing tactile link? Rapid PrototypingJournal, 2005. → pages 128[56] J. E. Fan, D. L. Yamins, and N. B. Turk-Browne. Common objectrepresentations for visual production and recognition. Cognitive science,pages 2670–2698, 2018. → pages 160[57] M. Fan, A. N. Antle, and E. S. Cramer. Exploring the design space oftangible systems supported for early reading acquisition in children withdyslexia. In Proceedings of the TEI ‘16: Tenth International Conf onTangible, Embedded, and Embodied Interaction - TEI, pages 689–692, 2016.→ pages 126[58] N. Fellion, T. Pietrzak, and A. Girouard. Flexstylus: Leveraging bend inputfor pen interaction. In Proceedings of the 30th Annual ACM Symp on UserInterface Software and Technology, page 375–385, 2017. → pages 29[59] P. Fernando, R. L. Peiris, and S. Nanayakkara. I-draw: Towards a freehanddrawing assistant. In Proceedings of the 26th Australian Computer-HumanInteraction Conf on Designing Futures: The Future of Design, pages208–211, 2014. → pages 23, 51, 52, 53[60] J. C. Flanagan. The critical incident technique. Psychological bulletin, page327, 1954. → pages 97[61] Forcedimension. Omega high precision force feedback interface, accessedFebruary 16, 2019. URL http://www.forcedimension.com/products. →pages 4, 23[62] J. Forsslund, M. Yip, and E.-L. Sallnäs. Woodenhaptics: A starting kit forcrafting force-reflecting spatial haptic devices. In Conf on Tangible,Embedded, & Embodied Interaction (TEI), pages 133–140, 2015. → pages28170[63] B. Forsyth and K. E. MacLean. Predictive haptic guidance: Intelligent userassistance for the control of dynamic tasks. IEEE Trans on Visualization andComputer Graphics, pages 103–113, 2006. → pages 55[64] P. Frei, V. Su, B. Mikhak, and H. Ishii. Curlybot: Designing a new class ofcomputational toys. In Proceedings of the SIGCHI Conf on Human Factorsin Computing Systems, pages 129–136, 2000. → pages 2, 6, 124[65] C. Gallacher and S. Ding. The haply development platform: A modular,open-source haptic ecosystem that enables accessible multi-platformsimulation development. In IEEE Haptics Symp (HAPTICS), page 156, 2018.→ pages 133[66] C. Gallacher and S. Ding. hAPI library, February 2017 (accessed September15, 2017). URL https://github.com/HaplyHaptics/hAPI. → pages 18, 133,137[67] C. Gallacher, A. Mohtat, S. Ding, and J. Kövecses. Toward open-sourceportable haptic displays with visual-force-tactile feedback colocation. InIEEE HAPTICS, pages 65–71, 2016. → pages xix, 23, 26, 28, 44, 74, 76, 81,86, 154[68] T. Gargot, T. Asselborn, I. Zammouri, J. Brunelle, W. Johal, P. Dillenbourg,D. Archambault, M. Chetouani, D. Cohen, and S. M. Anzalone. “it is not therobot who learns, it is me.” treating severe dysgraphia using child–robotinteraction. Frontiers in Psychiatry, page 5, 2021. → pages 135[69] S. Gerofsky. Seeing the graph vs. being the graph: Gesture, engagement andawareness in school mathematics. In Integrating gestures, pages 10–18.2011. → pages 28[70] D. C. Giancoli. Physics: principles with applications, volume 4.Pearson/Prentice Hall Upper Saddle River, NJ, 2005. → pages 139[71] B. Gillespie and M. Cutkosky. Stable user-specific haptic rendering of thevirtual wall. In ASME International Mechanical Engineering, Symp onHaptic Interfaces, pages 397–406, 1996. → pages 136[72] R. B. Gillespie and A. M. Okamura. Haptic interaction for hands-on learningin system dynamics and controls. Control Systems Magazine, pages 12–23,2008. → pages 20171[73] A.-L. 
Godhe, P. Lilja, and N. Selwyn. Making sense of making: criticalissues in the integration of maker education into schools. Technology,Pedagogy and Education, pages 317–328, 2019. → pages 3[74] E. B. Goldstein. Encyclopedia of perception. Sage, 2010. → pages 74[75] N. Goodman. Ways of worldmaking. Hackett Publishing, 1978. → pages 127[76] P. Griffiths and R. B. Gillespie. Shared control between human and machine:Haptic display of automation during manual control of vehicle heading. Inthe 12th International on Haptic Interfaces for Virtual Environment andTeleoperator Systems (HAPTICS), pages 358–366, 2004. → pages 52[77] P. G. Griffiths and R. B. Gillespie. Sharing control between humans andautomation using haptic interface: Primary and secondary task performancebenefits. Human Factors, pages 574–590, 2005. → pages 20[78] T. Grossman, R. Balakrishnan, G. Kurtenbach, G. Fitzmaurice, A. Khan, andB. Buxton. Interaction techniques for 3d modeling on large displays. InACM Symp Interactive 3D Graphics, pages 17–23, 2001. → pages 35[79] A. Guneysu Ozgur, A. Özgür, T. Asselborn, W. Johal, E. Yadollahi,B. Bruno, M. Skweres, and P. Dillenbourg. Iterative design and evaluation ofa tangible robot-assisted handwriting activity for special education.Frontiers in Robotics and AI, page 29, 2020. → pages 135[80] A. Gürel. Cognitive comparison of using hand sketching and parametrictools in the conceptual design phase. 2019. → pages 30[81] A. K. H. Habash. Touch pen with haptic feedback, 2015. URL https://patents.google.com/patent/US9116560B1/en?oq=+9%2c116%2c560.US Patent US9116560B1. → pages 22[82] A. Haddadi, K. Razi, and K. Hashtrudi-Zaad. Operator dynamicsconsideration for less conservative coupled stability condition in bilateralteleoperation. IEEE/ASME Trans on Mechatronics, pages 2463–2475, 2015.→ pages 132, 147[83] G. Hallman, I. Paley, I. Han, and J. B. Black. Possibilities of haptic feedbacksimulation for physics learning. In EdMedia+ Innovate Learning, pages3597–3602, 2009. → pages 74, 126[84] F. G. Hamza-Lup and I. A. Stanescu. The haptic paradigm in education:Challenges and case studies. The Internet and Higher Education, pages78–81, 2010. → pages 128172[85] I. Han and J. B. Black. Incorporating haptic feedback in simulation forlearning physics. Computers Education, pages 2281 – 2290, 2011. → pages78[86] B. Hannaford and J.-H. Ryu. Time-domain passivity control of hapticinterfaces. IEEE Trans on Robotics and Automation, pages 1–10, 2002. →pages xxii, 143, 144[87] Hapticshouse. Falcon Haptic Devices, accessed February 16, 2019. URLhttps://hapticshouse.com/. → pages 4, 23[88] S. Harada, S. Saponas, and J. A. Landay. Voicepen: Augmenting pen inputwith simultaneous non-linguistic vocalization. In Proceedings of the 9thInternational ACM Conf on Multimodal Interfaces, pages 1–15, 2007. →pages 28[89] V. Hayward and O. R. Astley. Performance measures for haptic interfaces.In Robotics Research, pages 195–206, 1996. → pages 19[90] V. Hayward and K. E. Maclean. Do it yourself haptics: Part i. IEEERobotics Automation Magazine, pages 88–104, 2007. → pages 35[91] F. Hemmert, A. Müller, R. Jagodzinski, G. Wintergerst, and G. Joost.Reflective haptics: Haptic augmentation of guis through frictional actuationof stylus-based interactions. In ACM Symp on User Interface Software andTechnology (UIST), pages 383–384, 2010. → pages 26[92] J. Holst-Wolf, Y.-T. Tseng, and J. Konczak. The minnesota haptic functiontest. Frontiers in Psychology, page 818, 2019. → pages 73[93] P. Honey and A. Mumford. 
The learning styles helper’s guide. Peter HoneyPublications Maidenhead, 2000. → pages xxi, 121[94] J. Huegel and M. O’Malley. Progressive haptic and visual guidance fortraining in a virtual dynamic task. IEEE Haptics Symposium, pages 343–350,2010. → pages 135[95] G. Huisman. Social touch technology: A survey of haptic technology forsocial touch. IEEE Trans on Haptics, page 391–408, 2017. → pages 72[96] M. Hung, D. Ledo, and L. Oehlberg. Watchpen: Using cross-deviceinteraction concepts to augment pen-based interaction. pages 1–8, 2019. →pages 29173[97] A. Inc. Anoto digital pen. 2017. URL http://www.anoto.com/. → pages 51[98] N. S. Inc. Neo smartpen m1. 2018. URL https://www.neosmartpen.com.→ pages 39, 51, 133[99] A. Ioannou and E. Makridou. Exploring the potentials of educationalrobotics in the development of computational thinking: A summary ofcurrent research and practical proposal for future work. Education andInformation Technologies, pages 2531–2544, 2018. → pages 130[100] N. Jafari, K. D. Adams, and M. Tavakoli. Haptics to improve taskperformance in people with disabilities: A review of previous studies and aguide to future research with children with disabilities. Journal ofrehabilitation and assistive technologies engineering, page 147, 2016. →pages 128[101] M. Jalal al Din Rumi. Townspeople, who have never seen an elephant,examine its appearance in the dark. the walters art museum, 1663 AD (1073AH). URL https://art.thewalters.org/detail/83750. → pages xix, 73[102] Y. Jansen, T. Karrer, and J. Borchers. Mudpad: Localized tactile feedback ontouch surfaces. In Adjunct Proceedings of the 23Nd Annual ACM Symp onUser Interface Software and Technology, UIST, pages 385–386, 2010. →pages 26[103] C. Johnson. Harold and the Purple Crayon. Harper Brothers, 1955. →pages 4, 34[104] L. A. Jones and M. Berris. The psychophysics of temperature perception andthermal-interface design. In Proceedings of the 10th Symp on HapticInterfaces for Virtual Environment and Teleoperator Systems, page 137,2002. → pages 114[105] M. G. Jones, A. Bokinsky, T. Andre, D. Kubasko, A. Negishi, R. Taylor, andR. Superfine. Nanomanipulator applications in education: the impact ofhaptic experiences on students’ attitudes and concepts. pages 279–282, 2002.→ pages 128[106] D. L. Kain. Owning significance: The critical incident technique in research.Foundations for research: Methods of inquiry in education and the socialsciences, pages 69–85, 2004. → pages 96174[107] P. Kammermeier, A. Kron, J. Hoogen, and G. Schmidt. Display of holistichaptic sensations by combined tactile and kinesthetic feedback. Presence,pages 1–15, 2004. → pages 16[108] M. Kapur and K. Bielaczyc. Designing for productive failure. Journal of theLearning Sciences, pages 45–83, 2012. → pages 130[109] D. C. Karnopp, D. L. Margolis, and R. C. Rosenberg. System dynamics: aunified approach. 1990. → pages 150[110] O. B. Kaul, M. Pfeiffer, and M. Rohs. Follow the force: Steering the indexfinger towards targets using ems. In Proceedings of the CHI Conf ExtendedAbstracts on Human Factors in Computing Systems, CHI, pages 2526–2532,2016. → pages 51[111] A. Kay. Tesseract: An open-source optical character recognition engine.Linux J., (159):2, 2007. → pages 141[112] H. Khodr, S. Kianzad, W. Johal, A. Kothiyal, B. Bruno, and P. Dillenbourg.Allohaptic: Robot-mediated haptic collaboration for learning linearfunctions*. In 2020 29th IEEE International Conf on Robot and HumanInteractive Communication (RO-MAN), pages 27–34, 2020. → pages 75, 77,78, 84, 120, 128[113] S. 
Kianzad and K. E. MacLean. Harold’s purple crayon rendered in haptics:Large-stroke, handheld ballpoint force feedback. In 2018 IEEE Haptics(HAPTICS), pages 106–111, 2018. → pages xix, 57, 61, 64, 76, 81, 86, 120[114] S. Kianzad and K. E. MacLean. Collaborating through magic pens:Grounded forces in large, overlappable workspaces. In Haptic Interaction,pages 233–237, 2019. → pages vii, 84, 121[115] S. Kianzad, S. O. Karkouti, and H. D. Taghirad. Force control of intelligentlaparoscopic forceps. Journal of Medical Imaging and Health Informatics,pages 284–289, 2011. → pages 114[116] S. Kianzad, Y. Huang, R. Xiao, and K. E. MacLean. Phasking on paper:Accessing a continuum of physically assisted sketching. In ACM Conf onHuman Factors in Computing Systems (CHI), pages 1–11, 2020. → pages81, 120, 128, 135[117] S. Kianzad, G. Chen, and K. E. MacLean. PAL: A framework for physicallyassisted learning through design and exploration with a haptic robot buddy.Frontiers in Robotics and AI, 8:298–250, 2021. → pages vii175[118] E. Kim and O. Schneider. Defining haptic experience: Foundations forunderstanding, communicating, and evaluating hx. Proceedings of the CHIConf on Human Factors in Computing Systems, page 1–13, 2020. → pages163[119] N. W. Kim, N. Henry Riche, B. Bach, G. A. Xu, M. Brehmer, K. Hinckley,M. Pahud, H. Xia, M. McGuffin, and H. Pfister. Datatoon: Drawing dynamicnetwork comics with pen + touch interaction. In CHI, pages 1–12, 2019. →pages 31[120] R. L. Klatzky, D. Pawluk, and A. Peer. Haptic perception of materialproperties and implications for applications. Proceedings of the IEEE, pages2081–2092, 2013. → pages 16[121] D. L. Knee. Dynamic resistance control of a stylus, May 3 2007. URLhttps://patents.google.com/patent/US7508382B2/en?oq=US+Pat.+No+7%2c508%2c382+B2. US Patent US7265750B2. → pages 22[122] D. Kolb. Experiential learning: experience as the source of learning anddevelopment. Prentice Hall, 1984. → pages xxi, 118, 120, 121, 125, 158[123] C. Kontra, D. J. Lyons, S. M. Fischer, and S. L. Beilock. Physical experienceenhances science learning. Psychological science, pages 737–749, 2015. →pages 78[124] M. Konyo. Tako-pen: A pen-type pseudo-haptic interface using multipointsuction pressures. In SIGGRAPH Asia 2015 Haptic Media And ContentsDesign, pages 2–4, 2015. → pages 29[125] G. Korres and M. Eid. Katib: Haptic-visual guidance for handwriting. InInternational Conf on Human Haptic Sensing and Touch Enabled ComputerApplications, pages 279–287. Springer, 2020. → pages 135[126] U. Kuckartz and S. Rädiker. Analyzing qualitative data with MAXQDA.Springer, 2019. → pages 91[127] K.-U. Kyung, J.-Y. Lee, and J. Park. Design and applications of a pen-likehaptic interface with texture and vibrotactile display. In 2007 Frontiers inthe Convergence of Bioscience and Information Technologies, pages543–548, 2007. → pages 128[128] J. T. F. Laurent Denoue. Force-feedback stylus and applications to freeformink, May 3 2005. URL https://patents.google.com/patent/US7508382B2/176en?oq=US+Pat.+No+7%2c508%2c382+B2. US Patent US7508382B2. →pages 22[129] T. B. Lauwers, G. A. Kantor, and R. L. Hollis. A dynamically stablesingle-wheeled mobile robot with inverse mouse-ball drive. In IEEE Int’lConf on Robotics and Automation (ICRA), pages 2884–2889, 2006. → pages27, 38, 41[130] J. J. LaViola and R. C. Zeleznik. Mathpad 2: A system for the creation andexploration of mathematical sketches. In ACM SIGGRAPH, page 432–440,2004. → pages 30[131] S. J. Lederman and R. L. Klatzky. 
Hand movements: A window into hapticobject recognition. Cognitive Psychology, pages 342–368, 1987. → pages73, 74[132] J. C. Lee, P. H. Dietz, D. Leigh, W. S. Yerazunis, and S. E. Hudson. Hapticpen: A tactile feedback stylus for touch screens. In Proceedings of the 17thAnnual ACM Symp on User Interface Software and Technology, UIST, pages291–294. ACM, 2004. → pages 23[133] Y. J. Lee, C. L. Zitnick, and M. F. Cohen. Shadowdraw: Real-time userguidance for freehand drawing. ACM Trans. Graph., page 1–27, 2011. →pages 51, 160[134] V. Levesque, L. Oram, K. MacLean, A. Cockburn, N. D. Marchuk,D. Johnson, J. E. Colgate, and M. A. Peshkin. Enhancing physicality intouch interaction with programmable friction. In Proceedings of the SIGCHIConf on Human Factors in Computing Systems, CHI, pages 2481–2490.ACM, 2011. → pages 25[135] G. Levin. Painterly Interfaces for Audiovisual Performance. PhD thesis,Massachusetts Institute of Technology, Cambridge, 2000. → pages 49[136] Y. Li, J. C. Huegel, V. Patoglu, and M. K. O’Malley. Progressive sharedcontrol for training in virtual environments. In 3rd Worldhaptics Conf(WHC), pages 332–337. IEEE Press, 2009. → pages 52[137] C. Liao, F. Guimbretière, and C. E. Loeckenhoff. Pen-top feedback forpaper-based interfaces. In Proceedings of the 19th Annual ACM Symp onUser Interface Software and Technology, UIST, page 201–210, 2006. →pages 31177[138] C. Liao, F. Guimbretière, K. Hinckley, and J. Hollan. Papiercraft: Agesture-based command system for interactive paper. ACM Trans.Comput.-Hum. Interact., (4), 2008. → pages 31[139] R. C. Limited. Pyqt. 2019. URLhttps://riverbankcomputing.com/software/pyqt/intro. → pages 59[140] L.-F. Lin, S.-Y. Teng, R.-H. Liang, and B.-Y. Chen. Stylus assistant:Designing dynamic constraints for facilitating stylus inputs on portabledisplays. In SIGGRAPH ASIA 2016 Emerging Technologies, pages 1–14,2016. → pages 52, 128[141] G. L. Long and C. L. Collins. A pantograph linkage parallel platform masterhand controller for force-reflection. In IEEE Int’l Conf on Robotics andAutomation (ICRA), pages 390–395, 1992. → pages 26[142] P. Lopes, D. Yuksel, F. Guimbretiere, and P. Baudisch. Muscle-plotter: aninteractive system based on electrical muscle stimulation that producesspatial output. In 29th Annual Symp on User Interface Software andTechnology, pages 207–217, 2016. → pages 51, 128, 141[143] F. M, L. DeLuke, R. Buerba, R. Fan, Y. Zheng, M. Leslie, M. Baumgaertne,and J. Grauer. Haptic biofeedback for improving compliance withlower-extremity partial weight bearing. Orthopedics, page 993–998, 2014.→ pages 135[144] K. MacLean and V. Hayward. Do it yourself haptics, part ii: Interactiondesign. IEEE Robotics and Automation Society Magazine, page 104–119,2008. → pages 74[145] K. E. MacLean. Haptic interaction design for everyday interfaces. HumanFactors and Ergonomics, 4(1):149–194, 2008. → pages 28[146] K. E. MacLean, M. J. Shaver, and D. K. Pai. Handheld haptics: A usb mediacontroller with force sensing. In IEEE Haptic Symp (HAPTICS), pages311–318, 2002. → pages 20[147] K. E. MacLean, O. Schneider, and H. Seifi. Multisensory haptic interactions:Understanding the sense and designing for it. In Handbook ofMultimodal-Multisensor Interfaces, pages 97–142. 2017. → pages 75[148] A. Magana and S. Balachandran. Unpacking students’ conceptualizationsthrough haptic feedback. Journal of Computer Assisted Learning, pages513–531, 2017. → pages 84, 85, 119, 128178[149] M. O. Martinez, T. Morimoto, A. Taylor, A. Barron, J. Pultorak, J. Wang,A. Calasanz-Kaiser, R. 
Davis, P. Blikstein, and A. Okamura. 3d printedhaptic devices for educational applications. In IEEE Haptics Symp(HAPTICS), pages 126–133, 2016. → pages 20, 28, 44, 74[150] T. H. Massie and J. K. Salisbury. The phantom haptic interface: A device forprobing virtual objects. In Proceedings of the ASME Dynamic Systems andControl Division, pages 295–301, 1994. → pages 16, 18[151] A. Meier, H. Spada, and N. Rummel. A rating scheme for assessing thequality of computer-supported collaboration processes. InternationalJournal of Computer-Supported Collaborative Learning, pages 63–86, 2007.→ pages xvi, 76, 99, 102[152] T. Mikropoulos and I. Bellou. Educational robotics as mindtools. Themes inScience and Technology Education, pages 5–14, 2013. ISSN 1792-8788. →pages 1[153] B. E. Miller, J. E. Colgate, and R. A. Freeman. On the role of dissipation inhaptic systems. IEEE Trans on Robotics, page 768–771, 2004. → pages 143[154] G. Minaker, O. Schneider, R. Davis, and K. E. MacLean. Handson: enablingembodied, creative stem e-learning with programming-free force feedback.In Eurohaptics, pages 427–437, London, 2016. → pages 123, 129[155] G. Minaker, O. Schneider, R. Davis, and K. E. MacLean. Handson:Enabling embodied, creative stem e-learning with programming-free forcefeedback. In Eurohaptics, pages 427–437. Springer, 2016. → pages 28, 74[156] J. Minogue and G. Jones. Measuring the impact of haptic feedback using thesolo taxonomy. International Journal of Science Education, pages1359–1378, 2009. → pages 74, 114, 128[157] J. Minogue, M. G. Jones, B. Broadwell, and T. Oppewall. The impact ofhaptic augmentation on middle school students’ conceptions of the animalcell. Virtual Reality, pages 293–305, 2006. → pages 74[158] M. Minsky, M. Ouh-young, O. Steele, F. P. B. Jr., and M. Behensky. Feelingand seeing: issues in force display. ACM Computer Graphics, page 235–242,1990. → pages 17[159] M. D. R. Minsky. Computational haptics: the sandpaper system forsynthesizing texture for a force-feedback display. PhD thesis, Queen’sUniversity, 1995. → pages 34179[160] G. J. Monkman. An electrorheological tactile display. Presence:Teleoperators and Virtual Environments, page 219–228, 1992. → pages 25[161] M. Montessori and B. Carter. The secret of childhood. Orient LongmansCalcutta, 1936. → pages 123[162] O. Mubin, C. J. Stevens, S. Shahid, A. Al Mahmud, and J. Dong. A reviewof the applicability of robots in education. Technology for Education andLearning, 1:1–7, 2013. → pages 2[163] P. A. Mueller and D. M. Oppenheimer. The pen is mightier than thekeyboard: Advantages of longhand over laptop note taking. PsychologicalScience, page 1159–1168, 2014. → pages 47[164] J. Mullins, C. Mawson, and S. Nahavandi. Haptic handwriting aid fortraining and rehabilitation. In IEEE Int’l Conf on Systems, Man andCybernetics, pages 2690–2694, 2005. → pages 23, 26, 128[165] J. Murayama, L. Bougrila, Y. Luo, K. Akahane, S. Hasegawa,B. Hirsbrunner, and M. Sato. Spidar: a two-handed haptic interface forbimanual vr interaction. In Proceedings of EuroHaptics, volume 2004, pages138–146. Citeseer, 2004. → pages 128[166] K. Nakagaki and Y. Kakehi. Comp*pass: A compass-based drawinginterface. In CHI Extended Abstracts on Human Factors in ComputingSystems, CHI, pages 447–450, 2014. → pages 51, 128[167] A. Nomoto, Y. Ban, T. Narumi, T. Tanikawa, and M. Hirose. Supportingprecise manual-handling task using visuo-haptic interaction. In Proceedingsof the 7th Augmented Human International Conf, pages 1–10. ACM, 2016.→ pages 51[168] B. M. of Education". 
"areas of learning: Science – science for citizens 11",2018. URL https://curriculum.gov.bc.ca/sites/curriculum.gov.bc.ca/files/curriculum/science/en_science_11_science-for-citizens_elab.pdf. →pages 82[169] M. A. Otaduy and M. C. Lin. High fidelity haptic rendering. SynthesisLectures on Computer Graphics, pages 1–112, 2006. → pages 16, 17[170] A. Özgür, W. Johal, F. Mondada, and P. Dillenbourg. Windfield: Learningwind meteorology with handheld haptic robots. In Proceedings of the 2017ACM/IEEE International Conf on Human-Robot Interaction, HRI, page156–165, 2017. → pages 7, 97180[171] A. Özgür, W. Johal, F. Mondada, and P. Dillenbourg. Haptic-enabledhandheld mobile robots: Design and analysis. In Proceedings of the CHIConf on Human Factors in Computing Systems, CHI, pages 2449–2461,2017. → pages xix, 6, 27, 37, 76, 80, 86, 121[172] A. Özgür, S. Lemaignan, W. Johal, M. Beltran, M. Briod, L. Pereyre,F. Mondada, and P. Dillenbourg. Cellulo: Versatile handheld robots foreducation. In ACM/IEEE Int’l Conf on Human-Robot Interaction (HRI),pages 119–127, 2017. → pages 124, 128[173] P. Paczkowski, M. H. Kim, Y. Morvan, J. Dorsey, H. Rushmeier, andC. O’Sullivan. Insitu: Sketching architectural designs in context. ACMTrans. Graph., page 1–10, 2011. → pages 30[174] S. Papert. Mindstorms: Children, Computers, and Powerful Ideas. BasicBooks, New York, 1980. → pages 1, 2, 6, 70, 106, 123, 126[175] S. Papert and I. Harel. Situating constructionism. In S. Papert and I. Harel,editors, Constructionism, chapter 1, page 3. Ablex Publishing Corporation,1991. → pages 123[176] G. Park, H. Cha, and S. Choi. Attachable and detachable vibrotactilefeedback modules and their information capacity for spatiotemporal patterns.In IEEE WorldHaptics, pages 78–83, Munich, 2017. → pages 28[177] H. M. Paynter. Analysis and design of engineering systems. MIT Press,1960. → pages 150[178] K. Perkins, W. Adams, M. Dubson, N. Finkelstein, S. Reid, C. Wieman, andR. LeMaster. Phet: Interactive simulations for teaching and learning physics.The physics teacher, pages 18–23, 2006. → pages 18[179] K. Perlin, Z. He, and K. Rosenberg. Chalktalk: A visualization andcommunication language–as a tool in the domain of computer scienceeducation. arXiv preprint arXiv:1809.07166, 2018. → pages 30[180] J. Piaget. Part i: Cognitive development in children: Piaget development andlearning. Journal of research in science teaching, pages 176–186, 1964. →pages 6, 123, 152[181] J. Piaget. The development of thought: Equilibration of cognitivestructures.(Trans A. Rosin). Viking, 1977. → pages 123181[182] M. Porcheron, J. E. Fischer, and S. Sharples. Using mobile phones in pubtalk. In Proceedings of the 19th ACM Conf on Computer-SupportedCooperative Work amp; Social Computing, page 1649–1661, 2016. → pages89[183] M. Price and F. C. Sup. A robotic touchscreen totem for two-dimensionalhaptic force display. In 2016 IEEE Haptics Symp (HAPTICS), pages 72–77,2016. → pages 21[184] I. Radu and B. Schneider. What can we learn from augmented reality (ar)?In Proceedings of the 2019 CHI Conf on Human Factors in ComputingSystems, pages 1–12, 2019. → pages 84, 112[185] I. Radu, V. Hv, and B. Schneider. Unequal impacts of augmented reality onlearning and collaboration during robot programming with peers. pages1–23, 2021. → pages 119, 120[186] C. Ramstein and V. Hayward. The pantograph: a large workspace hapticdevice for a multi-modal human-computer interaction. In ACM Conf onHuman Factors in Computing Systems (CHI), Boston, MA, 1994. → pages20, 23[187] M. Reiner. 
Sensory cues, visualization and physics learning. InternationalJournal of Science Education, 31(3):343–364, 2009. → pages 126[188] M. D. Renken and N. Nunez. Computer simulations and clear observationsdo not guarantee conceptual understanding. Learning and Instruction, pages10–23, 2013. → pages 128[189] M. Resnick. Technologies for lifelong kindergarten. Educational technologyresearch and development, pages 43–55, 1998. → pages 122[190] M. Resnick, F. Martin, R. Berg, R. Borovoy, V. Colella, K. Kramer, andB. Silverman. Digital manipulatives: New toys to think with. pages281–287. → pages 70, 124, 126, 129[191] M. J. Rodríguez-Triana, L. P. Prieto, A. Vozniuk, M. S. Boroujeni, B. A.Schwendimann, A. Holzer, and D. Gillet. Monitoring, awareness andreflection in blended technology enhanced learning: a systematic review.International Journal of Technology Enhanced Learning, 9(2-3):126–150,2017. → pages 126182[192] H. Romat, N. Henry Riche, K. Hinckley, B. Lee, C. Appert, E. Pietriga, andC. Collins. Activeink: (th)inking with data. In Proceedings of the CHI Confon Human Factors in Computing Systems, CHI, page 1–13, 2019. → pages121[193] J. F. Roscoe, S. Fearn, and E. Posey. Teaching computational thinking byplaying games and building robots. In 2014 International Conf onInteractive Technologies and Games, pages 9–12, 2014. → pages 3[194] L. B. Rosenberg. Virtual fixtures: Perceptual tools for teleroboticmanipulation. In Virtual Reality Annual International Symp, pages 76–82.IEEE, 1993. → pages 17[195] L. B. Rosenberg. Haptic feedback stylus and other devices, May 3 2007.URL https://patents.google.com/patent/US7508382B2/en?oq=US+Pat.+No+7%2c508%2c382+B2. US Patent US7265750B2. → pages 22[196] J. Saldien, K. Goris, S. Yilmazyildiz, W. Verhelst, and D. Lefeber. On thedesign of the huggable robot probo. Physical Agents, 2(2):3–11, 2008. →pages 72[197] J. K. Salisbury and M. A. Srinivasan. Phantom-based haptic interaction withvirtual objects. Computer Graphics and Applications, IEEE, page 6–10,1997. → pages 137[198] J. K. Salisbury, F. Conti, and F. Barbagli. Haptic rendering: introductoryconcepts. IEEE Computer Graphics & Applications, 24(2):24–32, 2004.ISSN 0272-1716. doi:10.1109/MCG.2004.1274058. → pages 35[199] B. Schneider, P. Jermann, G. Zufferey, and P. Dillenbourg. Benefits of atangible interface for collaborative learning and interaction. IEEE Trans onLearning Technologies, pages 222–232, 2011. → pages 77[200] Y. Sefidgar, K. E. MacLean, S. Yohanan, M. Van der Loos, E. A. Croft, andJ. Garland. Design and evaluation of a touch-centered calming interactionwith a social robot. Trans on Affective Computing, page 108–121, 2015. →pages 72[201] H. Seifi, G. Park, F. Fazlollahi, J. Sastrillo, J. Ip, A. Agrawal,K. Kuchenbecher, and K. E. MacLean. Haptipedia: Accelerating hapticdevice discovery to support interaction engineering design. In Conf onHuman Factors in Computing Systems (CHI), pages 12–24. ACM, 2019. →pages 19, 47, 74, 75183[202] U. A. Shaikh, A. J. Magana, L. Neri, D. Escobar-Castillejos, J. Noguez, andB. Benes. Undergraduate students’ conceptual interpretation and perceptionsof haptic-enabled learning experiences. International Journal of EducationalTechnology in Higher Education, page 15, 2017. → pages 87[203] Y. shan Chang, M. Y.-C. Chen, M.-J. Chuang, and C. hui Chou. Improvingcreative self-efficacy and performance through computer-aided designapplication. Thinking Skills and Creativity, pages 103 – 111, 2019. → pages49[204] C. Shannon. Communication in the presence of noise. 
Proceedings of theIRE, pages 10–21, jan 1949. → pages 137[205] G. Sidorov, A. Gelbukh, H. Gómez-Adorno, and D. Pinto. Soft similarityand soft cosine measure: Similarity of features in vector space model.Computación y Sistemas, 18(3):491–504, 2014. → pages 97[206] B. Smits-Engelsman, A. Niemeijer, and G. van Galen. Fine motordeficiencies in children diagnosed as dcd based on poor grapho-motor ability.Human Movement Science, pages 161–182, 2001. → pages 135[207] S. Snibbe, S. Anderson, and B. Verplank. Springs and constraints for 3ddrawing. In Third Phantom User’s Group, volume M.I.T. ArtificialIntelligence Laboratory Technical Report AITR-1643, pages 1–4, 1998. →pages 49, 52, 56[208] H. Song, T. Grossman, G. Fitzmaurice, F. Guimbretiere, A. Khan, R. Attar,and G. Kurtenbach. Penlight: Combining a mobile projector and a digitalpen for dynamic visual overlay. In Proceedings of the SIGCHI Conf onHuman Factors in Computing Systems, CHI, pages 143–152, 2009. → pages51[209] L. J. Speed and A. Majid. Grounding language in the neglected senses oftouch, taste, and smell. Cognitive neuropsychology, pages 363–392, 2020.→ pages 74[210] M. A. Srinivasan. Haptic Interfaces. Washington, D.C.: Report of theCommittee on Virtual Reality Research and Development, NationalResearch Council, National Academy Press, 1995. → pages 16[211] J. Steimle. Designing pen-and-paper user interfaces for interaction withdocuments. In Proceedings of the 3rd International Conf on Tangible andEmbedded Interaction (TEI), pages 197–204. ACM Press, 2009. → pages52, 53184[212] I. E. Sutherland. Sketchpad a man-machine graphical communicationsystem. Simulation, 2:3–20, 1964. → pages 28[213] H. Z. Tan and A. Pentland. Tactual displays for wearable computing. InInternational Symp on Wearable Computers, pages 84–89. IEEE ComputerSociety, 1997. → pages 20[214] H. Z. Tan, M. A. Srinivasan, B. Eberman, and B. Cheng. Human factors forthe design of force-reflecting haptic interfaces. In 3rd Ann. Symp. on HapticInterfaces for Virtual Environment and Teleoperator Systems, ASME/IMECE,pages 353–359, 1994. → pages 17[215] A. Teranishi, G. Korres, W. Park, and M. Eid. Combining full and partialhaptic guidance improves handwriting skills development. IEEE Trans onHaptics, pages 509–517, 2018. → pages 52, 135[216] D. Tsetserukou, A. Neviarouskaya, H. Prendinger, N. Kawakami, andS. Tachi. Affective haptics in emotional communication. In 2009 3rdInternational Conf on Affective Computing and Intelligent Interaction andWorkshops, pages 1–6, 2009. → pages 72[217] D. Tsetserukou, K. Sato, and S. Tachi. Exointerfaces: novel exosceletonhaptic interfaces for virtual reality, augmented sport and rehabilitation. InProceedings of the 1st Augmented Human International Conf, pages 1–6,2010. → pages 128[218] D. G. Ullman, S. Wood, and D. Craig. The importance of drawing in themechanical design process. Computers Graphics, pages 263 – 274, 1990.→ pages 30[219] P. Vandoren, T. Van Laerhoven, L. Claesen, J. Taelman, C. Raymaekers, andF. Van Reeth. Intupaint: Bridging the gap between physical and digitalpainting. In 2008 3rd IEEE International Workshop on HorizontalInteractive Human Computer Systems, pages 65–72, 2008. → pages 47[220] D. Vaquero-Melchor and A. M. Bernardos. Enhancing interaction withaugmented reality through mid-air haptic feedback: architecture design anduser feedback. Applied Sciences, page 5123, 2019. → pages 128[221] S. A. Wall and S. A. Brewster. Assessing Haptic Properties for DataRepresentation, page 858–859. 2003. → pages 128185[222] D. 
Wang, Y. Zhang, and C. Yao. Stroke-based modeling and haptic skilldisplay for chinese calligraphy simulation system. Virtual Reality, pages118–132, 2006. → pages 128[223] M. Weiser. The computer for the 21st century. Scientific American, page94–110, 1991. → pages 151, 153[224] M. Weiss, C. Wacharamanotham, S. Voelker, and J. Borchers. Fingerflux:near-surface haptic feedback on tabletops. Proceedings of the 24th annualACM on User interface software and technology, page 615–620, 2011. →pages 24[225] E. N. Wiebe, J. Minogue, M. Gail Jones, J. Cowley, and D. Krebs. Hapticfeedback and students’ learning about levers: Unraveling the effect ofsimulated touch. Computers Education, pages 667 – 676, 2009. → pages 78[226] Wikipedia. The blind men and the elephant (buddhist parable). 2021, 2021.URL https://en.wikipedia.org/wiki/Blind_men_and_an_elephant. → pages74[227] L. Winfield, J. Glassmire, J. E. Colgate, and M. Peshkin. T-pad: Tactilepattern display through variable friction reduction. In Second JointEuroHaptics Conf and Symp on Haptic Interfaces for Virtual Environmentand Teleoperator Systems (WHC), pages 421–426, 2007. → pages 25[228] W. Witaya, A. Prasad, P. Michael, and J. E. Colgate. Cobots: A novelmaterial handling technology. In 7th Ann. Symp. on Haptic Interfaces forVirtual Environment and Teleoperator Systems, ASME/IMECE, Anaheim,USA, 1998. → pages 21[229] E. B. Witherspoon, R. M. Higashi, C. D. Schunn, E. C. Baehr, and R. Shoop.Developing computational thinking through a virtual robotics programmingcurriculum. ACM Trans. Comput. Educ., pages 1–4, 2017. → pages 3[230] J. O. Wobbrock, A. D. Wilson, and Y. Li. Gestures without libraries, toolkitsor training: A $1 recognizer for user interface prototypes. In Proceedings ofthe 20th Annual ACM Symp on User Interface Software and Technology,UIST, page 159–168, 2007. → pages 139[231] J. Yamaoka and Y. Kakehi. depend: Augmented handwriting system usingferromagnetism of a ballpoint pen. In 26th Annual ACM Symp on UserInterface Software and Technology, pages 203–210. ACM, 2013. → pages26, 51, 128186[232] S. Yohanan and K. E. MacLean. The role of affective touch in human-robotinteraction: Human intent and expectations in touching the haptic creature.International Journal of Social Robotics, pages 163–180, 2012. → pages 72,89[233] Younq-Seok, M. Collins, W. Bulmer, S. Sharma, and J. Mayrose. Hapticsassisted training (hat) system for children’s handwriting. In 2013 WorldHaptics Conf (WHC), pages 559–564, 2013. → pages 128[234] Z. C. Zacharia. Examining whether touch sensory feedback is necessary forscience learning through experimentation: A literature review of twodifferent lines of research across k-16. Educational Research Review, pages116 – 137, 2015. → pages 77, 78[235] Z. C. Zacharia and M. Michael. Using physical and virtual manipulatives toimprove primary school students’ understanding of concepts of electriccircuits. In New developments in science and technology education, pages125–140. Springer, 2016. → pages 128[236] Z. C. Zacharia, E. Loizou, and M. Papaevripidou. Is physicality an importantaspect of learning through science experimentation among kindergartenstudents? Early Childhood Research Quarterly, pages 447–457, 2012. →pages 78, 119, 128[237] R. Zeleznik, A. Bragdon, F. Adeputra, and H.-S. Ko. Hands-on math: Apage-based multi-touch and pen desktop for technical work and problemsolving. In Proceedings of the 23nd annual ACM symposium on Userinterface software and technology, pages 17–26, 2010. → pages 31[238] K. Zhao and C. K. Chan. 
Fostering collective and individual learning through knowledge building. International Journal of Computer-Supported Collaborative Learning, pages 63–95, 2014. → pages 78
[239] M. Zinn, O. Khatib, B. Roth, and J. K. Salisbury. Large workspace haptic devices - a new actuation approach. Symp on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages 185–192, 2008. → pages 17
[240] R. F. Zory, D. Boërio, M. Jubeau, and N. A. Maffiuletti. Central and peripheral fatigue of the knee extensor muscles induced by electromyostimulation. International journal of sports medicine, pages 847–853, 2005. → pages 51

Appendix A
MagicPen Technical Demonstration

Here, we present additional data on the technical performance of the MagicPen's stylus form factor (Figure 3.1(a)) presented in Chapter 3. The graphs here showcase a user's interaction with different virtual objects via MagicPen. We hope that these graphs present both the temporal and spatial aspects of haptic rendering with our device.

Figure A.1: The force and velocity behaviour as the user runs the avatar into a virtual wall. The blue line represents the velocity, the orange line shows the force, and the black line represents the user's trajectory. The gray box represents the regions where the user's avatar is inside the virtual wall.
Figure A.2: A user is moving the MagicPen around the corner of a box (gray).
Figure A.3: A user is moving the MagicPen around a circular wall.

Appendix B
Phasking Experiment Results

The drawings here present the results of the drawing experiments. P1–P7 are novices, and E1–E3 are domain experts.
• Neo: No assist (NeoSmart pen)
• Bring: Physical assists – active constraint
• Bound: Physical assists – passive constraint
• SC: Shared control
• NSC: No shared control
Activities:
1. Draw bring constraints: a straight line (see Section B.1)
2. Draw bring constraints: a rectangle (see Section B.2)
3. Draw bring constraints: a rectangle in perspective (see Section B.3)
4. Draw bring constraints: a circle (see Section B.4)
5. Bound constraints: lines meeting an invisible line barrier (see Section B.5)
6. Shared control: a sine wave/invisible line barrier at the center (see Section B.6)

B.1 Draw bring constraints: a straight line
Figure B.1: Line with NeoSmart pen.
Figure B.2: Line – bring.

B.2 Draw bring constraints: a rectangle
Figure B.3: Rectangle with NeoPen.
Figure B.4: Rectangle – bring.

B.3 Draw bring constraints: a rectangle in perspective
Figure B.5: Rectangle in perspective with NeoSmart pen.
Figure B.6: Rectangle in perspective – bring.

B.4 Draw bring constraints: a circle
Figure B.7: Circle with NeoSmart pen.
Figure B.8: Circle – bring.

B.5 Bound constraints: lines meeting an invisible line barrier
Figure B.9: Line – bound.

B.6 Shared control: a sine wave/invisible line barrier at the center
Figure B.10: A sine wave with no shared control – bound.
Figure B.11: A sine wave with shared control – bound.
Figure B.12: A sine wave with no shared control – bring.
Figure B.13: A sine wave with shared control – bring.

Appendix C
Collaborative Grounding Pre-test/Post-test Questions

Electrostatic Lab Pre-Test
Name: .....................................................................
Question purpose in yellow (removed in the original version)Group :............................................................................Please work individually to answer the following questions. There is no grading, so just answerto the best of your knowledge.Determining existing knowledge about electrostatic forces & their interactions1. In the context of physics, how would you define the following words?Electrostatic force: ______________________________________________________Net force: ______________________________________________________________System equilibrium: ______________________________________________________Testing understanding of net forces and how to calculate them2. Three small spheres are lined up in a row as shown in the figure below. A negativelycharged sphere is placed in the middle of the two positive charges. The charge on thissphere is -2 microCoulombs.Circle the sphere that experiences the lowest net force:Testing understanding of direction for net forces + equilibrium requirements3. Three tiny spheres with identical charges of 5 microCoulombs are situated at the cornersof an equilateral triangle. Draw the net force acting on each sphere:Is the system in an equilibrium state?❏ Yes❏ No❏ I don’t knowFigure C.1: Electrostatic lab pre-test204Electrostatic Lab Post-TestName :..................................................................... Question purpose in yellow (removed in the original version)Group: ....................................................................Please work individually to answer the following questions. There is no grading, so just answerto the best of your knowledge.1. Briefly, how would you define the following words, in terms of physics?Electrostatic force: _______________________________________________________Net force: ______________________________________________________________System equilibrium: ______________________________________________________Comparing definitions of key terms pre- and post-learning activity2. Three small spheres are lined up in a row as shown in the figure below. The first and thethird spheres are 10 cm apart and have the same charge of -8 microCoulombs.  Apositively charged sphere is placed in the middle of the two negative charges. Thecharge on this sphere is -8 microCoulombs.Draw the force vectors acting on each sphere.What is the net force on the positively charged sphere? _______________Testing understanding of net force3. Three tiny spheres with identical charges of +3 microCoulombs are situated at thecorners of a square. Is the system below in equilibrium?❏ Yes❏ No❏ I don’t knowTesting understanding of system equilibriumFigure C.2: Electrostatic lab post-test205Momentum & Collision Lab Pre-TestName :..................................................................... Question purpose in yellow (removed in the original version)Group :............................................................................Please work by yourself to answer the following questions:1. In the context of physics, how would you define the following words?Mass: _________________________________________________________________Velocity: _______________________________________________________________Momentum: ____________________________________________________________Determining existing knowledge about momentum, mass and velocity1. Circle the object with  the most momentum:Testing understanding of p = mv3. Circle the object with the most momentum:Testing understanding of p = mv3. 
Momentum & Collision Lab Pre-Test

Name: .............................   Group: .............................
Question purposes (shown in yellow; removed in the original version) appear in square brackets below.

Please work by yourself to answer the following questions:

[Determining existing knowledge about momentum, mass and velocity]
1. In the context of physics, how would you define the following words?
Mass: ______________________________
Velocity: ______________________________
Momentum: ______________________________

[Testing understanding of p = mv]
2. Circle the object with the most momentum:

[Testing understanding of p = mv]
3. Circle the object with the most momentum:

[Determining existing knowledge about collision types]
4. Two objects, one travelling east and the other travelling west, collide. After the collision, they stick together and both travel east. This collision is:
a. Elastic
b. Inelastic
c. Neither

Figure C.3: Momentum and collision lab pre-test

Momentum & Collision Lab Post-Test

Name: .............................   Group: .............................
Question purposes (shown in yellow; removed in the original version) appear in square brackets below.

Please work individually to answer the following questions:

[Comparing definitions of key terms pre- and post-learning activity]
1. Briefly, how would you define the following words, in terms of physics?
Mass: ______________________________
Velocity: ______________________________
Momentum: ______________________________

[Directly testing understanding of p = mv]
2. Based on your work in the learning activity, which of the following is a valid formula for the relationship between mass (m), velocity (v) and momentum (p)?
a. pm = v
b. ½ p = mv
c. mv = pe
d. p = mv

[Testing ability to apply p = mv]
3. A pool ball is rolling along when it collides with a stationary ball of the same weight. After the collision, both balls stick together and continue to travel at the same velocity, in the same direction as before. Compared to the original pool ball, has the group of two pool balls:
a. Gained momentum
b. No change in momentum
c. Lost momentum
d. It never had any momentum at all

[Testing knowledge about collision types]
4. The collision described in Question 3 is:
a. Elastic
b. Inelastic
c. Explosive

Figure C.4: Momentum and collision lab post-test
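For orientation (illustrative background, not part of the test sheets): momentum is $p = mv$, and when two bodies stick together in a perfectly inelastic collision the total momentum is conserved,
\[ m_1 v_1 + m_2 v_2 = (m_1 + m_2)\, v_f . \]
For two equal masses with one initially at rest ($m_1 = m_2 = m$, $v_2 = 0$), this gives $v_f = v_1 / 2$.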
Pressure Lab Pre-Test

Name: .............................   Group: .............................
Question purposes (shown in yellow; removed in the original version) appear in square brackets below.

[Determining baseline definitions of fluids, compressibility and pressure]
1. Briefly, what are your definitions of the following words, in terms of physics?
Fluids: ______________________________
Compressible: ______________________________
Pressure: ______________________________

[Testing understanding of the relationship between volume and forces acting on compressible fluids]
2. The main tank of an air compressor has a volume of 30 L. When the compressor is turned on, a force is applied to the air to move it into a smaller holding tank. After this force is applied, will the air in the holding tank have a volume (circle one):
greater than 30 L / less than 30 L / equal to 30 L

[Testing understanding of the relationship between pressure and depth]
3. i) You're floating on an inner tube at the beach when, all of a sudden, a shark steals it from you and swims down deep into the ocean. What do you expect to happen to the inner tube several hundred feet below sea level?
a) The tube's volume will increase and it will eventually pop
b) The tube's volume will increase but it will not pop
c) The tube's volume will not change
d) The tube will crumple as its volume decreases
ii) What is causing this change? ______________________________

[Testing understanding of the relationship between pressure and surface area]
4. Jamal puts a downward pressure of 200 Pa on his surfboard, which has an area of 4 m². However, the back of his surfboard is suddenly bitten off by the same shark that took your inner tube! Now that the surfboard's area is reduced, how does the pressure Jamal places on the surfboard compare to the original pressure of 200 Pa?
a) Jamal now places more than 200 Pa on the surfboard
b) Jamal still places 200 Pa on the surfboard
c) Jamal now places less than 200 Pa on the surfboard

Figure C.5: Pressure lab pre-test

Pressure Lab Post-Test

Name: .............................   Group: .............................
Question purposes (shown in yellow; removed in the original version) appear in square brackets below.

[Determining whether definitions have changed pre- vs. post-activity]
1. Briefly, what are your definitions of the following words, in terms of physics?
Fluids: ______________________________
Compressible: ______________________________
Pressure: ______________________________

Check your understanding of the concepts of fluids, pressure and hydraulics:

[Testing understanding of the relationship between volume and forces acting on compressible fluids]
2. You have two bottles that are completely full, one with a gas and one with a liquid. You place them both in a pressurized chamber that increases the air pressure. At the new pressure, do you think the volume of each bottle will be greater than, equal to, or less than the volume at normal air pressure?
Water bottle: greater volume / equal volume / lesser volume (circle one)
Helium bottle: greater volume / equal volume / lesser volume (circle one)

[Testing understanding of the relationship between pressure and depth]
3. a) You go on a long hike up a mountain for your friend's birthday. When you get to the top, you notice that the birthday balloon you half-filled at the bottom is now fully inflated. Is the pressure from outside the balloon greater at the top of the mountain or at the base? (Circle one)
Pressure is greater at the top / Pressure is greater at the base / Pressure is equal
b) Briefly explain your choice: ______________________________

[Testing understanding of the relationship between pressure and surface area]
4. Ai Ling goes ice skating. Her downward force from her body weight creates a pressure of 150 Pa. The blades of her ice skates have a combined surface area of 50 cm². Later, she takes her ice skates off and walks around in her shoes (with a combined surface area of 200 cm²). How does the pressure she exerts on the ground compare between shoes and skates?
a) She exerts more pressure on the ground wearing skates
b) She exerts equal pressure on the ground wearing skates vs. wearing shoes
c) She exerts more pressure on the ground wearing shoes

Figure C.6: Pressure lab post-test
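For orientation (an illustrative calculation, not part of the test sheets): pressure is force per unit area, $P = F/A$. With the surfboard numbers from the pre-test, the downward force is
\[ F = P\,A = 200\ \mathrm{Pa} \times 4\ \mathrm{m^2} = 800\ \mathrm{N}. \]
If the same 800 N were spread over half the area (an assumed value, since the reduced area is not specified), the pressure would double to 400 Pa; a smaller area at constant force always means higher pressure, which is the relationship both surface-area items probe.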