Intelligent Support of Interactive Manual Control: Design, Implementation and Evaluation of Look-Ahead Haptic Guidance

by

Benjamin A. C. Forsyth
B.Sc. (with honours), University of British Columbia, 2001

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia
September 2004
© Benjamin A. C. Forsyth, 2004

Abstract

Intelligent systems are increasingly able to offer real-time information relevant to a user's manual control of an interactive system; however, effective presentation of this information creates many challenges. We consider how to use force feedback to convey information to a user about dynamic system control space constraints that have been computed by an intelligent system. Effective display of control constraints will require careful consideration of the usability of the forces, in addition to good technical design, to assure user acceptance of the feedback. Dynamic systems that can benefit from this kind of interaction feedback include tasks such as driving and the control of physically based animations.

In this thesis, we studied the haptic display of control constraints in a simple driving simulation. We developed a 'look-ahead' guidance method to display usable haptic guidance suggestions to a driver based upon the predicted location of the vehicle relative to the road, and implemented this using a custom vehicle simulator based on Reynolds's OpenSteer framework. The performance and usability of our Look-Ahead Guidance method are compared to a baseline of No-Guidance, and to Potential Field Guidance, the current state-of-the-art haptic path guidance method. Our experimental results show that Look-Ahead Guidance was more usable and showed performance benefits in our task compared to both No-Guidance and Potential Field Guidance. We identified several factors that we suspect affect the usability of haptic path guidance and suggest future work based on these observations.

Contents

Abstract
Contents
List of Figures
List of Tables
Acknowledgements
Nomenclature

1 Introduction
  1.1 Motivation
  1.2 Objectives and Approach
  1.3 Document Map

2 Related Work
  2.1 Non-Haptic Control Guidance
  2.2 Haptic Training
  2.3 Haptic Non-Training Surgical Applications
  2.4 Shared Control of Vehicles
  2.5 Haptic Path Guidance

3 Implementation
  3.1 Simulation and Rendering Engine
    3.1.1 OpenSteer
    3.1.2 OpenSteer Modifications
  3.2 Guidance Algorithms
    3.2.1 Forces Common to All Guidance Methods
    3.2.2 Desired Steering Angle Algorithm and Force Output
    3.2.3 Look-Ahead Guidance
    3.2.4 Potential Field Guidance
  3.3 Haptic Interface Device
  3.4 Servo Control
    3.4.1 PD Controller
  3.5 Summary

4 Evaluation Methods
  4.1 Hypotheses
  4.2 Design
    4.2.1 Choice of Independent Variables
    4.2.2 Minimizing the Impact of Learning
    4.2.3 Blocking and Randomization
  4.3 Performance Metrics
    4.3.1 Quantitative Path Following Performance Metric
    4.3.2 Subjective Evaluation Methods
  4.4 Data Collected
  4.5 Procedure
  4.6 Pilot Studies

5 Experiment Results and Analysis
  5.1 Experiment Participant Demographics
    5.1.1 Outlier Participant
  5.2 Quantitative Results and Analysis
    5.2.1 Data Handling
    5.2.2 Statistical Analysis
    5.2.3 Independent Variable Interactions
    5.2.4 Influence of Video Game Experience
  5.3 Subjective Results

6 Discussion
  6.1 Results and our Hypotheses
    6.1.1 Quantitative Performance of Look-Ahead Guidance
    6.1.2 Guidance Methods and Path Complexity
    6.1.3 Guidance Methods and Visibility
    6.1.4 Subjective Performance of the Guidance Methods
  6.2 General Observations
    6.2.1 Issues with Physical Interaction and Real World Similarity
    6.2.2 Explicitness of Experiment Instructions
    6.2.3 Observations on When Haptic Guidance is Useful
    6.2.4 Effect of Gaming on Performance

7 Conclusions, Contributions & Future Work
  7.1 Conclusions
  7.2 Contributions
  7.3 Future Work
    7.3.1 Improvements to Current System
    7.3.2 The 'Big Picture'

Bibliography

Appendix A Experiment Constants
Appendix B Experiment Instructions
Appendix C Interview Questions
Appendix D R Details
  D.1 ANOVA
  D.2 Boxplot Details
Appendix E Experiment Consent Forms
Appendix F Raw Data

List of Figures

3.1 System Block Diagram
3.2 Schematic of Vehicle Dynamics: shaded area represents the vehicle itself
3.3 Basic Path Extent Rendering Idea
3.4 Path Extent Rendering From Above
3.5 Transfer function from heading angle delta to desired steering angle
3.6 Components of Look-Ahead Guidance
3.7 Force enveloping areas in the Look-Ahead Guidance method
3.8 An example of the subtleties involved with advanced location predictors
3.9 Components of the Potential Field force feedback method
3.10 Values of φ_pf for a given d and β
3.11 Haptic Interface
4.1 Examples of Paths used in the Experiment
4.2 Visibility Levels
4.3 Dialog box presented after each block
5.1 Game Playing Time Distribution
5.2 Example of shortcut path trajectory for outlier participant
5.3 Trial with worst score
5.4 Trial with an average score
5.5 Trial with the best score
5.6 Boxplot of scores across all six blocks
5.7 Guidance method performance for each path
5.8 Effect of visibility
5.9 Mean score across all conditions for each block given gaming experience
5.10 Boxplots of Participants' Scores Given Game Playing Status
5.11 Boxplots of Scores Given Game Playing Status
5.12 Boxplots of Score vs. Game Playing Time
5.13 Guidance Method and Game Playing Time Interaction
5.14 In Control Question Histogram
5.15 Helpfulness Question Histogram
5.16 Like Question Histogram
5.17 Overall Preference Histogram

List of Tables

1 Symbols and Associated Descriptions
4.1 Factors and Levels Presented to Experiment Participants
4.2 Possible Block Ordering
5.1 Participant Demographics (18 participants)
5.2 Within-Subject ANOVA table for MSE score
5.3 Mean scores for the levels of each factor across all other levels
5.4 P-values from post-hoc test on guidance method levels
5.5 P-values from post-hoc test on Path Complexity levels
5.6 Answers to post-block questions
5.7 Counts for answers to the 'In Control' question
5.8 Counts for answers to the 'Helpful' question
5.9 Counts for answers to the 'Like' question
A.1 Value of Constants During the Experiment

Acknowledgements

Thanks to Bruce Dow, Michiel van de Panne, Giusi Di Pietro, Erin Austen and the SPIN Lab for their help, Craig Reynolds for OpenSteer and path following inspiration, and Precarn/IRIS for its support. Thanks as well to the members of Imager small for entertaining me over the past three years. Thank you to Ciaran Llachlan Leavitt for her help with technical writing.

A special thanks to Karon MacLean for always being there to provide inspiration, feedback, and motivation. You have been a terrific supervisor.

Last, but definitely not least, I would like to express my sincere gratitude for the support I have received from my family and from Sarah. You have been extremely understanding over the past three years, and I would not have been able to do this without you.

BENJAMIN A. C. FORSYTH
The University of British Columbia
September 2004

Nomenclature

Vectors are printed in lower-case, bold-faced italics, e.g. v. Points are printed in upper-case, bold-faced italics, e.g. O. Scalars are printed as plain-faced italics, e.g. R and θ.

Table 1: Symbols and Associated Descriptions

Vehicle Model Symbols
O: Center of vehicle coordinate space
Q: Front of vehicle
v: Vehicle velocity
ℓ: Vehicle wheelbase
θ: Current steering angle
θ_max: Magnitude of maximum steering angle
r: Current turning radius of the vehicle
C: Center of vehicle rotation
r_path: Path radius

Control Knob Symbols
A: Control knob angle relative to initial position
A_desired: Desired control knob angle
e: Difference between desired and current control knob angle
k_p: PD controller proportionality component constant
k_d: PD controller derivative component constant

General Force Symbols
C_kp: Centering force proportionality constant
C_kd: Centering force damping constant
C_kmax: Maximum centering force contribution to final force
k_v: Viscous damping constant

Guidance Methods Symbols
P: Predicted vehicle location
R: Scaling factor from control knob angle to steering angle
φ: Angle between current heading and system desired heading
φ_max: Maximum magnitude for desired heading offset
T: Target vehicle location for Look-Ahead Guidance
t: Look-ahead time in seconds
O_path: Current vehicle position projected onto path
θ_desired: Desired steering angle
d_envelope: Distance from edge of path over which enveloping occurs in the Look-Ahead Guidance method
φ_LA: Desired heading offset for the Look-Ahead Guidance method
φ_PF: Desired heading offset for the Potential Field Guidance method
F_PD: Force output from the PD controller
F_PF: Potential Field Guidance force
F_LA: Look-Ahead Guidance force
ρ: Distance from the center of the path at which the force from the Potential Field Guidance method saturates
β: Angle between the current vehicle heading and the line (O, O_path)
d: Distance of the vehicle from the path
Chapter 1
Introduction

With the spread of intelligent systems in applications as diverse as automobile driving support, surgical simulation for training, animation design aids and tools for teaching physical gestures, haptic force feedback presents an opportunity to enhance highly interactive user interfaces. Haptic interfaces can provide intuitive cues derived from an intelligent system's knowledge of the environment, from a user's intentions and preferences, and/or from an assessment of the user's current capabilities or needs. Many possible approaches to devising such cues exist, differing in the degree of control retained by the user. At one extreme, the system can behave autonomously but allow the user limited intervention when desired; at the other, the user is completely responsible for interface control, but the intelligent system can offer supplementary force suggestions. We have chosen to work in the space of the latter because we are interested in systems with tightly coupled user interaction, not semi-autonomous systems.

1.1 Motivation

We were motivated to investigate the problem of effective haptic path guidance while considering how to use force feedback to assist a user interacting with an intelligent system that computes constraints on the control space of a dynamic system (such a system is discussed in Section 2.1). For example, consider a driving simulation as the dynamic system in question. The control space of the system is then the acceleration of the vehicle, manipulated by the gas and brake pedals, and the steering angle, manipulated by the angular position of the steering wheel. For simplicity, assign the car a constant velocity and no acceleration so that the only way to control the system is via the steering wheel. Now consider an intelligent system that computes the constraints on the steering angle that will keep the vehicle on the road given the car's current location, heading and speed. How can these constraints be conveyed to the driver in an effective and usable way?

We felt that force feedback was an obvious interaction modality to effectively convey such constraints to the driver. However, our initial attempts to use it did not go well. In our prototype we used a PHANTOM, a three degree of freedom haptic interface (Massie and Salisbury, 1994), to control the steering angle of the simulated vehicle based on the X-axis position of the end effector, and applied forces to push the end effector away from constraints. Our difficulty stemmed from the forces being applied too late and too strongly, thereby abruptly forcing the user away from a constraint and forcing the interface to the other extreme constraint, resulting in an annoying and ineffective end effector oscillation. It was easier to control the vehicle with no forces displayed at all than with our initial attempts at haptically displaying the constraints. We needed a more subtle approach to make the forces we displayed more useful and intuitive.

We anticipated that by predicting the state of the system and detecting impending constraint violations (leaving the road) we could apply a force with gently increasing magnitude to slowly steer the user away from a constraint and thereby avoid the strong, oscillation-inducing forces we observed in our preliminary work. Ideally, these forces would be transparent to the user, who would be unaware that forces were being applied to guide him away from a constraint.
Another possible haptic interaction technique is to display a rigid haptic "wall" to enforce a constraint rather than to steer the user away from it. A rigidly displayed constraint would be desirable where it is critical that the user not violate the control constraint and where the user is confident that the intelligent system is perfect at calculating control constraints. If not, the user may become confused and/or annoyed when the system computes a non-existent constraint, and/or misses a constraint altogether. We do not assume that the intelligent system is perfect at computing the control constraints, and we consider problems where the user may want to override the intelligent system's computed control constraints. In our approach, the user retains ultimate control, not the intelligent system.

It is important to note that driving is just one application that could benefit from interaction with control constraints augmented by haptic feedback. Any situation where a user is navigating a control subspace computed by an intelligent system and can overshoot the boundaries of this subspace could benefit from predictive force feedback guidance through the control interface. Such applications include, for example: driving, path tracing (a common activity for graphic artists), and interactive animation control. For the sake of simplicity we studied force feedback guidance in a simple driving simulation.

1.2 Objectives and Approach

As touched upon in the previous section, there are many poorly understood human-in-the-loop considerations for haptic guidance methods. A successful approach will be one that is intuitive to use, aesthetically acceptable, and does not surprise or annoy the user; it should make the task at hand easier without being intrusive. The wrong implementation could result in the user reflexively fighting unexpected forces, relying too heavily on a system that is not meant to be completely autonomous, or being annoyed rather than aided by the feedback. Good haptic guidance will not require significant attentional resources from the user and will have a minimal learning curve; it will either provide a significant quantitative performance benefit when compared to no haptic feedback, or significantly reduce fatigue and increase user comfort and confidence, or both. We felt that, if properly designed and implemented, a haptic guidance algorithm based on prediction would address all of these issues.

We chose to study an application that should yield insights into how haptic guidance could benefit a larger class of interface problems: a driving simulation where we provide haptic cues to guide a user along a simulated road. To keep the complexity level reasonable, the user had control over the vehicle's steering angle, but not its velocity. This driving application has a number of useful characteristics:

• It is relatively simple; therefore it can be studied in a reasonable amount of time, an important quality for a Masters thesis project.

• It requires a simple one Degree of Freedom (DoF) haptic interface, a wheel or knob, which is economical and easy to program compared to higher dimensional haptic interfaces.

• Steering a car is an under-actuated system, a property shared by some of the other applications we are interested in, such as interactive control of physically based animations. An under-actuated system has more degrees of freedom than there are degrees of freedom for control.
Our driving task is under-actuated because the system has three degrees of freedom, position (X and Y values) and orientation (angle of the vehicle), and one control degree of freedom, the steering wheel angle.

• Most people are familiar with driving; therefore users of our system will require a minimal amount of time to learn how to use it. This is important because we wanted participants in our experiment to spend the majority of their time generating useful data, not learning how to use the system.

In our experiment, we present to the user a visual representation of a vehicle on a road by using a modified version of OpenSteer, a vehicle simulation environment developed by Reynolds (2003) to study intelligent steering behaviours of autonomous vehicles. We implemented a predictive algorithm, Look-Ahead Guidance, by extending an approach Reynolds (1999) developed for steering autonomous vehicles along a path. We compare this to two others: a baseline with No-Guidance force feedback, and a non-predictive, reactive algorithm similar to what we used in our early prototypes mentioned in the previous section. We refer to the latter as Potential Field Guidance because the forces displayed are as if the vehicle is in a force field pushing it away from the edge of the road back toward the middle of the road. We considered Potential Field Guidance to be the standard path guidance algorithm when we started this work because many previous haptic systems used similar methods, as is evident in Chapter 2 on related work.

We describe the design and execution of a formal experiment to quantitatively and subjectively evaluate the performance of these three haptic guidance methods. We then present our findings with respect to this experiment and discuss how we would proceed with future work on haptic path guidance.

1.3 Document Map

The remainder of this document puts forward how we addressed the problem of haptic path guidance and is divided into the following chapters:

2 - Related Work: This chapter presents relevant previous work.

3 - Implementation: We present the design and implementation of the haptic path guidance methods that we decided to evaluate, as well as how we modified OpenSteer to meet the needs of our study.

4 - Evaluation Methods: We present an experimental design to evaluate our guidance methods in this chapter.

5 - Experimental Results and Analysis: The process, results and analysis of our experiment are presented in this chapter.

6 - Discussion: This chapter discusses the work done in the previous three chapters. It provides details on problems that were encountered, what we could have done better, and interesting observations made after the experiment that were not initially apparent in the statistical analysis.

7 - Conclusions, Contributions and Future Work: We distill what we learned, present our contribution to knowledge in the area of haptic guidance, and describe what can be done in the future to learn more about this problem.

Chapter 2
Related Work

The use of force feedback to guide users in performing a variety of tasks dates back a number of years. In one of the earliest examples, Rosenberg (1993) used 'virtual fixtures' to support a teleoperated peg-in-hole task by providing simple guides and constraining entry to forbidden regions.
The haptic guidance work done since can be loosely categorized into four areas:

• Training
• Non-Training Surgical Applications
• Shared Control of Vehicles
• Path Guidance

2.1 Non-Haptic Control Guidance

Work by Reynolds (1999) on the control of autonomous vehicles provided inspiration for our Look-Ahead Guidance method. He presents a number of steering behaviors for autonomous vehicles that create realistic, complex behaviors such as flocking, obstacle avoidance and path following. An integral component of his path following algorithm is a predictor of a vehicle's position a fixed time interval into the future, which he accomplishes by using a simple linear algorithm based on the velocity and heading of the vehicle. If the predicted location of the vehicle is off of the path, then the system commands the vehicle to steer toward the point on the path closest to the predicted location. We call this kind of predictive path following a look-ahead algorithm.

Reynolds (2003) also developed and made publicly available a software toolkit, OpenSteer, a test bed for steering behaviors. We used OpenSteer as the basis of our simulation and rendering engine, which saved us the time and effort of developing a similar system on our own. However, OpenSteer did not satisfy all of our requirements, which meant that we had both to modify parts critical to our study and to accept the drawbacks of some other, less critical, features such as path representation. In Chapter 3, we discuss in detail our usage and modifications to OpenSteer.

Look-ahead methods are used by Feng, Tan, Tomizuka, and Zhang (1999) to provide non-haptic path guidance for driving tasks, especially under low visibility conditions. This work focuses on the design of a complex vehicle location predictor, and on approximations to this predictor that reduce its computational requirements to the point where the algorithm can run in real time. They use a graphical display to present the driver with the predicted location of the vehicle derived from the approximated location predictor. Feng et al. use a more complicated vehicle model and predictor than we need. They test their system experimentally, but this is done primarily to verify that their approximated location predictor algorithm performs well compared to the full algorithm, not to compare the performance of their system to the performance of driving unaided by their predicted location display. They also provide some basic experimental results indicating that a larger look-ahead distance improves path following performance, but they do not provide any details on their experimental procedure and only fleetingly mention the driver's feelings about using the system.

Kalisiak and van de Panne (2004) have created an intelligent system to compute a set of safe control inputs for a dynamic system, which they call viability envelopes. A viability envelope is the set of control inputs that will keep a dynamic system in a safe state given the current system state; for instance, the set of steering angles that will keep a driver in his lane given his current heading and speed. Viability envelopes could also be useful for helping to control physically-based animations, which typically have a small subset of the entire control space that leads to a 'good' animation, such as keeping a character upright while walking. Viability envelopes could be used to make interactive control of such systems much easier by constraining the control inputs to keep the system in a 'good' state.
A user of a viability envelope system could potentially realize a large benefit from the haptic display of the viability envelope, since graphical cues may be non-intuitive and/or distract the user from the task at hand.

2.2 Haptic Training

Haptic feedback has been used to help teach complex motor tasks such as writing Asian text. The surgical community has several haptic training tools that are primarily used to simulate the feel of surgery and occasionally to repeat the motions of an expert surgeon.

Teo, Burdet, and Lim (2002) used a 6-DoF haptic device to teach Chinese handwriting. They model both pen-based writing (2D motions) and calligraphic writing (3D motions). They employed experts in Chinese writing to record the motions required for a set of characters, which can then be played back to students (spatial and temporal constraint) or used as a guide (spatial constraint only). Both styles of constraints are implemented with a slight variation on a simple spring and damper constraint, where the haptic interface's control point and the closest point on the constraint path to the control point are attached with a virtual spring and damper. They develop a complicated scoring scheme for a student's characters that involves the shape, motion, force and smoothness of their strokes. They perform some basic experiments to measure the quantitative performance of their system via their score metric, and do not formally evaluate the users' feelings about their interactions with the system. They report that spatial path constraints without a temporal constraint were "agreeable to users" and resulted in a performance increase, especially for beginners.

Solis, Avizzano, and Bergamasco (2002) use a custom haptic device to teach the writing of Japanese characters. The main thrust of their work is in using Hidden Markov Models to recognize the character that the user is trying to write, and providing haptic path guidance for that particular character. This contrasts with the work done by Teo et al., which does no such recognition. The path guidance method employed by Solis et al. is once again a simple spring and damper method that attempts to keep the control point on the outline of the current character. They evaluate their system by considering the accuracy of their task recognition algorithm, and they do not look at user interaction issues with the haptic device or at whether haptic guidance improves a user's ability to write Japanese characters. They evaluate the task recognition of their system on ten different kanji characters, and report recognition rates varying between 76% and 100%.

Feygin, Keehner, and Tendick (2002) present, and carefully evaluate, a haptic training method for a perceptual motor skill: tracking the spatial and temporal motion of a point following a 3D path over time. The trajectory of the point is specified through three 10-second sinusoidal curves, one for each of the X, Y and Z axes. A PHANTOM is used to interact with their system. Their experimental task consists of first presenting the trajectory of the point and then having the user recall the presented trajectory. They have three different presentation methods: purely visual (watching the PHANTOM follow the trajectory), purely haptic (the subject cannot see his hand), and simultaneous visual and haptic presentation; and two recall methods: purely haptic, and a combination of haptic and visual. No active force feedback is presented during recall.
The trajectory is presented to the user by using a simple spring and damper model to guide the user along the trajectory. They perform a well-designed and detailed experiment with an equally detailed analysis of the performance of the different presentation and recall methods. They present some interesting metrics for positional, shape and temporal recall accuracy. Our experimental task is different enough from their task that we cannot use their performance metrics directly, but we foresee these performance metrics being useful for future haptic path guidance work. They conclude that "haptic guidance can benefit performance, especially when training temporal aspects of a task."

2.3 Haptic Non-Training Surgical Applications

An example of a haptic system used for guiding, but not for training, a surgical task is found in the work done by Okamura's group on virtual fixtures for micro-surgical applications (Bettini, Lang, Okamura, and Hager, 2001, 2002; Marayong, Bettini, and Okamura, 2002; Marayong, Li, and Okamura, 2003; Marayong and Okamura, 2003). These fixtures provide variable admittance to a user's input forces, which are factored into components parallel and perpendicular to a given constraint. The system guides a user along a constraint by making movement in the direction parallel to the guidance constraint easier than movement in the perpendicular direction. With this setup, the system can vary the provided guidance from none to a rigid constraint. Furthermore, the user can be pulled toward the constraint by making perpendicular motion toward the constraint easier than away from it. This system is implemented using admittance control, meaning that the device's position changes in response to forces exerted on it by a user, as opposed to impedance control devices, such as ours, which apply forces based on the position of the device's end effector. Okamura's group has done a number of studies on the performance impact of varying the amount of guidance provided by the system across a number of different tasks. These tasks include standard path following, path following while avoiding an obstacle on the path, and path following with a secondary off-path targeting task. They have also looked at using machine learning to attempt to identify the different tasks a user is trying to do, and changing the guidance characteristics appropriately. While we do not look at task recognition in this thesis, we anticipate that our system would benefit from such functionality. As is the case with much of the previous work, this system uses a haptic interface with at least as many degrees of freedom as the system being controlled. Another difference between this work and ours is that the nature of microsurgical tasks dictates that the interaction with the haptic device involves very slow motions. This is not a characteristic of systems where predictive haptic path guidance will be most useful, such as interactive control of physically-based animation.

2.4 Shared Control of Vehicles

A significant amount of work has been done on active steering in vehicles to help the driver with tasks such as lane keeping and passing. The majority of the lane keeping work appears to be motivated by a final goal of autonomous driving, and shared control of the vehicle is a stepping stone toward that goal.

Steele and Gillespie (2001) used haptic guidance in the shared control of a vehicle, and experimentally examined its effect on visual and cognitive demand.
Their path guidance implementation uses the current lateral displacement of a vehicle from a path to calculate a desired steering angle "appropriate for good path following". A force is then applied to the steering wheel proportional to the difference between the desired steering angle and the current steering angle. This is very similar to our Potential Field Guidance method described in Section 3.2.4. They performed two experiments, one designed to test the effect of haptic guidance on the demand for visual cues, and another to test for the effect of haptic guidance on a driver's cognitive processing capacity. In their experiments, a small John Deere tractor was outfitted with a haptic steering wheel and LCD monitor. Both tasks involved having a user follow a straight path, with obstacles placed in the middle of the path at various points along its length. The participant was given the primary goal of avoiding the obstacles, and the secondary goal of following the middle of the path as closely as possible.

To measure visual demand in the first task, users saw nothing on the screen until they pushed a button on the wheel. After pressing this button, they would see the simulated environment on the monitor for half a second. The number of times the button was pushed was used to measure the visual demand required to perform the task. This task was done once with haptic feedback and once without. The authors found that haptic feedback provided a significant decrease in both visual demand and lateral deviation from the path when compared to the no haptic guidance condition.

In the second task, participants were asked to count backwards from 1000 by increments of 3 while they followed the path and avoided obstacles. They were instructed that the mental arithmetic was of lower priority than following the path and avoiding obstacles. The authors hypothesized that if haptic guidance affected the cognitive processing ability of the driver, then there would be a difference in the number of subtractions the driver could perform with haptic guidance compared to without. They did not find a significant difference between the number of subtractions performed with or without haptic guidance.

The components of this study relevant to our work are the use of a similar desired-steering-angle approach to generate control forces, and their finding that haptic guidance provides a significant reduction in path following error compared to no haptic guidance. However, they only studied one kind of haptic guidance on a straight path, and did not consider the drivers' feelings about the haptic guidance compared to no haptic guidance, something that we believe is necessary for user acceptance of shared control systems.

Rossetter et al. (Rossetter, 2003; Rossetter and Gerdes, 2002a,b; Rossetter et al., 2003) employ force feedback potential fields in combination with a look-ahead algorithm to enforce vehicle guidance functions such as lane keeping and general hazard avoidance. They focus almost completely on safety concerns for implementing lane assistance in a real vehicle. They carefully develop a detailed vehicle model and use this model to design a mathematically stable lane keeping controller based on a potential field. To make their controller stable they needed to add a look-ahead to their potential field method. This was encouraging information during the development of our system, because it echoed our experiences with potential field path guidance.
The output of their lane-keeping controller is applied to the steering control of a vehicle, which is shared with the driver via the steering wheel, so that the driver feels the output of the lane-keeping controller. They made a great effort to ensure that their controller will keep a car in its lane even in the absence of driver input. This is something that we are not interested in implementing, as we envision a tightly coupled interaction between the system and the user for our applications, not automation. Only fleetingly in their work do they discuss the interaction between their system and the driver, mentioning that the forces feel intuitive without more formally evaluating this observation. In the future work section of his Ph.D. thesis, Rossetter (2003) acknowledges that user interaction with the system is an important issue that requires attention in order to create a good path guidance method. Throughout the design, implementation and evaluation of our system we took into account the user interaction issues with our system.

2.5 Haptic Path Guidance

The majority of existing haptic path guidance work has been either for training or for the guidance of vehicles. There are some examples of haptic path guidance systems that do not fall into these categories, and we present them in detail here.

Cobots are an example of passive haptic path guidance: the user's input energy is steered, dissipated or stored to guide the user along a desired trajectory (Colgate et al., 1996; Swanson and Book, 2003). This is different from every other system that we have presented so far. Passive haptic guidance is a good candidate when safety is a primary concern, since the haptic interface does not add energy to the system, making it possible to guarantee stability. This work is not as closely related to ours as some of the other previous work we have presented, but it is an interesting approach to a related problem.

Another example of a non-vehicle haptic path guidance algorithm is the work done by Donald and Henle (2000) on the haptic control of animation. Here, high-dimensional motion capture data is transformed to a three-dimensional trajectory that is interacted with via a PHANTOM. They present two haptic methods to interact with motion capture data. In the first method the PHANTOM follows a force 'river' around the 3-D trajectory representing an animation in a high-dimensional configuration space (a 57 degree of freedom humanoid character). They use a handcrafted transfer function that maps the 3-D configuration space to the character's 57-D configuration space. The user can manipulate the animation by pushing on the end effector of the PHANTOM, altering its path as it follows the force river representing the motion capture animation. The PHANTOM is connected to the animation trajectory via a virtual spring that pulls the end effector toward the trajectory, while another force tries to push the end effector along the trajectory at the pace set by the motion capture data. The second interaction method they present is not as direct; the PHANTOM is used to interact with the force river from the outside instead of by following it. The force river is rendered as a 3-D tube and can be manipulated with direct haptic feedback from the PHANTOM. The current temporal position of the animation is indicated by a ball following the tube.
The tubes are designed to feel 'stretchy' when manipulated, and the user can change the shape of the tube by pulling on it with the PHANTOM.

This work presents an interesting way to use a haptic device to interact with configuration spaces of higher dimension than the interaction device has, and uses paths to accomplish this. However, their system is more autonomous than what we would like to implement and uses a more complicated haptic interface than we would like to use. They do not perform a user study to analyze the human-computer interaction issues with their system, something that is very important to us; we believe that analyzing such issues properly requires a simpler system and task.

Chapter 3
Implementation

Our system comprises several components, depicted in Figure 3.1, which we discuss in detail throughout this chapter.

Figure 3.1: System Block Diagram. A 60 Hz simulation loop (simulation update, Sec. 3.1.2.1; guidance algorithm, Sec. 3.2; desired knob angle, Sec. 3.2.2; render, Sec. 3.1.1.4; busy wait) passes the desired knob angle to a 1000 Hz servo control loop (PD controller, Sec. 3.4.1; force enveloping, p. 28; centering force, Sec. 3.2.1.1; viscous damping, Sec. 3.2.1.2), whose final output force drives the haptic interface (Sec. 3.3).

3.1 Simulation and Rendering Engine

In this section we describe how our system simulates and renders a simple virtual environment. For the purposes of our experiment, we needed to be able to draw on the screen a path and something resembling a vehicle that a user can control to follow the path. The OpenSteer framework by Reynolds (2003) provides a good base set of functionality towards our goals, allowing more time to be spent developing the guidance algorithms and running experiments than would have been available if we had developed everything from scratch. OpenSteer was designed to help develop intelligent behaviours for autonomous vehicles, and we extended it to allow for user-controlled vehicles with force feedback.

3.1.1 OpenSteer

This section summarizes relevant components of Reynolds's OpenSteer framework, a basic simulation and rendering engine. For a complete reference to OpenSteer see its on-line documentation (Reynolds, 2003). Our changes to OpenSteer are presented in Section 3.1.2.

3.1.1.1 Path Model

OpenSteer defines an abstract path representation consisting of a radius (1/2 width) and methods to:

• Get the total length of the path
• Get a point on the path a certain distance from the beginning of the path
• Get the distance along the path of an arbitrary point
• Project a given point onto the path
• Test if a given point is within the radius of the path

Only one concrete implementation of this abstract definition is provided, based on a polyline, a series of connected line segments.
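A minimal C++ sketch of such an interface is shown below. The type and method names are ours, chosen to mirror the list above; they are not OpenSteer's actual identifiers, and a simple 2-D point type is assumed.

    // Sketch of the abstract path interface described above (hypothetical names).
    #include <cmath>

    struct Vec2 { float x, y; };

    class Path {
    public:
        virtual ~Path() {}
        virtual float totalLength() const = 0;                    // total length of the path
        virtual Vec2  pointAtDistance(float d) const = 0;         // point a distance d from the start
        virtual float distanceAlongPath(const Vec2& p) const = 0; // arc length of p's projection
        virtual Vec2  projectOntoPath(const Vec2& p) const = 0;   // closest point on the path to p

        float radius; // half the path width

        // A point is "on the path" if it lies within the path radius.
        bool isInside(const Vec2& p) const {
            Vec2 q = projectOntoPath(p);
            return std::hypot(p.x - q.x, p.y - q.y) <= radius;
        }
    };

A concrete polyline subclass, like OpenSteer's, would implement the projection by testing each line segment and keeping the closest result.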
3.1.1.2 Vehicle Model

OpenSteer's vehicle model is a very simple one, consisting of a position (which is its center of mass and rotation), a velocity vector, a radius (the size of the vehicle) and a mass. The direction of the velocity vector is always aligned with the heading of the vehicle (i.e. the velocity vector is always coincident with the center line of the vehicle). The system steers a given vehicle by applying a 2-D force to the vehicle's position, which in turn affects the vehicle's velocity via a physical simulation engine that integrates this force over time. This vehicle model is inadequate for our needs; the vehicle model we replace it with is described in Section 3.1.2.1.

3.1.1.3 Simulation

OpenSteer has three simulation loop steps:

1. Limit simulation update rate (optional).
2. Update the system state.
3. Render.

The update rate of the simulation can be limited either by the processor speed or by purposely setting a fixed rate, which is useful for applications such as games that typically need a fixed update rate. The update rate is limited by doing a busy wait until the next update time.

The state of the system is updated by iteratively updating the state of each vehicle in the simulation. A typical vehicle update involves the following steps:

• Calculate a 2-D steering force based on the current state of the vehicle and the rest of the system.
• Update the vehicle's velocity by integrating the steering force over the simulation time-step.
• Use the new vehicle velocity to update the position and orientation of the vehicle.

3.1.1.4 Rendering

After the simulation state has been updated, a visual representation of the new state is drawn from the point of view of a virtual camera. This camera has a number of possible behaviors. The default OpenSteer camera behaviours are:

• Static: Render the simulation from a static position and orientation.
• Straight Down: Render the world looking straight down at the selected vehicle from above. The Y-axis of the view is aligned with the heading of the selected vehicle.
• Fixed Distance Offset: Loosely follow the selected vehicle from a constant distance and focus on the vehicle.
• Fixed Local Offset: Follow the selected vehicle at a constant position and orientation offset relative to the vehicle's coordinate frame.
• Offset Point of View: A view from above and behind the selected vehicle, aligned with the heading of the vehicle and focused on a point a fixed distance ahead of the vehicle. This is the camera positioning mode that we used for our experiment, as it is similar to the view one would have while driving.

Many of the positioning modes depend on a selected vehicle. If the current simulation contains any vehicles, then there is always exactly one selected vehicle, and if there are multiple vehicles in the simulation then the user can select a vehicle by clicking on it.

The default visual representation of a vehicle is a solid red triangle inscribed in a white circle centered at the position of the vehicle. A neighbourhood of the plane around the vehicle is drawn in a checkerboard pattern, which provides visual feedback about the speed of the vehicle as it moves over the checkerboard. OpenSteer draws paths as a red line, one pixel in width, and does not draw the full extent of the path.

Overall, OpenSteer is a useful simulation engine for our purposes, but some parts of it needed to be changed to suit our needs. The next section describes these changes.

3.1.2 OpenSteer Modifications

This section describes the major changes we made to the stock version of OpenSteer to support our work on haptic path guidance. The majority of our changes are to the vehicle model and the rendering components of OpenSteer. We change the vehicle model and vehicle simulation algorithm to enable a user to have control of a vehicle's steering angle. We change OpenSteer's rendering of the simulated system to meet the requirements of our experiment and to reduce the computational resources required for rendering.

3.1.2.1 Vehicle Model and Dynamics

OpenSteer steers a vehicle by applying autonomously computed 2-D force vectors to the vehicle. We allow users to steer a vehicle in an OpenSteer simulation via a knob.
In an effort to minimize changes to OpenSteer, we attempted to incorporate the user's input via the knob with OpenSteer's existing vehicle steering algorithm. This was accomplished by applying a force to the vehicle perpendicular to its centerline and with magnitude proportional to the control knob angle. Through some simple tests we found that this method of enabling user control of a vehicle was not going to work, because of problems with how OpenSteer integrates steering forces over time. Rather than implement a better simulation integration method, we decided to implement a different vehicle model that does not depend on the integration of forces over time.

Our vehicle model consists of: a position (O), wheelbase (ℓ), velocity (v) and steering angle (θ). Table 1 lists and describes the symbols used in the definition of our vehicle model, and Figure 3.2 presents a graphical representation of the model. As in the stock OpenSteer vehicle model, the coordinate frame of our vehicle model is always aligned with the velocity of the vehicle and centered at O. Unlike the stock version of OpenSteer, the vehicle speed can either be directly proportional to the position of a foot pedal or be set to a fixed value.

A major departure from the OpenSteer vehicle model is that our model allows the user to steer the vehicle via a knob. The control dynamics of our vehicle are based upon those of a tricycle; the angle of the control knob changes the angle of the virtual front tire of the vehicle. Refer to Figure 3.2 for a schematic view of the vehicle dynamics presented here. The angle of the front wheel, θ, is proportional to the control knob angle A:

    θ = R·A    (3.1)

where R is a constant, manually tuned for good steering control. The value of A and θ is initially zero; counter-clockwise rotation of the control knob increases A while clockwise rotation decreases A. In prototype implementations we found that without a limit on the magnitude of θ users could get lost in very tight turns. This was addressed by constraining θ to the range [−θ_max, θ_max], where θ_max is a reasonably small constant (see Appendix A for the value used in the experiment).

The vehicle steering angle, θ, defines the circle, with center C and radius r, that the vehicle will follow if the steering angle is held constant. The center of the circle, C, is the intersection of two lines. The first is line a in Figure 3.2, which is the line that passes through the rear axle of the virtual vehicle. The other line, c in Figure 3.2, is perpendicular to the front tire and lies on the tire's center of rotation, Q. The point C is the intersection of line a and line c, and its position can be calculated trigonometrically. By definition, C is on line a, and the sign of θ tells us on which side of the vehicle it will be. If we can find the distance, r, between C and O then we will know the position of C exactly.

Figure 3.2: Schematic of Vehicle Dynamics: shaded area represents the vehicle itself.

By trigonometry, ∠OCQ = θ, and then

    r = ℓ / tan θ : θ ≠ 0
        ∞         : θ = 0.    (3.2)

The dynamics of our vehicle model are different from OpenSteer's. We do not integrate a steering force every simulation step, but instead move our vehicle along the circle (C, r) according to its speed. The parameters of the circle are computed at the beginning of each step using Equation 3.2. The simulation update rate is 60 Hz; therefore the distance that the vehicle moves each simulation step is |v|/60. If |θ| < 0.05° then the vehicle moves this distance in a straight line; otherwise the vehicle moves this distance along the circle (C, r) from the position of point O at the beginning of the simulation step. To finish the vehicle update, the direction of the vehicle's velocity is modified to be in the same direction as the tangent to the circle at the vehicle's new position.

This vehicle model and dynamics are sufficient for the requirements of our work: a simple vehicle simulation that can be intuitively controlled via a knob. The model is not physically accurate because, beyond the development time an accurate model would require, exact physical accuracy is not critical to understanding the general performance and utility of haptic guidance methods.
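As an illustration, one 60 Hz update of this model might look as follows. This is a minimal sketch in our notation, not the actual implementation; the names and the sign conventions (positive θ turns left) are assumptions.

    // Sketch of one 60 Hz update of the tricycle vehicle model.
    // theta is the steering angle from Equation 3.1 (theta = R * A),
    // already clipped to [-thetaMax, thetaMax].
    #include <cmath>

    const double kPi = 3.14159265358979323846;

    struct Vehicle {
        double x, y;      // position O
        double heading;   // direction of v, in radians
        double speed;     // |v|
        double wheelbase; // l
    };

    void updateVehicle(Vehicle& veh, double theta, double dt /* = 1.0/60 */) {
        const double dist = veh.speed * dt;            // distance moved this step
        if (std::fabs(theta) < 0.05 * kPi / 180.0) {   // nearly straight: move along heading
            veh.x += dist * std::cos(veh.heading);
            veh.y += dist * std::sin(veh.heading);
            return;
        }
        // Circle (C, r) from Equation 3.2: r = l / tan(theta), signed by turn direction.
        const double r   = veh.wheelbase / std::tan(theta);
        const double cx  = veh.x - r * std::sin(veh.heading); // C lies on the rear-axle line
        const double cy  = veh.y + r * std::cos(veh.heading);
        const double arc = dist / r;                          // angle swept along the circle
        veh.heading += arc;                                   // velocity stays tangent to the circle
        veh.x = cx + r * std::sin(veh.heading);
        veh.y = cy - r * std::cos(veh.heading);
    }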
Figure 3.3: The dark line represents a segment of a path. The light lines show the extent of the path and are the elements of our path rendering method.

Figure 3.4: The view of a path with extent, as seen from above, with the checkerboard ground plane also visible.

3.1.2.2 Rendering

We changed how OpenSteer draws paths and the ground plane, and added the ability to vary the level of visibility.

Path Rendering: The default visual representation of a path in OpenSteer is a thin red line along its center; its horizontal extent is not drawn. We need to display the extent of a path so users can tell whether they are on it or not. We display the extent by drawing a filled rectangle around each line segment in a path, and filled circles at the ends of each segment. An example path line segment with extent drawn is shown in Figure 3.3. By drawing every line segment this way, every point within the radius of the path is visible to the user (Figure 3.4).

Ground Plane Rendering: The plane on which vehicles in OpenSteer simulations move is drawn in a checkerboard pattern. We changed the ground plane rendering process to use textures instead of OpenGL geometry, because modern graphics accelerator cards can draw textures very quickly and the texture can repeat to infinity. The ground plane in the stock version of OpenSteer was only drawn in a small neighbourhood around the vehicle because of the large computational demands required to draw a larger ground plane with OpenGL geometry. To ensure the best visual quality of the ground plane rendering we employ anisotropic filtering (Everitt, 2000).

Fog: One of the independent variables in our experiment (described in Chapter 4) is the level of visibility. We change the level of visibility in OpenSteer simulations by using OpenGL fog, specifically the GL_EXP2 exponential fog method (Woo and Shreiner, 2003), where the density of the fog increases exponentially with distance from the viewpoint. The color of the fog can be set to any 24-bit RGB value, and we use a dark grey with RGB values (0.3, 0.3, 0.3).
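In fixed-function OpenGL this reduces to a few calls, sketched below. The density value is a parameter varied per visibility level; the constants actually used in the experiment are given in Appendix A.

    // Exponential-squared fog in dark grey, one call per visibility level.
    #include <GL/gl.h>

    void enableFog(float density) {
        const GLfloat fogColor[4] = {0.3f, 0.3f, 0.3f, 1.0f}; // dark grey
        glFogi(GL_FOG_MODE, GL_EXP2);     // density increases exponentially with distance
        glFogf(GL_FOG_DENSITY, density);  // higher density = lower visibility
        glFogfv(GL_FOG_COLOR, fogColor);
        glEnable(GL_FOG);
    }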
These are the major changes that we made to OpenSteer, which we then used to develop our path guidance methods and perform our experiment.

3.2 Guidance Algorithms

This section presents how we compute forces for our haptic guidance methods and which forces are present in the baseline No-Guidance method. The two active haptic path guidance methods we implement are Potential Field Guidance and Look-Ahead Guidance. A pure potential field haptic path guidance method calculates guidance forces based solely upon the distance between the vehicle and the path. A look-ahead method, on the other hand, computes guidance forces based upon a predicted future position of the vehicle.

3.2.1 Forces Common to All Guidance Methods

During the iterative development of the guidance methods described in Sections 3.2.3 and 3.2.4, we introduced a centering force and a viscous damping force for the sake of usability. These forces are present in each method, and they increase the usability of our system both through positive transfer (they make the interaction with the control knob more like the interaction with a real steering wheel) and by increasing the stability of the interaction.

3.2.1.1 Centering Force and Steering Angle Limit

With early prototypes, some users had difficulty controlling the vehicle when no forces were applied to the control knob, and they found that the active path guidance methods would steer the vehicle more accurately without their input than with it. We suspected that one of the problems was the lack of a limit on the magnitude of the vehicle's steering angle, θ. An unlimited steering angle range allowed a user to get into very tight turns, from which it was difficult to return. The steering angle limits discussed in Section 3.1.2.1 help address this issue by clipping the magnitude of the vehicle's steering angle, avoiding the problematic tight turns the simulated vehicle could achieve. However, the maximum steering angle limits did not limit travel of the physical interface, and this actually increased user confusion. Rather than display a haptic 'wall' to enforce the steering angle and related knob angle limits, something that is difficult to do with our relatively low-power haptic interface, we implemented a centering spring force that attempts to keep the knob angle (and therefore the vehicle's steering angle) at zero. This works as a reasonable substitute for a haptic wall by providing a different, but just as useful, haptic cue indicating how far the knob is from center. The centering force also addressed another problem users had with early prototypes: it was difficult to drive in a straight line because there was no physical indication of how far the knob was from the center. The steering wheel of a real car at speed has a gentle centering force because the tires naturally want to point straight ahead, since this minimizes friction on the tires.

The centering force is implemented as a simple damped spring with constants C_kp, C_kd and C_kmax for proportionality, damping and maximum force output respectively. The centering force's contribution to the final output force is:

    F′ = F + clip(−C_kp·A − C_kd·Ȧ, −C_kmax, C_kmax).    (3.3)

The output of the centering force is clipped to [−C_kmax, C_kmax], manually tuned to provide just enough centering force without masking the guidance forces (see Appendix A for the values of C_kp, C_kd, and C_kmax used in the experiment). Our haptic interface calculates Ȧ in hardware, freeing us from doing this calculation in software, which could introduce artifacts into the force output (similar differentiation artifacts are discussed in Section 3.4.1).

3.2.1.2 Viscous Damping

To improve the 'feel' of the force feedback, and to help smooth the output signal of the PD controller described in Section 3.4.1, we added a viscous damping component to the force output. This viscous damping is proportional to the angular velocity of the control knob, and its contribution to the overall output force is defined as follows:

    F′ = F − k_v·Ȧ.    (3.4)

Informal experimentation indicated that these two force components considerably improved the usability and feel of our interface.
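A sketch of how the two common components fold into the final output force follows (Equations 3.3 and 3.4). The function and the clip helper are ours; the constant names follow Table 1.

    #include <algorithm>

    double clip(double x, double lo, double hi) { return std::min(std::max(x, lo), hi); }

    // Add the centering spring (Equation 3.3) and viscous damping (Equation 3.4)
    // to a force F computed by the active guidance method.
    double addCommonForces(double F, double A, double Adot,       // knob angle and angular velocity
                           double Ckp, double Ckd, double Ckmax,  // centering constants
                           double kv)                             // viscous damping constant
    {
        F += clip(-Ckp * A - Ckd * Adot, -Ckmax, Ckmax); // damped spring toward knob center
        F -= kv * Adot;                                  // viscous damping
        return F;
    }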
They provide a good base feeling for the haptic interface on which to layer the path guidance forces.

3.2.2 Desired Steering Angle Algorithm and Force Output

Before the haptic path guidance methods can be described in detail, it is useful to understand the process by which these algorithms cause a force to be displayed on the control knob. The low-level force control for the haptic interface is a PD controller on the angular position of the control knob (described in Section 3.4). The Look-Ahead Guidance and Potential Field Guidance methods compute a desired vehicle direction, which is first transformed into a desired steering angle and then into the desired angular position of the knob, which is given to the PD controller. The ultimate goal is to reduce the difference between the current and the desired vehicle heading by steering the vehicle towards the desired heading.

The guidance algorithms described in the following two sections express the desired vehicle heading as an offset from the current vehicle heading, φ. Equation 3.5 shows how the desired steering angle, θ_desired, is a function of the desired heading offset, φ, and Figure 3.5 illustrates this function. The reader may wish to refer to Table 1, the reference for symbol definitions.

    θ_desired(φ) = −θ_max             : φ ≤ −φ_max
                   (θ_max/φ_max)·φ    : −φ_max < φ < φ_max    (3.5)
                   θ_max              : φ ≥ φ_max.

Figure 3.5: Transfer function from heading angle delta to desired steering angle.

The desired steering angle is linearly proportional to the desired vehicle heading offset when the desired vehicle heading offset is in the range [−φ_max, φ_max], and equal to ±θ_max otherwise. Then, by Equation 3.1, the desired knob angle is calculated by scaling the desired steering angle by 1/R:

    A_desired(φ) = θ_desired(φ) / R.    (3.6)

This is used as the set-point for the haptic interface's PD controller, which computes a force to display to the interface knob, written as F_PD(φ) (see Section 3.4.1 and Figure 3.1). That still leaves the question of how φ is calculated, which is discussed in the following two sections.
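In code, Equations 3.5 and 3.6 amount to a clipped linear map. The sketch below uses names of our choosing:

    #include <algorithm>

    // Equation 3.5: desired steering angle from the desired heading offset phi.
    double desiredSteeringAngle(double phi, double phiMax, double thetaMax) {
        const double linear = (thetaMax / phiMax) * phi;        // linear inside [-phiMax, phiMax]
        return std::min(std::max(linear, -thetaMax), thetaMax); // saturate at +/- thetaMax
    }

    // Equation 3.6: desired knob angle, handed to the PD controller as its set-point.
    double desiredKnobAngle(double phi, double phiMax, double thetaMax, double R) {
        return desiredSteeringAngle(phi, phiMax, thetaMax) / R;
    }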
When P leaves the path, the distance between P and T exceeds the path radius (assuming the path radius is greater than zero), and therefore the magnitude of φ_LA jumps from zero to a non-zero value. To avoid having this discontinuity appear in the force output, we envelope the force from the Look-Ahead Guidance method as P leaves the path:

    F_{LA}(\phi_{LA}) = F_{PD}(\phi_{LA}) \cdot
      \begin{cases}
        0 & |P - T| \le r_{path} \\
        \left( \dfrac{|P - T| - r_{path}}{d_{envelope}} \right)^2 & r_{path} < |P - T| < r_{path} + d_{envelope} \\
        1 & |P - T| \ge r_{path} + d_{envelope}
      \end{cases}    (3.8)

where d_envelope is the distance past the path edge over which force enveloping occurs. A graphical depiction of the force enveloping components can be seen in Figure 3.7.
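The enveloping term of Equation 3.8 isolates cleanly into a single function. A minimal sketch, with illustrative names:

    // Eq. 3.8: scale factor applied to the PD force as the predicted point P
    // crosses the path edge, ramping quadratically from 0 to 1 over a band
    // of width dEnvelope beyond the path radius.
    double envelopeFactor(double distPT, double rPath, double dEnvelope) {
        if (distPT <= rPath) return 0.0;             // P still inside the path
        if (distPT >= rPath + dEnvelope) return 1.0; // fully past the envelope band
        double s = (distPT - rPath) / dEnvelope;     // normalized penetration in (0, 1)
        return s * s;                                // quadratic ramp
    }
    // The displayed force is then F_LA = envelopeFactor(|P - T|, ...) * F_PD(phi_LA).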
3.2.3.1 Look-Ahead Position Predictor Improvements

Our current Look-Ahead Guidance method uses a simple linear method to predict the location of the vehicle t seconds into the future: it multiplies the current vehicle velocity by t and adds the result to the vehicle position. This predictor is accurate when the vehicle is moving in a straight line, but incorrect when the vehicle is in a turn. A more intelligent predictor has the potential to be accurate across a wider range of vehicle behaviors than straight-line travel. For example, the position predictor could consider the movement of the vehicle in the recent past, or it could use the current turning radius of the vehicle to more accurately predict the future position of a vehicle in a turn. However, departures from a linear predictor can lead to subtle problems. For instance, consider a predictor based on the current turning radius of the vehicle, illustrated in Figure 3.8: the predicted location of the vehicle is inside the path, but the vehicle would leave the path before arriving at that location. Our current Look-Ahead Guidance method would not display any guidance forces under this condition, which is not correct. A possible solution to this problem is to adjust the Look-Ahead Guidance method to apply guidance forces that keep the entire predicted vehicle trajectory inside the path, but it should be apparent that any predictor change can, and does, have subtle implications for the guidance method that affect usability.

Figure 3.8: An example of the subtleties involved with advanced location predictors

3.2.4 Potential Field Guidance

It is difficult to implement a haptic potential field guidance method with a one degree of freedom (DoF) haptic interface. We define potential field guidance as a force dependent only upon the distance of the vehicle from the path. With a two DoF haptic interface, one can apply a 2-D force pushing the end-effector of the haptic interface towards the center of the path. With a one DoF haptic interface such as ours, one can only apply a force to change the steering angle of the vehicle, and we found that the simple algorithm of applying a raw force proportional to the distance from the path was unstable and hard to use.

We instead create a more usable, one DoF potential field guidance force that is proportional to the distance between the vehicle and the path, up to a maximum force at a distance p from the path. We create this force by computing an artificial desired heading offset, φ_pf, that is proportional to the distance of the vehicle from the path. By taking advantage of the relationship between φ_pf and the magnitude of the PD controller output force, F_PD(φ_pf) (see Equations 3.5, 3.6, and 3.13), we are able to produce a stable guidance force proportional to the distance from the path.

Figure 3.9 illustrates the components of our Potential Field Guidance algorithm. The angle β is the angle between the current vehicle heading, v, and the line from the vehicle's location, O, to the point on the path closest to the vehicle, Q_path. This angle is negative if v is to the left of the line (O, Q_path) and positive if v is to the right of it. The angle β is important because it represents the vehicle heading offset required to head straight back to the path. We do not want the desired heading offset, φ_pf, to be greater than β, because this would result in non-intuitive guidance forces: the PD controller would apply guidance forces to achieve a vehicle heading beyond the line straight back to the path.

Figure 3.9: Components of the Potential Field force feedback method

We can now describe the mathematical derivation of the artificial desired heading offset, φ_pf, for the Potential Field Guidance method as a function of the distance d from the path and the angle β described above:

    \phi_{raw}(d) =
      \begin{cases}
        (\phi_{max}/p)\,d & d < p \\
        \phi_{max} & d \ge p
      \end{cases}    (3.9)

    \phi_{pf}(d, \beta) =
      \begin{cases}
        \mathrm{sgn}(\beta)\,\phi_{raw}(d) & \phi_{raw}(d) < |\beta| \\
        \beta & \phi_{raw}(d) \ge |\beta|
      \end{cases}    (3.10)

The value of φ_pf is computed in two steps. Equation 3.9 computes a 'raw' desired heading offset that considers only the distance of the vehicle from the path. Equation 3.10 modifies this raw value to have the same sign as β, and limits the magnitude of φ_pf to that of β, if necessary, to avoid steering forces for a desired vehicle heading past the direction straight back to the path. The sign of φ_pf must match that of β to ensure that the guidance force has the correct direction to steer the vehicle towards the path. Figure 3.10 shows the value of Equation 3.10 over reasonable values of d and β.

Figure 3.10: Values of φ_pf for a given d and β

The force generated by Potential Field Guidance can then be expressed as follows:

    F_{PF}(d, \beta) = F_{PD}(\phi_{pf}(d, \beta))    (3.11)

With p equal to the path radius, the resulting forces felt too strong. We had tested p at this value because the prototypical potential field method is actually trying to push you away from constraints, such as the edge of the path, and having p be equal to the radius of the path achieves this effect. Through informal experimentation, we found that when p was twice as large as the path radius the potential field guidance forces felt 'good'.
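A minimal sketch of Equations 3.9 and 3.10, with illustrative names; the saturation distance p and the heading-offset cap phiMax are passed in as parameters:

    #include <algorithm>
    #include <cmath>

    // Eqs. 3.9 and 3.10: artificial desired heading offset for Potential Field
    // Guidance. d is the vehicle's distance from the path, beta the signed
    // heading offset pointing straight back to the path, p the saturation
    // distance, and phiMax the heading-offset cap of Eq. 3.5.
    double phiPotentialField(double d, double beta, double p, double phiMax) {
        double phiRaw = phiMax * std::min(d / p, 1.0);   // Eq. 3.9
        double sign   = (beta < 0.0) ? -1.0 : 1.0;       // adopt beta's sign
        return sign * std::min(phiRaw, std::fabs(beta)); // Eq. 3.10
    }
    // The guidance force is then F_PF(d, beta) = F_PD(phiPotentialField(...)),
    // as in Eq. 3.11.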
3.3 Haptic Interface Device

We used a number of different haptic interfaces during the development of our haptic path guidance system. Initial prototyping was done using a "Twiddler" (Shaver and MacLean, 2003), an inexpensive one DoF haptic interface developed in our lab. We evaluated a commercial haptic gaming steering wheel for use as our haptic interface, but found it was not controllable enough for our needs. We did not have the time or resources to build an ideal custom solution, so we conducted the experiment using the highest quality one DoF interface available in our lab.

The haptic interface that we used for the final development phase and the experiments consisted of a 20 W Maxon motor with a 4000 counts per revolution encoder mounted on a custom aluminum rig. The shaft of the motor is directly attached to a plastic, beveled-edge knob, 9 cm in diameter. Figure 3.11 shows the motor, knob, and mounting rig as they were configured for development and experiments. The knob interfaces with the computer via an Immersion data acquisition PCI board and associated amplifier board, the Impulse Drive Board 1.0. This board calculates the velocity of the knob in hardware, a feature we took advantage of in our implementation of the centering force and viscous damping force described in Section 3.2.1. The host PC was a Dell Precision 530 with a 2 GHz Intel Pentium 4 Xeon processor and 512 MB of RAM, running the Microsoft Windows 2000 operating system.

While this was a reasonable interface for our purposes, we would have liked a larger knob/wheel and a more powerful interface, which we describe in Section 7.3.1.1. The Immersion/Maxon haptic interface provides a reasonable base, but it needs to be controlled well to make it a good haptic interface.

Figure 3.11: The haptic interface used for experiments, consisting of a Maxon motor, encoder, mounting rig and plastic knob.

3.4 Servo Control

Force commands need to be sent to the haptic interface at approximately 1000 Hz to achieve high fidelity force rendering. We have three force components that need to be rendered at this rate: the knob centering force, the viscous damping force, and the guidance force. The guidance force is generated by a PD controller on the haptic interface knob angle. The set-point for this PD controller, the desired knob angle, is updated at 60 Hz (once every simulation update).

These two different update rates are accomplished using separate threads of execution. The simulation code, including guidance algorithm calculations and rendering, runs in a normal-priority thread at 60 Hz, a rate maintained by using a busy wait if necessary. The force rendering code runs at 1000 Hz in a separate thread created with the highest priority available to user threads in Windows 2000. Due to the priority difference between these two threads, the force rendering thread will pre-empt the simulation thread if necessary, which is why a busy wait can be used to limit the update rate of the simulation thread. However, this means that if the execution of the force rendering code takes too long, not only could the force rendering loop fail to run at 1000 Hz and cause haptic artifacts, but it could also starve the simulation thread, resulting in choppy simulation rendering and choppy desired knob angle calculations. Care must therefore be taken to ensure that the force rendering code is efficient, and that each update takes less than one millisecond, to maintain the 1000 Hz update rate.

Figure 3.1 on page 16 is a block diagram of the entire system and is useful for understanding how the different components of the system fit together. The PD controller is an important component of our system as the source of the actual guidance forces, and deserves a detailed discussion.
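The two-rate structure described above can be sketched as follows. This is an illustration only: the actual system used Windows 2000 threads with the servo thread at elevated priority, whereas this sketch uses portable C++ threads and leaves priority elevation as a comment. readKnobAngle, readKnobVelocity, commandForce, and computeGuidanceSetpoint are assumed stand-ins for the hardware and guidance interfaces, and the gains are placeholders.

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Assumed stand-ins for the hardware and guidance interfaces.
    double readKnobAngle();           // encoder position A
    double readKnobVelocity();        // Adot, computed on the interface board
    void   commandForce(double F);    // send a force command to the amplifier
    double computeGuidanceSetpoint(); // Sections 3.2.2 to 3.2.4: desired knob angle

    std::atomic<double> desiredKnobAngle{0.0}; // set-point shared between threads
    std::atomic<bool>   running{true};

    // ~1000 Hz servo thread: renders the guidance force toward the set-point
    // (the PD controller of Section 3.4.1, with the derivative term taken from
    // the measured knob velocity); the centering and damping forces of
    // Section 3.2.1 would be added here as well.
    void forceLoop(double kp, double kd) {
        auto next = std::chrono::steady_clock::now();
        while (running) {
            double A    = readKnobAngle();
            double Adot = readKnobVelocity();
            double e    = desiredKnobAngle.load() - A;
            commandForce(kp * e - kd * Adot);
            next += std::chrono::milliseconds(1);
            std::this_thread::sleep_until(next); // hold the 1 kHz period
        }
    }

    // ~60 Hz simulation thread: updates the set-point once per frame. A busy
    // wait limits the rate; on Windows 2000 the servo thread ran at a higher
    // priority and so could pre-empt this loop (priority setup is platform
    // specific and omitted here).
    void simulationLoop() {
        auto next = std::chrono::steady_clock::now();
        while (running) {
            desiredKnobAngle.store(computeGuidanceSetpoint());
            next += std::chrono::microseconds(16667);
            while (std::chrono::steady_clock::now() < next) { /* busy wait */ }
        }
    }

    int main() {
        std::thread servo(forceLoop, 2.0, 0.05); // placeholder gains; see Appendix A
        simulationLoop();                        // shutdown logic omitted
        servo.join();
    }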
3.4.1 PD Controller

A PD controller attempts to minimize the difference between a set-point and a process variable; our process variable is the current knob angle, A, and our set-point is the desired knob angle, A_desired. The controller attempts to reduce the difference A_desired - A by applying a force to the control knob that is proportional to both the difference and the rate of change of the difference. The equations for our PD controller are as follows:

    e(\phi) = A_{desired}(\phi) - A    (3.12)

    F_{PD}(\phi) = k_p\,e(\phi) + k_d\,\dot{e}(\phi)    (3.13)

where k_p is the constant of proportionality and k_d is the differential constant. The values of these constants during the experiments can be found in Appendix A.

Calculating \dot{e} smoothly is difficult because e is derived from an encoder reading, A. It is known that encoder values are difficult to differentiate using finite difference methods (Belanger, 1992), as we found when we attempted to estimate \dot{e} using a finite difference over several time steps:

    \dot{e}(\phi_n) \approx \frac{e(\phi_n) - e(\phi_{n-r})}{t_n - t_{n-r}}

                 Df   Sum Sq   Mean Sq   F value   Pr(>F)
    Residuals    16   1.711    0.107
    GM            2   0.187    0.094     4.860     0.0144 *
    Residuals    32   0.617    0.019
    Path          2   0.241    0.121     8.984     0.0008 *
    Residuals    32   0.430    0.013
    Vis           1   0.013    0.013     4.887     0.0420 *
    Residuals    16   0.043    0.003
    GM:Path       4   0.042    0.011     1.646     0.1736
    Residuals    64   0.410    0.006
    GM:Vis        2   0.027    0.013     1.603     0.2170
    Residuals    32   0.268    0.008
    Path:Vis      2   0.024    0.012     2.162     0.1317
    Residuals    32   0.180    0.006
    GM:Path:Vis   4   0.021    0.005     0.594     0.6683
    Residuals    64   0.578    0.009

A post-hoc comparison of the path complexity levels (Table 5.5) shows that the low complexity level differs significantly from the two other path levels, indicating that it is easier to closely follow the low complexity path than the medium or high complexity paths. The ANOVA table also indicates a significant effect on the MSE score due to the visibility factor, which has only two levels and so does not require a post-hoc comparison to distinguish levels. The mean scores for the levels of the visibility factor, found in Table 5.3, show that participants performed slightly better when the visibility was high than when it was low.

Table 5.3: Mean scores for the levels of each factor across all other levels

    Factor            Level             Mean Score
    Guidance Method   No-Guidance       0.108
                      Potential Field   0.099
                      Look-Ahead        0.051
    Path Complexity   Low               0.049
                      Medium            0.094
                      High              0.116
    Visibility        Low               0.093
                      High              0.080
    Overall Mean                        0.086

Table 5.4: P-values from post-hoc test on guidance method levels

                      No-Guidance   Potential Field
    Potential Field   0.614
    Look-Ahead        0.004 *       0.012 *

Table 5.5: P-values from post-hoc test on Path Complexity levels

              Low       Medium
    Medium    0.018 *
    High      0.000 *   0.190

Figure 5.7: The relative difference between the average path following score for each guidance method and the average path following score across all guidance methods for that path. Negative differences indicate an improvement in performance.

5.2.3 Independent Variable Interactions

Our results show that the Look-Ahead Guidance method improves path following performance compared to the No-Guidance and Potential Field Guidance methods, with a mean path deviation score of about half that of the other methods (Table 5.3). However, we did not see the hypothesized significant interactions between the guidance method and path complexity factors, or between the guidance method and visibility level factors.

Path Complexity and Guidance Method Interaction

Look-Ahead Guidance did not prove especially useful for more complex paths compared to less complex paths. Figure 5.7 shows the relative performance of each guidance method for a given path type. It shows that the Look-Ahead Guidance method performs better than the average score for each path type, but the improvement is relatively consistent across path types.

Visibility Level and Guidance Method Interaction

We thought that Look-Ahead Guidance might have offered a performance benefit in low visibility conditions compared to the other guidance methods because the system can 'see' further than the user can.
Figure 5.8: Mean scores for all combinations of independent variables: the effect of visibility compared to path type and guidance methods. The Look-Ahead Guidance method shows consistent performance across visibility levels. Lower scores are better.

Figure 5.8 shows the average scores for all eighteen unique combinations of independent variable levels, illustrating the effect of visibility on each path type for each guidance method. Counter-intuitively, the No-Guidance method performs better under low visibility conditions for two path types: bump and zigzag. Some participants, however, reported preferring the low visibility condition to the high visibility condition because it focused their attention on the portion of the path immediately in front of the vehicle rather than on upcoming path features, allowing them to follow the center of the path more accurately. This provides a possible explanation for the observed performance increases with low visibility compared to high visibility. The Potential Field method had a similar performance gain in low visibility conditions with the curve path, but it performed much worse in low visibility conditions on the other two path types. The Look-Ahead Guidance method performs consistently across visibility conditions.

Figure 5.9: Mean score across all conditions for each block, given gaming experience

5.2.4 Influence of Video Game Experience

In terms of ability to perform the task at hand, we noticed that the participants in our experiment appeared to fall into two categories. Some participants had no problem following the path accurately after the first few familiarization tasks, while other participants had difficulty throughout the primary familiarization phase and generally performed worse than the first group. An extreme example of this is the participant who left after the familiarization phase because of frustration with the task. While running the experiment we hypothesized, based on past experiences in our research group, that this difference between participants may have been related to video game experience.

Figure 5.9 shows the performance of gamers and non-gamers according to block number, illustrating how performance changes over time. Both groups tend to improve over time, but the improvement is more pronounced in non-gamers than in gamers, with the exception of the last block. This suggests that non-gamers improve more than gamers do over time, although they still do not reach the performance level of gamers. The poor performance of non-gamers in the last block possibly suggests that they became fatigued and/or disinterested in the experiment by the last block.

Figure 5.10: Boxplots of participants' scores given game playing status. The vertical line indicates the overall mean score. See Appendix D.2 for a description of boxplot features.

Figure 5.10 shows individual participant scores in boxplot form, shaded according to game playing status. The vertical line in the figure indicates the mean score across all participants and all conditions. The game players tend to have lower (better) scores than non-game players, with an average score of 0.13 for non-gamers and 0.067 for gamers, but there is considerable overlap.
Figure 5.11 shows boxplots for the scores of game players and non-game players, and shows that game players have lower mean scores and are more consistent as a group than non-gamers. This supports our feeling that video game experience could act to separate the participants into two performance groups.

Figure 5.11: Boxplots of participants' scores given game playing status: (a) outliers shown; (b) zoomed, outliers not shown. Boxplot components are described in Appendix D.2.

The reported average weekly gaming times fell into six bins: less than 1 hour per week, and 1, 3, 5, 9, and 10 hours per week. Figure 5.12 shows boxplots for the scores in each gaming time bin, with a gaming time of 0 used for the scores of participants who reported playing less than 1 hour of video games per week on average. A regression line showing the fitted linear relation between gaming time and score is also drawn. This line suggests that path following ability improves with game playing time.

Figure 5.12: Boxplots of score versus reported weekly gaming time, with a regression line fitted to individual scores. Regression line parameters: intercept = 0.085, slope = -0.0046.

Figure 5.13 separates the effect of game playing time on the path following score by guidance method. This plot suggests that the effect of video game experience on path following performance is lower with Look-Ahead Guidance than with the other guidance methods.

Figure 5.13: Boxplots of score versus reported weekly gaming time, with regression lines, given guidance method level. No-Guidance regression line parameters: intercept = 0.111, slope = -0.0062. Potential Field Guidance regression line parameters: intercept = 0.091, slope = -0.0044. Look-Ahead Guidance regression line parameters: intercept = 0.054, slope = -0.0032.

5.3 Subjective Results

An important objective of our evaluation was to acquire information about the subjective (aesthetic) performance of the path guidance methods. Section 4.3.2 describes how we measured subjective performance, and this section presents the results of those measurements. After each block a dialog box appeared and asked the participant a set of questions before the next block could be started.

The first question asked whether the participant had felt force feedback in the previous block. Table 5.6 summarizes the responses to this question. Most people reported feeling force feedback when they had just finished a block with either Potential Field Guidance or Look-Ahead Guidance. When the block prior to the question had presented the baseline No-Guidance level, the majority of participants reported not feeling a force, even though a centering force was being displayed.
Table 5.6: Answers to the post-block question "Did you feel force feedback?"

    Guidance Method   Yes   No
    No-Guidance         3   14
    Potential Field    15    2
    Look-Ahead         17    0

We aimed to provide assistive path guidance, not autonomous control, which motivated the next question. We asked whether the participant felt 'in control' while performing the trials in the block, to see if the participant felt as if he was driving the simulation and not vice versa. Table 5.7 summarizes the results of this question, and Figure 5.14 displays them as a histogram. People felt most in control of the vehicle with the Look-Ahead Guidance method, and least in control with the No-Guidance method.

Table 5.7: Counts for answers to the 'In Control' question ("What level of control did you feel you had over the vehicle?"; 1 = No Control, 5 = Complete Control)

    Guidance Method    1   2   3   4   5   Mean Score
    No-Guidance        1   4   1   7   4   3.5
    Potential Field    0   3   4   8   2   3.5
    Look-Ahead         0   2   2   5   8   4.1

The last two questions after each block were only answered if the participant reported feeling force feedback. These two questions try to discriminate between perceived helpfulness and how much the participant liked the force feedback. It may seem strange for something to be both helpful and disliked, but consider 'Clippy' from Microsoft Office. Clippy is the active Office assistant that pops up with helpful suggestions about tasks the system thinks you are doing, and even though Clippy's information is useful, many people do not like Clippy.

Figure 5.14: In Control Question Histogram ("What level of control did you feel you had over the vehicle?"; mean scores 3.5, 3.5, and 4.1 for No-Guidance, Potential Field, and Look-Ahead respectively)

Appendix D R Details

D.1 ANOVA

The experimentData object is an R data frame with one row per unique combination of independent variables, with the MSE value averaged across the five repetitions of each such combination seen by each participant in the last three blocks of the experiment. Each row of the data frame has the following fields: mse (MSE score), ff (Guidance (Force Feedback) Method), path (Path Complexity Level), visibility (Visibility Level), block (Block Number), and participant. See the R reference manual for details on the aov function (R Development Core Team, 2003).

D.2 Boxplot Details

The dot in the box is the median of the data points. The box itself extends between the first and third quartiles. The "whiskers" extend from the box to the most extreme data point that is no more than one and a half times the inter-quartile range (IQR) from the box. Any values falling outside the whiskers (called outliers) are drawn as open circles.

Appendix E Experiment Consent Forms

The following three pages are a copy of the consent forms given to each experiment participant at the beginning of a session. The reproduced pages include the participant's consent signature block and note that concerns regarding treatment as a research subject may be directed to the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598.

Appendix F Raw Data

The following pages are the path trajectories for every participant over the last three blocks. There are three pages per participant, one for each of the guidance methods.
The trials on a given page were thus executed during the same block (30 trials). Trials on each page are organized by path type (zigzag, bump, and curve). The plots contain trials with both visibility levels to conserve space; thus there are ten trials per path type, evenly distributed (on average) between the reflections of each path type. Participant 13 was the discarded outlier. Of the 17 participants used in the analysis, participant 1 has the worst performance and participant 6 has the best.

[Trajectory plot pages: one page per participant (01 through 18) for each guidance method (No-Guidance, Potential Field, Look-Ahead).]