MODELING ADAPTATION BEHAVIOR TO DRIVING SIMULATORS AND EFFECT OF EXPERIMENTAL PRACTICE ON RESEARCH VALIDITY

by

SAEED SAHAMI

B.Sc., Sharif University of Technology, 2001
M.Sc., Universities of Sussex and Brighton, 2003

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Civil Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

May 2011

© Saeed Sahami, 2011

ABSTRACT

Driving simulators provide a safe and controllable environment in which different aspects of driving can be analyzed without risking other road users' safety. However, because simulators cannot precisely replicate real-life scenarios, there has been an ongoing debate about how well the results of simulator studies can be generalized to the real world. Many studies have compared the outcomes of field experiments with those of their simulated counterparts in order to test the validity of research on driving simulators. In nearly all cases, however, the researchers made comparisons without analyzing the underlying psychological explanations for potential differences. This thesis discusses why adaptation, the process by which participants learn how to interact with a simulator, is an important precondition of validity in simulator experiments. Data collected from several experiments revealed that adaptation, because it imposes an extra mental load on participants, can distract them from performing the main task and can systematically bias the results of an experiment. The current study demonstrated that although most researchers provide a practice session before the main scenario, there is no unified approach to determining the characteristics of practice scenarios. Practice sessions vary greatly in both duration and form, and no method has been formulated to verify that a participant has in fact adapted by the end of the practice session. To address these shortcomings, this thesis provides a methodology that mathematically models subjects' learning patterns for the steering wheel and pedals, which can also help identify adapted and non-adapted subjects at the conclusion of a practice scenario. A comparison of the results of two groups of subjects (control and experimental) showed that adaptation to a driving simulator is largely task-independent. This study also analyzed the effect of practice scenario design on the performance of participants in the main task, which led to the observation that during the main scenario participants tend to continue focusing on the subskills they learned during the practice scenario. Based on the results of these experiments, the thesis provides recommendations on how to measure adaptation and how to improve practice scenario design in order to minimize any unwanted impact on the main scenario.

PREFACE

The experiments carried out in this research were conducted in accordance with the requirements of the UBC Research Ethics Boards (The University of British Columbia Office of Research Services, Behavioral Research Ethics Board, UBC BREB Number H07-02001). The consent forms given to and signed by all the participants can be found in Appendix A and Appendix B. The work presented in Chapters 3 to 6 was published or submitted for publication in the following peer-reviewed publications:

A version of the experiment explained in Chapter 3 was published as: Sahami, S., J.M. Jenkins, and T. Sayed. Methodology to Analyze Adaptation in Driving Simulators.
Transportation Research Record: Journal of the Transportation Research Board, No. 2138, TRB, National Research Council, Washington, DC, 2009, pp. 94-101.    A version of the experiment explained in Chapter 4 has been reviewed and accepted for publication as: Sahami, S., and T. Sayed. An Insight into steering adaptation patterns in driving simulator. Transportation Research Board Annual Meeting 2010 Paper #10-3575 (In-press).    A version of the experiment provided in Chapter 6 has been submitted for peer review as: Sahami, S., and T. Sayed,T. How drivers adapt to drive in driving simulator, and what is the impact of practice scenario on the research?  iii  TABLE OF CONTENTS ABSTRACT ............................................................................................................ ii PREFACE ............................................................................................................. iii TABLE OF CONTENTS ........................................................................................ iv LIST OF TABLES .................................................................................................. xi LIST OF FIGURES ............................................................................................... xii ACKNOWLEDGMENTS ....................................................................................... xv DEDICATION ...................................................................................................... xvi CHAPTER 1 INTRODUCTION ..............................................................................1 1.1 Motivations ...................................................................................................8 1.1.1 Technical Benefits ...............................................................................9 1.1.2 Financial Benefits ..............................................................................10 1.1.3 Ethical Benefits..................................................................................11 1.1.4 Practice Scenario Design ..................................................................11 1.2 Purpose ......................................................................................................12 1.2.1 Review Existing Methods of Driver Adaptation and Evaluate Their Advantages and Disadvantages ........................................................12 1.2.2 Review the Literature in the Field of Psychology and Learning ........13 1.2.3 Develop a General Methodology to Mathematically Model Adaptation ...........................................................................................................13 1.2.4 Validate Whether the Methodology can be Used to Predict When Adaptation is Likely to Occur .............................................................14  iv  1.2.5 Analyze Different Patterns of Learning and Explain the Reasons behind Each Adaptation Pattern. .......................................................14 1.2.6 Provide Recommendations on How to Design a Proper Practice Scenario Based on the Analysis of the Adaptation Patterns. 
............15 1.3 Thesis Contributions ...................................................................................15 1.3.1 Development of a Methodology to Measure Adaptation ...................16 1.3.2 Measuring the Impact of Practice ......................................................18 1.4 Organization of the Thesis .........................................................................19 CHAPTER 2 LITERATURE REVIEW ..................................................................21 2.1 Part One: Adaptation to Driving Simulator .................................................21 2.1.1 Fixed Time or distance ......................................................................24 2.1.2 Driver Self-Assessment .....................................................................27 2.1.3 Undefined Procedure ........................................................................28 2.1.4 Quantitative Analysis .........................................................................30 2.2 Part Two: Psychology of Learning..............................................................35 2.2.1 Overview ...........................................................................................35 2.2.2 Learning Curve ..................................................................................37 2.2.3 Plateau ..............................................................................................41 2.2.4 Part-Whole Training ..........................................................................42 2.2.5 Skill Transfer .....................................................................................44 2.2.6 Feedback and Instruction ..................................................................53  v  2.2.7 Theories ............................................................................................55 CHAPTER 3 METHODOLOGY ............................................................................60 3.1 Methodology Structure ...............................................................................60 3.2 Internal Validity ...........................................................................................67 3.2.1 Temporal Precedence .......................................................................67 3.2.2 Correlation between Cause-Effect ....................................................67 3.2.3 No Plausible Alternative Explanation ................................................68 3.3 Construct Validity........................................................................................71 3.4 External Validity ..........................................................................................74 3.4.1 Generalizability to Other Tasks .........................................................74 3.4.2 Generalizability across Times ...........................................................75 3.4.3 Generalizability across People ..........................................................75 3.4.4 Generalizability across Driving Simulators ........................................75 3.5 Limitations ..................................................................................................75 CHAPTER 4 ADAPTATION TO GAS AND BRAKE PEDALS .............................77 4.1 Introduction .................................................................................................77 4.2 Methodology ...............................................................................................78 4.3 Driving Simulator 
........................................................................................81 4.4 Participants .................................................................................................82 4.5 Design ........................................................................................................83 4.6 Experimental Procedure .............................................................................83  vi  4.7 Performance Measure ................................................................................84 4.8 Results and Discussion ..............................................................................85 4.9 Summary ....................................................................................................97 CHAPTER 5 ADAPTATION TO STEERING WHEEL ..........................................98 5.1 Introduction .................................................................................................98 5.2 Methodology ...............................................................................................99 5.3 Driving Simulator ......................................................................................100 5.4 Participants ...............................................................................................100 5.5 Experimental Design and Scenario ..........................................................100 5.6 Experimental Procedure ...........................................................................101 5.7 Data Collection .........................................................................................102 5.8 Performance Measure ..............................................................................102 5.9 Results and Discussion ............................................................................104 5.9.1 Learning Curve Fit and Plateau Phase ...........................................104 5.9.2 Adaptation Time, Quantitative Results ............................................112 5.9.3 Fixed Time and Self-assessment versus Quantitative Analysis .....115 5.9.4 Prediction of Adaptation Time .........................................................118 5.10 Summary ................................................................................................118 CHAPTER 6 ADAPTATION WITH SLALOM TASK AND IMPACT OF TRANSFER ........................................................................................................120 6.1 Introduction ...............................................................................................120  vii  6.2 Methodology .............................................................................................121 6.3 Simulator ..................................................................................................122 6.4 Participants ...............................................................................................122 6.5 Experimental Design and Scenario ..........................................................123 6.6 Experimental Procedure ...........................................................................123 6.7 Performance Measures ............................................................................124 6.8 Results......................................................................................................126 6.8.1 Adaptation to the Slalom Task ........................................................126 6.8.2 Adaptation to the Cornering Task 
...................................................129 6.8.3 Comparing Control and Experimental Results (Impact of Practice) 135 6.9 Discussion ................................................................................................143 6.9.1 Observation of Learning after Transfer ...........................................143 6.9.2 Higher Converged Speed for Group BA in Comparison to Group A 144 6.9.3 No Improvement in Error for Group B .............................................147 6.9.4 Self-assessment Value....................................................................148 6.10 Summary ................................................................................................149 CHAPTER 7 CONCLUSION AND FUTURE RESEARCH.................................151 7.1 Conclusion and Summary of Findings......................................................151 7.1.1 Chapter One: Introduction ...............................................................155 7.1.2 Chapter Two: Literature Review ......................................................155 7.1.3 Chapter Three: Methodology ..........................................................158 viii  7.1.4 Chapter Four: Adaptation to Gas and Brake Pedals .......................162 7.1.5 Chapter Five: Adaptation to Steering Wheel ...................................164 7.1.6 Chapter Six: Adaptation to Slalom Task and Impact of Transfer ....167 7.2 Research Contributions ............................................................................170 7.2.1 Chapter Two: Literature Review ......................................................170 7.2.1 Chapter Three: Methodology ..........................................................170 7.2.1 Chapter Four: Adaptation to Gas and Brake Pedals .......................170 7.2.1 Chapter Five: Adaptation to Steering Wheel ...................................171 7.2.1 Chapter Six: Adaptation to Slalom Task and Impact of Transfer ....171 7.3 Recommendations....................................................................................172 7.3.1 Repetition of Identical Task .............................................................172 7.3.2 Blocked vs. 
Random Practice .........................................................172 7.3.3 Level of Difficulty .............................................................................173 7.3.4 Not Focusing on Specific Style of Driving .......................................174 7.3.5 Feedback During Practice Session .................................................174 7.4 Future Research .......................................................................................174 7.4.1 Selection of Participants ..................................................................175 7.4.2 Adaptation to Manual Transmission ...............................................175 7.4.3 Eye Movement as a Generic Performance Function .....................176 7.4.4 Development of an Adaptation Recognition Software ....................177  ix  7.4.5 Assessing the Effectiveness of Adaptation on Improving External Validity .............................................................................................178 REFERENCES ...................................................................................................180 APPENDICES ....................................................................................................193 APPENDIX A ..................................................................................................193 APPENDIX B ..................................................................................................195  x  LIST OF TABLES Table 5.1 A Summary of Adaptation for All Subjects. ...................................................113 Table 6.1 Summary of Learning Parameters for Slalom Test. ......................................129 Table 6.2 Summary of Learning Parameters for the Cornering Test. ...........................134  xi  LIST OF FIGURES Figure 3.1 Constructs, operationalization, and cause-effect relationship. .......................63 Figure 3.3 Expected outcomes for the performance function values. .............................64 Figure 3.2 Repeated measures as structure of the experiment. .....................................64 Figure 3.4 Experiment setup for studying the impact of transfer. ....................................65 Figure 4.1 Human-in-the-loop control concept. ...............................................................79 Figure 4.2 UBCDrive driving simulator. ...........................................................................82 Figure 4.3 Transition pattern for a sample driver compared to the ideal transition pattern. .........................................................................................................................................85 Figure 4.4 Improvement in performance for two subjects for “Decrease and maintain speed” task. .....................................................................................................................86 Figure 4.5 Improvement in performance for two subjects for “Increase and maintain speed” task. .....................................................................................................................87 Figure 4.6 Adapting behavior pattern in which a learning curve cannot be fitted to the data points. ......................................................................................................................88 Figure 4.7 Power curve fit to the learning phase for participant #9. ................................89 Figure 4.8 CEPU and the fitted power function. 
..............................................................90 Figure 4.9 Performance pattern of a non-adapting (or adapted) participant. No power curve can be fitted to either the performance values or to the cumulative performance values. .............................................................................................................................92 Figure 4.10 Performance pattern of a participant who adapted to a simulator very quickly. No power curve can be fitted to the performance values, but there is a good power curve fit for the cumulative performance values. ..................................................93  xii  Figure 4.11 Performance pattern of a participant who is still in the adaptation process. A power curve can be fitted both to the performance values and also to the cumulative performance values. ........................................................................................................94 Figure 4.12 A simple flowchart showing how to judge whether a subject has adapted to a simulator. ......................................................................................................................96 Figure 5.1Driver’s lateral position and center of the lane (black) along a curve. ..........103 Figure 5.2 Average speed over trials (Top), learning phase and power curve fit (Bottom). .......................................................................................................................................106 Figure 5.3 Lateral shift deviation over trials (Left), learning phase and power curve fit (Right). ...........................................................................................................................108 Figure 5.4 Performance of a subject who adapted quickly............................................110 Figure 5.5 Nonadapting performance pattern. ..............................................................111 Figure 5.6 Two learning periods with a plateau period shown in a box ........................112 Figure 5.7 Distribution of adaptation time over 6 time bins. ..........................................114 Figure 5.8 Relationship between self-reported and calculated adaptation time............117 Figure 6.1 Slalom scenario driving path. .......................................................................123 Figure 6.2 Pattern of speed improvement for a subject who adapted to the slalom test before it was completed.................................................................................................128 Figure 6.3 Speed and error values for one participant on the left turns ........................130 Figure 6.4 Speed and error values for one participant on the right turns. .....................131 Figure 6.5 Learning phase of the same participant in previous figure to both performance functions, i.e., speed and error. ................................................................133 Figure 6.6 Distribution of adaptation time for all the participants. .................................135 Figure 6.7 Average speed across all participants in control and experimental groups on left and right corners. .....................................................................................................138 xiii  Figure 6.8 Learning curve fit to the average speed across all participants in control and experimental groups on left and right corners. 
.............................................................139 Figure 6.9 Average error across all participants in control and experimental groups on left and right corners. .....................................................141 Figure 6.10 Average error across all participants in control and experimental groups on left and right corners with power curve fit test. ..............................................142

ACKNOWLEDGMENTS

It is my pleasure to thank those who made this thesis possible. I owe my deepest gratitude to Prof. Tarek Sayed for his kind and insightful support throughout this research. This work could not have gone so far without his unique approach and advice. This thesis would not have been possible without the help and support of all my committee members, Dr. Jinhua Zhao, Dr. Mohamed Wahba, Dr. Carlo Menon, Prof. Sandra Robinson, Dr. Sheryl Staub-French, and Dr. Nima Mahanfar. I should especially thank Dr. Mahanfar and Prof. Robinson for pointing me in the right direction in the initial phases of this research. Last but not least, I should greatly thank my previous supervisor, Dr. Jacqueline Jenkins, who initially laid out the concept of this research. I would also like to thank all those who participated in this research by driving in the UBCdrive simulator.

DEDICATION

I'd like to dedicate this thesis:
• to my mother, for your love, your dedication, and your continuous encouragement throughout my education;
• to my father, for teaching me to question, take risks, and think independently;
• to my partner, for your love, patience, and unconditional smiles;
• to Dr. Morteza Safi-Abadi and Minou Sahami; I will always be grateful for what you did for me.

CHAPTER 1 INTRODUCTION

Enhancements in technology and scientific breakthroughs in the past century have led to significant changes in the way human beings live and interact with each other. In terms of transportation and mobility, the introduction of commercial airlines made intercontinental trips possible in a matter of hours, something that could not have been imagined just a century ago. Scheduled flights in the United States of America (USA) and Canada alone flew 11.8 and 1.2 billion kilometers in 2009, respectively (1). Affordable cars also became available in the early 20th century, providing consumers with a more convenient and readily available means of on-demand transportation, which in part led to the steady growth of the automobile industry over the course of the century. There are now more than 600 million passenger vehicles in use around the world, of which 150 million are in the USA and Canada alone (2). In 2009, each person drove an average of 14,876 km, 9,705 km, and 6,960 km in the USA, Canada, and the UK, respectively, which translates into more than 5 trillion kilometers of road trips in those three countries alone in 2009.¹

While the introduction of such innovative technologies has made travel much easier compared to 100 years ago, technology has not been without its disadvantages. Large numbers of people are injured, disabled, or killed in road accidents each year. Based on a recent report from the World Health Organization (3), every year over 1.3 million people around the world die in road traffic crashes.

¹ This number is only for passenger cars, without considering the distance traveled by other modes of road transportation, i.e., buses, trucks, etc.
Put into perspective, this is almost five times the number of people killed in the deadliest earthquake in recent history, the 2004 Indian Ocean earthquake and tsunami, in which 230,000 people lost their lives (4). Fifty million people worldwide are injured or disabled yearly in road traffic accidents. Globally, road accidents are the second leading cause of death among people aged 5 to 29 years and the third leading cause of death for the age group between 30 and 44 years (3). The direct human capital costs associated with road accidents in the USA were estimated at $230 billion in 2000, and the direct global economic cost of road accidents exceeded $518 billion, which accounted for almost 1% of the Gross National Product (GNP) in underdeveloped countries and 2% in developed countries (5).

In 2004, half of all those killed in road accidents were between the ages of 15 and 44, the age at which those individuals could make the greatest contribution to their families and communities as a whole. This loss of breadwinners has enormous implications for the welfare of families, especially in developing and underdeveloped countries, where proper insurance policies do not exist. It also has a negative impact on society at large by removing members of the workforce from the market as a result of death, serious injury, or disability. A recent study by the government of Canada (6), which considered and monetized the above-mentioned indirect costs, indicates that the total cost of road accidents in 2006 was approximately C$62.7 billion, which accounts for 5% of the Canadian Gross Domestic Product (GDP) in that year. To highlight the severity of the problem, this cost was 37% more than the total Canadian government expenditures on defense (C$16.1 billion), education (C$6.7 billion), and healthcare (C$22.9 billion) in the same year (7).

Due to the great financial toll and negative social consequences of road traffic accidents, various approaches have been taken to reduce the number and severity of accidents in the past few decades. Automotive manufacturers introduced new safety systems on cars to reduce the impact of accidents (i.e., passive safety) and ideally to prevent them (active safety). Policymakers have also focused on advancing car safety by improving road design and imposing more stringent regulations on automakers and stricter laws for drivers. Human error, however, is still the single greatest contributing factor in all accidents, and is something that cannot be fully prevented by better technologies or stricter law enforcement. In one of the largest studies to date of how human error is involved in creating accidents, the Virginia Tech Transportation Institute, in collaboration with the National Highway Traffic Safety Administration (NHTSA), concluded that driver inattentiveness and driver distraction were a contributing factor in 78% of crashes and 65% of all near-crashes (8). As a result of such studies, interest in analyzing driver behavior and identifying the sources and the extent of distractions has gained momentum in the past few years. For instance, bans on cell phone use while driving have come into effect in some cities after studies of driver behavior (including reaction time, lane positioning, attentiveness, etc.) provided enough evidence that using cell phones can significantly increase the risk of being involved in an accident.
There are two major methods for studying driver behavior: research performed in the field and research undertaken in a driving simulator. Experiments in the field have the advantage that they are more realistic and drivers are completely engaged in the scenarios. However, in comparison to studies that use driving simulators, there are some disadvantages associated with research in the field. First, there are certain scenarios that are too dangerous or completely impossible to generate in the field. For example, if the focus of a research topic is the impact that drivers' drinking alcohol has on accident rates with pedestrians, it is not possible to conduct a research scenario with impaired drivers and real pedestrians suddenly appearing in front of the car without considerably risking the participants' safety. Second, field studies are not very powerful in terms of internal validity, i.e., it is not always easy to associate a certain change in behavior with an exact stimulus in the scenario. Due to the nature of field studies, there are many uncontrolled elements (other drivers, pedestrians, weather conditions, time of day, traffic conditions, etc.) that can potentially result in the observed changes in behavior. Moreover, manipulating what drivers are exposed to requires physical changes in the field, for example, changing the location of traffic lights or changing the color of a stop sign to measure which position or color is best seen by drivers. This approach, if possible at all, requires the researcher's cooperation with local authorities and also physical changes to road elements, which can be costly and time consuming. Finally, data collection on real vehicles requires fitting many sensors and pieces of equipment, which can potentially distract drivers from the main task. In most cases, test vehicles for field research have so many wires and sensors in the cabin that they seem very different from the real cars that participants drive regularly.

Driving simulators, to a large extent, can address the aforementioned shortcomings of field studies. They are used to simulate controlled scenarios in laboratory experiments in order to study the impact of specific conditions on drivers. The scenario is presented to subjects through computer-generated graphics, and the subjects react to this virtual reality by using the controls available to them, including the steering wheel and the brake and gas pedals, which are instrumented on a mock-up vehicle. Due to the nature of laboratory experiments, studies performed in driving simulators are very powerful in terms of internal validity, i.e., researchers can fully control the scenario (traffic volume, time of day, other drivers' behavior, etc.), and therefore any change measured in the driver's reaction can be attributed to the change in a certain stimulus coded in the scenario. The creation of new scenarios is achieved through software coding and therefore does not require physical elements, leading to a reduction in operational costs. Because all of a vehicle's variables (lane position, speed, steering angle, etc.) are already measured to create the visual representation, almost all the variables are available, and simulators do not require additional instruments for data logging. Most importantly, dangerous and risky driving scenarios can be safely generated and studied without risking the participants' and other road users' safety.
However, there has been ongoing debate on the external validity of simulator studies, i.e., how well the results obtained in a fully controlled and simulated environment can be generalized to real-life scenarios. Researchers who doubt the results argue that drivers in a simulator do not have a realistic perception of risk, acceleration, dimensions, and speed; therefore, the drivers' reactions may not be the same as they might be in the field. For example, participants may negotiate a corner at higher than usual speed, not because they really drive in such a manner, but because they do not have a good understanding of their speed while in a simulator.

Some researchers have attempted to investigate whether there is an acceptable correlation between the results of a field study and its simulated counterpart. The general procedure for such validation is to perform specific field research and create a representation of that scenario in a simulator. The results of the simulator study are then compared to those of the field study to measure the validity of the simulator research. The results have been mixed: some studies found close results in the field and in the simulator, while others concluded that simulator results are significantly different from those of a field study and are therefore not valid. However, what has been missing from these studies is a detailed analysis of driver behavior in a simulator as an explanation for the differences between simulator and field results. Understanding the underlying causes that can potentially distinguish a simulator outcome from field results can significantly help researchers minimize and control for those causes and therefore conduct more valid research in the future.

The difference between simulator and field results can be explained from different angles. First of all, the perception of distance, and consequently speed, in simulators is not usually the same as that in the field. Secondly, in most simulators with a fixed cabin, there is no feeling of acceleration (lateral and longitudinal), and therefore the feedback mechanism that is used in real driving to correct the speed, especially during cornering, is not present. Moreover, because participants know that there is ultimately no real risk of an accident or of injuring someone, they may take more risks in comparison to participants in field experiments.

Another important element that has not been previously studied² is the role of adaptation and how it can affect the results of a simulator experiment. Because the virtual environment in simulators is not a perfect reflection of the real driving experience and simulated vehicles do not respond exactly like real cars, there is a learning process involved until the subjects can automatically and comfortably control the simulated vehicle in the virtual environment. During the learning process, and based on the simulator response, subjects try to modify their inputs to the simulator in a trial-and-error fashion until they show consistent behavior. Learning requires mental processing power, and therefore adaptation puts some level of mental load on participants, which can draw their attention away from the experiment scenario, i.e., the specific driving task. Since there is no equivalent mental load in field experiments, drivers in the field have more processing power at their disposal to deal with driving and handling unexpected situations.
Therefore, the results from the field and the simulator can potentially be different if participants are not adequately and correctly adapted to the simulator before starting the experiment. Adaptation to a driving simulator is a subjective measure, and thus it cannot be easily analyzed and studied. The lack of a proper platform to quantify and investigate adaptation may be the reason why it has not been adequately addressed in the driving simulator community. Therefore, the purpose of this thesis is to provide a generic approach to this problem and suggest a methodology to measure and analyze adaptation in a driving simulator.

² To the best knowledge of the author, there has been no other research analyzing the validity of driving simulators with respect to learning, adaptation, and negative skill transfer from the practice session to the experiment.

1.1 Motivations

Learning how to interact with the driving simulator may take very little time or a few repetitions of a particular driving task for some drivers, while other drivers may need much more time or a greater number of repetitions. Therefore, it is essential to know how and when participants adapt to the simulator and what the negative consequences of non-adaptation or improper adaptation are for the experiment. If the adaptation period is not known, participants may not be given enough time for proper adaptation, and as a result the conclusions of the research may be negatively affected. Quantifying the adaptation process can identify the length of required practice time and also distinguish adapted and non-adapted participants during a practice session. Therefore, the overall goal of the current research is to analyze adaptation and learning in driving simulators in order to: i) explain and mathematically model the process by which adaptation occurs during a practice session; and ii) explain how the skills learned during a practice session may be transferred to the experiment and potentially bias the results. Analysis of skill transfer and identification of the time required for each driver to reach consistent behavior and thus adapt to the simulator have an important impact on the technical, ethical, and financial aspects of using driving simulators for research.

1.1.1 Technical Benefits

Before adaptation occurs, and in situations where the scenario under study consists of a relatively difficult task, drivers may not be able to pay enough attention to the task itself, and the recorded performance underestimates the drivers' ability to respond to the stimuli in the scenario. When a participant's mental resources are engaged in adapting to the simulator, there may not be enough mental processing power to deal with a complicated situation, which may require a great portion of the participant's mental capacity. One example of such a difficult task is a driver passing another vehicle on a twisting two-lane, two-way highway with limited visibility and a heavy opposing traffic volume. During such a demanding task, if the driver has to deal with the additional parallel task of adaptation, his or her driving performance is negatively affected, and the results may not be the same as if the test had been carried out in the field.
On the other hand, for a simple task like driving on a straight, rural expressway with very low adjacent traffic volume, non-adapted participants may be distracted less than those who are completely adapted. The absolute ease of an activity allows the mind to wander and become distracted by other things or simply to do nothing, which in driving is regarded as "highway hypnosis." Based on the Yerkes-Dodson law (9), performance of an over-learned and simple task decreases with low levels of arousal, and increases, up to a certain point, in more challenging environments. Therefore, if a non-adapted participant is in a more challenging environment than the field, his or her performance level can increase when doing very simple tasks, like highway driving. The results can therefore overestimate the performance of drivers in responding to an unexpected situation during such a scenario.

In both circumstances, if the experiment starts before adaptation occurs, the recorded data does not represent consistent and valid behavior. Therefore, if drivers have not been given enough time to practice and adapt to a driving simulator, the quality of the data will be compromised, which could potentially lead to incorrect conclusions. More importantly, as will be explained in Chapter 6, if the practice session is not designed properly, there may be an unwanted skill transfer from the practice scenario to the main experiment. The transfer of unwanted and unrelated skills from the practice to the main experiment introduces a systematic error into the results, i.e., it shifts the mean of the affected parameter. Therefore, understanding how the skills acquired during a practice session may transfer to the main scenario can help researchers design better and more neutral practice scenarios and ultimately reach a higher validity for their studies.

1.1.2 Financial Benefits

How fast adaptation occurs depends on a person's ability to interact smoothly with the simulated environment. Like any other skill, the required time is different for different people; some need fewer repetitions of a task, while others require more time and practice. As will be discussed in Chapter 2, some researchers have used very long practice sessions of up to a couple of days, which may not have been necessary. Therefore, a methodology to identify how much practice is sufficient for each individual can be used to customize the practice length and hence reduce the practice time for those who adapt more quickly than average. This can translate into savings in operating costs by reducing unnecessary practice time on the simulator.

1.1.3 Ethical Benefits

Driving in a simulator has some potential side effects, including nausea, headache, etc., which are collectively regarded as simulator sickness. From an ethical point of view, having a driver practice unnecessarily for an extended period of time is therefore problematic. By knowing when a driver is likely to become adapted, researchers can identify those who require a very long practice time. If such cases are not diagnosed early, the participants may experience uncomfortable conditions by continuing with the practice and then the experiment. If adaptation analysis is performed in real time, it can reduce unnecessary practice time for drivers who adapt faster than normal and can also identify drivers with a very low learning rate and exclude them from the study to prevent them from being exposed to driving simulator discomfort.
Therefore, this tool can help prevent participants from being subjected to unnecessary prolonged discomfort.

1.1.4 Practice Scenario Design

As will be discussed in Chapter 6, driving in a practice scenario can have a residual effect on subjects that can be negatively carried forward to the experiment scenario. For example, if participants are instructed to drive slowly during the practice session, they may develop a tendency to drive more slowly than they normally would. This can potentially introduce a systematic error into the results of the experiment and can shift the results toward the conditions of the practice scenario.

Based on a comprehensive literature review, and as will be discussed in Chapter 2, the variety of methods used by researchers suggests that there is no unified and accepted approach to designing a practice scenario in the research community and that research methods do not address how to measure adaptation time. Almost none of the researchers reported a method of verifying, at the end of a practice session, that adaptation had in fact occurred. There was also no previous research to identify and analyze the impact of practice on an experiment scenario and to measure the potential negative aspects of a badly designed practice scenario. Therefore, there has been a gap in the literature for a methodology to better design the practice scenario, measure adaptation, and identify the adapted subjects. The methodology needs to be sensitive to the diversity of driving styles and applicable to a variety of driving tasks, driving simulators, subjects, and performance measures.

1.2 Purpose

Based on the importance of driver adaptation and the shortcomings associated with the current approaches used, the purpose of this research was defined as follows:

1.2.1 Review Existing Methods of Driver Adaptation and Evaluate Their Advantages and Disadvantages

The driving simulator literature will be reviewed to identify how practice sessions are designed and how much practice is given to the participants prior to the experiment. To highlight the importance of adaptation, some of the studies will be explained in detail to reveal how the learning process or lack of adaptation can have a potential negative impact on the final conclusion.

1.2.2 Review the Literature in the Field of Psychology and Learning

The second section of the literature review will focus on how learning has been addressed in the field of psychology. Although learning and adaptation have not been well addressed in the driving simulator literature and research community, there exists a vast amount of literature in the field of psychology that investigates how individuals learn, which can be helpful in analyzing the adaptation process observed in the experiments. A short history of the research on learning is followed by a discussion of the concept of the learning curve, which will be used throughout this thesis. Later, skill transfer from one task to a similar task is explained, and finally the main theories that describe how learning takes place and progresses in the brain are discussed. The extent to which the concepts and theories are outlined in the literature review is based on the degree to which they aid the analysis of participants' behavior in the current research. An exhaustive review of the psychology of learning is not provided in this thesis in order to maintain the focus on the main topic, i.e., adaptation in a driving simulator.
1.2.3 Develop a General Methodology to Mathematically Model Adaptation

The concepts related to the psychology of learning will be implemented to find a mathematical relationship between the amount of practice and the adaptation level. Three driving simulator experiments will be carried out, and the pattern of adaptation will be analyzed according to the different controls available in a driving simulator, i.e., the gas pedal, the brake pedal, and the steering wheel. Because adaptation is a subjective concept, a performance function should be defined for each scenario that can quantitatively reflect the level of adaptation. It will be hypothesized that performance, as an indicator of adaptation, can be modeled by a negative power function during the learning phase. This mathematical relationship can show that adaptation in simulators is in fact a form of learning, and that it therefore poses a mental load on participants that can introduce error into the results.

1.2.4 Validate Whether the Methodology can be Used to Predict When Adaptation is Likely to Occur

If learning and adaptation are modeled with a smooth and predictable function, it may be possible to predict how much practice a participant requires to completely adapt to the simulator by only asking the participant to perform the first few trials and then extrapolating the performance values. The shape of the learning function will provide evidence as to whether or not the methodology can be used for such prediction. If irregular patterns of learning are frequently observed, it will be concluded that prediction is not always possible.

1.2.5 Analyze Different Patterns of Learning and Explain the Reasons behind Each Adaptation Pattern.

Different performance function shapes will be observed throughout the experiments. Theories and concepts from the learning literature will be used to explain why a specific pattern in performance is detected. This approach will provide an in-depth understanding of why certain patterns (like plateau, negative skill transfer, etc.) occur and how to control for them to achieve more valid research results.

1.2.6 Provide Recommendations on How to Design a Proper Practice Scenario Based on the Analysis of the Adaptation Patterns.

On the basis of a summary of the results from the three experiments, recommendations are given to improve the quality of the research performed in driving simulators. It will be concluded that the practice scenario should relate to the subskills required in the experiment, i.e., it should give participants the opportunity to practice acceleration, deceleration, and some form of steering in a repetitive manner during the practice scenario. The other recommendation is to ensure that participants are not excessively focused on a specific performance measure, like speed or accuracy, as that specific over-learned aspect of driving in a simulator can have a residual and negative effect on the experiment results.
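For reference, the negative power relationship hypothesized in Sections 1.2.3 and 1.2.4 can be written in a generic form (the notation here is illustrative only; the performance functions actually used are defined for each experiment in Chapters 3 to 5):

    P(n) = a · n^(−b) + c,   n = 1, 2, 3, …

where P(n) is the value of the performance function on the n-th trial, a is the amount of improvement available through practice, b > 0 is the learning rate, and c is the asymptotic performance level reached once the participant has adapted. Under this form, the prediction objective of Section 1.2.4 amounts to estimating a, b, and c from the first few trials and extrapolating P(n) forward until the predicted gain per additional trial becomes negligible.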
1.3 Thesis Contributions

Driving simulators have gained significant popularity in recent years, in part due to advancements in software and hardware technology that have led to the availability of less expensive simulators. Researchers have conducted many experiments with simulators to study various aspects of driving behavior by testing participants in different scenarios. However, very little is known about the participants' behavior inside the simulator itself, i.e., how they interact with the simulator. This research raises awareness of the adaptation requirements for driving simulator experiments by reviewing the literature and identifying the risks associated with not properly addressing adaptation. The contributions of this thesis can be divided into the following major categories.

1.3.1 Development of a Methodology to Measure Adaptation

A methodology is proposed to measure performance, as an indicator of adaptation, which can show that a learning process often takes place in the first few minutes of driving in a simulator. The developed methodology is based on the concept of the learning curve and on how learning a task has been addressed and explained in the field of psychology. Adaptation and learning are both subjective concepts, and therefore a performance function will be defined to quantify how well a task is performed. Following the literature in the field of psychology, the performance function will be defined as how quickly and/or how accurately a person can perform a trial of a task, and the pattern of improvement in performance values over several trials of a repeated task is plotted. Numerous studies, as will be discussed in Chapter 2, have shown that the pattern of performance values can be modeled by a negative power curve for both cognitive and motor skills. Such a mathematical model supports the idea that improvement is very quick at the beginning of practice but becomes significantly slower later and eventually converges to a constant value. It will be shown that a negative power curve is also an appropriate candidate for modeling the learning period in this thesis; therefore, adaptation is a form of learning, which can impose an extra mental load on participants and consequently limit their mental resources for responding to the stimuli in simulator research. A person will generally be considered adapted when there is no significant improvement in their performance values for a few repetitions of a task.

As will be argued in Chapters 3 and 4, participants in a driving simulator usually have the required skills to drive and control a real vehicle, and therefore the purpose of the practice scenario should be to give them enough time to become familiar with the controls in a simulator. It is expected that as soon as the participants learn the response of the simulated vehicle to their inputs to the steering wheel and the gas and brake pedals, they can use their existing strategies to drive through the scenario. Therefore, two practice scenarios will be defined to monitor participants' performance with the pedals and the steering wheel. The first practice scenario, explained in Chapter 4, will be a repetitive acceleration/deceleration task on a straight road, designed to give participants a chance to practice how the brake and gas pedals respond to their inputs. The second practice scenario, discussed in Chapter 5, will be a repetitive cornering task that gives participants the opportunity to learn how the simulated vehicle responds to steering inputs. The results from both chapters will show that a power curve can model the pattern of performance function values. However, some irregular patterns are also observed, which will be analyzed using the theoretical concepts provided in Chapter 2. It will be shown that the performance function values provide a robust measure to distinguish between adapted and non-adapted participants.
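To make the adaptation criterion concrete, the sketch below fits a negative power curve to one participant's per-trial performance values and flags the participant as adapted once the fitted curve predicts only negligible further improvement. This is a minimal illustration of the general approach described above, not the procedure implemented in this thesis: the error values, the window and tolerance parameters, and the function names are invented for the example, and the curve fit uses SciPy rather than the tools used in this research.

```python
import numpy as np
from scipy.optimize import curve_fit


def power_law(n, a, b, c):
    # Negative power learning curve: predicted performance value on trial n.
    return a * np.power(n, -b) + c


def fit_learning_curve(values):
    # Fit P(n) = a * n^(-b) + c to one participant's per-trial values
    # (here the values are an error-type measure, so lower means better).
    trials = np.arange(1, len(values) + 1, dtype=float)
    p0 = [values[0] - values[-1], 0.5, values[-1]]  # rough starting guess
    params, _ = curve_fit(power_law, trials, values, p0=p0, maxfev=10000)
    return params  # (a, b, c)


def is_adapted(values, window=5, rel_tol=0.05):
    # Treat the participant as adapted when the fitted curve predicts less
    # than `rel_tol` relative improvement over the last `window` trials.
    values = np.asarray(values, dtype=float)
    a, b, c = fit_learning_curve(values)
    n_last = len(values)
    n_prev = max(1, n_last - window)
    recent_gain = power_law(n_prev, a, b, c) - power_law(n_last, a, b, c)
    return recent_gain < rel_tol * abs(power_law(n_last, a, b, c))


# Illustrative per-trial error values for one participant (invented numbers).
errors = [2.9, 2.1, 1.6, 1.4, 1.25, 1.20, 1.18, 1.16, 1.15, 1.15]
print(fit_learning_curve(np.asarray(errors)))  # fitted (a, b, c)
print(is_adapted(errors))                      # whether improvement has leveled off
```

The same fit, applied after only the first few trials, can also be extrapolated forward to estimate how many more repetitions a given participant is likely to need, which is the prediction idea outlined in Section 1.2.4.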
This methodology can help researchers and practitioners reduce operational costs by excluding drivers with low learning rates when the time and budget of a research project are limited. It can also increase the statistical power of their research by starting the experiment earlier for those with higher learning rates. The existence of such a general methodology will also allow researchers and practitioners to carry out more ethical research by identifying participants with low learning rates and limiting their exposure to a driving simulator, which may cause symptoms of simulator sickness and discomfort.

Moreover, applying the results of this research to any simulator study can help ensure that no extra mental load is imposed on drivers as a result of the learning process. This helps to minimize any alternative explanation for a certain driver's reaction and to associate it only with the stimulus coded into a simulator scenario. Properly addressing adaptation will result in more robust conclusions and can improve the internal validity of research in a driving simulator. The proposed methodology is in a general form, so it can be modified, calibrated, and applied to various practice tasks and driving simulators. By defining a proper performance function, this method will provide researchers with a tool to consistently and systematically address adaptation when conducting driving simulation studies and improve the quality of research done with driving simulators.

1.3.2 Measuring the Impact of Practice

As will be explained in Chapter 2, there may be a residual effect transferred from practice to the main task. Therefore, it is essential to know how a practice scenario may affect the results of a simulator study. To measure this effect, a simulator study will be designed with a slalom task as the practice scenario and the cornering task explained in Chapter 5 as the main task. The performance of participants on the cornering task in this group (experimental group) will be compared to the performance of the group with no prior practice (control group) to measure and later explain the differences.

The results of this experiment will show that participants tend to carry over the major subskills they perform during the practice scenario. For example, because the slalom task was mainly focused on achieving higher speed, the experimental group had significantly higher cornering speeds in comparison to the control group. Such differences between the control and experimental groups in Chapter 6 highlight the importance of a neutral practice scenario and, for the first time to the best knowledge of the author, quantitatively measure how practice may affect the conclusions of simulator research. A more in-depth understanding of how skill acquisition progresses during practice and later transfers to the main scenario will significantly help researchers to design a proper practice scenario that does not introduce any systematic error into the results of experiments.

1.4 Organization of the Thesis

Chapter 2 is dedicated to the literature review and is divided into two sections. The first section reviews how adaptation has been addressed in the driving simulation community, which further highlights the contributions of this research. The second section reviews the concepts of skill acquisition, the power law of learning, and skill transfer from one task to another, and finally the major theories of skill acquisition are examined in terms of both cognitive and motor skills.
This section will provide a basis for analyzing and explaining subjects’ behavior and adaptation patterns in a simulator. Chapter 3 outlines the research methodology in detail and examines the validity of the research method in regards to internal, external, and construct validity. Three different scenarios are defined to analyze how adaptation and transfer occur in a simulator, and an appropriate performance function is developed for each scenario to track adaptation. Potential limitations of the methodology are also explained at the end of this chapter in respect to internal, external, and construct validity. 19  Chapter 4 explains the first experiment, in which adaptation to control the gas and brake pedals is studied and the appropriateness of a power curve to model the learning process is examined. Chapter 5 is the second experiment dealing with adaptation to the steering wheel in a cornering task. Once again, the power curve proves to model adaptation to another task in driving simulators. The subjects’ results in Chapter 5 are considered as a control group in Chapter 6, in which the subjects first practice a slalom test before driving in the same cornering task as in Chapter 5. The impact of practicing this slalom test was analyzed by comparing the adaptation times and patterns to the cornering task for the subjects in Chapters 5 and 6. The distribution of adaptation times and correlations between age, experience, gender, and adaptation times were studied. The impact of skill transfer is studied in detail, which can provide a basis for a better practice scenario. Chapter 7 is the conclusion, which provides a summary of all the findings and also recommendations for future research.  20  CHAPTER 2 LITERATURE REVIEW The literature review is provided in two separate sections. The first section explains how adaptation has been addressed in the driving simulator community, and the second section explains different aspects of skill acquisition, theories, and how and under what circumstances skills are transferred from one task to another. The first part highlights the current shortcomings in the simulator community’s attempts to adequately address adaptation, and the second part lays out a foundation to analyze and understand the subjects’ learning process in this research. 2.1 Part One: Adaptation to Driving Simulator A literature review has been carried out to demonstrate how driver adaptation is currently addressed and identified in the driving simulator community. The researchers were classified into three major groups. The first group did not report any concern regarding adaptation and hence did not reveal any details about whether or not a practice session was given prior to the experiment (10-28). It is not possible to verify whether any practice session was provided prior to the experiment. Even if there was a practice session, it cannot be analyzed to identify whether or not the method and duration were properly arranged. If a practice scenario was not used, there could be validity concerns regarding the results of the experiments in this group. Knodler et al. (16), for instance, conducted research into the effectiveness of a Flashing Yellow Arrow (FYA) signal for permissive left turn at intersections with large medians. Part of this research was carried out on a driving simulator and asked participants to drive in an urban environment, completing a route that consisted of several intersections. 
The purpose was to measure how well drivers perceived a FYA and could properly react to it as they drove. This study provides no information on whether or not a practice session was provided prior to the experiment. If there was no practice session, or if it was not adequate, drivers could have been busy with the adaptation process, at least during the first few minutes of the experiment. If drivers are not adapted, controlling the car puts more mental load on them in comparison to a situation where they drive in the field. This extra mental load in urban areas with complicated traffic patterns may result in more errors, as participants lack sufficient mental resources to process the whole environment and respond correctly to traffic lights. Therefore, the outcome may be biased towards one side, and a systematic error can be introduced into the results and ultimately harm the validity of the conclusions and recommendations. However, it should be mentioned that adaptation is not always required for the research result to be valid, and there may be some exceptional research examples that can be conducted even without a practice session. For instance, Lidstrom (23) carried out an experiment in a simulator to provide a virtual three-dimensional view from the inside of a tunnel that had not been built yet. This strategy was intended to help engineers have a more realistic view from inside the tunnel prior to its construction and to decide what positions were best suited for the placement of traffic signs. As long as the simulator was utilized as a virtual tour in this research, and in fact driving and its attributes were not the focus of the study, it could have been done without any practice scenario. The second group, as explained in Sections 2.1.1 to 2.1.3, provided a practice session for the participants and recorded no data at this stage, which suggests that they appreciated the importance of adaptation. However, as will be explained below, there is no consistent approach to designing a practice session. Some researchers used a fixed time or distance, others used the drivers' self-assessment of adaptation, and some did not report any details about the design or implementation of a practice session. Moreover, the literature identified no methodology to verify whether drivers had adapted to the driving simulator at the end of a practice session. The third group, consisting of only one study and discussed in Section 2.1.4, quantitatively analyzed adaptation to the steering wheel, but no model was developed to propose how adaptation occurs. The methods used to design the practice scenario, along with this quantitative research and the shortcomings of the methods, are discussed further in Sections 2.1.1 to 2.1.4. It should be emphasized that the final conclusions and results of the following studies may still hold even if they are repeated with all participants adapted prior to the experiment. The point of this section is not to dismiss the results of all the following experiments, but to highlight the potential impact of inadequate adaptation in simulator research, which can introduce a systematic error into the results. Whether or not such a shift in results would ultimately change the final decision to wrongly accept or reject a hypothesis in a specific study is beyond the scope of this review; only potential threats are meant to be identified.
2.1.1 Fixed Time or Distance

2.1.1.1 Review and Sample Works

A common approach among researchers was to give participants a practice scenario with a fixed length of time, distance, or a combination thereof prior to the experiment (29-61). However, the time and distance used in different studies varied significantly, from two minutes given by Bass et al. (29) to two full days in the experiment conducted by O'Neill et al. (60). Upchurch et al. (30), for instance, used a short segment of a freeway that was 1.25 to 2.25 miles long to ensure participants were adapted before they took part in the experiment. They intended to identify the optimum location for exit signs in a tunnel in order to facilitate drivers choosing the correct lane prior to exiting the tunnel. Different sign locations were coded in the simulator to identify which ones the participants could see best. As was previously argued, if drivers are not completely adapted to the simulator, the load imposed by the adaptation process may consume a portion of their mental processing power and, therefore, they may not be able to identify and see the exit signs properly. Driving in a tunnel while trying to locate exit signs can be regarded as a challenging task for drivers unfamiliar with an area, and therefore those who were not completely adapted to the simulator were prone to miss the signs more frequently than they would have in the field. A fixed distance was also used on some other occasions, including in the research done by Jenkins and Rilett (33, 34), which involved five kilometers of rural road and took approximately four minutes to complete. In this study, the authors reported that they observed drivers having good control of the simulator vehicle, but they performed no quantitative analysis to prove adaptation had occurred. Some other researchers, like Maltz and Shinar (31), used a combination of time and distance. Their practice consisted of a two-minute practice drive, followed by a drive on a 1.5-mile road that was meant for drivers to practice a secondary task. Most researchers in this group have used longer practice drives, lasting five (35-47), six (48), ten (49-53), or fifteen (54-56) minutes. Bella (50), for example, used a 10-minute practice scenario in an unidentified setting to compare the speed collected in the field with that recorded in a simulator experiment. The goal of the study was to calibrate the measured speeds on a driving simulator for later research purposes. Traffic speed was collected along a work-zone area on a highway, and later, researchers created the same area virtually in the driving simulator. The speed profile of participants in the driving simulator was compared to that collected in the field to find a transfer function between the simulator and field speeds. It was demonstrated that in almost all the measurement sites, the average speed recorded in the simulator was lower than what had been observed in the field. This may partly be explained by the fact that some of the participants in the simulator may not have been completely adapted to it prior to starting the experiment. Therefore, they consumed some mental processing power in learning how to fully control the vehicle, which could result in lower speed. By driving slower than normal, participants had to deal with a smaller amount of information per second, and therefore, this could have been a mechanism to compensate for the lower-than-normal processing power available to them.
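For illustration, a speed transfer function of the kind Bella derived could be estimated from paired site measurements with a simple regression. This is only a sketch under assumed data: the paired speeds below are invented, NumPy is assumed to be available, and the linear form is just one plausible choice rather than the calibration actually used in that study.

import numpy as np

# Hypothetical paired observations (km/h): mean speed at the same measurement
# site recorded in the simulator and in the field (values invented for illustration)
sim_speed   = np.array([62.0, 68.5, 71.0, 75.5, 80.0])
field_speed = np.array([68.0, 73.5, 77.0, 80.5, 86.0])

# Least-squares fit of a linear transfer function: field ~ a * simulator + b
a, b = np.polyfit(sim_speed, field_speed, 1)
print(f"field_speed ~ {a:.3f} * sim_speed + {b:.2f}")

# The fitted function can then be applied to new simulator measurements
print(a * np.array([65.0, 78.0]) + b)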
Some other researchers used much longer practice sessions. Van Winsum et al. (57) employed a thirty-minute practice session to allow drivers to get comfortable making lane changes, and Anderson et al. (58) provided drivers one hour of training with the driving simulator. Drivers participating in the study by Ranney et al. (59) received one hour of training, followed by 4.5 hours of practice driving on the first day, and then returned on the second day and received another thirty minutes of practice. O'Neill et al. (60) gave drivers two full days of training, which was identified as the longest period in this group. An excessive amount of practice, as was provided in some of the above studies, increases the time and cost required to conduct a certain experiment. As long as most studies are limited in budget and time, this will result in a smaller sample size, which can harm the statistical power of the research.

2.1.1.2 Shortcoming of the Method

Different people adapt to simulators at different rates, and a fixed time or distance means some participants may not have adapted after the practice session, and thus the data collected during the experiment can lead to inaccurate conclusions. On the other hand, some useful data was not recorded for the drivers who learned faster but were required to continue to practice, which can potentially reduce the statistical power of the research and increase the operational costs of a study. Therefore, the first and most important deficiency of this method is that there exists no tool to make sure drivers are adapted when the practice session ends. Moreover, the large range of practice times and distances seen in the literature suggests that a fixed and predefined standard is unable to provide researchers with a tool to plan their practice scenario or to assure that adaptation has occurred. Therefore, this method is not transferable, and even if it proves to work under certain circumstances for one simulator, one group of people, or a particular task, it is not possible to apply it to other simulators, people, and tasks.

2.1.2 Driver Self-Assessment

2.1.2.1 Review and Sample Works

Some researchers (62-67) asked participants to judge their own ability to control the simulator vehicle and started the experiment just after participants reported they had adapted. Maltz et al. (62) used a combination of driver self-assessment and a fixed-time practice session of 3 minutes. On completing the fixed-time practice, drivers were asked if they felt comfortable and in control of the simulator, and only then were they permitted to start the experiment. Fisher et al. (64) and Pradhan et al. (65) asked participants to drive as long as they wished and until they felt comfortable; however, the reported times are not explained in detail. The following two studies reported the time it took for participants to report themselves adapted. McAvoy et al. (66) found participants chose to stop practicing after approximately ten minutes, but Peli et al. (67) found participants continued to practice for fifteen to thirty minutes.

2.1.2.2 Shortcoming of the Method

The driver self-assessment method uses an assurance tool (i.e., the driver's feeling), making it superior to the fixed-time or fixed-distance method. However, to make a self-assessment, drivers may try to compare their level of comfort with how they normally feel when driving in the real world.
Because driving in a simulator may not feel the same as driving in the real world, this comparison may be inappropriate and lead the driver to an inaccurate assessment. It is even possible that drivers may experience some level of discomfort as a symptom of simulator sickness and report feeling comfortable so that they can hurry and complete the experiment. Moreover, as long as the perception of adaptability varies from one individual to another, this method does not provide the same conditions for everyone. Some participants might believe they have adapted while they actually have not, and vice versa. As a consequence, the accuracy of the collected data will partly be a function of the driver's perception, potentially leading to inaccurate conclusions. Finally, people have an intrinsic tendency to overestimate their abilities and do not often have a fair and realistic view of their capabilities, and therefore their judgment is not a valid indicator of when they are adapted. In psychology, this perception is usually referred to as "the optimistic bias" or "the above-average effect," and it will be explained in more detail in Chapter 5.

2.1.3 Undefined Procedure

2.1.3.1 Review

In addition to those studies in which the predefined practice time or driving distance approach and/or the driver self-assessment approach was used, there exists a body of research in which the participants were reported to have been given one or more practice drives, but the authors did not provide sufficient detail for readers to ascertain what approach was used (68-88). If these researchers had employed a novel approach to ensure learning and adaptation, it is expected that it would have been reported in detail. Therefore, it is believed that these researchers used the fixed-time, fixed-distance, and/or driver self-assessment methods.

2.1.3.2 Sample Works and Shortcomings

It is difficult to judge the studies that did not report any significant information or details regarding the practice time and scenario that were employed. There is therefore no assurance of adaptation, and the outcome of a study might have been negatively affected. For instance, Horrey and Wickens (69) conducted an experiment to measure the glance duration to an in-vehicle display during a certain driving task. If the participants are not well adapted to the simulator, they may be less likely to be confident, and therefore, their glance periods to the in-vehicle display may be shorter than what might have been measured in the field, to compensate for the cognitive load dedicated to the adaptation process. Another example of how results may be negatively affected by an improper practice scenario can be found in the study undertaken by Hegeman et al. (70). These researchers attempted to measure the effectiveness of a passing assistant system that provides information to drivers on whether or not it is safe to pass at each point in time. Each participant drove in a few scenarios, once with and once without the passing assistant. The sequence of scenarios was counterbalanced to overcome any carryover from one setting to the next. However, as explained below, this strategy alone may not be enough to eliminate the systematic error that improper adaptation can introduce into the results. Those drivers who started with a warning system were in effect taught to rely on the system to assist them in making a passing decision.
However, in the second part of the experiment, they were left without any assistance, which can result in fewer attempted passes because a driver is not sure when it is safe to pass. On the other hand, those who started with no assisting system were engaged with adaptation and therefore attempted to pass less frequently. In the second section, not only were the drivers more adapted, but they were also introduced to an assisting system, so they were even more likely to pass. Therefore, participants without an assisting system may pass less frequently in either of the two counterbalanced scenarios. This, of course, can create a systematic error, resulting in a lower passing frequency in the "without assistance" setting.

2.1.4 Quantitative Analysis

2.1.4.1 Review

The only research with an in-depth study of adaptation was done by McGehee et al. (89). This study measured the time required for steering behavior adaptation to occur for 80 older and younger drivers in a fixed-base simulator located at the University of Iowa. The simulator is called the Simulator for Interdisciplinary Research in Ergonomics and Neuroscience (SIREN) and is built around a complete 1994 GM Saturn. The simulator had four LCD projectors that provided a 150-degree forward field of view and a 50-degree rear field of view. A built-in steering wheel sensor recorded the steering wheel position at 30 Hz, and a torque motor provided steering feedback to the drivers for a more natural driving experience. Although accelerator and brake pedal sensors logged driver foot activity during the scenario, no analysis of brake and gas pedal patterns was provided in this research. The scenario consisted of 25 minutes of driving on a straight, rural two-lane roadway, and the purpose was to measure the difference between the adaptation times for older and younger drivers. Participants had to interact with different road elements like traffic signals, intersections, and adjacent traffic. There were also a few situations coded into the scenario in which the drivers had to react to prevent an accident; however, the location and type of these situations are not reported. Steering behavior was analyzed in three different road segments, which were reported as the "uneventful" portions of the scenario. The first portion started when drivers reached a speed of 1 m/s on the shoulder, and ended when they reached the center of the right lane. Segment two started immediately after segment one was finished, and it lasted for 68 seconds. Segment three was also on a straight segment of the road, and it started approximately 250 seconds after the scenario began and lasted for 68 seconds. Eighty participants, including 24 younger and 56 older drivers, took part in this experiment. Steering wheel reversal, steering wheel input, and lane position deviation were measured and analyzed to discover when adaptation occurred. The Fourier transform of the steering wheel signal and of the lane deviation was calculated to measure the amplitude of the low, mid, and high frequency inputs drivers used to control the car. (Fourier analysis is a mathematical method to transform a signal from the time domain, a description of signal amplitude over time, to the frequency domain, a description of signal amplitude over frequency; it can differentiate between the quick steering inputs that occur when a driver is weaving erratically and a slow, gradual drift from side to side.) It was concluded that steering wheel angle deviation was consistently reduced for both younger and older drivers from section one to section three. Older drivers showed steering wheel deviation of 21.9, 4.4, and 2.3 degrees
for sections one, two, and three, respectively. The measurements of steering wheel deviation for younger drivers were 18.2, 3.2, and 1.8 degrees for the same sections, respectively. The average number of steering wheel reversals greater than 6 degrees per minute for the younger drivers was 16.4, 2.9, and 0.8 for the three consecutive sections, while that of older drivers was measured at 13.9, 4.0, and 1.8. Following the numerical analysis of the dependent measures, a time series of steering wheel position was drawn, and it was shown that the number of participants who had steering inputs larger than 6 degrees was reduced throughout the scenario. In section three, only a few showed a steering input greater than 6 degrees. The evaluation of adaptation was based on comparing the steering signal to a global and fixed benchmark equal to 6 degrees. Based on McGehee et al.'s methodology, a participant with two or more steering corrections larger than six degrees in each segment was considered to be still in the learning process and not yet adapted. Not considering the outliers, they concluded that "most" younger and older drivers adapted to the simulator within 120 seconds from the beginning of the scenario, while they also suggested two other adaptation periods of 240 seconds and 300 seconds throughout the article. The results did not support a significant difference in adaptation time between older and younger drivers, but indicated that a few drivers could not adapt even after 240 seconds while some showed stable behavior soon after the experiment started.

2.1.4.2 Shortcomings of the Method

Although this study provides very good insight into driver steering adaptation in a specific scenario, there are various areas that future methodologies can improve on. a) The study does not propose a flexible method to address adaptation to pedals or other tasks. Although pedal signals were reportedly collected during the experiment, no analysis or conclusion is given on how adaptation may occur in controlling the pedals. As long as fixed-base simulators lack the feedback mechanism for acceleration that exists in real cars, drivers' perception of speed and acceleration may not be correct, and therefore, they have to adjust their speed using the numbers they read on the instrument panel. This process, as will be shown in Chapter 4, requires adaptation, and its pattern should be analyzed with a comprehensive methodology. b) Considering the differences in people's driving styles and learning processes, the use of a global benchmark of 6 degrees is restrictive. Some participants may be adapted but, based on their normal driving style, still have steering movements greater than six degrees. This method is insensitive to differences in driving style and judges everyone based on the same fixed and predefined threshold. Any future methodology should be able to define a more generic and comprehensive performance function to measure adaptability. Although using a relative or a fixed benchmark may be inevitable in some cases, the methodology should mainly focus on individual behavior patterns to recognize adaptation time rather than finding a universal benchmark.
33  c) The driving scenario, which was performed on a straight segment of a rural roadway, does not provide enough opportunities for drivers to observe the vehicle’s output to all possible steering inputs. Therefore, in such a setting, drivers are only adapted to a static domain of controlling the car on a straight line. As long as they do not experience controlling the car in dynamic and transitional situations (negotiating a corner, making sudden maneuvers, etc.), their adaptation domain is not comprehensive and limited to statically controlling the simulated vehicle. Therefore, any suggested methodology should consider adaptation to dynamic situations in which drivers have an opportunity to measure the vehicle’s output in a more realistic manner and gradually modify their inputs (adaptation process) until their ideal output is achieved. d) The methodology does not address the impact of adaptation on an experiment. As will be discussed later in this chapter, and further in Chapter 6, a residual effect could carry over from the practice session to the main experiment. If the practice session is not well designed, this effect can introduce a systematic error to the results of the experiment. Therefore, an improved methodology could address the impact of skill transfer from the practice session to the experiment. e) No explanation is provided in the current methodology concerning why any of the resulting behaviors occur. An understanding of the underlying psychological processes of adaptation can significantly help improve the way practice scenarios are designed and timed. Therefore, the suggested  34  methodology should be able to analyze the adaptation process and should not only rely on finding a time when most drivers adapt to a certain simulator. 2.2 Part Two: Psychology of Learning 2.2.1 Overview Because very little work has been done in the field of driving simulators to analyze drivers’ learning and adaptation to simulators, a review of the literature in the driving simulation field alone does not provide enough background and understanding of how skills are learned and how they are transferred from one context to another. Therefore, this section of the chapter will examine how learning and skill transfer have been generally addressed in the field of psychology. “Learning” can be defined as the storage of information in memory about a certain response to a specific stimulus, and “skill acquisition” is an extended form of learning by which a person relates a family of similar stimuli-response pairs to develop knowledge of how to respond in certain situations (90). The two terms have very close and interrelated meanings, and are often used interchangeably in the literature. As long as both terms are abstract concepts and not quantifiable, a quantitative measure usually defined as “performance” represents how well a skill is learned. The performance, as will be discussed below, usually refers to the time it takes a person to perform a task or to a person’s accuracy in doing each iteration of that task. Although the first published research in learning dealt with a cognitive task (91), most early research into skill acquisition and learning was largely concerned with motor skills until the early 1960s. The goal of early studies in this field was to 35  identify the best methods to improve the training in motor skills. 
After the introduction of more robust theories and models in cognitive psychology in the mid-1960s, learning became a topic of interest for cognitive psychologists, too. At first, researchers focused on mathematically modeling performance as a result of practice. Although there has been great success in modeling performance improvement with a power curve, some irregularities were identified as well. For example, for certain experiments and tasks, there existed a period in the learning process in which performance stopped improving, before starting to progress again. The power curve could not explain such irregularities, and therefore, after the 1960s and with advancements in cognitive psychology, researchers began to analyze the learning mechanism more closely in order to explain how and under what circumstances learning occurs. Some researchers proposed learning theories that are still in use today, and some theories were not taken seriously due to incomplete or inappropriate explanations of certain aspects of learning. One of the most challenging and debated topics in validating a skill acquisition theory was how the theory can address skill transfer, which explains how and under what circumstances the skills learned for a specific task can be used to perform another (similar) task. As driving in a simulator draws on a great portion of participants' driving skills, and the adaptation process is in fact a process of such skill transfer, this aspect of learning will be studied and explained in more detail. Some aspects of skill transfer explained here will be employed later in Chapter 6 to analyze the data and also to explain how a negative skill transfer may occur due to an improper practice scenario design. In the following sections, different definitions, theories, and previous studies in skill acquisition and transfer are studied in detail, and the concept and history of the power law of learning are investigated as well. These concepts will provide the necessary background to analyze and explain how learning takes place, progresses, and may be transferred in a driving simulator.

2.2.2 Learning Curve

The graph of performance, when plotted against the amount of practice, is usually a smooth curve, referred to as a learning curve. The first study to plot this curve and explore this relationship was undertaken over one hundred years ago by Ebbinghaus (91). He was interested in identifying the elements involved in storing and forgetting information in memory. The element of practice in his research consisted of participants rehearsing lists of nonsense syllables (e.g., HOKW) thousands of times in order to memorize them. Performance was defined as the number of syllables participants remembered as a result of practice, and Ebbinghaus found that memory generally improves with practice. The most significant aspect of his research involved drawing the graph of performance against practice, which provided a characteristic shape that was later known as the learning curve. The impact of his findings was so great that it was even extended beyond human learning, to learning in societies and organizations and even to how animals learn. Snoddy (92), in a study of mirror-tracing of visual mazes, was probably the first researcher to notice that the logarithm of performance time against practice usually forms a straight line, and thus the performance time can be described as a power function of practice, as indicated by Equation 2.1 below:

T = T_min + C·P^α     (Eq. 2.1)
where
T: time to complete the task on each trial
T_min: the minimum time in which the task can be performed
C: a constant
P: amount of practice
α: learning rate, a value usually between 0 and -1

He showed that a pure power function, i.e., T_min = 0, often provides a very good fit to data, especially in cases where a lot of practice is involved in learning a task. Wright (93) realized that the performance of a group of individuals can also be modeled by a power curve. He described the effect of practice (i.e., having produced more units) on labor productivity, where performance was measured as the time it took a group to produce one aircraft. He showed that the more units the group produced, the less time it took them to produce the next unit. Ferster and Skinner (94) studied how performance in rats and pigeons increased through practice. Animals were kept hungry in boxes, and food was provided only if they pushed a button or pressed a lever. The animals' pressing on those objects was recorded by a pen on continuously moving paper. The researchers were interested in how the rate of pressing the button (or pushing the lever) increases over time, and what the shape of the learning curve would be. De Jong (95) is also known to have discovered this mathematical relationship, seemingly independently and without knowing of Snoddy's research results. Studies done later by Crossman (96) provided more robust evidence that performance in skill acquisition can be modeled with a power function, which he named the De Jong law in reference to De Jong's 1957 work. Crossman revisited the data of many experiments in different fields, including cigar rolling, card sorting, and the addition of digits, and confirmed that learning in all those domains can be modeled with a power curve. Newell and Rosenbloom (97) also did a comparative study in which they compared the goodness of fit for a power curve and other negatively accelerated functions, such as the hyperbolic and exponential functions, to model the performance function in a variety of domains. They discovered that the power function provided the best overall candidate to model how performance improves with practice. They were so impressed by how well performance in skill acquisition can be modeled by a power function that they referred to the regularity as the "power law of practice." In fact, the function became so prevalent and accepted in the literature that it has almost become a fact in psychology. The power law of practice was not without its criticism, and there were doubts about its lawfulness. Some early researchers questioned the appropriateness of the power function to model the learning curve. Hull (98) suggested that learning develops exponentially. There were also suggestions on using a hyperbolic function to model performance (99, 100). Mazur and Hastie (99) reviewed data from more than 70 studies and concluded that generally a hyperbolic function resulted in a better fit than an exponential function. Recent research, including studies by Heathcote et al. (101) and Haider and Frensch (102), argued that it is averaging over a group of subjects, or over the subskills of a task, that results in a smooth power function shape. They have asserted that the performance of each single subject may not be modeled by a power curve if the skill under study consists of only very few subskills. In other
In other 39  words, a power curve is either observed in cases in which performance is averaged over a group of subjects or the skill under study has many smaller sub skills, which may or may not individually improve based on a power curve. However, despite all the disagreements, a power curve is known to be the best mathematical representation for modeling performance during the learning phase. The acceptance of the power law of practice grew to the point that researchers such as Logan (103) suggested that any learning theory can only be considered a serious work if it can explain why performance improves following a power function. Anderson (104) also expressed the same view in his book published in 1983. One of the most extensive studies in skill acquisition in the field of motor skills was by Siebel (105), which modeled the reaction time of subjects by a power curve. The study consisted of 10 lamps in front of the subjects and 10 response keys under their fingers. At each trial, a random combination of lamps was lit, and the subject had to press the corresponding keys. The experiment was repeated up to 40,000 times, the reaction times were recorded for each trial, and the improvement showed to be modeled by a power curve (R2= .991). Two recent studies by Groeger and Clegg and Groeger and Brady found that the performance of students in a driving school improves by practice, and that, more specifically, the improvement can be modeled by a power function. Groeger and Clegg (106) used the number of comments provided by the instructor as the (inverse) level of performance. Prior to that, Groeger and Brady (107) employed the actual errors made in observed lessons as the level of performance for each subject. Although the measures were different, the performance values in both studies, when 40  plotted against the amount of practice, could be mathematically modeled by a power curve function. They found that the power relationship between error and practice existed both for the general performance of the students and also for each specific task, such as turning left at an intersection. Further on, this chapter, by using The Component Theory of Skill Acquisition for the learning process, will discuss why a power curve generally provides the best fit to performance data. Adaptation in a driving simulator can be regarded as a process by which participants learn how to comfortably control the simulated vehicle. Due to the historic and widespread acceptance of power function to model learning, it is expected that the performance of drivers during the adaptation process can be modeled by a power function too. To ensure that the observed changes in performance are only caused by practice, a repetitive and identical task should be defined for the practice scenario to rule out the impact of any alternative explanation. As will be shown in Chapters 4,5, and 6, the power function can provide a reliable tool to mathematically model the learning process during the adaptation period in driving simulators. 2.2.3 Plateau Plateau is defined as a period in which performance stops during the practice, before starting to improve again. The first reported instance of plateau is in a study done over one hundred years ago by Bryan and Harter (108). Subjects showed plateau periods while learning how to receive Morse code signals. Interestingly, no plateau was observed for the process by which they learned to send Morse code. 
To  41  explain the existence of such a period, the researchers suggested that learning any skill consists of acquiring a hierarchy of habits. Lower-order habits, which are acquired first, may reach their maximum performance but are not completely automatic so as to leave enough mental resources available to develop higher-order habits, and therefore, the performance will stop for a while until enough mental resource is freed to attack the higher-level habits. Instances of plateau have also been observed and reported during training in typewriting skills (109), and recently Thomas (110) argued that it only can take place in certain circumstances when the task is complex and consists of many skill components that may be learned at different rates. Since the work of Bryan & Harter, there have not been many other notions of observing plateau in the course of practice. A few researchers, like Adams (111) and McGeoch (112), also reported observing plateau, but it was not generally a subject of great interest in the literature. A few instances of plateau were observed in one of the experiments described in this thesis in which participants had to maximize their speed in a cornering task while trying to minimize their error (staying in their own lane). A sample performance function shape with a plateau area will be explained in more detail in Chapter 5. 2.2.4 Part-Whole Training A group of researchers showed interest in finding an optimum method of training for different tasks. They wanted to know if it is more efficient, first, to train an individual in the parts of a complex task, and then to combine these tasks to achieve learning of the whole task, or to teach the whole task at once. Finding a definitive  42  answer was important to setting up a practice session for tasks that were difficult or dangerous to practice as a whole from the beginning, like flying an aircraft. However, as McGeoch and Irion (113) suggested, it was almost impossible to come to a conclusive and unique result for all types of tasks. Welford (114) also suggested that the best approximate answer to this topic is based on the task. He concluded that whole-task training is more effective for tasks that require the simultaneous performance of interrelated sub skills, like flying an aircraft. On the other hand, tasks that consist of independent sub skills that are performed in a sequence and without great interaction between them can benefit the most from a part-task training. Driving is regarded as a very complicated task that includes many subtasks, and therefore, it may be concluded that controlling the pedals and steering wheels of a vehicle should be performed simultaneously. However, the purpose of adaptation is not to teach the participants how to drive but rather is only to provide an environment in which they can make themselves familiar with the simulator’s response to their inputs. Therefore, adaptation should be regarded as an easier task in comparison to driving, and as a result learning how the pedals and steering wheel respond to the driver’s input can be performed by part-task training. Focusing only on one task will provide a one-dimensional and repetitive situation where the changes in performance are only attributed to one independent measure, i.e., adaptation to the gas or brake pedal, or adaptation to the steering wheel. This methodology of part-training will be used throughout the thesis to establish the methodologies for adaptation to pedals and steering wheel independently.  
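Before moving on to skill transfer, the learning-curve discussion above can be made concrete with a short numerical sketch. The fragment below fits Equation 2.1 to hypothetical per-trial completion times and compares the fit with the exponential alternative raised in the criticism of the power law. It is illustrative only: the trial data and starting values are invented, NumPy and SciPy are assumed to be available, and it is not the fitting procedure used later in the thesis.

import numpy as np
from scipy.optimize import curve_fit

def power_law(P, T_min, C, alpha):
    # Eq. 2.1: time on trial P as a negative power function of practice
    return T_min + C * P ** alpha

def exponential(P, T_min, C, k):
    # Exponential alternative discussed in the criticism of the power law
    return T_min + C * np.exp(-k * P)

# Hypothetical per-trial completion times (seconds) for a repeated practice task
P = np.arange(1, 13, dtype=float)
T = np.array([14.1, 10.9, 9.4, 8.7, 8.4, 8.2, 8.1, 8.0, 7.95, 7.9, 7.9, 7.85])

pow_params, _ = curve_fit(power_law, P, T, p0=[7.0, 7.0, -1.0])
exp_params, _ = curve_fit(exponential, P, T, p0=[7.0, 7.0, 0.5])

def r_squared(model, params):
    residuals = T - model(P, *params)
    return 1 - np.sum(residuals ** 2) / np.sum((T - T.mean()) ** 2)

print("power curve  R^2:", round(r_squared(power_law, pow_params), 4))
print("exponential  R^2:", round(r_squared(exponential, exp_params), 4))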
43  2.2.5 Skill Transfer There are two topics in this thesis that require a proper understanding of skill transfer. The first topic is adaptation to a simulator itself, which can be considered as the transfer of driving skills from the real world to the simulator. The second topic is how the skills learned in a practice scenario, which is designed for adaptation, are transferred to and may affect the main scenario. Due to the central role of those topics in this research, skill transfer is reviewed and studied in more detail here to better understand the mechanism of adaptation to a simulator and also to discover how well the subjects who learned to drive in a simulator and focused on a specific task could perform other tasks in the same simulator. The latter helps to better understand whether adaptation to a driving simulator can or cannot be considered task independent. The interest in research on transfer of skills was initiated from questions regarding how training in a certain situation can facilitate or impede learning in another situation. Training in flight simulators, for example, has been a research topic for which it was important to know to what extent the skills learned in a simulator can be applied to the real world of flying an aircraft. The focus of research in skill transfer has been to identify what aspects and components of a certain skill can be transferred from one situation to another, and consequently, what effect such transfer can have in learning the task in the new environment. From the beginning, there were different views on the conditions under which transfer occurs and whether it occurs at all. Thorndike and Woodworth (115) and later Thorndike (116) suggested that the amount of transfer between any two tasks is dependent on the level of similarity between the tasks. Their identical elements 44  theory was developed on the basis that the more similar stimulus-response pairs exist in two tasks, the greater will be the rate of transfer between the two. The theory was tested several times and was confirmed by researchers like Adams (111) and Duncan (117) to hold for both motor skills learning and for cognitive tasks like verbal learning (118). However, since this theory was published, it faced some fundamental criticism, too. For example, Judd (119) suggested that transfer is not only dependent on the stimulus-response pairs, but rather relies on a “deep structural relationship” between the tasks. Mieklejohn (120) also criticized the theory as being so restrictive that it can rule out the transfer altogether and found the stimulus-response an abstract concept that could not address transfer adequately. Although the theory fell short in describing many aspects of evident transfers, like those from writing with a pen to writing with a pencil, there was not a proper alternative theory until the late 1960s due to lack of a cognitive model of knowledge and learning. Development of the theoretical models since then has helped psychologists to provide more abstract discussions of the underlying process involved in learning and transfer. The literature has identified two major elements that can impact when and how transfer takes place: generality and specificity of the task, and training effect. Some researchers have argued that learning is task specific and that what is learned is a collection of mental snapshots of past specific experiences (121). 
Therefore, transfer does not occur from one task to a similar one, or even from one form of a task to another form, and as a result no transfer takes place at all. In a study by Logan and Klapp (121), the subjects were tested on alphabet arithmetic, i.e., they were provided with questions like "B + 3," and they had to answer with the letter "E." The researchers observed that practicing the alphabet arithmetic task on the first half of the alphabet did not have any effect on the subjects' learning pattern for the remaining letters. This result suggests that the subjects developed an experience that was specific to the letters they practiced and could not be transferred to other letters, and that therefore no transfer occurred. The results of a later study by Lassaline and Logan (122) also confirmed the lack of transfer for counting the number of dots. Subjects were given a frame of dots, and they had to answer with the number of dots in the frame. In the beginning, the response time was proportional to the number of dots, as the subjects typically started the experiment by counting the dots. However, later in the practice session, by becoming familiar with the patterns, the subjects reduced their reaction time, which became independent of the number of dots, as they began to automatically retrieve the answer from memory rather than counting the dots. When the numbers of dots and the patterns were changed, the subjects started counting the dots again, and the same pattern of learning as in the first phase was repeated, which again proved no transfer had occurred between the two phases. Task-specific learning and the fact that skills are specific to a certain situation were also observed in other studies in proofreading (123), in reading normal text (124, 125), and in the Stroop task (126). In all the foregoing studies, subjects did not show any improvement on a second task that was closely related to the first one they had done before. On the other hand, some research suggested that skills can be general, and therefore, what a subject learned for one task could be, to some extent, transferred to a similar skill, i.e., the nature of the tasks is the same, but they are different in form. In this paradigm, the extent to which transfer occurs is argued to be proportional to the extent to which the two tasks are similar in nature. Computer programming (127, 110), solving arithmetic problems (128), and syllogistic reasoning (129) are some of the examples for which researchers demonstrated a degree of transfer from one task to another similar one. For instance, Speelman and Kirsner presented subjects with many syllogisms to resolve. The syllogisms were presented in a two-part manner, as shown below:

Form 1:
All "A"s are "B"s.
All "B"s are "C"s.

Form 2:
All "B"s are "C"s.
All "A"s are "B"s.

As an example of Form 1, the participants had "All of the beekeepers are singers" and "All of the singers are politicians," which resulted in the conclusion "All of the beekeepers are politicians." Subjects first practiced on several combinations of Form 1, and then, for the second portion of the experiment, examples of Form 2 were shown to them. Although the response time dropped slightly for the second task, the performance at the beginning of the second part was faster than at the beginning of the first part. Moreover, the rate at which the performance improved in the second part was greater than in the first part.
Therefore, the researchers concluded that a degree of transfer occurred between the two tasks. Rabinowitz and Goldberg (130) suggested that the nature of transfer is dependent on the nature of the task. When subjects are exposed to the repetition of a specific set of stimuli and responses on a simple task such as alphabet arithmetic, they develop task-specific skills. As long as task-specific skills consist only of recalling a certain response to a specific stimulus, it is not possible to transfer such skills to another similar situation, because no rule other than memorizing the stimulus-response pair exists. On the other hand, practicing on tasks that require more complex processing of a range of stimuli in a certain fashion (like computer programming and reasoning) contributes to learning skills that are more general and can be transferred to similar situations. In other words, what is learned and transferred is the method of processing the information, which can consequently be used to process a new set of stimuli. The training method is also known to impact how learning and transfer occur. There are two important elements in practicing a task that have been identified as impacting the learning mechanism and the level of transfer. One is the number of situations that subjects are exposed to, and the other is the sequence in which those situations are introduced during the training. When subjects are exposed to many different stimuli, they tend to develop more general skills that can later be transferred to a similar situation. There are different studies that support the foregoing statement. For example, Schneider and Fisk (131) conducted research into a category search task. Subjects were given three categories and several words and were asked to pick the category that matched each word. There were two groups: one group had a practice session of 4 examples, and the other practiced on 8 examples. After the practice session, the subjects were assigned the same task but with completely different words. Data from this research suggested that those who had practiced on a greater variety of examples showed more transfer to the new words. Rabinowitz and Goldberg (130) also found that greater variation in stimuli results in shaping a more general form of skill and therefore a greater level of transfer. Subjects in the Rabinowitz and Goldberg study were asked to practice on alphabet arithmetic, and then they repeated the task on a different set of letters. Data from that study suggested that those subjects who had practiced on fewer items developed skills that were more item-specific in comparison to those who had practiced on a greater number of letters. Anderson et al. (132) suggested that people have an intrinsic tendency to find ways to reduce effort while developing strategies in learning a task. This is the perspective that Speelman and Kirsner (90) employ to explain why variety in practice leads to the shaping of a general skill. They argue that when subjects are exposed to only a limited number of simple variations in stimuli, they do not tend to develop general strategies, as it is easier to memorize the response to each single stimulus without knowing the underlying mechanism of why it happens. However, when the number of stimulus-response pairs increases, knowing the responses to all possible stimuli obviously requires more effort.
Therefore, developing a more general strategy by which subjects can respond to a variety of stimuli requires less energy, and the result of training is expected to be less item-specific. As long as these skills are developed to deal with more general situations, it is more likely that during the transfer they can cope with a new environment and produce a faster and better response for the new set of stimuli. The sequence or schedule by which the practice scenario progresses also impacts the learning, retention, and transfer. There are at least two types of practice, blocked and random. In the blocked order, subjects practice a specific type of a task for a number of times over a period called a block. Different blocks of practice, each containing only one specific task, are repeated one after another. In random-order 49  practice, the tasks are selected randomly, and therefore, the subjects do not know what stimulus they can expect in the next iteration of the task. It has been argued (133) that when the subjects are exposed to random-order practice, the learning rate may be slower, but it results in the more general form of skills with longer retention, i.e., skills can be remembered after a period with no practice. To achieve fastest adaptation time in a driving simulator, and as will be shown throughout this thesis, all the practices will be in blocked order. Based on the findings in the literature, this method is faster than the random order, and as long as long retention of skills is not an important issue for simulator experiments, this disadvantage of blocked order has no impact on the outcome. The effect of practice form and learning is part of a concept known as the “contextual interference effect,” and it was originally observed in motor skills (134), but it later proved to hold for cognitive skills too (135). The contextual interference effect shows how the training scheme can influence training and transfer, and it therefore highlights the need to carefully design the practice scenario by having the final goal of the practice in mind. It suggests that the practice scenario should be targeted towards the skills that the subjects should learn, and it should avoid engaging the subjects with skills not required for transfer. One implication of this concept can be seen in the literature concerning flight simulators and the fidelity of the task environment. The fidelity of a simulator is the degree to which a scenario in a simulator matches real-world situations. Research has shown that perfect fidelity is not required for a successful training (136), and in fact, some other research suggested that perfect fidelity may obstruct perfect skill acquisition because the 50  subjects are concerned with many extra stimuli, which can distract them from learning the targeted skills (137). Therefore, designing a proper practice scenario/apparatus requires understanding the task under study and then providing the best method by which subjects can learn general skills that can be easily and positively transferred to the main task. What has been discussed above only concerned positive or zero skill transfer. However, there is some evidence of negative transfer in the literature as well. The most cited and most famous example of negative skill transfer is the demonstration of the Einstellung (German word for attitude) phenomenon by Luchins (138). 
The Einstellung effect refers to the tendency to solve a problem with a previously known mechanism although there are simpler and faster methods to solve it. Luchins gave three jars of water to each subject. Each jar had a fixed capacity of A, B, and C. The subjects were asked to think of any combination of full jars that can result in a specific amount of water. The test was designed with only one possible answer, which was B – A – 2C. The experimental group practiced on this task for a few times before they were introduced to the second part with a different set of jars, for which the problem had two possible answers, A + C and B – A – 2C. The control group only participated in this second task, which could be solved either way. Luchins found that almost all of the subjects who had practiced the first task continued to use the same logic to solve the problem. In contrast, the control group almost always solved the problem with the shorter method. He concluded that the initial training encouraged the subjects to transfer and use the non optimal solution for the new task. Other studies showed that psychological stress, including pushing subjects to rapidly solve the second problem, increased the Einstellung effect 51  significantly. The older subjects also showed significantly higher negative transfer in comparison to the younger participants, with no significant difference for IQ and gender (139). The reason behind the Einstellung effect can be explained by the theories of inductive reasoning. Kendler and Kendler (140) suggested that adults and older children tend to act according to non continuity theory, i.e., they tend to choose a reasonable rule or methodology and keep using it as long as it can solve the problem. Subjects usually start to look for alternatives only when the accepted methodology fails to provide the correct answer. In the water jar study, subjects already had a methodology that still worked for the new problem. Although it may not have been the best method to solve the problem, they tended to reapply the same method because it worked for the second problem too. Later research on transfer in text-editing (141) and computer programming (142), however, suggests that negative transfer is in fact a positive transfer of non optimal methods from one task to another. Regardless of how negative transfer occurs and is explained, it should be considered an important concern in designing practice scenarios so that only the positive attributes of a skill are transferred to the main task. The impact of negative skill transfer from the practice session to the experiment will be shown and explained in detail in Chapter 6, and consequently, it will be used to propose how a proper practice scenario should be designed.  52  2.2.6 Feedback and Instruction The impact of instruction and feedback during training was a focus of concern for some researchers investigating skill acquisition to understand whether performance can be improved by knowledge of results. Although some researchers argued that there was no correlation between knowledge of results and performance improvement (143), most other researchers showed a link between the two. Thorndike (144) noted that knowledge of results had the most impact on performance improvement when the feedback was provided as closely as possible to the stimulus-response. He also found that as the number of feedbacks increased, the performance improved as well. 
However, Lorge and Thorndike (145) later concluded that there is an extent to which the feedback can be temporally close to the response and more frequent, or closer feedback may actually cause deterioration in performance. Bilodeau and Bilodeau (146) studied the effect of delay in feedback and the number of feedbacks on skill acquisition. They partly contradicted the results of earlier research by suggesting that performance was not affected by a delay in feedback and was only improved as a result of the frequency of feedbacks given to the subject. Later, Welford (114) studied the changes in performance as a result of withdrawing or delaying feedback for motor skills. He concluded that when knowledge of results is delayed, learning is slowed, and that the rate at which it slows down is proportional to the gap delay between the action and the feedback. Therefore, he again validated the results from Lorge & Thorndike, which suggested that immediate feedback is always best to improve the performance faster.  53  Feedback also has been shown to affect the quality of training and the retention rate for the acquired motor skills. Subjects who receive more feedback in motor skill training tend to perform best during the training, but perform worse during retention in comparison to those subjects who received less feedback or for whom the feedback was gradually withdrawn during the practice. Schmidt et al. (147) suggested that a knowledge of results and feedback is essential during the initial phases of the practice, but that later during the training it may start to produce dependency, which will result in a less deep understanding of how the skill should be performed. This idea supports another earlier study by Annett and Kay (148), which pointed out that temporary knowledge of results can only have a value if the subject starts to establish a deeper knowledge of the skill and gather all the possible required information by himself or herself. The researcher’s providing extensive feedback can potentially mask or distract the subject’s attention away from the natural cues of the task, which can be used to provide a proper response to a slightly different stimulus in future (149, 150). The learner, therefore, relies heavily on the feedback, and as a result is not able to construct a proper process on which he or she can rely for future use. Furthermore, as the feedbacks are forgotten from the memory, there is not much left to effectively process the information. In the experiments carried out for the current research, no direct feedback is given to the participants. Any feedback that was provided would have been a result of the instructor’s qualitative judgment based on the driver’s behavior and driving style, and as long as the reactions and driving styles are not the same, the level and nature of feedbacks would have been different for different participants. Such a 54  random nature of feedback could have introduced a heterogeneous effect among trials and/or subjects, and therefore it was decided not to offer any direct feedback to the participants during the experiments. However, for the second experiment, explained in Chapters 5 and 6, it was decided to provide a systematic, fixed, and universal feedback mechanism. In a pilot study, drivers showed that they did not have a good understanding of how well they were keeping the vehicle inside the right lane due to limited visibility of the front of the car. 
Therefore, a white line was added on the bottom of the central screen that completely covered the right lane. If the car was not exactly in the center of the right lane, the white line would extend beyond the lane and provide visual feedback for drivers to help them maintain the car within the right lane. Introduction of that fixed line improved how participants perceived their own performance, and therefore it was retained for the main experiment. 2.2.7 Theories There has been a great body of research dealing with how skill acquisition and the learning process take place in the subject’s mind as a result of practice. The theories can be divided into two major categories. The first group suggests that practice results in refining procedures to do a task, and therefore leads to improvement in the performance of skills. Others argue that practice itself is not a reason for skill improvement, but that improvement is a byproduct of practice, like the accumulation of knowledge. ACT-R (151-152), Crossman (96), SOAR (153-155), MacKay (156), and the connectionist theory (157) all belong to the first group, while Logan’s instance theory (103,158) is an example of the second group.  55  Another recent theory in the field of skill acquisition has been offered by Speelman and Kirsner (90), which will be the basis for the analysis throughout the current research. The theory is referred to as “The Component Theory of Skill Acquisition,” and it defines any skill to be comprised of a number of component processes that perform the various subtasks involved in the skill. Low-level subtasks are first developed by practice, and as soon as a process to compile a low-level subtask becomes automatic, enough mental resources are freed and allow the developing of other low-level skills or the combining of them to achieve a higher-level task. One example is learning how to read a text, which requires the reader, first, to learn a lower-level skill, i.e., reading the letters of the alphabet, and then after that becomes automatic, the reader can start learning to read a combination of letters, i.e., words and consequently, sentences. Speelman and Kirsner further developed this theory by combining it with the theory of complex systems (159), by considering the human brain as a complex system. The theory of complex systems suggests that each complex adaptive system is comprised of different interacting and greedy agents trying to attract as many resources as they can. As long as the resources are limited in any finite system, the agents will compete, and those that are more competitive in attracting resources will survive. Therefore, the primary interaction between the agents is defined by resource attraction and competition, and also by the adaptive nature of the agents modifying the rules to outcompete others. The agents in any given complex system grow in a sense by attracting more resources. The meaning of growth, however, is defined based on the complex system under study. Halloy (160) 56  has demonstrated that in such a complex system, the distribution of the agents’ size and frequency follows Zipf’s law, i.e., there are many small agents, some medium agents, and very few large agents. Halloy showed that in any complex system the frequency and size of agents has a log-normal relationship. 
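As a toy illustration of that size–frequency claim (a minimal sketch using arbitrary parameters, not an analysis of Halloy's data), drawing hypothetical agent sizes from a log-normal distribution reproduces the qualitative pattern of many small agents and very few large ones:

```python
import numpy as np

# Minimal sketch: sample hypothetical "agent sizes" from a log-normal
# distribution and count how many fall into small / medium / large bins.
# The distribution parameters and bin edges are illustration values only.
rng = np.random.default_rng(seed=1)
sizes = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

small = np.sum(sizes < 1.0)
medium = np.sum((sizes >= 1.0) & (sizes < 5.0))
large = np.sum(sizes >= 5.0)
print(f"small: {small}, medium: {medium}, large: {large}")
# Typical output shows many small agents, fewer medium ones, and very few
# large agents, which is the pattern described above.
```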
Using this theory as a metaphor and by considering the brain as a complex adaptive system, Speelman and Kirsner suggested that neurons, or a network of neurons, are parallel to agents and resource can be considered as acquiring and processing information. The agents receive, process, and transmit information. They compete to efficiently process information, and when the process outcome is satisfactory in achieving the task’s goal, those agents are more likely to be recruited in future instances of the task. Moreover, among those that can achieve the goal, the ones that can process the information faster are more competitive and attractive. Speelman and Kirsner suggested that the more an agent is recruited to perform a certain task, the more it grows, and therefore the more attractive it becomes. More attractive agents can join another agent or a group of other agents if the task requires such a combination of agents. Therefore, the agents or combination of agents that have been used very frequently grow more than the ones that were only recruited for specific tasks and not used as frequently. Referring to the distribution of agent size and frequency by Halloy and Whigham (159), Speelman and Kirsner suggested that there are a very large number of agents that have a specific purpose, a small number of general purpose agents, and that the relationship between the size and frequency of agents between the two extent of this spectrum can be modeled by a log-normal distribution.  57  Therefore, when a person faces a new problem, he or she starts using wellknown and generic agents to identify and comprehend the problem. After receiving feedback on the results, the person starts to identify better and more specified agents that can carry out the task more effectively. It is evident that discovering generic agents takes little time because there are few of them, they are well trained, and they have a history of solving similar problems, so they kick in fast. However, refinement of the process becomes more difficult as the selection of more competitive agents is more difficult due to the high number of task-specific agents. The log-normal relationship for distributorship of specific and general agents can explain why improvement in a task and the learning process, i.e., selection between the candidates, follow a negative power curve. Transfer was also explained by another aspect of the theory of complex systems, which suggests that the probability of attracting a resource is inversely proportional to the distance between the agent and that specific resource. In the cognitive concept of learning, Speelman & Kirsner suggested that the probability of an agent being selected to do a task is proportional to the similarity between the task condition and the normal condition previously processed by the agent. Therefore, practicing a task can impact learning and improvement in a later and similar task as agents exist in the network that can be recruited to do the task. As the Component Theory of Skill Acquisition is the most recent concept, and as it can well explain the power shape of learning progress and also skill transfer, it is best applicable to the current research. This theory will be frequently employed in the next chapters to explain how subjects develop and adapt their existing skills to a  58  driving simulator. The theory will be also used to explain how and under what circumstances negative transfer may occur.  
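Because the negative power shape of improvement is central to the analyses in the following chapters, a minimal sketch of fitting such a curve to a series of per-trial performance values is given below. The error values are made up for illustration, and scipy's curve_fit is used here only as one convenient fitting tool; it is not presented as the procedure used in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_curve(n, a, b):
    # Negative power (learning) curve: error = a * n**(-b)
    return a * n ** (-b)

trials = np.arange(1, 13, dtype=float)          # trial index n = 1, 2, ...
errors = np.array([5.1, 3.2, 2.6, 2.1, 1.9, 1.6,
                   1.5, 1.4, 1.45, 1.38, 1.41, 1.37])  # illustrative values

(a_hat, b_hat), _ = curve_fit(power_curve, trials, errors, p0=(5.0, 0.5))
residuals = errors - power_curve(trials, a_hat, b_hat)
r_squared = 1 - np.sum(residuals**2) / np.sum((errors - errors.mean())**2)
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, R^2 = {r_squared:.3f}")
# A good power fit followed by a flat tail with no further improvement is the
# signature of a learning phase that has reached automaticity.
```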
CHAPTER 4 ADAPTATION TO GAS AND BRAKE PEDALS

Wait.
To develop specific performance functions and to ensure that the research method is appropriate, an in-depth analysis of the proposed methodology is provided below. It describes how the research was conducted, what potential threats might have damaged the results, how they were addressed, and how the method was developed by controlling for the threats. Validity of the method is also evaluated in three different categories: internal validity, external validity, and construct validity, to highlight the applicability and limitations of the research method. The experiments for the analysis of adaptation are considered as a cause-effect relationship in which the cause is the amount of practice and the effect is adaptation. The research concept may be better visualized as shown in Figure 3.1.

Figure 3.1 Constructs, operationalization, and cause-effect relationship.

Both "practice" and "adaptation" are qualitative concepts, and therefore it is difficult to perform quantitative analysis on those concepts. Sections 3.1.3 and 3.2 will explain how operationalization—the way practices were implemented in the experiment—and performance were defined to quantify both of those concepts. Adaptation is a process, and therefore it cannot be studied by cross-sectional methods, i.e., at a fixed point of time; instead, improvement over successive trials should be observed and analyzed. If no, or little, improvement is observed during a few trials, the driver is considered adapted to the task. As a result, the study is categorized as a repeated-measures or time-series method. Figure 3.2 shows in more detail how the operationalization is arranged, and Figure 3.3 shows an expected outcome for the performance function.

R    x1   x2   x3   x4   …   xn
     o1   o2   o3   o4   …   on

Figure 3.2 Repeated measures as the structure of the experiment. The unit of analysis is each subject, selected randomly (R), driving in a set of practice scenarios.

Subjects go through practice scenarios X = {x1, x2, …} while their behavior is observed during that scenario, O = {o1, o2, …}. The observations are later quantified using a performance function to track the learning process. After repeating the same study for different subjects, similarities among the learning patterns and performance functions can provide a tool to categorize subjects into different groups, analyze each group's behavior, and explain the effect of practice on skill acquisition in the driving simulator. Chapters 4, 5, and parts of Chapter 6 provide evidence to support that the power curve was a suitable candidate for modeling the performance improvement for consecutive iterations of a task.

Figure 3.3 Expected outcomes for the performance function values.

The next purpose of this thesis is to study the impact of skill transfer, i.e., when a subject is adapted to a certain skill, how well this skill can be applied to other driving skills. As was suggested in the literature, there could be a transfer, positive or negative, from one task to another. Analyzing the type of transfer and how it occurs can help researchers design practice scenarios that have no or little negative impact on the experiment. To quantitatively analyze the impact of transfer from one task to another, a between-group comparison is required, and so the structure of the research is defined as shown in Figure 3.4.
R (Group A)    X1    O1
R (Group B)    X2    O2    X1    O'1

X1 = {x11, x12, x13, …}: a set of trials on task 1
X2 = {x21, x22, x23, …}: a set of trials on task 2
O1 = {o11, o12, o13, …}: observation points for task 1
O2 = {o21, o22, o23, …}: observation points for task 2
O'1 = {o'11, o'12, o'13, …}: observation points for task 1 after doing task 2

Figure 3.4 Experiment setup for studying the impact of transfer.

Group A practices task 1, and performance measures are calculated from an observation of driver behavior. Group B first practices task 2 and then repeats the same experiment as Group A. Any difference between O1 and O'1 can be attributed to transfer and to the practice that Group B had on task 2. This is the main objective of the third experiment, explained in Chapter 6.

There are several validity concerns that should be considered in any research dealing with cause-effect relationships. Internal validity examines how well the observed effect is in fact the result of the cause manipulated by the researcher, and not of any other alternative and uncontrolled variables. For example, any distraction during the practice scenario may cause disruption in performance that is not a result of practice. On the other hand, construct validity deals with how well the manipulation and the observed effects are a valid representation of the phenomenon under study. In the context of the current research, the question is whether the repetitive task is a good representation of practice, and whether the performance function considers all aspects of adaptation/learning. If the performance function is not comprehensive, some aspects of the construct may be left unmeasured, and so the full impact of the cause cannot be analyzed completely. Finally, external validity explains how well the results of any research can be generalized to other situations, people, and times, i.e., if a certain relationship between cause and effect is found in the study, will it hold for other people, or for the same people at a future time? In the following sections, the internal, external, and construct validity of the methodology are examined, and the potential risks in each category are highlighted and addressed. The methodology is further developed and completed step by step by controlling for the risks. There are, however, validity concerns that may not be addressed with the available equipment and conditions, which will be identified as limitations at the end of this chapter.

3.2 Internal Validity

Most laboratory research is designed to study the causal relationship in isolation, so the changes in "effect" can be attributed only to the changes manipulated in the "cause." Therefore, laboratory experiments are by their nature powerful in internal validity, which, of course, comes with a cost: limitation in external validity. The isolated nature of laboratory research potentially limits the generalization of the results to real-life situations where there may be numerous other causes impacting the subject. As explained below, there are three main classic conditions to be met (161) so that an observation can be considered as the result of a cause.

3.2.1 Temporal Precedence

Temporal precedence verifies whether the observed effect happened after the cause. This condition is usually important when there is a bidirectional relationship between the cause and the effect, and when they usually coexist at the same time. A simple example could be the relationship between education and income in two societies, i.e., is the lower income a result of lower education?
Or is it the case that those with lower education will end up making less money? Temporal precedence is usually important to verify in cases where there is no direct manipulation in the research. In the context of the current study, this condition is met, as adaptation to a simulator cannot be achieved without practice or before practice.

3.2.2 Correlation between Cause-Effect

This condition verifies whether there is any correlation between the cause and the effect. The purpose is to examine that when there is a change in the cause, a change is observed in the effect. It is hypothesized that the relationship between practice and performance during the learning phase can be modeled by a power curve, and therefore, if that is satisfied, there will be a correlation between the practice and performance. This is the main objective of this research, which will be studied in detail in the upcoming chapters.

3.2.3 No Plausible Alternative Explanation

The methodology has to be very specific and exclude any variable that can potentially affect the adaptation or learning. As an example, if participants are allowed to practice as they wish (random route selection) in a simulated urban area with random traffic flow, it is difficult to conclude whether the observed improvement in performance is a result of practice and not the specific traffic flow, route choice, speed choice, etc. Therefore, to have good internal validity, the "practice scenario" (Xi) should be carefully designed. The best possible solution is to have all scenarios be identical, so that if a subject performs better at xj in comparison to xi, there will be no alternative explanation other than repetition or practice, i.e.:

\forall i, j : x_i = x_j    Eq. (3.2)

To prevent the presence of other unwanted alternative explanations, they should be first identified. The following alternative explanations can have an impact on the observed change in performance from practice scenario (i) to practice scenario (j), so there should be methods for controlling them:

a) Change in traffic flow from xi to xj
b) Audio distraction
c) Visual distraction
d) Temperature change
e) Humidity change
f) Driver's distraction
g) Simulator sickness

The following measures were implemented in order to achieve a design that is high in internal validity.

a) Change in traffic flow from xi to xj: To prevent any effect of traffic flow on the drivers, either no traffic or a constant flow of traffic should be presented to each driver, such as one opposing car per 300 meters or 25 seconds, etc. The color of the cars can be changed, as it does not seem to have any effect on the driver's performance.

b) Audio distraction: Any distraction in the learning process, including audio distraction, can potentially affect the driver's performance and ability to control the simulated car. Therefore, the lab environment must be silent, and the experiment room should be well isolated to prevent audio noise. This condition was addressed for all the experiments and trials in this research. If for any reason the sound generated by the simulator is distorted or discontinued, a note should be made to report the time. This report can help adjust for the affected trial(s). Participants should also be asked to turn off their mobile phones prior to the experiment. This precaution ensures that they are not distracted if the phone rings.
To minimize visual distraction, there should be a black curtain all around the lab with no window visible to the outside. No one should be present or moving in front of the drivers while they are driving. All of these conditions were applied in all the experiments. d) Temperature and humidity change: A significant change in temperature or humidity can potentially affect a driver’s performance. Therefore, the lab was equipped with an air-conditioning system to keep the temperature and humidity constant. All the studies were performed within the same temperature range. e) Driver’s distraction: Any person might be distracted by remembering an event, or by trying to process some information during the experiment. This kind of distraction, in comparison to audiovisual distraction, can harm validity to a greater extent because, due to its nature, the instructor does not usually recognize it, and therefore no compensation can be made during data processing. One possible solution to prevent such random distraction is to request drivers to be focused as much as they can. But if the scenario is not challenging enough, participants will get bored quickly, and as soon as enough mental resources are freed, they may start focusing on some other mental activities. Therefore, another possible solution was to design the scenario to be difficult enough that the workload from the task does  70  not allow them to become distracted, and thus they can remain focused on the study. f) Simulator sickness: Simulator sickness, if it exists, generally increases according to the amount of time spent in a simulator environment. It can lead to less performance as it makes drivers impatient. To resolve this issue, an introduction should be given to participants to describe the symptoms of simulator sickness, and they should be asked to report any discomfort they might experience. However, as will be explained in the following chapters, none of the participants reported a significant discomfort as a result of driving in the simulator. 3.3 Construct Validity Construct validity addresses the appropriateness of the manipulation of cause and how the measurements represent the effect under study. The cause is directly implemented into this research, i.e., practicing a task, and therefore, there is no validity concern for that. On the other hand, adaptation as an abstract concept can be measured and quantified in many different ways, and thus, a good performance function is needed to ensure the effect is measured appropriately and can reflect the construct. Based on the literature review provided in Chapter 2, the more adapted a person is to a task, the more automatic his or her reactions become, and that person can perform the task faster with less error. Therefore, as defined in Eq. (3.1) and depending on the purpose of the practice, the function to measure adaptation should account for time, error, or both. If the purpose of a practice scenario is to train drivers  71  for fast reactions, then only speed should be incorporated into the performance function, and if accuracy is the goal of training, the only measured parameter should be error. In Chapter 6, recommendation is given on how to setup the purpose of the practice scenario in order to have proper adaptation with little negative transfer to the main task. A detailed explanation will be provided in each chapter on how each performance function was defined to measure the subjects’ performance in each scenario. 
However, a general overview of the functions used is provided below. For the first experiment, adaptation to gas and brake pedals was measured by how quickly and accurately the subjects could reach a certain speed and keep that speed constant until the next trial. Therefore, the performance function should account for both speed and error, and it was defined as below:

E_k = \sum_{t_i = t_0}^{t_1} \left( \frac{v(t_i) - \tilde{v}(t_i)}{V - v(t_0)} \right)^2    Eq. (3.3)

where:
Ek: error value for trial k (inversely proportional to performance);
t0: time at which each trial begins;
t1: end of each trial;
v(ti): subject's speed at each time ti;
ṽ(ti): ideal speed (objective) value at each time ti;
V: final speed to reach;
v(t0): subject's initial speed at time t0.

This function calculates the normalized deviation of the subject's speed from an ideal speed, and is proportional to the area between the ideal and the subject's speed graphs. If the subject can accurately follow the ideal transition from V1 to V2, the value of the function will be minimized, i.e., zero, and the error will increase if the speed fluctuates a lot around the requested speed. This function also considers how fast the subject can follow the ideal speed, i.e., if the subject's speed lags behind the ideal speed, the area between the two graphs will increase and so will the error. Therefore, the error function indirectly accounts for reaction time, and an independent measurement of time is not required for this task.

The second experiment required the subjects to drive through a number of corners as fast as they could while attempting to drive smoothly and accurately inside the right lane. In this scenario, the performance was defined with two separate functions for speed and error. Speed was the average speed along a corner, and error was the lateral position standard deviation, which showed how smoothly the drivers could drive along the curve. The standard deviation of the lateral position has been identified as a proper indicator of mental workload in several studies, as will be discussed in Chapter 5. Therefore, as the drivers adapt to the simulator and as their mental load decreases, it is expected that the error decreases as well. Including speed as a contributing factor in the performance definition ensures that drivers will not try to increase their accuracy by reducing their speed. Therefore, they are told that their performance is not only a function of how smoothly they can drive, but that it also depends on how quickly they can complete each cornering maneuver.

Subjects in the third experiment start by practicing a slalom task, in which they are only required to maximize their speed while maneuvering around 7 cones and are not asked to drive to minimize any error. The aim of this experiment is to study the impact of transfer from one task to another and examine the potential threats from negative transfer. Therefore, the only performance function considered for this scenario was the speed to complete each iteration of the task. This experiment will be explained in detail in Chapter 6.

3.4 External Validity

In many cases, controlling for good internal validity results in poor external validity, as the controlled setting cannot represent real-life situations, and the generalizability of research outcomes becomes limited. Therefore, strong external validity is not expected from this research due to its laboratory nature. The concerns discussed in the following subsections have been taken into consideration in designing the methodology and addressed where possible.
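For concreteness, the error function of Eq. (3.3) can be computed from the logged speed samples roughly as follows. This is a minimal sketch with illustrative variable names, not the processing code used in the experiments.

```python
import numpy as np

def trial_error(speeds, ideal_speeds, target_speed):
    """Normalized speed-tracking error for one trial, in the spirit of Eq. (3.3).

    speeds        : recorded speed samples v(t_i) for the trial (e.g., 20 Hz)
    ideal_speeds  : ideal transition samples v~(t_i), same length
    target_speed  : requested final speed V for the trial
    """
    speeds = np.asarray(speeds, dtype=float)
    ideal_speeds = np.asarray(ideal_speeds, dtype=float)
    v0 = speeds[0]                                # initial speed v(t0)
    normalized_dev = (speeds - ideal_speeds) / (target_speed - v0)
    return float(np.sum(normalized_dev ** 2))     # E_k for this trial

# Example with made-up samples: a driver lagging behind the ideal transition.
ideal = np.linspace(60, 100, 50)
driven = np.concatenate([np.full(10, 60.0), np.linspace(60, 100, 40)])
print(trial_error(driven, ideal, target_speed=100))
```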
3.4.1 Generalizability to Other Tasks If only adaptation to a certain task is examined, questions may remain about the generalizability of adaptation to other tasks. In other words, if the power curve can model adaptation in one task, can it be used to measure and model adaptation in other tasks? Analyzing the learning pattern on a single task may not address this issue; therefore, three different tasks were chosen for this research to show that the power curve is a proper tool to model adaptation and learning. Furthermore, it should be investigated whether adaptation is task dependent, i.e., adaptation on a task is enough to perform other tasks. In Chapter 6, the impact of transfer from the slalom task to the cornering task is studied to understand whether adaptation is task dependent, and therefore, to help improve the external validity of this research.  74  3.4.2 Generalizability across Times As long as the concept behind human learning does not change according to time, the methodology can be applied and the results will hold in the future, too. This assumption is based on the timeless fact that people usually tend to learn faster at the beginning, and then after a few trials, the performance does not significantly improve by repetition. 3.4.3 Generalizability across People Although an effort was made to recruit the participants from a wide background and various age groups, most of the subjects who participated in this research were college students. This limits the generalizability of the research to other people of different ages and socioeconomic status. This choice was, however, imposed by the budget and time allocated for this research. 3.4.4 Generalizability across Driving Simulators As long as only one simulator was available for this research, it was not possible to work on other simulators to prove the hypothesis applies to other simulators too. However, it is expected that only the learning rate would be different based on the fidelity of the simulator, but the pattern shown in Figure.3.3. would still be applicable. 3.5 Limitations The limitation of the methodology is also analyzed with respect to the internal, construct, and external validity. The experiments are done in a fully controlled  75  environment, which controls for almost every plausible alternative explanation. Therefore, it is expected that no significant limitation exists regarding internal validity. Construct validity refers to the appropriateness of the functions to measure the performance. As previously mentioned, speed and error are often identified as performance functions in the literature and are used that way in this research, too. However, one other useful variable that can help improve the validity of performance function is using an eye-tracking system and incorporating its output into the performance function. Eye movement has been shown to be a good representative of a driver’s mental load (162), and therefore, it can be used to decide when a subject’s mental load is stabilized in a repetitive task, i.e., finding the time at which no mental load is imposed by adaptation. However, this approach could not be carried out as long as the current simulator was not equipped with an eye-tracking system. In terms of external validity, as explained above, having a very strong internal validity would likely mean having poor external validity, and the current research is no exception. 
The generalizability of the study in terms of people, location, and simulator can be improved in the future by repeating the study in different settings and with other simulators to provide more evidence that adaptation follows a learning curve and that it is task independent as long as the practice tasks include the essential subtasks of acceleration, deceleration, and steering.  76  CHAPTER 4 ADAPTATION TO GAS AND BRAKE PEDALS The purpose of the experiment in this chapter is to design a practice scenario for adaptation to gas and brake pedals in order to observe and model adaptation patterns. A subject’s performance is quantified by defining a function, which accounts for both reaction time and accuracy. The performance values provide evidence that the subject’s learning pattern can be modeled by a power curve. However, some other patterns that do not follow a learning curve, like all-or-none learning, are observed. A methodology is also defined at the end of this chapter to provide a systematic approach to address such irregular adaptation patterns. 4.1 Introduction Driving simulators are safe to run and easy to control for almost any parameter in a lab environment, which makes them a suitable choice when it comes to causal research, i.e., participants are exposed to different situations and their responses to various stimuli can be analyzed. As Torchim (161) explains, there are three main validity concerns in any experimental research: construct, external, and internal validity. Most of the research into driving simulator validity, including studies done by Wang et al. (163), Lee (164), Shechtmanet et al. (165), and many others, has focused on external and construct validity; however, concerns about internal validity have not been adequately addressed. Adaptation, as explained in Chapter 3, can negatively affect the internal validity of research and therefore analyzing how learning and adaptation occur in driving simulator can help improve internal validity of research.  77  As explained in Chapter 2, a common method for modeling learning is to employ a power curve to show how participants learn a task. Although there have been a few disagreements on the appropriateness of the power curve compared to other functions (98-100), the power function was widely accepted in the past century as a way to represent how people learn different tasks. In the past few decades, many researchers found that human learning and performance in the course of practice can be modeled by a power curve, either for motor skill tasks (166) or for cognitive and verbal learning (167), or even at the organizational level (168). Therefore, to provide evidence that adaptation in simulator also follows the pattern of learning, a performance function is defined to measure how well participants can perform a specific acceleration/deceleration task. The pattern of change in performance values is analyzed, and a method to classify the participants is provided. 4.2 Methodology The driver’s role in a simulator can be considered as the controller of the simulated vehicle. Drivers use the simulated images to recognize the current state of the vehicle and decide how to control the car by manipulating the pedals, gearbox, and steering wheel. This is often referred to as the human–in-the-loop concept, as shown in Figure 4.1. When drivers start the experiment on a simulator for the first time, they do not know how the machine will react to their inputs. 
Therefore, there is an identification process in the beginning of the experiment in which drivers observe the output to their inputs and try to adjust the inputs to achieve the objective output by minimizing the error. The purpose of this research is to observe and analyze the pattern at which the error value decreases and finally stabilizes. When error values 78  cannot be decreased anymore with practice, the driver can be considered as adapted and is ready to start the experiment.  Objective  Error  Driver (Controller)  Simulator (System)  Graphical observation  Figure 4.1 Human-in-the-loop control concept.  Participants in driving simulator research usually have valid drivers’ licenses, and thus they possess driving skills and each driver has developed a unique strategy to react to any scenario. Therefore, the scope of the practice scenario should not be focused on teaching a specific scenario to drivers, but rather should provide them with a chance to learn how the controls react to their inputs. The missing component in controlling a simulated vehicle is finding the correct inputs drivers should give to pedals and the steering wheel to obtain the desired output. As soon as drivers understand the response of the simulated vehicle to their inputs, they use the new modified inputs to achieve their real-life goals in dealing with any scenario. To analyze adaptation to gas and brake pedals, the scenario for the current experiment is designed to repeatedly force the subjects to use the pedals in order to achieve a set speed. Based on the research method suggested in Chapter 3, the scenario consisted of repetitive and identical increase and decrease of speed in fixed intervals (block order), so the performance improvement can only be associated with practice and not with any other alternative factor.  79  Several pilot studies were carried out to observe the adaptation pattern and study the possible improvements that could be taken into consideration. The driving scenario was a 29 km rural, two-lane roadway with level terrain and no other vehicles present. The scenario was first based on a fixed iteration of change in speed from 60km/h to 100km.h and vice versa. After a couple of trials, the subjects showed a tendency to guess the upcoming speed. To eliminate the guessing factor, the experiment consisted of random speeds in the beginning of each iteration. The speed range was from 60 to 120 and the transition from higher to lower speed was random, but the transition from lower speed to higher speed was still maintained at V+40, where V is the current speed. Moreover, the nature of driving in a rural, flat terrain environment, with no other cars present in the scenario, seemed not be engaging enough, and the drivers were bored and therefore distracted after a few minutes. Introducing some incoming cars into the scenario significantly helped engage the subjects with the simulator. Opposing vehicles were randomly spaced with time headways of less than 20 seconds, and the incoming volume was 240 vph, based on an average speed of 80 km/h for the simulated vehicle. This volume was used to simulate a rural environment. Data collection was initially started right after the scenario was loaded, but it took some subjects 2 or 3 minutes to actually start driving and as a result some unnecessary data was recorded. 
To eliminate any extra data and reduce the probability of error in processing such data, the starting point of data collection was triggered when the driver pressed the gas pedal for the first time.  80  The requested speed was initially displayed on the top left side of the center monitor. This speed changed every 40 seconds (i.e., test window), and it was immediately shown on the screen. However, subjects participating in the pilot study did not often pay attention to the change on the screen, and therefore they could miss a transition. To overcome this issue, every change in speed was followed by a beep sound to alert participants to the change, which successfully eliminated any missed interval. The speed value was also shown on all monitors to further reduce the chance of subjects missing a transition. 4.3 Driving Simulator The simulator used for this experiment, UBCDrive, is shown in Figure 4.2. It was designed by Oktal, and has a mock-up of a Hyundai passenger car. Drivers interact with the simulator using the steering, brakes, and gas pedals in an automatic transmission.  81  Figure 4.2 UBCDrive driving simulator.  The simulator shows the scenario graphics on five 30-inch LCD televisions, and provides a horizontal field of view of 178°. Five dual-core computers with 20 GB of RAM support the visual representation on five LCDs, while two other dual-core computers with 8 GB of RAM control the cabin, record the data, calculate the vehicle dynamics, and provide a graphical user interface on a separate display, allowing the instructor to monitor various parameters. Data is electronically logged at 20 Hz, and all samples for selected signals are stored on a local hard drive in a compressed text format. 4.4 Participants The experiment was undertaken in three different sessions from 2008 to 2010, in which 5 subjects participated in the first session, 4 others took part in the second round in 2009, and 4 more in 2010. In total, 3 females and 10 males, all with  82  a valid driver’s license, were recruited from among University of British Columbia graduate students and staff. There were no dropouts, and all the participants completed the experiment successfully with no report of simulator sickness. Participants were not paid to take part in this experiment. 4.5 Design Frequency of sampling was 20Hz, and all data was electronically recorded and then manually grouped into 40-second intervals to calculate the cost value associated to each iteration of the task. As a result, this study had a time series design that included several data points representing the driver’s error in performing each iteration of the task. Increasing and decreasing speed were considered to be different tasks as long as the controls to perform each task were different, i.e., the gas pedal to increase speed and the brake pedal to decrease speed. 4.6 Experimental Procedure Participants signed a consent form, and then the purpose of study was clearly described to them. They were asked to change their speed as fast as they could and to do so as soon as they saw the instructions on the screen, and to try to maintain the requested speed as closely as they could until the next request for change in speed was shown on the screen. All the risks and possible sickness side effects were explained, and the participants were informed that the experiment could be stopped at any time if they felt uncomfortable. 
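For reference, the alternating trial schedule described above (40-second test windows, random drops to a lower speed, and increases of V + 40) can be sketched as below. The 10 km/h granularity of the random targets is an assumption made for illustration; the thesis does not report the exact randomization used.

```python
import random

def build_speed_schedule(n_trials=30, v_min=60, v_max=120, step=40, seed=None):
    """Sketch of the alternating target-speed schedule (each target is held
    for one 40-second window).

    Increases always go to the current speed plus `step` (the V + 40 rule);
    decreases go to a randomly chosen lower speed so that adding `step`
    stays within [v_min, v_max].
    """
    rng = random.Random(seed)
    schedule = [rng.randrange(v_min, v_max - step + 1, 10)]   # starting speed
    for k in range(1, n_trials):
        if k % 2 == 1:                    # odd trials: accelerate to V + 40
            schedule.append(schedule[-1] + step)
        else:                             # even trials: drop to a random speed
            schedule.append(rng.randrange(v_min, v_max - step + 1, 10))
    return schedule

print(build_speed_schedule(n_trials=8, seed=3))
```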
The speed range was from 60 to 120 km/h, and the transition from higher to lower speed was random, but the transition from lower speed to higher speed was always V + 40, where V is the existing speed just before the transition.

4.7 Performance Measure

The best possible transition from one speed to another was defined as the condition in which there is no perception lag for the driver and the full power of the engine/brake is applied. This ideal transition was collected from the simulator when the drivers applied the full power/brake at different speeds. The difference between an ideal transition and the driver's transition in speed is calculated and is considered as the cost of doing a trial of the task:

E_k = \sum_{t_i = t_0}^{t_1} \left( \frac{v(t_i) - \tilde{v}(t_i)}{V - v(t_0)} \right)^2    Eq. (4.1)

where:
Ek: cost of performing the task for trial k;
t0: beginning of each trial;
t1: end of each trial;
v(ti): subject's speed at each time ti;
ṽ(ti): ideal speed (objective) value at each time ti;
V: final speed to reach;
v(t0): subject's initial speed at time t0.

The equation's outcome is proportional to the area between the two graphs shown in Figure 4.3. It was initially modeled as ∫|V1(t) − V2(t)| dt over each 40-second period. However, because this is a discrete analysis, Δt was assumed to be 1 unit of time (0.05 sec), the integral was changed to a summation, and the absolute function was replaced by the square value to more closely resemble the meaning of standard deviation. As the transition values were random and the speed difference was different in each trial, the denominator was introduced into the formula to normalize the error and to make the values of different trials comparable. A 2-point moving average was used to focus on the general data trend rather than random disruptions in performance. A sample 40-second window for the ideal (i.e., objective) and the driver's transition is shown in Figure 4.3.

Figure 4.3 Transition pattern for a sample driver compared to the ideal transition pattern.

4.8 Results and Discussion

Figure 4.4 shows the performance functions for two participants while they attempted to match a lower speed. The sequences of performance values for two other participants while they attempted to use the gas pedal to reach and maintain a higher speed are shown in Figure 4.5.

Figure 4.4 Improvement in performance for two subjects for the "Decrease and maintain speed" task.

Figure 4.5 Improvement in performance for two subjects for the "Increase and maintain speed" task.

Analyzing the pattern of performance function values from the participants confirmed that performance during the learning phase can be modeled by employing a negative power curve effect. Use of the learning curve equations and the graph shape can determine when a participant is showing a consistent reaction, and therefore that he or she is adapted. However, the shape of the error function is not always so clear, and it is not always possible to fit a learning curve to the error values. Figure 4.6 indicates other possible behavior that is not easily described by the learning curve effect.

Figure 4.6 Adapting behavior pattern in which a learning curve cannot be fitted to the data points.

Although a learning curve cannot be fitted to the data series in Figure 4.6, it is clear that the participant is showing improvement in performing the task at the beginning of the practice session.
It is possible to visually distinguish the learning phase, which occurred from the first to the fifth trial, and then see that the performance became stable. As a result, the power curve should be fitted to the first few trials, as learning does not continue until the end of the scenario. Figure 4.7 reveals that a power curve can show the progress in performance during the learning phase.

Figure 4.7 Power curve fit to the learning phase for participant #9.

However, as explained previously, visual inspection of the shape is necessary, and the process may be difficult to perform if it is meant to be done through a computer program. To overcome this issue, the Cumulative Error Per Unit (CEPU) is introduced, as seen below:

CEPU_n = \frac{\sum_{j=1}^{n} C_j}{n}    Eq. (4.2)

where:
CEPUn: cumulative error per unit up to the nth trial;
Cj: error in performing the jth trial of the task.

Research has demonstrated that averaging the performance over trials (or groups of subjects) usually provides a better fit for the power curve (102). As shown in Eq. (4.2), CEPU considers previous error values in computing the current function value and has some form of memory associated with it. The power curve fit to the CEPU of the same dataset shown in Figure 4.6 is provided in Figure 4.8.

Figure 4.8 CEPU and the fitted power function.

The reason that the CEPU function works well is that the memory in the function recalls and considers those previous values, so a sharp decrease in the history of behavior is well portrayed in the shape of the function; therefore, it is possible to draw a precise conclusion about whether or not a person has adapted by studying the learning and experience curves simultaneously. The CEPU function, along with the error function, can provide a mathematical tool for an algorithm or a computer program to decide when a participant is fully adapted to the simulator. It should be mentioned that, due to the systematic bias of the CEPU function towards the beginning of a practice, it should not be used alone to draw a conclusion about whether or not a subject is adapted. The graphs shown in Figures 4.9 to 4.11 demonstrate how different combinations of the CEPU and learning curves can help in identifying adapted drivers. The learning curve fit for three different participants is shown on the upper graphs. The lower graphs show the CEPU and the power curve fit to the values.

Figure 4.9 Performance pattern of a non-adapting (or adapted) participant. No power curve can be fitted to either the performance values or to the cumulative performance values.

Figure 4.10 Performance pattern of a participant who adapted to a simulator very quickly. No power curve can be fitted to the performance values, but there is a good power curve fit for the cumulative performance values.

Figure 4.11 Performance pattern of a participant who is still in the adaptation process. A power curve can be fitted both to the performance values and also to the cumulative performance values.

The two graphs in Figure 4.9 show a nonadapting pattern. It is not possible to fit a power curve to either the error values or to the CEPU values. The noisy error function (top graph) suggests that the subject cannot demonstrate a consistent behavior, and so has not adapted to the simulator. It can be concluded that for a nonadapting person, the performance function shows a noisy data series and that the CEPU function tends to be flat after a few trials with a possible concave or convex shape at the beginning.
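Because this judgment is meant to be automatable, the combined use of the error series and its CEPU can be sketched in code as below. The goodness-of-fit threshold of 0.8 is an arbitrary illustration value, and this rough decision rule only mirrors the logic of the classification described here; it is not the implementation used in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def cepu(errors):
    """Cumulative Error Per Unit (Eq. 4.2): running mean of the error series."""
    errors = np.asarray(errors, dtype=float)
    return np.cumsum(errors) / np.arange(1, len(errors) + 1)

def power_fit_r2(values):
    """R^2 of a negative power curve a * n**(-b) fitted to a value series."""
    values = np.asarray(values, dtype=float)
    n = np.arange(1, len(values) + 1, dtype=float)
    try:
        (a, b), _ = curve_fit(lambda x, a, b: a * x ** (-b), n, values,
                              p0=(max(values[0], 1e-6), 0.5), maxfev=5000)
    except RuntimeError:
        return 0.0
    pred = a * n ** (-b)
    ss_res = np.sum((values - pred) ** 2)
    ss_tot = np.sum((values - values.mean()) ** 2)
    return 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0

def classify(errors, threshold=0.8):
    """Rough decision rule: power fit to raw errors -> still adapting;
    power fit only to CEPU -> adapted; neither -> non-adapting."""
    fit_raw = power_fit_r2(errors)
    fit_cum = power_fit_r2(cepu(errors))
    if fit_raw >= threshold:
        return "still adapting"
    if fit_cum >= threshold:
        return "adapted"
    return "non-adapting"

print(classify([5.0, 3.1, 2.4, 2.0, 1.8, 1.7, 1.7, 1.6, 1.65, 1.6]))
```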
The two graphs in Figure 4.10 indicate the error and CEPU functions for a participant who adapted very quickly. As previously discussed, a learning curve cannot be fitted to the error function in this case, but a power curve can be well fitted to the CEPU function. Another characteristic of the CEPU function is that it shows the overall trend in learning and that an individual point cannot change the shape of a graph to a great extent. For example, this driver began to talk to the instructor in trial 9, and so was distracted from keeping his speed constant. As a result, the error value in trial 9 is much larger than in neighboring trials although this distraction (which is not a part of the adaptation process) has a very small effect on the CEPU function and could not change the shape of the function drastically. Therefore, the CEPU function automatically reduces the impact of isolated distractions, which often occur later in practice, and only shows a large picture of how learning and adaptation are proceeding. The last two graphs in Figure 4.11 indicate the usual pattern of error and the CEPU for an adapting subject. As shown on the graphs, a power curve can be fitted to both error and CEPU functions. Such participants still require more  95  practice until they become adapted, and their error function values show stable behavior. Using the aforementioned logic, a flowchart is proposed in Figure 4.12 to provide a general methodology to recognize when and if adaptation has occurred for participants.  Figure 4.12 A simple flowchart showing how to judge whether a subject has adapted to a simulator.  96  4.9 Summary It was shown that the power curve fit to the error and cumulative error values can be used to identify adapted, nonadapting, and adapting drivers while they are driving in a practice session. What has been proposed in this chapter was a generic approach, and it was not intended to formulate adaptation for all possible skills like steering, shifting, etc. The key element for a successful analysis is a well-defined performance function that can reflect the driver’s ability to perform a specific task. In general, drivers who still show improvement in carrying out a task should keep practicing until no significant improvement is observed in a few consecutive trials. For drivers with not much improvement in performance over a few trials, attention should be paid to the history of their performance. If no improvement is observed at any time since the practice started, the subject is considered as nonadapting; otherwise, the subject has adapted and is ready to start the experiment. There are three major controls available in each simulated vehicle: gas and brake pedals and steering. For this reason, adaptation to a simulated vehicle requires adaptation to all three of these controls. In this chapter, adaptation to the braking and accelerating of a simulated vehicle was quantified and analyzed, but adaptation to steering was not addressed, so future chapters will focus on defining a practice scenario and a proper performance function to analyze and quantify the subject’s performance while learning how to use the steering wheel.  97  CHAPTER 5 ADAPTATION TO STEERING WHEEL In this chapter, the performance of subjects in doing a repetitive cornering task is traced and different patterns of adaptation to the driving simulator are studied. 
The results show interesting characteristics of the adaptation process, including a power curve fit to the learning phase and plateau periods, in which performance stops improving before it progresses again. The subjects were also asked to report when they felt adapted to the simulator while taking the test. Selfreported times were compared to the quantitative analysis of their performance. The results showed that the self-report values were significantly lower than the actual adaptation time, with insignificant correlation between self-report and actual values. 5.1 Introduction The steering wheel and the gas and brake pedals are the media between the driver and the simulator, and one cannot control a simulated vehicle without knowing their responsiveness and gain. In Chapter 4, adaptation to gas and braking pedals was studied by observing drivers performing a repetitive acceleration and deceleration task, and the results were interpreted using the learning curve concepts. It was shown that the learning process that related to how to accelerate and decelerate in a driving simulator could be modeled by a learning curve. The first objective of the experiment in this chapter is to expand the learning curve concept further to adaptation to steering wheel control and to analyze how subjects improve their performance on a repetitive cornering task. 98  The second objective is to analyze the range of times required for adaptation and to compare it to fixed time/self-report lengths previously used in the literature. 5.2 Methodology The methodology in this research gives subjects enough time to transfer their driving skills to a simulator by practicing a repetitive cornering task until the performance of the task of driving becomes automatic and has no significant mental load. The primary task in driving, as defined by DeWaard (169), is to maintain safe control over the vehicle’s lateral position. O'Hanlon (170) also suggested that the standard deviation from the center line is a sensitive performance measure and can appropriately reflect the state of a driver. Green et al. (171) defined as well the performance as a function of the standard deviation of lateral positioning and found that standard deviation increases when the mental workload on subjects increases in a driving simulator. As a result, controlling the vehicle’s lateral position and its deviation from the center line are good measures by which to evaluate the performance of a driver. Prior to the test, the subjects were informed that their performance was a function of how smoothly they could keep the vehicle in their own lane as well as how fast they could finish each cornering maneuver. Including speed as an attribute of performance ensured the subjects would not decrease their speed to compensate for a smoother lateral positioning. Snoddy (92) also reported using error and the speed of doing a task concurrently to avoid having participants decrease their speed as a way of achieving better accuracy.  99  5.3 Driving Simulator UBCDrive employs an Oktal driving simulator that has a mock-up of a Hyundai passenger cabin. Further specification is provided in Chapter 4, section 4.3. 5.4 Participants Twenty-five participants, 9 females and 16 males, all with valid driver’s licenses, were recruited from among residents of the local community and graduate students and staff of the University of British Columbia, and ranged in age from 20 to 64 years old. There were no dropouts, and all the participants completed the experiment successfully. 
However, data collected from two subjects was not considered for this research, as these subjects lost control of the vehicle and went off the road in the middle of the practice. In such circumstances, the data logger on the driving simulator crashes and captures incorrect information, and therefore the recorded data is not valid. Participants were not paid to take part in this experiment.

5.5 Experimental Design and Scenario

The driving scenario was on a 2-lane road and consisted of 18 identical right and left turns, each followed by a short segment of straight road. The scenario was designed to be repetitive in order to study the pattern of improvement under identical conditions. A horizontal, fixed white line was coded on the bottom of the center screen, which completely covered the width of the right lane when the car was at the center of that lane. If the car was not located at the center of the lane, one end of this fixed line extended out beyond the road, and so provided visual feedback to drivers on how well they could keep the vehicle inside their own lane. Data collection was not started before drivers pressed the gas pedal for the first time, in order to ensure no extra data was recorded. No other impeding or opposing vehicle was coded into the scenario, so as to guarantee that drivers would concentrate only on the cornering task. This approach helps in analyzing the adaptation progress independently from potential distractions caused by other vehicles in the network.

5.6 Experimental Procedure

Participants signed a consent form, and then the purpose of the study was clearly described to them. Some examples were provided as a way to explain the concept of adaptation to them. Participants were asked to go around the curves as fast as they could while trying to maintain the fixed line on the screen within their own lane. They were informed that their performance would be a function of speed and also of how well they could keep the car inside the right lane while cornering along the curve. Different strategies for making progress were explained to them, and they were given the free choice to follow those hints or to use their own judgment or any other method to improve their performance during the course of the test. They were asked to report the time at which they felt comfortable controlling the car, and the instructor also took notes describing his judgment about the adaptation time and pattern. All the risks and possible sickness side effects were explained to participants, who were also informed that the experiment could be stopped at any time if they felt uncomfortable.

5.7 Data Collection

The data logger on the UBCDrive captures all the data available in the simulator at a 20-Hz sampling rate. Vehicle lateral positioning and speed were among the recorded data, in addition to the steering wheel and the gas and brake pedal inputs. Right-turn and left-turn curves were analyzed in separate groups to identify any possible differences between them in the subjects' adaptation. The performance and the time to complete each iteration were later processed. As a result, this study had a time series design that included several data points representing the driver's performance and speed in completing each iteration of the task.

5.8 Performance Measure

Based on the definition of performance given to participants, the following performance functions were defined for each iteration of the task:

V_k = \frac{\sum_{t=t_k}^{T_k} v(t)}{N_k}        Eq. (5.1),
where

V_k:   Average speed along the curve k;
t_k:   Time at the beginning of each curve;
T_k:   Time at the end of each curve;
v(t):  Subject's speed at each time t;
N_k:   Total number of samples recorded between t_k and T_k.

Equation (5.1) measures the average speed along the curve and reports one value per iteration. To analyze the smoothness of cornering along each curve, the lateral shift standard deviation is also measured as follows:

E_k = \mathrm{SD}(y_t)        Eq. (5.2),

where

E_k:   Standard deviation of lateral shift along the curve k;
y_t:   Road lateral shift measured from the road center line at time t.

A sample of the ideal lateral position (i.e., the center of the lane, shown in black) and the driver's lateral position along a curve is shown in Figure 5.1.

Figure 5.1 Driver's lateral position and center of the lane (black) along a curve (vertical axis: road lateral shift, m, plotted toward the end of the curve).

Equation (5.2) reflects the extent of corrections a subject has attempted while going around a corner. A lower value means the driver was more comfortable and did not require many corrections to perform the task. A nonadapted driver, on the other hand, frequently uses steering wheel corrections to keep the car in the right lane and therefore has a higher E value. To better indicate the trend of adaptation, rather than focusing on individual values, a 3-point moving average was used to smooth the pattern and highlight the overall performance gain.

5.9 Results and Discussion

5.9.1 Learning Curve Fit and Plateau Phase

The average speed and the lateral standard deviation were calculated for each cornering maneuver, and the trend was analyzed over the trials. Qualitatively, four major patterns were observed. The first pattern is named the identification phase, in which the subjects tried to identify the system without intending to improve their performance. The performance function values do not follow a pattern and show noisy behavior in this period. The identification phase is followed by a learning phase, in which the subjects showed improvement in one or both of the performance functions defined in Equations (5.1) and (5.2). The third phase was a plateau period, in which the participants did not show significant improvement in performance, but started to improve again after a few trials. The fourth phase was automaticity, in which the drivers did not improve their performance any further and came to a stable and repeatable condition. This is a phase in which the subjects perform the task automatically without a significant mental load, and therefore the practice scenario can be stopped in order to begin the experiment. The performance function values do not follow any specific rule and show random behavior in this phase.

Not all of the patterns mentioned above necessarily existed for all the subjects. For example, the subjects who adapted quickly only showed the learning and automaticity patterns. The subjects who never adapted only showed the identification pattern, and the ones with very fast adaptation only showed the automaticity pattern. A subject was considered to be adapted if both functions had reached the automaticity phase. The automaticity and identification phases are very similar in their patterns, i.e., there is no converged value, and the performance values show random behavior.
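For illustration, the two performance measures in Equations (5.1) and (5.2) and the 3-point moving average can be computed from the simulator log along the following lines. This is a sketch only, not the code used in this study; the array layout (20-Hz samples of time, speed, and lateral shift) and the function names are assumptions introduced here.

import numpy as np

def curve_measures(t, v, y, t_start, t_end):
    # t, v, y: numpy arrays of the 20-Hz log samples: time stamps (s), speed
    # (km/h), and lateral shift from the road center line (m); t_start and
    # t_end are t_k and T_k for one curve. This column layout is assumed.
    mask = (t >= t_start) & (t <= t_end)   # the N_k samples recorded on this curve
    v_k = v[mask].mean()                   # Eq. (5.1): average speed along the curve
    e_k = y[mask].std()                    # Eq. (5.2): SD of lateral shift y_t
    return v_k, e_k

def moving_average_3(series):
    # 3-point moving average used to smooth the per-curve series of V_k or E_k.
    return np.convolve(series, np.ones(3) / 3.0, mode="valid")

Applying curve_measures to each of the 18 curves, and the moving average to the resulting series, would yield the smoothed V_k and E_k trends examined in the remainder of this section.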
This becomes critical when no learning pattern is observed at all from the beginning of the practice, in which case the judgment is that the participant either has not adapted or has adapted very quickly. The distinction between these two possibilities in this specific case is based on the instructor's judgment and on the absolute value of the performance function in comparison to the other participants. Since the nonadapted and the very quickly adapted subjects are at the two ends of the learning spectrum, this distinction is not difficult to make and is evident from the absolute values of the standard deviation or the speed.

Quantitatively, data for all the subjects who demonstrated a learning phase for one of the performance attributes followed the learning curve concept, i.e., a power curve could be fitted to that period. It is worth mentioning that all the subjects showed a learning period in at least one of the performance attributes, i.e., speed or lateral standard deviation. An example of progress in speed over the trials, with the respective power function fit to the learning period, is depicted in Figure 5.2.

Figure 5.2 Average speed (km/h) over trials (top); learning phase and power curve fit, y = 52.77x^0.1946, R² = 0.9451 (bottom).

The same pattern was also observed for the lateral shift standard deviation. Participants generally showed improvement, and therefore the standard deviation over consecutive corners was reduced until it oscillated around a lower, constant value. Figure 5.3 indicates a participant's progress in producing smoother and less erratic steering inputs and, as a result, a lower standard deviation.

Figure 5.3 Lateral shift standard deviation (m) over trials (left); learning phase and power curve fit, y = 0.7057x^-0.709, R² = 0.948 (right).

There was a group of subjects who did not show a significant improvement over the course of practice on one of the performance attributes. On closer inspection, only two of these were not adapted; the rest in this group started with an already low standard deviation or a high speed and had little room left for improvement. As a result, they were considered to be in a stable condition immediately after starting the practice. A comparison of the values in Figure 5.4 with those in Figures 5.2 and 5.3 reveals that the subject in this example started with an already high speed on the very first curve and could also maintain a low deviation along the corners. This is an example of a subject who did not show any learning phase, but the results support that s/he adapted quickly at the beginning of the test.

Figure 5.4 Performance of a subject who adapted quickly (road lateral shift standard deviation, m, and speed, km/h, over trials).

The only difference between this category and the nonadapting one, depicted in Figure 5.5 below, is in the values the performance functions take. Nonadapting subjects show considerably lower speed and higher lateral deviation in comparison to other subjects.
Figure 5.5 Nonadapting performance pattern (speed, km/h, over trials).

Another interesting pattern observed was the plateau period that followed a series of improvements in performance. The reason that a plateau occurs during a learning process is well discussed in cognitive psychology. Thomas (172) argues that when many components with different learning rates are involved in a complex task, plateaus appear in learning curves. The Component Theory of Skill Acquisition, proposed by Speelman and Kirsner (90), suggests that the acquisition of subskills is serial, and that some higher-level subskills therefore cannot be acquired until lower-level subskills have been mastered. In the context of the current study, the subject may need to reach a certain level of automaticity in accuracy before trying to concentrate on increasing speed. After a period of focusing on speed, the subject may focus back on accuracy. Between these two times, the accuracy will not improve, or may even be adversely affected, due to the mental workload dedicated to speed improvement. However, as the task in this research was not too difficult and involved transferring existing knowledge to a new context, only very few subjects showed plateau areas. A sample of an observed plateau area is provided in Figure 5.6.

Figure 5.6 Two learning periods with a plateau period shown in a box (road lateral shift standard deviation, m, over trials).

5.9.2 Adaptation Time, Quantitative Results

Table 5.1 provides a summary of the adaptation results for all 23 participants. All the subjects showed a steady learning period that could be modeled by a negative power curve for at least one of the performance attributes. The number of performance measures modeled with a learning curve was not the same for all subjects. Some subjects showed a learning phase for only one of the performance attributes, while others had a steady learning phase for both attributes on the left and right curves. The R² (goodness of fit) for the power curve fit to the performance data was usually large, and the average for each group is provided in Table 5.1.

Table 5.1 A Summary of Adaptation for All Subjects.

                                        Left Turn                        Right Turn
                                 Deviation (m)   Speed (km/h)     Deviation (m)   Speed (km/h)
Number of subjects with
  learning phase                 12 (0.87)*      15 (0.93)        16 (0.86)       14 (0.93)
Number of adapted subjects       23              21               23              22
Number of nonadapted subjects    0               2                0               1

* Numbers in parentheses show the average goodness of fit for each group.

In most cases when no learning phase was observed, the subjects were in the automaticity phase, i.e., they adapted so quickly that no learning could be traced over several trials. As explained earlier, the adaptation criteria were satisfied when a subject showed the automaticity phase for all tasks. Over 90% of the subjects were adapted at the end of the trials, and there were only 2 subjects (1 male and 1 female) who could not adapt to the simulator based on this methodology. As shown in Table 5.1, the two nonadapted subjects were identified by their inability to increase their speed over several trials. While the other subjects reached an average of 74.4 km/h in the automaticity phase (SD = 9.4 km/h), the nonadapted subjects could only reach 44 and 45 km/h, respectively, which is significantly lower than the average.
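As an illustration of how nonadapted subjects such as the two above might be screened automatically, the following sketch flags any participant whose automaticity-phase speed falls far below that of the rest of the group. The two-standard-deviation cut-off, the input format, and the function name are assumptions made for this sketch rather than criteria taken from the analysis above.

import numpy as np

def flag_nonadapted(converged_speed, n_sd=2.0):
    # converged_speed: dict mapping a subject id to his or her average speed
    # (km/h) over the final, stable trials (hypothetical input format).
    flagged = []
    for subject, speed in converged_speed.items():
        others = [s for other, s in converged_speed.items() if other != subject]
        mean, sd = np.mean(others), np.std(others)
        if speed < mean - n_sd * sd:          # far below the rest of the group
            flagged.append(subject)
    return flagged

With group values similar to those reported here (a mean of roughly 74 km/h and a standard deviation of roughly 9 km/h), converged speeds of 44 and 45 km/h would fall well below such a cut-off and be flagged for review.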
The notes taken by the instructor during the practice tests also suggest that those two subjects had difficulty controlling the vehicle even at the end of the test.

The average adaptation time for the subjects was 456.6 seconds, with a standard deviation of 144.3 seconds. Male and female subjects did not show a statistically significant difference, with averages of 456.8 and 456.4 seconds, respectively (95% confidence level). Approximately 20% of the subjects required more than 600 seconds of practice for adaptation, with a maximum of 721 seconds for one subject. Figure 5.7 illustrates the distribution of the adaptation time in 6 different time bins.

Figure 5.7 Distribution of adaptation time over 6 time bins (number of subjects per bin; bins spanning 300 to 800 seconds).

It is worth comparing the results of this study to another research study, performed by McGehee et al. (89), that quantitatively analyzed steering adaptation. The adaptation time from the current study is almost three times greater than the adaptation time suggested by McGehee et al., which can be explained by the different nature of the scenarios implemented in the two studies and also by the methodology used to evaluate the subjects' performance. The current study deals with the more demanding task of cornering, while the 240-second adaptation time suggested by McGehee et al. was for adaptation to simulator steering on a straight segment of road. Moreover, the performance definition in the McGehee et al. study was based on accuracy only, while in the present study the speed of completing a task was also considered a component of performance. Speelman and Kirsner (90) suggested that accuracy and speed are two attributes that should be considered simultaneously when analyzing performance and that usually have an inverse effect on each other at the beginning of any training, i.e., the faster a task is performed, the less accurate it will be, and vice versa. After one component (speed or accuracy) becomes automatic and stable, the mental load decreases, which provides more resources for improvement in the other component. Therefore, accuracy alone, as used in McGehee et al., cannot represent the performance of a subject and can potentially lead to an underestimated adaptation time.

5.9.3 Fixed Time and Self-assessment versus Quantitative Analysis

In Chapter 2, thirty-one studies were identified in which a fixed-time practice scenario was used for adaptation, of which 20 employed practice times of less than 360 seconds. This evidence suggests that the practice time currently implemented in the research literature is, in most cases, not sufficient for all subjects to completely adapt to the simulator.

Driver self-assessment is another method seen in the literature that researchers employ to tailor the practice period to different individuals. Subjects in studies undertaken by Fisher et al. (64) and Pradhan et al. (65) were required to practice until they felt comfortable (no time limit). In another study, conducted by McAvoy et al. (66), participants reported that they were adapted on average ten minutes after the practice started, but the results of another study, done by Peli et al. (67), suggest that participants continued to practice for fifteen and even up to thirty minutes before they reported adaptation.
To evaluate the appropriateness of using such a method, the self-report values in the current study and the actual adaptation times were compared to analyze their relationship. The reported values showed an average of 122 seconds (SD = 25.7) for the adaptation time, which is significantly lower than the calculated average adaptation time of 456.7 seconds (SD = 144.3). As shown in Figure 5.8, the subjects tended to report adaptation much earlier than they had actually adapted. The correlation between reported and actual times was also very low, at 0.087, which does not allow a relationship between the actual adaptation time and the self-report value to be identified.

Figure 5.8 Relationship between self-reported and calculated adaptation time (linear fit: y = 0.1439x + 440.02, R² = 0.0076).

An overview of cognitive psychology as related to driving behavior also supports these results. Svenson (173) was the first to report, in 1981, drivers' tendency to overestimate their abilities and skills in driving. That research showed that a majority of people rated themselves over 5 when asked to rate their driving skills on a 10-point scale. Groeger and Brown (174) conducted a similar study in 1989, and it supported Svenson's results. Another survey (107), involving almost 1000 young drivers, required them to judge the ability of a novice, an average, and an expert driver on a 5-point scale. On average, these drivers rated expert drivers at 4.36, novice drivers at 3.09, and average drivers at 3.42. Interestingly, the participants rated themselves higher than both the average and novice drivers, which reflects a tendency to overestimate one's driving skills. Therefore, based on the results discussed above, using the self-assessment method, which usually results in insufficient practice, is not recommended for setting the practice time.

5.9.4 Prediction of Adaptation Time

It was initially hypothesized that the adaptation time could be predicted by fitting a power curve to the first few performance measures and extrapolating it. However, due to irregularities observed in the performance function values in both experiments in Chapters 3 and 4, such as the identification and plateau phases, it was concluded that prediction is not possible when only a few performance values have been observed and that practice should continue until a stable pattern is observed.

5.10 Summary

This experiment provided more detailed insight into the dynamics of steering adaptation in a driving simulator. It was shown that when a learning phase existed during the practice, the performance could be modeled by a power curve. A few subjects showed a period in their learning process during which there was very little or no progress in performance, which was referred to as a plateau. The mean adaptation time observed in this research was higher than both the practice time usually implemented in research and the subjects' self-reported time. Moreover, no correlation was found between the self-report values and the actual observed adaptation times. Male and female subjects did not reveal a significant difference in adaptation time, and in fact the average values from this study were almost identical for both groups.
118  The next experiment, discussed in Chapter 6, will focus on discovering the relationship between different practice scenarios and adaptation times required for each scenario, i.e., how the subjects who adapted to a specific scenario will adapt to a different scenario. The results can provide evidence on whether or not adaptation is transferable from one task to another.  119  CHAPTER 6 ADAPTATION WITH SLALOM TASK AND IMPACT OF TRANSFER As discussed in Chapter 4, the scope of the practice scenario should not be focused on teaching a specific scenario to drivers, but rather should provide them with a chance to learn how the controls react to their inputs. As soon as drivers understand how the simulated vehicle responds to their inputs, they can use the new modified inputs to achieve their real-life goals in dealing with any scenario. The experiment in this chapter is partly designed to evaluate whether the skills drivers learn during a practice session can be transferred to the experiment. Furthermore, the impact of negative skill transfer is studied, as it can highlight the importance of a neutral practice scenario design. 6.1 Introduction The previous two chapters demonstrated that using a power curve can mathematically model the learning pattern of subjects for steering and pedal control and can also help identify the adapted and nonadapted subjects at the end of practice scenarios. The same concept is implemented in the current experiment in this chapter to evaluate the appropriateness of such a methodology for a different practice scenario. The second objective of this experiment is to verify whether adaptation to a driving simulator is task-independent. Therefore, the impact of the practice scenario on the main task is also analyzed to understand the extent and pattern of transfer from practice scenario to the main experiment scenario. Based on the results of these adaptation sessions, recommendations are provided to improve 120  the quality of the design for the practice scenario and to minimize its effect on the experiment scenario. 6.2 Methodology The new task defined for this study was 12 consecutive slalom tests in which participants were required to drive around 7 cones without hitting them, while trying to maximize their speed through practice. After completing all 12 trials, participants were tested on the same cornering scenario described in Chapter 5. Their performance measures were compared to those of participants in the previous chapter who had no prior exposure to a simulator. The first objective of the current research is to verify whether the same methodology (power curve) is applicable to another practice task, i.e., the slalom test. Therefore, the performance of participants was plotted against practice to verify whether a power curve can model learning. The second and more important objective is to study the extent to which skills learned in the slalom task can be transferred to the cornering task as a means to understand whether the practice scenario is task-dependent. Therefore, performance was defined in the same way as it was earlier, and the pattern of improvement on the cornering task was studied in detail. The adaptation pattern for the cornering task found in the previous chapter was considered as a baseline, and the participants who first drove the slalom test and then performed the cornering task were considered as an experimental group. 
The difference between the experimental and control groups can exemplify how another task, the slalom test, can impact the learning rate and time for adaptation to the cornering task. 121  The repeated measures or within-subjects method is used to analyze the learning pattern for both the slalom and cornering tasks. This method analyzes the impact of training and practice on the performance for each task separately. In the last section of this chapter we will use a between-subjects analysis to determine the difference between the group that had prior exposure to the simulator and the group that had no experience driving in the simulator, i.e., the control group. This comparison will clarify whether the practice scenario is taskdependent or whether different scenarios require the use of different practice tests. 6.3 Simulator The simulator used for this experiment, UBCDrive, is designed by Oktal and has a mock-up of a Hyundai passenger car. Further details can be found in 4.3. 6.4 Participants Nine female (average age: 28.5 years; average driving experience: 9.9 years) and 18 male (average age: 22.7 years; average driving experience: 6.2 years) participants from the University of British Columbia and the local community took part in this research. Consent forms were distributed on campus and in the local community. Thirty-one total requests were received from participants within 120 days, of which the first 27 were assigned to the current study and the last 4 were assigned to another research project. All participants had valid driver licenses at the time of the experiment. The participants were not monetarily compensated for participating in this research.  122  6.5 Experimental Design and Scenario The first portion of the experiment, shown in Figure.6.1, was a slalom test in which drivers attempted to score the least time possible while driving through 7 cones located 100ft apart. After the last cone, there was enough distance for the driver to return to normal driving conditions before starting the next trial.  Figure 6.1 Slalom scenario driving path.  There were 12 consecutive and identical trials divided by specific markings on the road indicating that the next set of cones was to appear shortly. The second part of the experiment was an exact repetition of the previous study discussed in Chapter 5. 6.6 Experimental Procedure Participants were given the option to withdraw from the experiment at any time if they could not continue for any reason. After signing the consent form, they were introduced to the simulator and the scenarios they were to drive. The first scenario was the slalom test (12 consecutive and identical trials), and participants were informed that their performance was the average speed between the first and the last cone, and that neither speed nor any other behavior would considered if it was outside of that area.  123  After participants completed the first part of the experiment, displays were turned off and participants were introduced to the second portion of the study, which required them to drive through 18 repetitive right and left corners. The break between the two scenarios was short, and participants did not leave the simulator cabin. They were asked to go over the curves as fast as they could and to try to stay in the center of the lane by keeping the fixed horizontal line within their own lane. 
They were informed that their performance would be a function of speed and also of how well and how stably they could keep the car inside the right lane while cornering along the curve. No performance measure was recorded on the straight segments of road between the two curves.

6.7 Performance Measures

For the first part of the experiment, the performance measure is defined as the average speed between the first and last cone for each trial and is calculated with Equation (6.1) below:

V_k = \frac{\sum_{x=x_k}^{X_k} v_t}{N_k}        Eq. (6.1),

where

k:     Trial index, between 1 and 12;
V_k:   Average speed for trial k;
x_k:   Distance where the first cone is placed on the kth trial;
X_k:   Distance where the last cone is placed on the kth trial;
v_t:   Subject's speed at each time stamp t;
N_k:   Total number of samples recorded between x_k and X_k.

In rare situations, when a subject could not successfully complete a trial (due to issues such as spinning the car, losing control of the vehicle, etc.), that specific iteration was ignored, and the total number of trials for that subject was reduced by one. This was done to eliminate data outliers and to focus on the learning trend rather than on single errors or distractions. Out of the 324 trials completed by 27 participants, the number of omitted iterations was limited to 19, which represents less than 6% of the total data points.

For the second part of the experiment, i.e., the cornering task, the average speed and the standard deviation of the lateral shift along each corner are considered to represent each subject's performance (similar to the study done in Chapter 5). In order to use the participants in our previous research as a control group, the formulation is defined to be consistent with the earlier study, and a 3-point moving average was used to smooth the pattern of the time series. The moving average was used to focus on the learning trend rather than on isolated values representing out-of-range performances, which may have been caused by the distraction, boredom, or impatience observed in some participants as a result of doing a repetitive task many times.

V_k = \frac{\sum_{t=t_k}^{T_k} v(t)}{N_k}        Eq. (6.2),

where

k:     Trial index, between 1 and 18;
V_k:   Average speed along the curve k;
t_k:   Time at the beginning of each curve;
T_k:   Time at the end of each curve;
v(t):  Subject's speed at each time t;
N_k:   Total number of samples recorded between t_k and T_k.

Equation (6.2) measures the average speed along the curve and reports one value per iteration. The smoothness of cornering along each curve is also analyzed by calculating an error function defined as the lateral shift standard deviation:

E_k = \mathrm{SD}(y_t)        Eq. (6.3),

where

E_k:   Error, defined as the standard deviation of lateral shift along the curve k;
y_t:   Road lateral shift measured from the road center line at time t.

6.8 Results

The adaptation times and patterns from this study (where subjects had a previous chance to adapt to the simulator on a slalom task) are compared to those of the previous study of the cornering task, in which subjects had no prior experience (explained in Chapter 5). The learning pattern for the slalom test was also studied to verify whether a power curve can provide a suitable mathematical model for adaptation to the slalom test as well.

6.8.1 Adaptation to the Slalom Task

The number of trials on the slalom task was intentionally small (12), so that not all subjects would adapt before the practice task was completed.
This provides samples of participants still learning at the end of the slalom task; therefore, it is possible to observe the effect of complete adaptation, or the lack thereof, on the subsequent cornering task.

The criterion for adaptation was to observe no improvement in speed during the last few trials. If a participant could still improve his or her speed in the last trial, s/he was considered to be in the learning phase. Twenty-six out of 27 participants showed steady improvement during practice, and the improvement pattern could be modeled by a power curve during the learning phase. A visual representation of performance improvement for a sample participant is provided in Figure 6.2, which shows that he adapted to the slalom test by the eighth trial.

Eleven participants out of 27 adapted completely before the slalom test ended. They adapted to the test within an average of 8.5 trials (4 female participants with an average of 7.7 trials, and 7 male participants with an average of 9 trials), with no significant difference (P > .4) between the male and female participants. The average learning rates for female and male subjects in this group were 0.24 and 0.29, respectively. Therefore, on average, female participants adapted at a slower pace; however, the difference again was not statistically significant (P > .4).

Figure 6.2 Pattern of speed improvement for a subject who adapted to the slalom test before it was completed.

Five female and 10 male participants still showed improvement at the end of the slalom test. The average learning rates for male and female participants in this group were 0.21 and 0.17, respectively. Once again, male participants had a faster learning rate, although the difference was not statistically significant (P > .29). For all participants, a positive but not significant correlation (0.24) was found between years of driving experience and the learning rate for the slalom test.

Only one participant had considerable trouble controlling the vehicle in this test. He did not show any significant or steady improvement throughout the test, his reactions remained erratic, and the maximum speed he could achieve was only 42.4 km/h, while the other participants on average reached a maximum of 66 km/h (SD: 11.32 km/h). Notes from the instructor also verified that he could not adapt to the simulator and was driving with no confidence. He was also one of the two participants who did not adapt to the cornering task. A summary of all findings for the slalom test is provided in Table 6.1.

Table 6.1 Summary of Learning Parameters for the Slalom Test.

                      Still Learning                                        Adapted
         Group   Max speed      Alpha*        R²            Group   Max speed       Alpha         R²
         size    (km/h)                                     size    (km/h)
M.       10      67.3 (7.2)**   0.21 (.09)    0.89 (.05)    7       77.7 (5.7)      0.29 (.10)    0.95 (.03)
F.       5       53.8 (4.3)     0.17 (.07)    0.78 (.12)    4       57.2 (12.1)     0.24 (.09)    0.94 (.05)
Total    15      62.8 (9.1)     0.20 (.09)    0.85 (.09)    11      70.3 (13.0)     0.27 (.09)    0.95 (.04)

All values shown in the table are averages for the group. * Alpha is the learning rate. ** Numbers in parentheses show the standard deviation. R² is the goodness of fit of a power curve to the performance data (speed). M: Male, F: Female.

6.8.2 Adaptation to the Cornering Task

Based on the definition provided to participants, a person was considered adapted when the error and speed values for both the right and left turns came to a stable or oscillatory pattern, i.e., improvement stopped.
If a person showed an oscillatory or stable pattern from the beginning, s/he was considered to fall into the automaticity region and so to have adapted right away. The only exception is when the oscillatory or converged speed (or error) is very different from the group average, in which case the participant is considered not adapted. A sample of the speed and error functions for one subject is depicted in Figures 6.3 and 6.4 for both the right- and left-turn maneuvers. This participant shows no further improvement after the eighth point in any measure, and so is considered adapted after 8 trials.

Figure 6.3 Speed and error values for one participant on the left turns.

Figure 6.4 Speed and error values for one participant on the right turns.

When there was continuous improvement in one of the metrics above, the performance could be modeled by a learning curve. An example of a power curve fit to the performance of the same subject on right turns is provided in Figure 6.5.

Figure 6.5 Learning phase of the same participant as in the previous figure for both performance functions, i.e., speed and error.

Similar to the results of Chapters 4 and 5, not all subjects showed a learning period on all performance measures in the course of the experiment. Some who adapted very quickly only showed the automaticity phase. Some participants had a learning curve on one or both measures, but most only had a learning curve for speed. The error measure mostly remained at the automaticity level, with no improvement during the experiment. A summary of adaptation measures for all the participants is provided in Table 6.2.

Table 6.2 Summary of Learning Parameters for the Cornering Test.

                           Left Turn                                         Right Turn
                 Speed                  Error                      Speed                  Error
          #Learning      Alpha*   #Learning      Alpha      #Learning      Alpha    #Learning      Alpha
          subjects (R²)┼           subjects (R²)             subjects (R²)           subjects (R²)
M.        9 (.96)   .09 (.06)     1 (.95)   .49 (na)        8 (.92)   .08 (.05)     2 (.87)   .38 (.12)
F.        7 (.95)   .06 (.03)     0 (na)    na (na)         3 (.89)   .04 (.02)     2 (.89)   .46 (.01)
Total     16 (.96)  .08 (.05)     1 (.95)   .49 (na)        11 (.92)  .07 (.04)     4 (.88)   .42 (.09)

┼ Numbers in parentheses show the average R² value for the learning curve fit to the performance data in the group.
* Alpha is the learning rate; values are group averages, with standard deviations in parentheses.
M: Male, F: Female. na: Not applicable.

One male and one female participant could not adapt to the cornering task. The maximum speed values they could reach on the left corners were 59.1 and 52.6 km/h, respectively, which was significantly lower (P < .001) than the average of the other participants at 89.8 km/h (SD: 5.3 km/h). The average speed on the right corners (62.8 and 51.9 km/h) for these two participants was also significantly lower (P < .001) than the average of the others at 87.1 km/h (SD: 10.4 km/h). Adaptation time for the other 25 participants ranged from 45 to 396 seconds, with an average of 192 seconds (SD: 97). Average adaptation times for female and male participants were 212 (SD: 96) and 183 (SD: 99) seconds, respectively, which did not show any significant difference between the two (P > .50). The distribution of adaptation times is shown in Figure 6.6; more than half of the participants (52%) required less than 3 minutes to adapt to the task.

Figure 6.6 Distribution of adaptation time for all the participants.
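For readers who wish to reproduce this type of analysis, the following sketch fits the power (learning) curve y = a·x^b to a performance series in log-log space and applies a simple check, in the spirit of the criterion used in this chapter, of whether improvement has stopped over the last few trials. The three-trial window, the tolerance, and the function names are assumptions made for illustration, not values prescribed by this study.

import numpy as np

def fit_power_curve(trials, perf):
    # Least-squares fit of perf ≈ a * trials**b via a log-log transform;
    # returns a, the learning rate b, and R² computed on the original scale.
    trials = np.asarray(trials, dtype=float)
    perf = np.asarray(perf, dtype=float)
    b, log_a = np.polyfit(np.log(trials), np.log(perf), 1)
    a = np.exp(log_a)
    pred = a * trials ** b
    r2 = 1.0 - np.sum((perf - pred) ** 2) / np.sum((perf - perf.mean()) ** 2)
    return a, b, r2

def still_improving(perf, window=3, tol=0.02, higher_is_better=True):
    # Compares the mean of the last `window` trials with the preceding
    # `window` trials; for the error measure, improvement means a decrease.
    perf = np.asarray(perf, dtype=float)
    last = perf[-window:].mean()
    prev = perf[-2 * window:-window].mean()
    change = (last - prev) / abs(prev)
    return change > tol if higher_is_better else change < -tol

Under this scheme, a participant would be treated as adapted once still_improving returns False for both speed and error on both the left and right turns, which mirrors the criterion stated above.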
Correlation between age and adaptation time was 0.51, and that of experience and adaptation was 0.53, which shows a moderate but not strong relationship between age, driving experience, and adaptation time, i.e., younger and less experienced participants tend to adapt to this task more quickly. 6.8.3 Comparing Control and Experimental Results (Impact of Practice) A between-subjects method is used to evaluate the difference between the control and experimental groups to measure the level of impact of practice on the main scenario. To study the extent of skill transfer from the slalom task to the cornering task, different performance measures of the participants in the  135  cornering task in the current study (noted as Group B hereafter) are compared to those of the participants in our previous study (Group A), in which no prior exposure to another task in a simulator was provided. Group B is also divided into two subgroups, Group BA and Group BL, representing those who adapted completely at the end of the slalom test and those who were still learning at the end of the slalom test. 6.8.3.1 Adaptation Time Participants in Group A adapted to the cornering task with an average time of 457 seconds (SD: 144), while participants in Group BA adapted to the cornering task within 156 seconds (SD:102), which is significantly lower than Group A’s time by 210 seconds (P< .01). Average adaptation time for Group BL was 221 seconds (SD:89), which was still significantly less than for Group A (p< .01). Adaptation time in Group BL was also longer than the 156 seconds of Group BA, but the difference between the two was not strongly significant (P=0.06). However, a closer look at the individual data points shows that one participant in Group BA demonstrated a long learning phase for the cornering task in comparison to the others, and in fact, she had the longest adaptation phase to the cornering task in the current study. If this data point is considered as an outlier, the average adaptation time to the cornering task for Group BA drops from 156 seconds to 132 (SD:67), and will be significantly lower than that of 221 seconds for Group BL (P< .01).  136  Therefore, adaptation to the slalom test had a significant effect in reducing the adaptation time to the cornering task. This suggests that a significant amount of skill transfer has occurred between the two tasks. Moreover, the level of transfer was shown to be significantly higher for those participants completely adapted to the practice scenario (Group BA) in comparison to those who were still in the learning phase at the end of the practice scenario. As long as Group BA is considered to be adapted to the simulator, the 156-second time observed in this group can be associated with learning how to perform the task and not as related to adaptation to the simulator. 6.8.3.2 Average Speed The average speed on each curve was calculated for all the participants in Groups A and BA, and the average of the groups on each curve was compared. The control group’s average speed on each curve was significantly lower than that of the experimental group (P<.02 for all left and right corners). Another interesting observation is that there is a learning curve in the cumulative behavior for each group, and this suggests that the performance function of a group during a learning process can also be modeled by a power curve. 
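The group contrasts reported in this subsection are straightforward to reproduce once per-participant adaptation times are available. The following sketch shows one way such a between-subjects comparison might be run; the use of Welch's t-test (which does not assume equal group variances) and the function name are choices made for this illustration, not necessarily the exact procedure used here.

from scipy import stats

def compare_adaptation_times(times_group_1, times_group_2):
    # times_group_1, times_group_2: sequences of per-participant adaptation
    # times in seconds (e.g., Group A versus Group BA).
    t_stat, p_value = stats.ttest_ind(times_group_1, times_group_2, equal_var=False)
    return t_stat, p_value

Repeating the comparison with the long-adapting participant removed, as was done above for Group BA, only requires filtering that value from the corresponding input sequence before calling the function.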
The existence of a learning curve at the group level indicates that in general, both groups showed improvement in increasing their speed throughout the scenario. The average speed for each group and the power curve fits to respective learning phases are shown in Figure 6.7 and Figure 6.8.  137  Figure 6.7 Average speed across all participants in control and experimental groups on left and right corners.  138  Figure 6.8 Learning curve fit to the average speed across all participants in control and experimental groups on left and right corners.  139  6.8.3.3 Average Error While the control group (Group A) improved their accuracy as they drove more during practice, no significant improvement was observed either for Group BL or Group BA. It was shown that (101, 102) averaging several performance functions in a group will always mathematically result in a performance function that can be better modeled with a power function, even if the individual performance functions are not perfectly modeled with a power curve. However, as shown in Figure 6.9 and Figure 6.10, even at the aggregate level, no learning curve could be fitted to this performance measure for the experimental group, while a good learning curve fit is observed for the control group at the aggregate level. Lack of a learning curve fit to group level error is an indication that in general the experimental group did not show improvement in accuracy throughout the scenario. Comparing the number of participants who showed a learning curve to this measure in both groups (1 to 12 for the left-,and 4 to 16 for the right-cornering error) also confirms that even at the individual level, considerably fewer participants showed a learning phase for decreasing error throughout the task.  140  Figure 6.9 Average error across all participants in control and experimental groups on left and right corners.  141  Figure 6.10 Average error across all participants in control and experimental groups on left and right corners with power curve fit test.  142  On both left and right corners, the difference in error between the two groups is not significant in the few first trials (P> .25). However, as the control group continued making progress in reducing the error, the difference became significant. After the eighth trial, the control group consistently showed significantly less error than the experimental group (P< .05). 6.9 Discussion Four issues require further discussion at this point. The first concerns why subjects who had been adapted to the slalom (Group BA) still show learning on cornering, and why adaptation to the second task was not instant. The second issue relates to the possible reasons to justify a higher converged speed for Group BA in comparison to the results of the previous study (Group A). The third issue deals with why only speed improved for Group BA, and why the error did not show any significant improvement in most cases. Finally, the results of the self-assessment values reported in Group A may not be representative of actual adaptation, and this will be explained. 6.9.1 Observation of Learning after Transfer At first glance, observing the learning curve in the cornering task for Group BA may suggest that those who completely adapted to the simulator in a practice task still need to adapt to the simulator when the task is altered. However, improvement in doing an unknown and difficult task takes place even in a real car, and should not necessarily be considered part of adaptation to the simulator. 
It is important to differentiate between learning how best to perform a new task and adapting to the simulator.  143  The disruptive effect of combining new and old tasks has been observed previously (e.g., 175,176), and is known as the context effect on learning and transfer. As suggested by Miller and Paredes (177) and Speelman and Kirsner (178), learning a skill in a new context forces the subject to rethink existing strategies and therefore, to reorganize previous knowledge, which in turn results in a temporary disruption of performance. Such disruption usually does not last long, and in most cases, practice of the new task will result in a quick transfer to the new task. These researchers conclude that such disruption in the performance of old skills, caused by a new task, has an impact on the performance, but is not representative of the actual learning. Therefore, learning curves observed at the beginning of the cornering task for Group BA could be associated with learning how to deal with the new task, not with adaptation to the simulator. 6.9.2 Higher Converged Speed for Group BA in Comparison to Group A As evident in Figure.6.6, although participants in the control group (Group A) had enough time to increase their speed, their improvement stopped and converged at a value well below those who had adapted on the slalom test prior to the cornering task (Group BA). This may be a residual effect from the slalom test, as the purpose of that test was to maximize speed, and so participants may have been more focused on finding strategies to improve speed. This can be better explained with the Component Theory of Skill Acquisition combined with the Theory of Complex Systems, as introduced by Halloy and Whigham (159). The Component Theory of Skill Acquisition defines  144  any skill as a set of processes that perform different subtasks existing within a skill. These processes improve with practice, and as soon as enough basic processes are in place, more complicated processes can be built by combining them. The Theory of Complex Systems, on the other hand, defines a complex system as a set of agents competing for resources based on greed. The survival of an agent depends on how it can attract resources and be part of a larger cluster of agents. The level of attraction between individual agents or between an agent and a cluster of agents is proportional to the amount of resources each possesses, i.e., agents with more resources are more desirable. It is also inversely proportional to how difficult it is to acquire the resources, i.e., agents will join others if this helps them to acquire resources more easily. If the human brain is considered to be a complex system, Speelberg and Kirsner (90) suggest that neurons (or a network of neurons) are agents in this system. They compete in processing the information; the better they perform in acquiring and processing that information, the more likely it is that they would be used for future similar tasks and the more likely it is that they will survive. If using an agent succeeds in achieving a certain goal, it becomes more attractive and can join other attractive agents or network of agents. In such a paradigm, learning is defined as the procedure by which the selection of the best agents takes place based on consecutive trials of a task. 
This selection process requires effort and time and puts a load on the system (the brain), but as soon as the best agents are finally selected and grouped together (automaticity level), performing the task becomes automatic, as the information is directly fed into the specific network of selected agents and the best result is quickly achieved.  145  In the context of the current study, subjects who drove in the slalom test focused on finding the agents that could help them achieve the defined goal, i.e., maximizing speed. During the practice session, such competitive agents gradually merged to shape a bigger network optimized to do a task as quickly as possible. Agents in Group A, on the other hand, were gradually selected and combined based on their competitiveness in achieving a different goal, i.e., maximizing speed while minimizing error. When the subjects in Group B drove in the cornering task, they used their existing strategy, which happened to include agents selected to achieve a higher speed, and hence could reach a speed higher than that of Group A. The impact of previous training and automaticity on performing a new task has been studied previously and confirms the above explanation that a residual effect from previous training can influence how a new task is performed. MylesWorsley et al. (179) suggested that the knowledge an expert has in a domain is determined and influenced by what specifically the expert does in that domain. These researchers studied three groups of radiologists with various levels of experience. Different x-ray slides were shown to all the subjects. Later, they were given some slides and asked to identify the ones they had seen earlier. The more experienced radiologists showed better performance in recalling the abnormal xray slides. However, they performed poorly in recognizing the normal x-ray slides in comparison to the less experienced radiologists. It was argued that the search for specific markers in abnormal x-ray slides reduces the chance of detecting other aspects of the slide, and thus, experience can bias the way we do things  146  toward more focused classes of stimuli-response in that domain and away from some others. Therefore, the strategy of Group B, trained for speed, will be more successful in achieving a higher speed, but at the same time, this group will have difficulty reaching the error levels achieved by Group A (discussed in the next section). 6.9.3 No Improvement in Error for Group B As mentioned above, the residual effect of training on the slalom task can explain the difference between the two groups, as Group B had not been focused on minimizing error in the slalom test. The subjects in Group B were not initially trained to minimize error, as they were free to drive on any path they chose to only maximize their speed through the slalom. As long as Group B had already developed a solid network of agents prior to starting the cornering task, they continued using the same strategies automatically and effortlessly, which led to higher error. The other explanation can be drawn from a body of research carried out to analyze drivers’ visual strategy while driving through corners. Mourant and Rockwell (180) suggested that as drivers become more experienced, they gradually look further away from the front of vehicle, while novice drivers are more focused on areas closer to the vehicle. Steering along a curve is believed to be controlled by two parallel processes, long- and short-range visual perceptions (181-183). 
Long-range visual perception relies on preview and prediction of the curvature ahead of the car and helps drivers to apply a gross  147  steering movement in advance of the curve. Long-range perception works as a feed-forward and open-loop process, which is based on previous experience and, as mentioned before, is more present inexperienced drivers. The shortrange perception, on the other hand, works in a feedback, closed-loop fashion by gathering information from the areas on the road closer to the front of the vehicle to fine-tune steering behavior at each point along the curve. Based on this model, drivers with more experience in driving the simulator in Group BA are expected to focus more on areas well beyond the point at which they are currently driving and to implement an open-loop gross steering input based on their experience. Therefore, they are not focused on the horizontal white line coded on the bottom of the center screen, which shows how accurately they are driving inside their own lane. On the other hand, drivers in Group A, who are not as experienced, are more focused on the areas closer to the vehicle, which includes the white line, and so they use a feedback mechanism to finetune the steering input more accurately. 6.9.4 Self-assessment Value It is worth comparing the self-assessment values for Group A with the adaptation time of Group BA to test the hypothesis that the self-reported time in Group A was actually when drivers became familiar with the task itself; i.e., it is not necessarily representative of adaptation time. The comparison shows that adaptation time to cornering for Group BA (average: 156 sec, SD:102) was not significantly different from the values of self-assessment values in Group A (average:122 sec, SD:25.7) with a confidence level of 99% (P> .29).  148  If, as argued in 6.8.3.1, the 156-second period in Group BA should be considered as the time to learn the new task rather than to adapt to the simulator, it may be concluded that what participants in Group A perceived and reported as adaptation is in fact the learning of the task, not actual adaptation to the simulator. Therefore, participants do not have clear judgment about when they are adapted to the simulator; their judgment is based on when they become familiar with the task, which may occur before they actually adapt to the simulator. 6.10 Summary Driver adaptation to a simulator and the impact of skill transfer from practice to the main scenario were analyzed in depth to identify important criteria in designing a practice scenario. It was concluded that adaptation times for female and male participants are not statistically different. It was also statistically shown that there is an amount of skill transfer from the practice scenario to the main scenario, and that most transfer was observed for those who had completely adapted to the practice scenario. The Theory of Complex Systems was used to lay out a cognitive framework that explains how the learning process takes place during adaptation in a simulator. The framework was used to explain the learning transfer from the practice task to the main scenario and also the potential bias it can introduce to the behavior. The difference between learning a task and adapting to a simulator was discussed, indicating that those who were adapted to the practice scenario only needed time to learn the new task, and that they required no further adaptation for the new scenario. 
This suggests that participants’ adaptation to a simulator is largely task-independent as long as the 149  practice scenario provides them with a chance to repeatedly practice a scenario using pedals and steering. Therefore, it was suggested that a scenario on a simple and straight segment of a road at a constant speed is not a proper set-up for a practice scenario. The findings also demonstrate that there is a residual effect transferred from the practice scenario to the main scenario, so design of the practice session should not introduce a bias transferred from practice to the main research scenario.  150  CHAPTER 7 CONCLUSION AND FUTURE RESEARCH 7.1 Conclusion and Summary of Findings Even until one hundred years ago, the human race’s experience of traveling at speeds higher than 30 km/h was very limited. However, in today’s world, a speed of over 100 km/h is easily achievable on the roads for a majority of people driving a car. Such a quick transformation in the way we move probably has exceeded the pace at which human nature has evolved and can adapt to the new task of driving. Driving is a complicated task, in fact, the most complicated task for most people, and is comprised of many subtasks that must be performed simultaneously. Therefore, driving requires a great deal of attention, especially at higher speeds. The fact that only a few seconds of inattentiveness can cause a fatal accident highlights the difficulty of a task most people perform on a daily basis. The probability of accident occurrence may seem low; however, considering the individuals’ and communities’ enormous reliance on road transport across the world (trillions of kilometers per year), this low probability translates into a massive number of people killed and injured on the roads. More than 1.3 million people die every year as a result of road accidents, and 50 million more are injured or disabled. The direct and indirect costs of accidents thus place an enormous burden on people’s health and well-being and on the economy of all countries around the world. In Canada, for example, the direct and indirect costs of road accidents exceed the federal government’s total expenditures on health, education, and defense.  151  Different approaches have been taken to minimize the number of accidents and also to reduce their impact when they occur. Governments in various countries have introduced strict rules and required auto manufacturers to comply with higher safety standards. Traffic laws have become stricter, and their enforcement in developed countries has contributed to lowering the relative number and reducing the severity of accidents. However, a silent problem that is still taking lives is driver distraction/inattentiveness. Recent studies have shown that almost 80% of crashes occur as a result of driver distraction (8). Therefore, research into driver behavior and an understanding of how, why, and when drivers become distracted can significantly help lower the number of accidents by offering ways to better design cars, road geometry, and road elements. Research into driver behavior was traditionally carried out in the field. However, with the availability of powerful computers and advancements in the area of virtual reality during the past decade, driving simulators have gained more popularity and acceptance in behavioral research. 
Driving simulators provide a fully controlled environment in which drivers' reactions to certain programmed stimuli can be studied safely without compromising other road users' safety. However, to ensure that studies in driving simulators are valid and the results can be generalized to real-world scenarios, certain conditions should be met. Driving simulators are not a perfect reflection of the real driving experience, and the simulated vehicle does not convey the exact same feel as a real car. Therefore, there is an adaptation period at the beginning of the experiment in which drivers attempt to adjust to the simulator and learn how to comfortably control the simulated vehicle. The learning process involved in this adaptation, like the learning of any other skill, imposes a mental load on drivers, and as a result, the mental resources available for driving itself are reduced compared to what would be available in the field. Therefore, it is common in the literature to allow participants a practice session so that they adapt to the simulated vehicle; however, the practice scenario, its length, and its importance have not been addressed in detail.

If drivers are not completely adapted, their reactions may lead researchers to draw conclusions that overestimate or underestimate the drivers' ability to deal with the driving task. If the experiment scenario is demanding, the drivers' reactions in a simulator may lead to an underestimation of their real ability to manage such a task, as drivers do not have enough free mental resources to tackle the task and respond to the stimuli. On the other hand, it has been demonstrated that in performing very simple tasks, drivers are more susceptible to error and distraction in comparison to tasks that require closer attention. From this perspective, for very simple tasks, the mental load imposed by adaptation keeps drivers alert, and so they may show a better reaction in scenarios such as extended driving on straight rural roads.

There are thus several benefits to research that analyzes drivers' adaptation patterns to driving a simulator and identifies when each participant adapts. Technically, the conclusions are more valid and can be more easily related and generalized to the same scenarios occurring in the field. Ethically, participants who are identified as nonadapting are excluded from the study and are therefore less exposed to the potential effects of simulator sickness. In financial terms, identifying the time required for each driver to adapt helps reduce unnecessary practice time for participants and consequently cuts down on the costs of a study. Moreover, a practice scenario should be designed not only to provide drivers with a chance to adapt to a simulator, but also to ensure it is neutral and will not affect their driving behavior in the main experiment. The residual effect of practice was previously studied, and research in the field of psychology has indicated that people are prone to carry over certain subskills from a practice session to the main experiment. This residual effect can distract a driver's attention from some of the subtasks of driving and bias the final results.

Adaptation is a subjective concept and is difficult to measure or analyze. This could be the reason that it has not been adequately studied in the research literature.
However, due to the importance of adaptation for research on driver behavior in simulators, the current study attempts (for the first time, to the best of the author's knowledge) to closely analyze how and when adaptation occurs in a simulator and to provide recommendations on how to design a proper practice scenario that does not negatively affect the experiment results. Three experiments were carried out to analyze drivers' adaptation to the gas and brake pedals and the steering wheel, and also to study the impact of skill transfer from one task to another. In the following sections of this chapter, summaries of the experiments and the resulting conclusions are briefly presented.

7.1.1 Chapter One: Introduction

This chapter of the thesis provided background material to explain why research on driver behavior in a simulator can help improve road safety. Adaptation in a driving simulator was defined, and its vital role in research validity was examined. It was explained that if the adaptation period is not known, participants may not be given enough time for proper adaptation, and as a result the conclusions of the research may be negatively affected. The motivations and purpose of the research were identified, and the contribution of this research was explained.

7.1.2 Chapter Two: Literature Review

The second chapter was dedicated to reviewing the previous work in the field of driver behavior research in driving simulators as well as studying the concepts and history of how learning has been addressed and analyzed in the field of psychology. The first section highlighted the current gap in the literature, while the second section provided the theoretical background related to how to address and analyze the problem at hand. Five major approaches to adaptation were identified in the literature, as explained below:

1. The first group of researchers did not provide any detail on whether or not they had used any practice sessions prior to the experiment. If no practice session is implemented, there is a risk that the results of such studies are not valid enough to be generalized to their real-world counterpart scenarios.

2. The second group, which constituted the majority of the identified studies, used a practice scenario with a fixed time or fixed distance to ensure participants had adapted before they started the experiment. The time span for such a practice session ranged between two minutes and two full days, although the studies did not provide enough detail on why a specific time or distance was chosen for the practice session. There was also no explanation of why the researchers selected a specific scenario for the practice session. Adaptation to a driving simulator depends on the fidelity of a given simulator, i.e., how well it can represent the real driving experience, and also on the learning rate of participants. Therefore, a fixed time found to be sufficient for one person to adapt to a specific simulator cannot be assumed to suffice for other people on that same simulator or, more broadly, for participants in other simulators. Employing the fixed-time or fixed-distance method, as currently discussed in the literature, therefore cannot provide researchers with a standard procedure for determining the length of the practice scenario, nor can it offer a tool to ensure participants are in fact adapted at the end of the practice scenario.
3. The third group used driver self-assessment to tailor the length of the practice time to each individual. Participants drove in the simulator until they felt comfortable controlling the simulated vehicle. Several studies, in assessing driving as well as other skills, have identified an "over-the-average" tendency among participants, i.e., people tend to overestimate their abilities for various reasons. Therefore, as was further shown in Chapter 5, such a biased measure cannot properly pinpoint when participants have adapted to a simulator.

4. The fourth group mentioned that a practice scenario was used prior to the experiment but did not explain the length of the practice time or the scenario used for the practice session at all.

5. One study identified in the literature specifically dealt with adaptation. The methodology in this research involved observing steering corrections of more than 6 degrees as an indication that participants were still in the adaptation process. The study mainly aimed to determine whether there was any significant difference between the time required for younger and older drivers to adapt to the steering wheel. Several shortcomings were identified in this research. First of all, the scenario was on a straight segment of road that did not provide a proper opportunity for participants to extensively use the steering wheel and hence adapt to it. Adaptation is a trial-and-error process by which participants try to fine-tune their inputs to achieve a desired output by repeatedly performing a task. If the task does not require participants to use the steering wheel frequently and extensively, they will not be able to ascertain the steering response to all possible inputs they may apply during an experiment. Moreover, the scenario only dealt with adaptation to the steering wheel, and therefore adaptation to the vehicle's gas and brake pedals was not studied. Since most simulators are fixed-base and no feeling of acceleration is transferred to the participants, participants need some time to adjust to the virtual acceleration and deceleration of the simulated vehicle by looking at the speedometer. Therefore, the proposed methodology cannot provide a tool to measure adaptation to the gas or brake pedals, and it does not provide any suggestion on how to design a practice scenario for that purpose. Furthermore, the analysis used a restrictive benchmark of 2 or more steering reversals in any 60 seconds as an indication of the adaptation process. This is a subjective measure, and because driving styles differ among drivers, such a fixed benchmark cannot provide a flexible methodology to distinguish adapted and nonadapted participants.

Based on the review of recent research in the driving simulator community, a gap was identified concerning how to address adaptation in detail and how a practice scenario, in general, can impact the validity and conclusions of the research. There was a need to understand the process by which learning progresses in the human mind, and therefore a review of the literature in the field of psychology was carried out to identify the methodologies and approaches to analyzing learning.
The second section of Chapter 2 provided a brief review of the concept of learning, including the history of research into learning and the theories and concepts of the learning curve, which were later used throughout this thesis to analyze the adaptation pattern and explain the possible reasons behind each observed pattern.

7.1.3 Chapter Three: Methodology

The proposed methodology to measure and analyze the adaptation pattern was explained in this chapter. Because adaptation and learning are subjective measures, performance functions were developed to reflect the extent of learning in doing a specific task. Based on the literature review in the field of psychology, and based on the available facilities in UBCDrive, two parameters were included in the performance function: the time and the accuracy of doing a task. A specific performance function based on these two variables may differ for various tasks, but three examples were provided of how a performance function was developed for each of the three experiments carried out in this research. The validity of the methodology was studied with respect to internal, external, and construct validity, and the threats to the methodology's validity were identified and addressed.

It was argued that for the best internal validity, the practice scenario should comprise a series of exactly identical trials of the same task, so that any change in performance can be specifically attributed to the amount of practice and not to any other factor. Therefore, a time-series method with identical trials of the same task was defined to analyze the performance improvement in adaptation to the steering wheel and the gas and brake pedals. Practice scenarios were defined to provide the participants enough opportunity to become familiar with the vehicle's response to a broad range of their inputs. For adaptation to the pedals, the practice scenario was defined as a set of requests asking the drivers to change their speed from V1 to V2 as quickly as possible and to maintain it at V2 until the next message was shown. The transitional part gave drivers an understanding of the extent of acceleration and deceleration available to them, and the constant part helped them maintain a fixed speed without receiving any real acceleration or deceleration cue from the simulated vehicle. The performance function was defined so that it reflects reaction time and precision in doing the task. The fastest transition between any two speeds was recorded as the ideal transition, and the ideal speed pattern after reaching V2 was a constant line at that speed. The participant's speed was measured during each iteration of the task, and the difference between the driver's and the ideal values was calculated as the performance function. The function accounted for both how fast drivers reacted and how accurately they could maintain their speed.

For steering wheel adaptation, a series of sharp left and right corners was defined as the practice scenario, with short segments of straight road between every two corners. This approach provided a task requiring a large range of inputs to the steering wheel, and hence drivers could experience the vehicle's response to a range of inputs. The performance function was defined as the standard deviation of the road lateral position, as this has been repeatedly identified as a proper reflection of the driver's mental load. To prevent participants from driving slowly to compensate for smoothness, it was decided to also include the speed of completing each maneuver in the performance measure. To ensure all the trials were identical, it was suggested that the speed and accuracy of the right and left turns be studied independently.
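The two performance measures just described can be sketched in code. The snippet below is illustrative only, not the thesis implementation: the array names, the fixed-sampling assumption, and the use of the mean absolute difference to aggregate the speed error are assumptions.

```python
# Illustrative sketch of the two performance functions described above.
# Inputs are assumed to be equally sampled simulator logs for one trial.
import numpy as np

def pedal_performance(speed, ideal_speed):
    """Difference between the driver's speed trace and the ideal trace
    (fastest possible transition to V2, then a constant hold at V2),
    aggregated here as a mean absolute difference.  Lower is better."""
    speed = np.asarray(speed, dtype=float)
    ideal_speed = np.asarray(ideal_speed, dtype=float)
    return float(np.mean(np.abs(speed - ideal_speed)))

def steering_accuracy(lateral_position):
    """Standard deviation of lateral position over one corner -- the accuracy
    component of the steering performance measure.  Lower is better."""
    return float(np.std(np.asarray(lateral_position, dtype=float)))

# Synthetic example for a single speed-change trial (V1 = 50, V2 = 80 km/h):
t = np.linspace(0.0, 10.0, 200)
ideal = np.clip(50.0 + 10.0 * t, 50.0, 80.0)            # fastest feasible ramp, then hold
driver = ideal + np.random.normal(0.0, 1.5, t.size)     # driver trace with tracking noise
print(pedal_performance(driver, ideal))
print(steering_accuracy(np.random.normal(0.0, 0.25, t.size)))
```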
To measure how skills learned in one task can be transferred to another similar task, a between-group analysis was defined to compare the results of a control group with an experimental group. The methodology was to give a slalom test to the experimental group and then ask them to drive the cornering task described above. Their pattern of learning and adaptation would then be compared to that of those who had not had any previous experience with driving in a simulator. A slalom task was defined for the experimental group, which consisted of 7 cones placed 100 ft apart from each other. There were twelve identical repetitions of the cone set, separated by straight segments of a wide road. The drivers were asked to maneuver around the cones and try to maximize their speed. It was decided that no accuracy measure would be required from the participants, so they could focus only on maximizing their speed. This approach was intended to analyze the impact of concentrating on certain aspects of driving and how this would affect the transfer of skills from one task to another. The difference in speed and accuracy between the experimental and the control group could show the extent to which transfer can occur from a practice scenario to an experiment.

To improve the quality of the methodology, the threats to internal validity were identified and addressed. It was concluded that to improve internal validity, the traffic flow should be kept as simple and homogeneous as possible in order to rule out the impact of traffic flow on performance. Audiovisual distractions were identified as other threats, and it was decided to keep the room as quiet as possible, to ask participants to turn off their mobile phones, and to cover the walls with black curtains. Driver distraction was controlled by designing the scenario so that it was demanding enough that participants did not have many free mental resources available to dedicate to other tasks.

Limitations of the methodology were also identified at the end of this chapter. Limited external validity, which is inherent in laboratory experiments in general, was identified as the most important limitation of the current methodology. The ability to confidently generalize the results found in this research to other simulators and to participants from other socioeconomic backgrounds can be improved in the future by repeating the research in other simulators and recruiting participants from a larger spectrum of the population.

7.1.4 Chapter Four: Adaptation to Gas and Brake Pedals

The first experiment explained in this chapter included a driving task for adaptation to the gas and brake pedals. Participants were required to change their speed as soon as a new speed was announced on the simulator's central display. The performance function suggested in Chapter 3 was further explained and developed in this chapter and used to measure the performance of participants as they carried out each trial of the task. A few pilot experiments were carried out to fine-tune the scenario. It was concluded that a scenario without opposing traffic was boring, and that participants became distracted a few minutes into the experiment.
Therefore, some traffic was added to the scenario to engage drivers more with the simulator. Transition speeds in the pilot studies were deterministic, and so after a few trials participants knew what speed would come next. To eliminate the role of guessing in performance, the transitions were coded as random changes. Thirteen participants, 4 females and 9 males, took part in the final stage of this experiment.

Qualitatively, three different adaptation patterns were detected. The first pattern, which was observed in most cases, was a steady learning pattern that could be modeled with a power curve, as hypothesized before. This provided evidence that adaptation in a simulator can be modeled with the concepts of a learning curve. The second pattern belonged to a group that required only one or two trials of the task for adaptation and that suddenly, after the first or second trial, showed a consistent and improved performance for the rest of the experiment. The third group consisted of those who showed a random performance measure throughout the experiment and were identified as a nonadapting group.

An algorithm was suggested to identify the adapting and nonadapting participants based on their performance measure values. After each trial of the task, the performance measures of the last few trials were examined. If a learning curve could be fitted to these performance measures, the participant was considered to be still in the adaptation process. If the performance values of the last few trials showed a random oscillatory pattern, the person was identified as either adapted or nonadapting. The difference between these two groups was shown to be the performance measures of the first few trials. If the participant had shown the same oscillatory values from the beginning, s/he was considered nonadapting. On the other hand, if a participant showed a history of worse performance that suddenly improved to oscillatory values much lower than the initial values, s/he was considered a fast adapter. Distinguishing between the two patterns, although easy for a human, was not straightforward mathematically. Therefore, another function, the Cumulative Error per Unit (CEPU), was defined and used to differentiate between the two groups. If the participant had a significant and quick history of improvement in his or her performance, the CEPU could be modeled with a power curve. For those participants without any improvement in their history, the CEPU could not be modeled by a power curve. As a result, when a learning curve could not be fitted to consecutive performance measures, a successful power-curve fit to the CEPU values was an indicator of a quickly adapted participant. Otherwise, the participant was identified as nonadapting. A flowchart was provided at the end of this chapter to better show how the process can be modeled.
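A compact sketch of this decision logic follows. It is illustrative only: the thesis's exact fitting procedure, window length, and thresholds are not reproduced, the "power-curve fit" is reduced to an R² check on a log-log linear fit, and the CEPU is interpreted here as the cumulative error divided by the number of trials completed; all of these are assumptions.

```python
# Illustrative sketch of the adapted / adapting / nonadapting classification
# summarized above.  Window size, the R^2 threshold, and the log-log fitting
# shortcut are assumptions, not the thesis's exact algorithm.
import numpy as np

def power_fit_r2(values):
    """R^2 of a power-curve fit P(n) = a * n**(-b) to positive values,
    obtained as a linear fit in log-log space."""
    y = np.log(np.asarray(values, dtype=float))
    x = np.log(np.arange(1, y.size + 1))
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (intercept + slope * x)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - np.sum(residuals**2) / ss_tot if ss_tot > 0 else 0.0

def classify(perf, window=5, r2_threshold=0.8):
    """perf: per-trial performance values in trial order (lower = better)."""
    perf = np.asarray(perf, dtype=float)
    recent = perf[-window:]
    # Recent trials still follow a decreasing learning curve -> still adapting.
    if recent[-1] < recent[0] and power_fit_r2(recent) >= r2_threshold:
        return "adapting"
    # Otherwise the recent values oscillate; use the CEPU history to decide.
    cepu = np.cumsum(perf) / np.arange(1, perf.size + 1)   # cumulative error per trial
    if power_fit_r2(cepu) >= r2_threshold:
        return "adapted (fast adapter)"   # a quick, sustained early improvement
    return "nonadapting"                  # oscillatory from the start, no improvement
```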
7.1.5 Chapter Five: Adaptation to Steering Wheel

Chapter 5 provided evidence that adaptation to the steering wheel can also be modeled by a power curve. An experiment was carried out that consisted of 18 right and left turns. Participants were asked to keep the vehicle in the center of the right lane as they drove through the corners. There was a risk that some drivers would attempt to decrease their speed to achieve better accuracy. Therefore, the performance measure, as introduced to the drivers, had two components, speed and accuracy. They were told that their performance was a function of how smoothly they could keep their car inside the right lane and also how fast they could complete the experiment.

A few pilot studies were performed, and it was concluded that drivers did not have a good understanding of how well they were keeping the vehicle inside the right lane. They could not properly see the bonnet of the car on the displays, and therefore they received no feedback on whether or not they were in the center of the right lane. To overcome this issue, a horizontal white line was coded at the bottom of the center screen, which covered the right lane when the vehicle was exactly in the center of the lane. If the car shifted to one side, the line would extend beyond the lane width, and so it provided visual feedback to the drivers on how well they were controlling the car.

Twenty-five participants, 16 males and 9 females, participated in the experiment. Qualitatively, a few phases were identified during the adaptation. The first phase was identification, in which drivers tried different approaches to control the car without necessarily attempting to improve their performance. After a few trials, the actual learning process started, which could be modeled with a power curve. A few participants also showed a plateau period in which their performance did not progress significantly before it started to improve again. The last phase was an automaticity period, in which the performance function value no longer improved and converged around a fixed value. Some subjects also showed no improvement at all from the beginning, and their performance function values were randomly scattered across the trials.

Quantitative analysis indicated that more than half of the participants showed a learning period, either for accuracy or for speed, which could be modeled by a power curve. Most of the subjects who did not show any improvement in their performance were in the automaticity period, as their performance was above the average. Only two participants, who showed a random performance pattern, were identified as nonadapting. While other subjects reached an average of 74.4 km/h in the automaticity phase (SD = 9.4 km/h), the nonadapted subjects could only reach 44 and 45 km/h, respectively, which is significantly lower than the average. A participant was considered adapted when s/he was in automaticity mode for both performance measures, i.e., speed and accuracy. The average adaptation time was 456.6 sec, and it was shown that the average adaptation times for male (456.8 sec) and female (456.4 sec) participants were not statistically different. Approximately 20% of the subjects required more than 600 seconds of practice for adaptation, with a maximum of 721 seconds for one subject.

The participants were also asked to identify the time at which they thought they were adapted to the simulator and could comfortably control the simulated vehicle. The results of the quantitative analysis were compared to the self-assessment values, and it was demonstrated that there was no correlation between the two. The comparison also showed that participants usually overestimated their abilities and reported adaptation well before they were in fact adapted. The other important outcome of this experiment was that, due to the complexity and variety of adaptation patterns, predicting adaptation time by extrapolating the initial performance values was not possible in most cases.
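The power-curve model and the automaticity (plateau) check described for this experiment can be sketched as follows. The plateau criterion used here, a low coefficient of variation over the last few trials, is an assumption for illustration rather than the thesis's exact rule.

```python
# Illustrative sketch: fitting the learning curve P(n) = a * n**(-b) to one
# participant's per-trial performance and checking for automaticity.  The
# coefficient-of-variation plateau rule is an assumption.
import numpy as np
from scipy.optimize import curve_fit

def power_curve(n, a, b):
    return a * n ** (-b)

def fit_learning_curve(perf):
    """Return the fitted (a, b) of P(n) = a*n^(-b) for per-trial performance."""
    perf = np.asarray(perf, dtype=float)
    n = np.arange(1, perf.size + 1, dtype=float)
    (a, b), _ = curve_fit(power_curve, n, perf, p0=(perf[0], 0.5), maxfev=5000)
    return a, b

def in_automaticity(perf, window=5, cv_threshold=0.10):
    """Treat the last `window` trials as a plateau when they oscillate within
    a narrow band around their mean (low coefficient of variation)."""
    recent = np.asarray(perf[-window:], dtype=float)
    return float(np.std(recent) / np.mean(recent)) < cv_threshold

# Synthetic learner whose error shrinks roughly as n^(-0.4) over 18 corners:
trials = np.arange(1, 19, dtype=float)
perf = 3.0 * trials ** (-0.4) * (1.0 + np.random.normal(0.0, 0.05, trials.size))
print(fit_learning_curve(perf))   # estimated (a, b), close to (3.0, 0.4)
print(in_automaticity(perf))      # typically True once the curve has flattened
```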
The following are the conclusions from the experiment conducted in this chapter:

1. It was shown that the fixed times used for adaptation in the research literature may often not be enough for adaptation;

2. The adaptation time for female and male participants was analyzed separately to show that there is no significant difference between the two;

3. It was shown that participants do not usually have a proper judgment of when they are adapted to the simulator, and therefore self-assessment is not a proper methodology to identify adaptation time in simulators.

7.1.6 Chapter Six: Adaptation to Slalom Task and Impact of Transfer

There were two goals in performing the experiment in this chapter. First, the suitability of a power curve to model adaptation to another task, i.e., the slalom test, was studied. More importantly, it was necessary to understand whether or not the skills learned in a practice scenario can be transferred to the experiment and what potential negative consequences such a transfer can have. The experiment consisted of two sections. In the first section, drivers were asked to drive around 7 cones as quickly as they could. Each participant performed this practice 12 times. After completing the first part of the experiment, they were asked to drive the same cornering scenario described in Chapter 5.

Twenty-seven participants (18 males and 9 females) took part in the experiment. It was shown that for twenty-six participants, there existed a learning curve for improving speed in performing the slalom task. Eleven participants were identified as completely adapted at the end of this task, and fifteen were still in the learning process and had not reached a stable value for their performance function. Those who were identified as adapted at the end of this section showed an average maximum speed of 70.3 km/h, while the other group, still in the learning process, had an average maximum speed of 62.8 km/h.

For the second section of the experiment, which involved driving through left and right corners, most participants showed improvement in gaining speed throughout the test; however, they showed no significant improvement in reducing error (accuracy). Adaptation time to this task was calculated for the two groups: those who were adapted at the end of the slalom test (Group BA) and those who were still in the learning phase (Group BL). The adaptation times were compared to the results of the participants in the previous experiment in Chapter 5, in which participants did not have any prior experience in the simulator (Group A). Group BA adapted to the cornering task within 156 seconds, which was 210 seconds lower than Group A's time, a significant difference. The average adaptation time for Group BL was 221 seconds, which was still significantly less than for Group A (p < .01). By identifying and removing an outlier value in Group BA, the average adaptation time to the cornering task for Group BA was reduced to 132 seconds, which was significantly lower than the 221 seconds for Group BL (p < .01). Therefore, a very important conclusion was drawn: adaptation to the slalom test had a significant effect in reducing the adaptation time to the cornering task. This showed that a significant amount of skill transfer occurred between the two tasks.
Moreover, it was statistically shown that the level of transfer was higher for those completely adapted to the practice scenario (Group BA) in comparison to those who were still in the learning phase at the end of the practice scenario. The average speed over each corner was calculated for Group BA and Group A. The average speed, at the group level, showed a steady increase and could be very well modeled by a power curve for both groups. This provided evidence that both groups showed improvement in speed as they drove through the cornering experiment. However, the calculations revealed that the average speed of Group BA was significantly higher for all the iterations of the task. The observation of a learning curve for speed in Group BA in the cornering task was associated with learning the cornering task and not with adaptation to the simulator; this argument was backed up by several studies in the field of psychology and learning. The average error (standard deviation from the center line) was also calculated. Participants in Group BA did not show any improvement in reducing error at the group level. For the first 8 trials of the cornering task, there was no significant difference between the two groups. However, as participants in Group A reduced their error, the difference between the two became significant after the 8th trial. No power curve could be fitted to the average error of Group BA, while a very good power-curve fit was found for Group A at the group level. Using the Component Theory of Skill Acquisition combined with the Theory of Complex Systems, it was concluded that the participants in Group BA were more focused on developing the subskills that helped them increase their speed during the slalom task, without paying any attention to accuracy. Therefore, they carried those subskills over to the cornering task, which resulted in higher speeds and lower accuracy. This once again confirmed that the skills learned in a practice scenario can be transferred to the experiment, and that adaptation to a driving simulator is largely task-independent if the practice scenario is designed properly.

7.2 Research Contributions

The following research contributions were achieved in this thesis.

7.2.1 Chapter Two: Literature Review

•  A gap in the literature was identified on how driver adaptation should be addressed and analyzed. Different methods of addressing adaptation were identified, and their shortcomings were highlighted by using several research examples and explaining the potential threats to their validity;

•  A review of the literature in the field of psychology was carried out, which provided a robust foundation and background material for future researchers to further analyze adaptation and learning in a driving simulator.

7.2.2 Chapter Three: Methodology

•  A flexible methodology was developed that can be modified and applied to future research topics. The threats to the methodology's validity were studied systematically with regard to internal, external, and construct validity. Although the threats to validity are different in each research study, this systematic approach can provide an example of how the validity of future methodologies should be verified.

7.2.3 Chapter Four: Adaptation to Gas and Brake Pedals

•  It was shown that adaptation can be modeled by a power curve.
This provided evidence that adaptation in a driving simulator is a learning process and that, like the learning of any other skill, adaptation imposes a mental load on participants. This extra mental load may change the driver's reaction to the scenario and hurt the validity of the research;

•  Adapting, adapted, and nonadapting participants were identified by observing the performance function's values and patterns over time;

7.2.4 Chapter Five: Adaptation to Steering Wheel

•  Different stages of adaptation to a driving simulator were identified. The learning stage was shown to follow a power-curve format. However, the existence of other irregular patterns, like identification and plateau, meant that a prediction of adaptation time was not possible at the beginning of a practice session;

7.2.5 Chapter Six: Adaptation to Slalom Task and Impact of Transfer

•  It was statistically shown that adaptation to a task in a simulator has a significant effect on adaptation to another task. This provided evidence that skills learned in a practice session are transferred to the experiment and that, therefore, adaptation is largely task-independent;

•  It was shown that the practice session can have a negative effect on the experiment if the practice scenario is not designed properly. It was revealed that when drivers are focused on certain subskills during the practice, they will continue using and improving those subskills and will undermine other aspects of the experiment;

7.3 Recommendations

Based on the literature review and the conclusions from the experiments, the following recommendations are provided to help researchers design and conduct a better practice scenario.

7.3.1 Repetition of Identical Task

As previously discussed, learning is a trial-and-error process by which drivers test their strategies to control the car. Therefore, the scenario should provide opportunities for them to modify all their driving skills (distance judgment, pedal and steering control). Driving on a straight road at a constant speed, as drivers mostly do within a community, will not allow drivers enough opportunity to perceive the response of the pedals and steering to all possible inputs. Moreover, providing a repetitive scenario in which drivers practice the same task will help the researcher track learning under identical conditions and verify whether adaptation has in fact occurred. If the scenario does not have a repeated-measures structure, it will be almost impossible to judge if and when a driver is adapted.

7.3.2 Blocked vs. Random Practice

As was discussed in Chapter 2, it is recommended that the practice scenario be designed in block format. Research has shown that block-format practice results in faster learning and therefore can reduce the adaptation time and hence the cost of research. The only drawback identified for the block format is the retention rate, i.e., how well the participants remember the skills they learned as time passes. Since retention is not of great importance to most research topics related to a driving simulator, it is recommended that researchers provide repetitive and identical tasks for acceleration, deceleration, and steering. This will result in the fastest adaptation time. As designing a scenario with continuous repetitive acceleration or deceleration is unrealistic, it is recommended to merge the tasks for gas and brake adaptation, similar to the approach explained in Chapter 4.

7.3.3 Level of Difficulty

The scenario should not be undemanding.
If the practice scenario is easy, participants will have enough mental resources to dedicate to other activities, which can potentially hurt their performance. If distraction elevates, the recorded performance values will contain so much noise that the real signal (performance) cannot be retrieved, which will make it difficult to analyze the impact of practice on learning and adaptation. Moreover, a simple task may not expose the participants to situations in which they are required to apply extensive input to the steering and pedals. Therefore, an easy task will not teach the drivers the complete spectrum of a simulated vehicle's response to their various inputs. On the other hand, the practice task should not be very difficult. If the practice is too demanding, it may cause participants to become bored and distracted during the experiment, after the practice task is finished. If the practice task is too challenging, their expectation of the difficulty of driving in a simulator will become unrealistically inflated. Therefore, the difficulty of the task should be similar to that of the task that will appear in the experiment.

7.3.4 Not Focusing on Specific Style of Driving

The scenario should not be defined in such a way that drivers focus on one specific style of driving. As shown in Chapter 6, an improper practice design can introduce unwanted bias, as drivers tend to focus on the specific subskills that they have practiced more. Such a bias towards a specific aspect of driving can negatively affect the internal validity of the research.

7.3.5 Feedback During Practice Session

Indirect feedback, like the white line coded on the bottom of the screen in Chapter 5 or the current speed readout on the speedometer in Chapter 4, can help participants get a better idea of how well they are performing a task. In the absence of most of the physical cues available in real cars, providing some type of feedback to the participants can help them better fine-tune their next inputs, and therefore can potentially reduce the adaptation time. However, as was previously discussed, if feedback is direct and frequent, it will encourage dependency and reduce a deeper understanding of how the controls respond to the driver's input. Therefore, it is suggested that researchers provide minimal feedback, only for the purpose of systematically alerting drivers to how well they are performing each iteration of the task, and not to guide them on how they can perform the task better.

7.4 Future Research

To the best of the author's knowledge, this research was the first study in the driving simulator community to analyze the role of a practice session and of adaptation in the validity of research, and therefore there are several avenues for future research and improvement of the methodology.

7.4.1 Selection of Participants

The selection of participants in this research from among mostly undergraduate and graduate students at the University of British Columbia (UBC) was constrained by the time and budget limitations of this study. However, the population selected in this way is not a perfect representation of society. More subjects from different socioeconomic and age groups could be invited to take part in similar experiments to validate that the results from the current research hold for a more representative sample of society.
Moreover, a broader selection of participants will help identify any differences between the adaptation times required for older and younger drivers. Any such difference can justify different practice times tailored to each age group in future research.

7.4.2 Adaptation to Manual Transmission

The driving simulator at UBC utilizes an automatic transmission, and as a result it was not possible to measure adaptation to a manual transmission. Changing gear in a manual transmission vehicle is a complicated task that requires practice in simulators prior to an experiment, and thus defining a proper performance function and tracking improvement for it can be a subject for future study. The performance function should consider how fast and how smoothly participants can change gears. One indicator of smooth performance is the jump in the vehicle's acceleration: a driver with a large acceleration (or deceleration) right after releasing the clutch pedal still needs practice to learn how to apply the proper inputs to the gas and clutch pedals for a smooth transition. Therefore, a possible candidate for the performance function for changing gears in a manual transmission simulator is the derivative of acceleration with respect to time, which is known as jerk in physics. As discussed before, it is a good idea to ask participants to change gears as quickly and as smoothly as they can, so that they do not trade speed for smoothness. The procedure may be implemented as explained in Chapter 4 of this thesis, i.e., a note on the screen informs the subjects to change gears, possibly followed by a beep to ensure they do not miss a transition. The time and the smoothness of each transition (as defined by the rate of change of acceleration) can be plotted to study the adaptation pattern and the time needed for adaptation. It is recommended to perform the study on a straight road with no adjacent traffic to rule out any other factors that may affect performance.
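A numerical sketch of the proposed jerk-based measure is given below. The sampling interval, the assumed time of the gear change, and the use of the peak absolute jerk in a window around the shift are illustrative assumptions.

```python
# Illustrative sketch of a jerk-based smoothness measure for a gear change:
# jerk is approximated as the numerical derivative of acceleration, which is
# itself the derivative of the logged speed signal.  The sampling interval,
# shift time, and window width are assumptions.
import numpy as np

def peak_jerk_around_shift(speed_kmh, dt=0.02, shift_time=5.0, window_s=1.0):
    """Peak absolute jerk (m/s^3) in a window around an assumed gear change."""
    v = np.asarray(speed_kmh, dtype=float) / 3.6   # km/h -> m/s
    accel = np.gradient(v, dt)                     # m/s^2
    jerk = np.gradient(accel, dt)                  # m/s^3
    i0 = int(shift_time / dt)
    half = int(window_s / dt)
    return float(np.max(np.abs(jerk[max(i0 - half, 0):i0 + half])))

# A speed trace with a small dip and recovery at the shift point yields a much
# larger peak jerk than a smooth trace would.
t = np.arange(0.0, 10.0, 0.02)
smooth = 40.0 + 2.0 * t                                        # no disturbance at the shift
rough = smooth - 3.0 * np.exp(-((t - 5.0) / 0.15) ** 2)        # dip of ~3 km/h around t = 5 s
print(peak_jerk_around_shift(smooth), peak_jerk_around_shift(rough))
```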
7.4.3 Eye Movement as a Generic Performance Function

Eye movement has been shown to be a good indicator of mental load in driving, and therefore using an eye-tracking system in a simulator can indicate how mental load decreases through the repetition of a practice scenario. Eye movement can thus be used as a performance function and an indicator to define when adaptation occurs, and it can be measured in any scenario, including adaptation to the pedals, the steering, or changing gears. Since eye movement is a closer construct to mental load (in comparison to the performance functions defined in this thesis, e.g., lateral deviation), it is interesting to measure and observe the pattern of eye movement as an indicator of performance and to confirm whether it also follows a learning curve. If eye movement proves to be a good candidate for measuring performance and adaptation, future research into driver adaptation can be performed more easily regardless of the task. Currently, the performance functions are defined based on the practice task, and in some specific scenarios defining a proper performance function may not be easy. The difficulty will be more pronounced if a researcher prefers to utilize a random-order practice scenario, in which the performance function must change several times as the practice tasks change. Tracing the performance pattern in such a setting is not easy, and therefore using a more task-independent performance function, like eye movement, has great potential for simplifying the measurement of adaptation in complex practice scenarios.

7.4.4 Development of Adaptation Recognition Software

Developing an add-on software package for driving simulators that computes and draws the performance function of a participant in real time could provide an easy-to-use visual tool for researchers and practitioners to identify when adaptation occurs. This package could have default, well-prepared practice scenarios and predefined performance functions for each scenario. If eye movement proves to be a good candidate for performance measurement, the software could use it to monitor and analyze adaptation regardless of the practice task. The performance of each subject in each trial could be depicted on a graph in real time, and the instructor could view the trend and decide when performance had reached a flat line, i.e., adaptation had occurred. The software could also utilize an algorithm, similar to the one recommended in Chapter 4, to suggest an adaptation time automatically. However, even in the presence of an automatic algorithm, it is recommended to provide the performance functions to the instructor, as the software may not be able to capture all the irregularities observed in adaptation patterns. This software package would be of interest to researchers to ensure that the results of the research are valid and that participants start the experiment at the right time. Manufacturers of driving simulators may also find this software package interesting to incorporate into their simulators as an option to ensure practitioners conduct more valid research.

7.4.5 Assessing the Effectiveness of Adaptation on Improving External Validity

A specific field experiment can be designed and the same scenario modeled in a simulator to verify how adaptation helps reduce the gap between the field and simulator results. As has been theoretically argued in this thesis, if participants in a simulator study are not adapted, they may not respond to certain stimuli as they would in the field. To provide evidence for this argument, a control group would drive the simulated scenario without any prior practice, while the experimental group would drive after adaptation is achieved using the methodology suggested in this thesis and a proposed practice scenario. The results of the two groups would be compared to the results of the field experiment to measure the potential improvement that proper adaptation can contribute to the validity of research. It is expected that the results of the adapted group will be closer to the results achieved in the field. This would provide practical evidence that adaptation can improve the construct and external validity of research and therefore help researchers conduct experiments that are closer to field experiments.

REFERENCES

1. EUROMONITOR INTERNATIONAL website. Query for Transport. http://www.portal.euromonitor.com.ezproxy.library.ubc.ca/Portal/Magazines/Welcome.aspx. Accessed November 14, 2010.

2. EUROMONITOR INTERNATIONAL website. Query for Automotive. http://www.portal.euromonitor.com.ezproxy.library.ubc.ca/Portal/Magazines/Welcome.aspx. Accessed November 22, 2010.

3. World Health Organization. World Report on Road Traffic Injury Prevention. WHO, 2004.

4. US Geological Survey website. http://earthquake.usgs.gov/earthquakes/eqinthenews/2004/usslav/#summary. Accessed November 15, 2010.
5. Jacobs, G., A. Aeron-Thomas, and A. A. Crowthorne. Transport Research Laboratory, TRL Report, Vol. 445, 2000.

6. Vodden, K., et al. Analysis and Estimation of the Social Cost of Motor Vehicle Collisions in Ontario, Final Report. Transport Canada, 2007.

7. EUROMONITOR INTERNATIONAL website. Query for Government Expenditure. http://www.portal.euromonitor.com.ezproxy.library.ubc.ca/Portal/Magazines/Welcome.aspx. Accessed November 15, 2010.

8. Klauer, S. G. Assessing the Effects of Driving Inattention on Relative Crash Risk. PhD Thesis, Blacksburg, VA: Virginia Polytechnic Institute and State University, 2005.

9. Yerkes, R. M., and J. D. Dodson. The Relation of Strength of Stimulus to Rapidity of Habit-Formation. Journal of Comparative Neurology and Psychology, Vol. 18, 1908, pp. 459–482.

10. Muttart, J. W., D. L. Fisher, M. Knodler, and A. Pollatsek. Driving Simulator Evaluation of Driver Performance during Hands-Free Cell Phone Operation in a Work Zone: Driving Without a Clue. Transportation Research Record: Journal of the Transportation Research Board, No. 2018, TRB, National Research Council, Washington, DC, 2007, pp. 9-14.

11. Liang, Y., J. D. Lee and M. L. Reyes. Non-intrusive Detection of Driver Cognitive Distraction in Real Time Using Bayesian Networks. Transportation Research Record: Journal of the Transportation Research Board, No. 2018, TRB, National Research Council, Washington, DC, 2007, pp. 1-8.

12. Knodler, M. A., D. A. Noyce and D. L. Fisher. Evaluating the Impact of Two Allowable Permissive Left-Turn Indications. Transportation Research Record: Journal of the Transportation Research Board, No. 2018, TRB, National Research Council, Washington, DC, 2007, pp. 53-62.

13. Garay-Vega, L., D. L. Fisher and A. Pollatsek. Hazard Anticipation of Novice and Experienced Drivers: Empirical Evaluation on a Driving Simulator in Daytime and Nighttime Conditions. Transportation Research Record: Journal of the Transportation Research Board, No. 2009, TRB, National Research Council, Washington, DC, 2007, pp. 1-7.

14. Knodler, M. A., D. A. Noyce, K. C. Kacir and C. L. Brehmer. Analysis of Driver and Pedestrian Comprehension of Requirements for Permissive Left-Turn Applications. Transportation Research Record: Journal of the Transportation Research Board, No. 1982, TRB, National Research Council, Washington, DC, 2006, pp. 65-75.

15. Watson, G. S., Y. E. Papelis and O. Ahmad. Design of Simulator Scenarios to Study Effectiveness of Electronic Stability Control Systems. Transportation Research Record: Journal of the Transportation Research Board, No. 1980, TRB, National Research Council, Washington, DC, 2006, p. 79-86.

16. Knodler, M. A., D. A. Noyce, K. C. Kacir and C. L. Brehmer. Potential Application of Flashing Yellow Arrow Permissive Indication in Separate Left-Turn Lanes. Transportation Research Record: Journal of the Transportation Research Board, No. 1973, TRB, National Research Council, Washington, DC, 2006, p. 10-17.

17. Levinson, D., K. Harder, J. Bloomfield and K. Winiarczyk. Weighting Waiting. Evaluating Perception of In-Vehicle Travel Time Under Moving and Stopped Conditions. Transportation Research Record: Journal of the Transportation Research Board, No. 1898, TRB, National Research Council, Washington, DC, 2004, p. 61-68.

18. Noyce, D. A. and V. V. Elango. Safety Evaluation of Centerline Rumble Strips. Crash and Driver Behavior Analysis. Transportation Research Record: Journal of the Transportation Research Board, No.
1862, TRB, National Research Council, Washington, DC, 2004, p. 4453. 19. Noyce, D. A. and C. R. Smith. Driving Simulators for Evaluation of Novel Traffic-Control Devices. Protected-Permissive Left-Turn Signal Display Analysis. Transportation Research Record: Journal of the Transportation Research Board, No. 1844, TRB, National Research Council, Washington, DC, 2003, p. 25-34. 20. Hoffman, J. D., J. D. Lee, T. L. Brown and D. V. McGehee. Comparison of Driver Braking Responses in a High Fidelity Simulator and on a Test Track. Transportation Research Record: Journal of the Transportation Research Board, No. 1803, TRB, National Research Council, Washington, DC, 2002, p. 59-65. 21. Bittner, A. C., O. Simsek, W. H. Levison and J. L. Campbell. On-Road Versus Simulator Data in Driver Model Development. Driver Performance Model Experience. Transportation Research Record: Journal of the Transportation Research Board, No. 1803, TRB, National Research Council, Washington, DC, 2002, p. 38-44. 22. Upchurch, J., D. Fisher, R. A. Carpenter and A. Dutta. Freeway Guide Sign Design with Driving Simulator for Central Artery-Tunnel. Boston, Massachusetts. Transportation Research Record: Journal of the Transportation Research Board, No. 1801, TRB, National Research Council, Washington, DC, 2002, p. 9-17. 23. Lidström, M. Using Advanced Driving Simulator as Design Tool in Road Tunnel Design. Transportation Research Record: Journal of the Transportation Research Board, No. 1615, TRB, National Research Council, Washington, DC, 1998, p. 51-55. 24. Comte, S. L. New Systems: New Behaviour? Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 3, 2000, p. 95-111. 25. Velichkovsky, B. M., S. M. Dornhoefer, M. Kopf, J. Helmert and M. Joos. Change Detection and Occlusion Modes in Road-Traffic Scenarios. Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 5, 2002, p.99–109. 26. Levinson, D., K. Harder, J. Bloomfield and K. Carlson. Waiting Tolerance: Ramp Delay vs. Freeway Congestion. Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 9, 2006, p. 1-13.  181  27. Victor,T. W., J. L. Harbluk and J. A. Engström. Sensitivity of Eye-Movement Measures to InVehicle Task Difficulty. Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 8, 2005, p. 167-190. 28. Engström, J., E. Johansson and J. Őstlund. Effects of Visual and Cognitive Load in Real and Simulated Motorway Driving. Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 8, 2005, p. 97-120. 29. Baas, P. H., S. G. Charlton and G. T. Bastin. Survey of New Zealand Truck Driver Fatigue and Fitness for Duty. Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 3, 2000, p. 185–193. 30. Upchurch, J., D. L. Fisher and B. Waraich. Guide Signing for Two-Lane Exits with an Option Lane. Transportation Research Record: Journal of the Transportation Research Board, No. 1918, TRB, National Research Council, Washington, DC, 2005, p. 35-45. 31. Maltz, M. and D. Shinar. Imperfect In-Vehicle Collision Avoidance Warning Systems Can Aid Distracted Drivers. Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 10, 2007, p. 345-357. 32. Haigney, D. E., R. G. Taylor and S. J. Westerman. Concurrent Mobile (Cellular) Phone Use and Driving Performance: Task Demand Characteristics and Compensatory Processes. Transportation Research Part F: Traffic Psychology and Behaviour, Vol. 3, 2000, p. 113-121. 33. Jenkins, J. M. and L. R. Rilett. 
APPENDICES

Appendix A

Department of Civil Engineering
6250 Applied Science Lane
Vancouver, B.C., Canada V6T 1Z4
Tel: (604) 822-2637  Fax: (604) 822-6901

Consent Form
Methodology to Evaluate Driver Adaptation to Driving Simulators

Principal Investigator: Tarek Sayed, Professor, Department of Civil Engineering, tsayed@civil.ubc.ca, Tel: (604) 822-4379

Purpose: The purpose of this study is to record a variety of driver performance measures under a variety of driving tasks and to develop a methodology that identifies whether adaptation has occurred and predicts when adaptation is likely to occur.

Procedure: As a licensed driver who is not under the influence of alcohol or narcotics, you are eligible to participate in this study. Upon reading and signing this consent form, you will:
• be seated in the UBC driving simulator;
• be given verbal instructions on how to drive the UBC driving simulator; and
• drive through a driving scenario for no more than 45 minutes.

The driving scenario will be of a rural roadway with tangent and curved road sections. The posted speed limit may vary from 50 to 100 km/h. While observing all traffic laws, try to maintain the posted speed limit. Your driving behavior will be recorded electronically. The study will take no more than one hour to complete.

Potential risks: You may experience some level of eye, head, stomach, balance, or temperature discomfort while driving the UBC driving simulator.

Benefits: You will not receive any benefit, remuneration, or reimbursement.

Confidentiality: The electronic data files will be password protected and saved on a data DVD which will be accessible to Prof.
Sayed and Saeed Sahami. The data DVD will be stored along with this consent form in a locked file at UBC for at least five years, accessible only to Prof. Sayed. The results of this study may be released for publication; however, your identity will remain confidential.

Contact for information about this study: If you have any questions or desire further information about this study, please contact Prof. Sayed at (604) 822-4379.

Contact for concerns about the rights of research subjects: If you have any concerns about your treatment or rights as a research subject, you may contact the Research Subject Information Line in the UBC Office of Research Services at 604-822-8598 or, if long distance, e-mail RSIL@ors.ubc.ca.

Consent: Your participation in this study is entirely voluntary, and you may refuse to participate or withdraw from the study at any time without jeopardy. Your signature below indicates that you have received a copy of this consent form for your own records. Your signature indicates that you consent to participate in this study. You can choose to begin the study immediately or request an appointment for another day.

Name                    Signature                    Date

Tarek Sayed, Professor, Transportation Group
E-mail: tsayed@civil.ubc.ca
Tel: (604) 822-4379  Fax: (604) 822-6901

Appendix B
