DRIVER HEAD POSE SENSING USING A CAPACITIVE ARRAY AND A TIME-OF-FLIGHT CAMERA

by

Nima Ziraknejad
M.A.Sc., The University of British Columbia, 2007

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
in
The Faculty of Graduate and Postdoctoral Studies (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)
August 2014
© Nima Ziraknejad, 2014

Abstract

Improving the safety of vehicle occupants has gained increasing attention among automotive manufacturers and researchers over the past three decades. There is increased potential for injury mitigation techniques to be applied more effectively in vehicle safety systems if the pose (i.e. the position and orientation) of the driver's head with respect to the Head Restraint (HR) device can be provided to such safety systems in real-time during vehicle operation. This information is valuable to a range of systems including adaptive HR positioning for whiplash injury mitigation, advanced driver assistance, driver inattention and fatigue detection, and other possible applications.

This thesis proposes, implements, and evaluates a new integrated hybrid sensing approach that provides the driver's head pose both accurately and in real-time by employing two different sensing subsystems inside the vehicle: 1) a novel capacitive proximity sensing array, and 2) an Illumination-Compensated (IC) Time-of-Flight (ToF) range imaging camera. Firstly, for position sensing, a capacitive proximity sensing electrode and array were developed through electrostatic field analysis and then optimized using numerical modeling studies. Experiments (including environmental testing) using a full system prototype of the position sensing array were performed for numerical modeling validation and accuracy testing. Secondly, since orientation sensing was found to be inaccurate with capacitive sensing alone, this work demonstrated how a ToF camera can be utilized to obtain an accurate measurement of the driver's head pose using a novel light IC technique. A laboratory testbed was also built to accommodate the aforementioned hybrid system. The capacitive array was incorporated inside the frontal compartment of a customized HR device as part of the testbed, and the ToF camera was installed in front of the driver.

Laboratory experiments have demonstrated that the head position can be estimated with a mean Euclidean distance error of 0.33 cm, and with a Mean Absolute Error in orientation of 3.45° and 1.61° for head yaw and pitch angles, respectively.

The thesis also contains a comprehensive introduction to the problem, a review of relevant current literature in the area, a comparison with the findings of previous related investigations, a discussion of the implications of the work, and suggestions for future work.

Preface

All chapters in this thesis, including each of the research papers (Chapters 2 through 5), were written by me (Nima Ziraknejad) and edited by my supervisors, Dr. Peter Lawrence and Dr. Douglas Romilly. The work in each research paper was motivated by my supervisors, and the goals were jointly defined and agreed upon.
The approach was primarily determined by me, and the design and implementation described in Chapter 6 were proposed and carried out either by me or, under my supervision, by undergraduate co-op students or machine shop personnel.

A version of Chapter 2 has been published in: N. Ziraknejad, P. Lawrence, and D. Romilly, "Quantifying Occupant Head to Head Restraint Relative Position for use in Injury Mitigation in Rear End Impacts," SAE International, Warrendale, PA, 2011-01-0277, Apr. 2011.

A version of Chapter 3 has been accepted for publication: N. Ziraknejad, P. D. Lawrence, and D. P. Romilly, "Vehicle Occupant Head Position Quantification Using an Array of Capacitive Proximity Sensors," IEEE Transactions on Vehicular Technology, 10 journal pages, June 26, 2014.

A version of Chapter 4 has been published in: N. Ziraknejad, P. D. Lawrence, and D. P. Romilly, "The effect of Time-of-Flight camera integration time on vehicle driver head pose tracking accuracy," presented at the 2012 IEEE International Conference on Vehicular Electronics and Safety (ICVES), 2012, pp. 247-254.

A version of Chapter 5 will be submitted for publication.

A total of eight undergraduate co-op students from the Departments of Electrical and Computer Engineering, and Mechanical Engineering were hired to assist in the computer hardware, software, electronics, and mechanical development stages of the system prototype explained in Chapter 6 of this thesis. Two Capstone projects in the UBC Mechanical Engineering Department were formed (four students for each team project) to design, build, and commission the first two generations of the electromechanical HR system prototypes. The aforementioned activities took place between Jan 2010 and Dec 2013. The co-op and Capstone students were jointly supervised by me and my supervisors. In all of these projects, my role was to define the projects, act as client, and supervise day-to-day activities.

This research has been approved by the UBC Behavioural Research Ethics Board, Project Approval: H10-01805.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgments
Dedication
Chapter 1. Overview and Background
1.1 Motivation for this Thesis
1.1.1 Lack of Driver Attentiveness
1.1.2 Existing Solutions for Proper Head Restraint Positioning
1.1.3 Removing Reliance on Occupants to Avoid Injuries
1.1.4 Statistics on Driver Inattention and Injuries
1.1.5 Real-time Driver Head Pose Sensing
1.1.6 More Severe Injuries for a Turned Head
1.1.7 Summary of Motivations
1.2 Head Tracking Sensing Solutions
1.2.1 Stereo Cameras and Binocular Vision
1.2.2 Structured Lighting
1.2.3 Time-of-Flight Range Imaging 3D Sensor
1.2.4 Capacitive Proximity Sensing
1.3 Selected Head Pose Sensing Technique
1.3.1 ToF Camera-based Range Imaging
1.3.2 Application of Capacitive Proximity Sensing
1.3.3 Summary
1.4 Objectives of this Thesis
1.5 Review of Previous Vehicle Studies Using the Selected Sensors
1.5.1 Capacitive Sensing Systems in Vehicles
1.5.2 Vision-based Head Pose Sensing in Vehicles
1.6 Thesis Contributions
Chapter 2. Driver Head Position Quantification Prototype 1
2.1 Introduction
2.2 Electric Field Sensing Circuits
2.3 Electronics for Capacitive Sensing
2.4 Preliminary Head Position Quantification
2.4.1 Sensor Array Design
2.4.2 Head Position Experimental Setup
2.4.3 Preliminary Experimental Results
2.5 Conclusions
Chapter 3. Driver Head Position Estimation with Protection from Environmental Disturbances 2
3.1 Electric Field Sensing Equations
3.2 Simulations and Laboratory Experiments
3.2.1 Concentric Sensor
3.2.2 Comb-shaped Interdigitated Sensor
3.2.3 Use of a Grounded Backplane
3.2.4 Selection of Electrode Separation (s) and Finger Width (w) Parameters
3.2.5 Electric Field Intensity Comparison
3.2.6 Human Head Grounding Effect
3.2.7 Temperature and Humidity Compensation
3.2.8 Method of Occupant Head Position Estimation
3.2.9 Coordinate System Assignment and Data Collection
3.2.10 Neural Network Training Process
3.2.11 Overall System and Experimental Results
3.3 Discussion
3.3.1 Sensor Performance Measures
3.3.2 Contributing Factors to the FEA Results
3.3.3 Adaptive Occupant Head Servoing and Rating
3.4 Conclusions
Chapter 4. Driver Head Orientation Estimation 3
4.1 Face Template-based Head Orientation Estimation
4.1.1 Camera Instrumentation and Data Collection
4.1.2 ROI-based Integration Time Adjustment
4.1.3 Driver's Head Depth Map Generation
4.1.4 Template-based Driver's Head Pose Estimation
4.1.5 Reference 3D Template Data Generation
4.1.6 3D Registration of Data Points
4.1.7 Position / Orientation Measurements in the Laboratory
4.1.8 Position and Orientation Measurements in a Vehicle
4.1.9 Summary - Results of Laboratory and in-Vehicle Studies
4.2 Discussion
4.3 Conclusions
Chapter 5. Driver Head Pose Sensing by Combined Electrostatic and Time-of-Flight Sensing 4
5.1 Description of the Pose Estimation Process
5.1.1 ToF Camera-to-Head Distance Calculation
5.1.2 ToF Camera Integration Time Selection
5.1.3 Driver Head Point Cloud Generation and Smoothing
5.1.4 HK Curvature Analysis for 3D Nose Detection
5.1.5 Applying the Gaussian and Mean Curvature
5.1.6 Nose Coordinate Assignment
5.1.7 Determining Head Orientation using a Geometric Approach
5.1.8 Algorithm
5.1.9 Experimental Verification
5.1.10 Experimental Methods
5.2 Discussion
5.3 Conclusions
Chapter 6. Development of System Prototypes
6.1 Overview of Main System Components
6.2 System Block Diagram
6.2.1 Adaptive Head Pose Tracking Block Diagram
6.2.2 Electromechanical System Block Diagram
6.3 Conclusions
Chapter 7. Conclusion
7.1 Thesis Conclusions
7.1.1 Re: Achievement of Accurate Head Position Estimation using an Array of Concentric Capacitive Sensors
7.1.2 Re: Achievement of Accurate and Reasonably Robust Head Position Estimation using an Array of Interdigitated Capacitive Sensors
7.1.3 Re: Achievement of Accurate Head Pose Measurement using a ToF Camera Sensor
7.1.4 Re: Achievement of Fast and Accurate Driver Head Pose Estimation using a Hybrid Sensing System
7.1.5 Re: Achievement of Improving System Robustness and Reliability through Synergistic Operation
7.2 Strengths and Weaknesses
7.3 Thesis Contributions
7.4 Future Work
References

List of Tables

Table 1-1. Error results and range of head detection in the previous head pose estimation work
Table 2-1. A statistical comparison between the actual and estimated occupant's head position.
Table 3-1. Comparing the range of the cylindrical and comb-shaped capacitive proximity sensors.
Table 3-2. Estimation Accuracy Measures [cm]
Table 4-1. The integration time, maximum mean intensity, and distance error values for each associated ROI and each camera-to-wall distance.
Table 4-2. Average maximum mean intensity values and standard deviation for each region of interest.
Table 4-3. The estimated positions of the Styrofoam™ head model
Table 4-4. The orientation error estimates of the Styrofoam™ head model
Table 4-5. The estimated positions of the human head in the vehicle
Table 4-6. The orientation error estimates of the human head in the vehicle
Table 5-1. Curvature forms for extrema and zero values of H and K
Table 5-2. The estimated head pitch/yaw angles for the proposed head orientation estimation algorithm

List of Figures

Figure 1-1. IIHS ratings for head restraint adjustment.
Figure 2-1. The testbed accommodating the system components. A: HR device with the integrated capacitive array sensor (see Section 3.2.11), and B: The ToF camera was installed in a fixed orientation on a post (in a vehicle, a movable rear-view mirror would also be installed above it).
Figure 2-2. (left) The AD7150 block diagram, (right) Capacitance-to-Digital converter
Figure 2-3. The block diagram of the system of conductors and the charge amplifier
Figure 2-4. The proposed capacitive proximity sensor array design and computer interface
Figure 2-5. The human subject positioned in the FoV of the capacitive sensor array
Figure 2-6. The capacitive array data for the occupant's head positioned: (a) 3 cm and (b) 6 cm from the sensor
Figure 2-7. Coordinate system assignment, ROIs, and movements of the occupant head
Figure 2-8. The proposed data collection system to acquire the real-time position of the occupant's head and the corresponding capacitive proximity data
Figure 2-9. The proposed occupant's head position estimation and verification system
Figure 2-10. A portion of the occupant's head movement in the x-y plane and the corresponding capacitive proximity data
Figure 2-11. The comparison between x-y coordinates of the actual and the estimated occupant's head position
Figure 2-12. The comparison between z coordinates of the actual and the estimated occupant's head position
Figure 3-1. (Left) The schematic of each of two ring-shaped capacitive sensors (A1 and A2) of total area 2A R. (Right) A finite grounded plane as the target object positioned at distance d from the sensor.
Figure 3-2. (Left) COMSOL 3D geometrical schematic of the concentric sensor and the grounded target plane. (Right) The electric potential (V) distribution plot on two planes for the given geometry.
Figure 3-3. (Left) The simulated mutual fringe capacitances for the smaller and larger ring-shaped concentric sensors, (Right) The sensor sensitivity plot and dmax calculated for Smin = 0.5 fF/mm.
Figure 3-4. The schematic of comb-shaped interdigitated sensors with one block of transmitter and receiver (n = 1, w = 5.4 cm, and s = 0.1 cm). (b) The schematic of the sensor with more fingers (n = 10, w = 0.45 cm, and s = 0.1 cm), (c) The side view of the sensors positioned within distance d from the grounded object plane. H = 4.0 cm. For A1, L = 5.5 cm, and for A2, L = 11 cm.
Figure 3-5. (Left) The 3D geometrical schematic of the comb-shaped interdigitated sensors and the grounded planar object. (Right) The electric potential (V) distribution for the given geometry.
Figure 3-6. (Left) The simulated mutual fringe capacitances for the smaller and larger comb-shaped sensors, (Right) The sensor sensitivity and the maximum detection range calculated for Smin = 0.5 fF/mm.
Figure 3-7. (Left) The simulated mutual fringe capacitances for the smaller and larger comb-shaped sensors with a grounded backplane, n = 5 for A1 and n = 10 for A2. (Right) The mutual fringe capacitance variations and the proximity range for Smin = 0.35 fF/mm.
Figure 3-8. (Left) The physical experiment's mutual fringe capacitances for the larger comb-shaped sensors with a grounded backplane, (Right) The mutual fringe capacitance variations and the proximity range for Smin = 0.35 fF/mm.
Figure 3-9. (a) The mutual fringe capacitance C with the target object at d = 20 cm, (b) The maximum detection range, for grounded object to sensor relative distance between d = 0.5 cm and d = 20 cm. For the plots, 1 mm ≤ s ≤ 4 mm and 1.43 mm ≤ w ≤ 54 mm. The sensor has a grounded backplane.
Figure 3-10. The electric field norm distribution for the comb-shaped (left) and concentric capacitive sensor (right). The color bar was adjusted to represent the electric field intensities between 1.6 to 10 V/m.
Figure 3-11. The grounded head model and the comb-shaped sensors represented inside the 3D geometrical environment of COMSOL ver. 4.3. d is the distance between the head and the sensor's surface.
Figure 3-12. (Left) The simulated mutual fringe capacitances for the smaller and larger comb-shaped sensors with a grounded backplane, and (Right) the mutual fringe capacitance variations and the proximity range for Smin = 0.25 fF/mm.
Figure 3-13. (Left) The experimental mutual fringe capacitances for the larger comb-shaped sensors with a grounded backplane as a function of relative distance between the grounded head and the sensor, and (Right) the mutual fringe capacitance variations and the proximity range for Smin = 0.25 fF/mm.
Figure 3-14. (Left) The programmable temperature/humidity control chamber at UBC Biomass Research Center, and (Right) The sensing and reference comb-shaped capacitive sensors located inside the chamber.
Figure 3-15. The capacitance increases when relative humidity increases at a fixed temperature. However, the capacitance is lower at higher temperatures for a fixed humidity level.
Figure 3-16. The sensing and the reference sensors capacitance plot. The ratio was plotted and stayed nearly constant over the range of RH between 60% and 95%. cps = capacitive proximity sensor, rcs = reference capacitive sensor.
Figure 3-17. (a) The data collection experimental testbed, (b) The side view of the assigned coordinate system, and (c) The front view of the coordinate system assignment for the Y-shaped sensor arrangement.
Figure 3-18. The self-contained HR positioning device with the customized array containing the comb-shaped capacitive proximity sensing and reference sensor integrated into the frontal compartment.
Figure 3-19. The measured and estimated head coordinates obtained from the optical tracker and the capacitive proximity sensing unit.
Figure 3-20. (Left) The electromechanical HR, the primary (A) and the extended detection (B) zones, and the occupant head 3D model with the back of the head positioned inside the extended zone. All the dimensions are in centimeters. (Right) IIHS HR Rating System (printed with permission of IIHS).
Figure 4-1. (a) The ToF camera on the dashboard with its mount installed on the windshield in front of the driver. (b) The camera FoV of the driver.
Figure 4-2. (left) ToF camera intensity image representing the received light pattern, (top) cam-to-wall = 70 cm, (bottom) cam-to-wall = 110 cm, tint = 0.3 ms, (right) The allocated ROIs for performing depth accuracy measurements.
Figure 4-3. The estimated average distance and image mean intensity vs. integration time for: (top) the center circle ROI, (middle) the inner ring ROI, (bottom) the outer ring ROI. Note: parameter d is defined in the top curves and represents the actual camera-to-wall distance set by the robot.
Figure 4-4. The average estimated distance and image mean intensity values vs. the integration time for the six paper sheets with different reflectivity values. (top) camera-to-target = 90 cm, (bottom) camera-to-target = 80 cm.
Figure 4-5. (a) The intensity image captured by the 1.0 ms trial integration time, (b) background pixel elimination, (c) the constructed depth map, (d) the constructed depth map from the selected tint for the three ROIs.
Figure 4-6. (a) The reference template data (facial mask) extracted from the driver head (FLNP) as the result of the offline process, (b) The depth map (model data set) expressed with the 3D Cartesian coordinates of the driver's upper torso obtained by choosing a proper integration time during the online process, (c) the template data (in white) correlated into the model data set using the homogenous transformation matrices obtained as the results of the ICP process, (d) the relative angular relationship between the reference template data and the correlated template data sets expressed only by the rotation around the k axis.
Figure 5-1. The flowchart of the proposed algorithm for the purpose of real-time driver head orientation estimation.
Figure 5-2. The coordinate system assignment for the HR unit and the ToF camera
Figure 5-3. Three sample head orientations while the head is rotated 90° to the left (left), forward looking (middle), and fully rotated to the right (right); each row: the smoothed point cloud (top), the generated 3D mesh from the point cloud (middle), and the ToF camera intensity image (bottom).
Figure 5-4. HK head curvatures for three sample head orientations for a forward-looking head (top plots, i.e. not oriented to left or right) and a head 70° oriented to the subject's left (middle plots) and right (bottom plots) sides. Left-hand-side plots represent K arrays and right-hand-side plots represent H arrays.
Figure 5-5. The two-dimensional z-array of the driver face with CoM and three other facial features highlighted.
Figure 5-6. The extracted nose coordinates from the face oriented in the three sample orientations.
Figure 5-7. Coordinate frame assignment to the nose as the main facial feature.
Figure 5-8. The experimental testbed customized for head pose tracking
Figure 5-9. Head orientation measurement limited to yaw and pitch angles for a set of predetermined points within the vehicle windshield. Point G refers to the driver's head orientation in forward looking with straight gaze.
Figure 5-10. A snapshot of the GUI of the real-time interactive program for head orientation tracking. The white-color circle indicates the forward-looking head pose (i.e. point "G"), the red-color circle indicates the current head orientation, the yellow-color circles indicate the predetermined head orientations, and those filled with green-color circles indicate that the head orientation was successfully matched with the recorded orientations.
Figure 5-11. The flowchart of the process for collecting the actual and estimated head orientations.
Figure 5-12. The estimated and actual head yaw and pitch angle measurements (relative to point G) and the absolute error between each measurement, obtained from the proposed algorithm and the head-mounted 3D orientation tracking sensor. "a", "P", "Y", and "e" refer to actual, pitch, yaw, and estimated measurements, respectively.
Figure 6-1. The second system prototype of the electromechanical HR device and its installation as a replacement for the existing HR device.
Figure 6-2. Adaptive head pose tracking system and the proposed sensor position configuration.
Figure 6-3. The proposed driver head pose tracking electromechanical system block diagram
Figure 6-4. The self-contained HR device and its main electromechanical components

List of Abbreviations

ADAS  Advanced Driver Assistance Systems
AGV  Automated Guided Vehicle
AHRS  Active Head Restraint System
BC  British Columbia
BP  Back Propagation
CCD  Charge-coupled Device
CoM  Center of Mass
FLNP  Forward Looking and Normally Positioned
FEA  Finite Element Analysis
FoV  Field of View
GUI  Graphical User Interface
HR  Head Restraint
IBC  Insurance Bureau of Canada
IC  Illumination-compensated
ICP  Iterative Closest Point
IIHS  Insurance Institute for Highway Safety
MAE  Mean Absolute Error
NHTSA  National Highway Traffic Safety Administration
NIR  Near Infrared
ppt  parts per thousand
RH  Relative Humidity
ROI  Region Of Interest
SUV  Sport Utility Vehicle
ToF  Time-of-Flight

Acknowledgments

This work was supported in part by the AUTO21 Network of Centres of Excellence (NCE) under Grant A402 and Grant A505, and in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under Discovery Grant 4924 and Discovery Grant 4677.

My sincerest gratitude is extended to my supervisors, Dr. Peter Lawrence and Dr. Douglas Romilly, for their dedicated supervision and support and for giving me this unique opportunity to explore my academic and professional interests. Their attention and insightful ideas have been a constant source of inspiration. It has been a privilege and great honor to work under such knowledgeable and understanding supervisors.

I am deeply grateful and indebted to my dearest wife, Nasim Jahangiri, for her patience and encouragement during challenging periods of my life.
Her kindness, love, motivation, and unconditional support have always encouraged me to pursue my interests and explore new horizons. My heartfelt appreciation goes to my dearest mother, my first teacher and advisor, Minoo Mohammadi, whose life was too short to witness the completion of this thesis. May her innocent soul rest in perfect peace. Also, I would like to express my warmest appreciation to my dad, Ali, and my brother and sister, Babak and Bahar, for their support during the course of my PhD work. I am also very thankful to my in-laws Mohtaram and Naser Jahangiri, Nariman Riahi, Golnaz Mohammadi, and Nazanin and Mehrdad Ahmadifard for their unconditional support during all these years.

I would like to express my appreciation to all of my friends and colleagues in the robotics lab: James Borthwick, Ali Kashani, Andrew Rutgers, Nicholas Himmelman, Mike Lin, Hani Eskandari, Mehdi Ramezani, Adnan Fanaswala, and my dear friends at the Mechanical and Civil Engineering departments, Pirooz Darabi and Armin Bebamzadeh. Also, special thanks go to the undergraduate co-op students who helped me during various implementation and experimental evaluation phases of my PhD work: Pranav Saxena, Daniel Ko, Anshul Porwal, Matthew Lai, Rayvier Dhak, Connor Schellenberg-Beaver, Thanet Ying-udomrat, and Jacky Chan. I am also deeply grateful to all my dearest friends who have been a great source of emotional support during the past five years of my PhD study.

Last, but definitely not least, I am very grateful to AUTO21 for their financial and management support of the work reported here.

Nima Ziraknejad
August 2014

Dedication

To Nasim, Minoo, and Ali

Chapter 1. Overview and Background

1.1 Motivation for this Thesis
Whiplash injuries to the occupants of a vehicle can occur when it is struck from behind by another vehicle. These injuries can result in very significant economic, social and personal costs. This thesis primarily focuses on the means for mitigating injury to the passengers in the "struck" (or target) vehicle impacted from behind by the "striking" (or bullet) vehicle, through proper positioning of head restraints. The following sections build toward the specific thesis motivation summarized in Section 1.1.7.

1.1.1 Lack of Driver Attentiveness
A vehicle driver's lack of knowledge of safe and attentive behaviour, and of commitment to automotive safety regulations, can impose significant personal and societal costs and can lead to serious injuries. Most of these injuries can be mitigated or even avoided through the use of automated warnings and/or vehicle safety assistive actions. In terms of driver inattentiveness or distraction, visual and audible signals can be used to warn the driver prior to a collision. Of course, this requires an advanced in-vehicle monitoring system to actively analyze the driver's behaviour while driving and consequently generate the corresponding warning signals.
Inattention to the proper positioning of the Head Restraint (HR) prior to operating a vehicle can also lead to significant injuries, potentially causing whiplash and related neck injuries in the event of rear-end impacts. This important topic is discussed next, along with current solutions for mitigating such injuries.

1.1.2 Existing Solutions for Proper Head Restraint Positioning
To mitigate whiplash and related neck injuries, automatic HR positioning can be utilized. The term Active Head Restraint System (AHRS) traditionally refers to the automatic repositioning of the HR device either immediately prior to or during a crash in an attempt to improve occupant whiplash protection. Most previously introduced AHRSs have relied on strictly mechanical designs to move the HR forward and/or higher without any knowledge of the occupant's actual head position. While these systems vary in their reported effectiveness, there is real-world data to suggest that some are effective in reducing whiplash [1]. However, these systems do not optimize the HR position for individual occupants based on real-time quantification of their relative head-to-HR position. As such, real-time occupant head-to-HR proximity sensing is potentially useful for an AHR system to: 1) adaptively position the HR directly behind the driver's head, and 2) remove the responsibility from the drivers to properly position their HR.

1.1.3 Removing Reliance on Occupants to Avoid Injuries
As explained in the previous section, the real-time monitoring of the seated occupant's head position, for use in determining the correct HR positioning, is a strong candidate for the next stage in development of an adaptive HR system. Once the relative head-to-HR position is successfully quantified, an HR positioning process could be performed automatically and adaptively without involving the occupant. If performed successfully, the responsibility for proper positioning of the HR will be removed from the driver and instead will be assigned to the adaptive HR positioning system. For example, according to the criteria recommended by the Insurance Institute for Highway Safety (IIHS) [2], "Good" positioning of a HR device behind the occupant's head refers to a backset distance of 2-8 cm between the back of the occupant's head and the front of the HR device. Figure 1-1 (modified slightly for clarity here from [2]) shows the IIHS recommended ratings for HR positioning. In this rating, the top of the HR device also needs to be aligned with the top of the driver's head for the purpose of mitigating whiplash injuries. However, the HR rating criteria proposed by the Insurance Bureau of Canada (IBC) [3], [4] suggest a backset distance of 2-5 cm for the HR device to be rated as "Good". Thus, these requirements can be expected to be met by an automatic and adaptive AHR system. A detailed discussion is provided in Section 3.3.3 on how the proposed system meets these criteria.

Figure 1-1. IIHS ratings for head restraint adjustment (plot axes: backset in cm vs. distance above/below the average man's head in cm).

1.1.4 Statistics on Driver Inattention and Injuries
It is important to obtain insight into the problem of driver inattentiveness and lack of responsibility to properly position the HR devices in the vehicle. As such, useful statistics are provided in this section to quantitatively assess the aforementioned problem in the automotive industry. In a study conducted by Klauer et al. [5] and funded by the US National Highway Traffic Safety Administration (NHTSA), an in-depth analysis of driver
As such, useful statistics are provided in this section to quantitatively assess the aforementioned problem in the automotive industry. In a study conducted by Klauer et al. [5] and founded by the US National Highway Traffic Safety Administration (NHTSA), an in-depth analysis of driver Figure 1-1. IIHS ratings for head restraint adjustment. 20-2-4-6-8-10-12-14-162 4 6 8 10 12 14 16 18Distance above/below average man’s head (cm)Backset (cm)                                                                                                                      4 inattentiveness, using the driving data collected in the 100-Car Study (by Dingus et al. [6]), was carried out. The data was collected in 109 cars for a period of approximately one year.  In their study, driver inattentiveness was further categorized into subcategories including: secondary task engagement (reaching for a moving object, external distraction, dialing a hand-held device), and fatigue. Secondary task engagement and drowsiness were identified as contributing factors in over 45% of all collisions and near collisions. In another study conducted by Stutts et al. [7], driver distraction-related accidents caused by inexperienced drivers accounted for 15% of accidents (i.e. 5000 US accidents). Similarly, Dong et al. [8] have classified driver inattentiveness into two main categories – distraction and fatigue. According to their research, the goal of a driver inattentiveness monitoring system is to reduce driving risk.  These studies indicate that overall, driver distraction (due to secondary engagements) and driver fatigue contribute to a great portion of all automobile collisions and near-collisions. If these factors can be mitigated, the overall number of collisions will decrease dramatically. Rear-end collisions occur often and account for nearly 26.7% of all vehicle impacts (year 2007 data) [9]. Whiplash-associated disorders are the most common injury caused by these types of collisions and they are responsible for up to 70% of total bodily injury costs [10]. According to Zuby et al. [11], 35% of serious neck injuries could be prevented if people purchased vehicles with good HRs and positioned them appropriately.  As part of an AUTO21 sponsored research program, Romilly et al. [4] initiated an observational study and developed a new protocol to quantitatively assess the relative head-to-HR position for occupants in a moving vehicle based on aspects of an adopted IIHS and                                                                                                                       5 IBC rating system.  According to their research: “This protocol has now been tested, modified appropriately to reduce identified potential errors, and has been utilized to gather and analyze data on proper HR usage in vehicles on the public roads in British Columbia.  To date on public roadways, the collected and analyzed observational study data shows that proper HR usage in British Columbia, or more specifically three cities representing the Greater Vancouver region, has been assessed at 44% “Good”, 17% “Acceptable”, 15% “Marginal”, and 24% “Poor”.  Combining the top two categories of “Good” and “Acceptable” - as was done in previous studies - observes that 61% of the HRs were positioned at least “Adequately”.  
As 24% of the sampled vehicle occupants had a "Poor" HR position; it can be concluded that these individuals will be at greater risk for whiplash injury should a rear-end collision occur." Their findings further showed that "less than half" of the observed vehicle occupants in BC motor vehicles have their HRs "properly" positioned to reduce the risk of whiplash and related neck injuries in the event of a rear-end collision. Also in their study, based on further analysis of the data, it was revealed that as a group, drivers of sub-compact cars and larger vehicles (pickup trucks, large SUVs and minivans) were at a higher risk of potential injury due to improper HR positioning than those of larger passenger cars, and should be given special attention. In another quote from Romilly et al. [4]: "Since the first observational study undertaken by IBC in 2002, there have been some regulatory changes and improvement in HR standards which have likely contributed to the improvement in "Good" HR usage in BC from the assessed levels of 18% (based on the 2002 IBC statistic [12]) to 27% (2013 current overall statistic based on the IBC criteria). This is a relatively small improvement in the level of proper HR use over an eight year period and suggests that much more needs to be done to increase this level to mitigate whiplash injury in BC."

Lastly, driver inattentiveness is a known international problem. In fact, 84% of distracted-driving-related fatalities in the US were tied to the general classification of carelessness or inattentiveness [13]. Also, 80% of collisions and 65% of near crashes have some form of driver inattention as a contributing factor [14]. The economic expenses caused by traffic collision-related health care costs and lost productivity reach at least $10 billion annually – approximately 1% of Canada's GDP (Government of Canada). It is clear that successful development and commercialization of a driver head pose sensing system will potentially have a powerful impact on the Canadian economy and the well-being of everyday citizens, as many collisions can be prevented by warning the user of bouts of inattentiveness.

1.1.5 Real-time Driver Head Pose Sensing
The discussion so far suggests that the determination of the driver's relative head-to-HR distance, as well as the relative orientation (e.g. relative to the HR device), is vital in directly enhancing the in-vehicle assistive technologies discussed in the previous sections. Several in-vehicle applications of driver head pose sensing have been discussed in the literature and are reviewed here. Driver head position, motion quantification, and tracking can be used for improved airbag deployment [15]. Automatic HR positioning to reduce whiplash injuries [16], driver distraction detection [17]-[18], driver awareness monitoring [19], driver gaze tracking and fatigue detection [20]-[22], and driver assistance systems [23]-[24] all require knowledge of the driver's head position and orientation, along with motion tracking information, to achieve their objectives.

1.1.6 More Severe Injuries for a Turned Head
There is scientific evidence that occupants with their heads turned (thus Out-of-Position - OOP) in the event of rear-end crashes are at higher risk of more severe and persistent symptoms [25]-[26].
This further justifies the need for a head orientation quantification system. It is important to note that in the event of detecting a head turned to either side, the HR device can be commanded to move closer towards the back of the driver's head or even to touch the back of the head. This can further limit the inevitable inertial motions due to a turned head in the event of a rear-end collision.

So far, the importance of head position and orientation sensing has been discussed. The next section examines the potential sensing solutions that can enhance automotive safety systems with advanced real-time driver head position and orientation sensing.

1.1.7 Summary of Motivations
It is clear that a lack of driver attentiveness plays a vital role in increasing the risk of automotive-related injuries. The economic, personal, and societal burdens of such injuries justify the apparent need for whiplash and other injury mitigation solutions in the automotive industry. This problem is magnified further because the whiplash mitigation solutions to date have deficiencies arising from their lack of occupant sensing. Therefore, the motivation for the proposed work is to address this problem by providing driver head pose sensing for use in proper HR positioning and other automotive safety systems.

1.2 Head Tracking Sensing Solutions
Various types of transducers are available to be employed inside a vehicle for the purpose of real-time driver head pose sensing. However, there are industrial requirements such as cost efficiency, integration with other vehicular onboard systems, size, weight, etc. that limit the selection to only a few sensing solutions. By considering the aforementioned requirements, this section presents a critical review of the available sensor and measurement technologies considered suitable for real-time monitoring of the occupants' head pose, including optical and electrostatic sensors.

1.2.1 Stereo Cameras and Binocular Vision
Stereo vision systems provide a binocular vision output for the target object in a 3D scene. Stereo matching is generally used to extract the projection of the target object's points in the left and right image planes. Thus, a stereo camera system consists of two cameras with a certain baseline (the distance between the two CCD planes of the cameras) and orientation relative to a base frame. A 3D reconstruction algorithm uses the separation between the projected image locations of the same 3D world scene point to estimate the 3D position of a scene point on the target object. Stereo vision has been widely used in 3D sensing, and it mimics the human vision system in which 2D images of an object appear with a certain disparity (or separation) in the left and right images. This disparity is a valuable source of information to reconstruct the 3D position of the object using triangulation.

The use of stereo vision is reported in the automotive-related literature where the 3D pose of the occupants (including their heads, upper torso, hands, etc.) was desired. A stereo camera-based face pose estimation system with applications to driver monitoring was proposed by Jimenez et al. [27].
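As a concrete illustration of the depth-from-disparity relationship underlying these stereo systems, a minimal sketch follows. The focal length, baseline, and disparity values are illustrative assumptions only, not parameters taken from this thesis or from the cited studies.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (in metres) of a scene point seen by a rectified stereo pair.

    For rectified cameras, a point at depth Z projects with horizontal
    disparity d = f * B / Z, so Z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px


# Illustrative values only: a 12 cm baseline, a 700-pixel focal length,
# and a 30-pixel disparity place the point roughly 2.8 m from the cameras.
print(depth_from_disparity(focal_px=700.0, baseline_m=0.12, disparity_px=30.0))
```

The inverse relationship between depth and disparity is also why stereo accuracy degrades for distant or low-texture surfaces, one of the drawbacks noted later in this section.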
A head orientation sensing system to observe the driver's behavior was proposed by Ebisawa [28], in which a stereo camera set was utilized for driver head orientation estimation. Miyaji et al. [29] developed a driver cognitive distraction detection system that used a stereo camera system to track the driver's eyes and head movements.

1.2.2 Structured Lighting
Structured lighting was introduced in the 1970s as a means of recovering the 3D shape of objects. It uses the same principle of triangulation that is used in stereo vision but avoids the difficulty involved in matching stereo points, as explained in [30]. Structured-light scanners are widely used for various applications in robotics and computer vision. They are especially effective in 3D object bin-picking and 3D object modeling applications because of the accuracy and reliability of the range data [31]. Different researchers have proposed structured-lighting solutions to detect and track the driver's head in real-time. Hu et al. [32] proposed a structured-light system to fulfill this requirement in an attempt to improve airbag deployment. This system employed a stereo camera and an infrared illumination unit to track the position of the occupant's head in real-time. Stereo imaging and infrared-based occupant detection were evaluated by S. Krotosky et al. [33]-[34] to investigate their potential use in an intelligent airbag system.

The use of structured lighting in occupant sensing applications has been reported in the literature in conjunction with the use of stereo vision sets. The addition of structured lighting can improve the stereo vision results. However, it does not significantly reduce the complexity of the stereo matching process.

Among the aforementioned optical solutions, stereo vision and structured lighting were not considered in the proposed research due to their four main drawbacks: 1) the disparity matching process is computationally expensive and requires high-speed computation hardware, 2) the specific baseline dimension of the stereo set imposes installation size constraints within the interior space of the vehicle (the same argument is valid for structured-light solutions), 3) the lack of ability to make accurate 3D measurements for surfaces with minimal or no texture or patterns, and 4) the vulnerability of the estimation accuracy to varying external lighting conditions inside the vehicle. Of course, one solution to address the latter is to equip the stereo camera with a Near-Infrared (NIR) lighting module and visible light rejection filters on the camera lenses. However, this would add complexity to the system in terms of cost, dimensions, and power consumption.

1.2.3 Time-of-Flight Range Imaging 3D Sensor
A ToF camera works on the principle of emitting modulated infrared light onto the scene of interest. The scene distance information for "each" pixel is calculated from the time between the emission and the reception of the reflected infrared light at that pixel. Detailed information about ToF 3D range imaging is provided by Hagebeuker et al. in [35]. ToF cameras have been employed in many occupant classification studies. A ToF camera was employed by Devarakota et al. in [36]-[37] to provide the required inputs for occupant detection and classification inside the vehicle.
The authors reported most accuracies in one dimension lying between -8 cm and +2 cm. A team of researchers [38] at Toyota Research Institute developed a system to estimate the driver’s limbs (including arms, heads, and torso) from the 3D data obtained from an infrared ToF camera. The application of a ToF camera for hand gesture recognition was reported by                                                                                                                       11 Kollorz et al. [39]. ToF cameras are also used in computer vision applications for the purpose of 3D facial feature detection as well as head position and motion sensing. A head tracking system employing a combination of face and nose detection using a ToF camera was reported by Bohme et al. [40]. Their first reported application of head tracking using a ToF camera was in human-computer interaction. This application involved automating the process of user text entry by tracking the head and gaze of the user in front of the computer display. Their second reported application was in driver attention detection by monitoring where the driver was looking and correspondingly triggering an alarm if the driver’s attention was not focused on the road. Further applications of ToF cameras for obstacle detection in Automated Guided Vehicles (AGV) and in assisted vehicle parking have also been reported by Bostelman et al. [41] and Gallo et al. [42] respectively.  1.2.4 Capacitive Proximity Sensing In addition to optical sensors which have been widely used in the automotive industry, capacitive sensing has also been widely reported in the automotive industry related research for various purposes including occupant detection and classification and for automatic HR positioning. The application of capacitive sensing has benefited the automotive industry by improving occupant safety and reducing injury (see Section 1.5.1 for reference). Electric field sensing and its applications in capacitive sensing are explained in detail by Smith [43]. Capacitive sensors are used in many applications such as proximity sensing (e.g. personnel detection, light switching, vehicle detection), measurement (e.g. flow, pressure, liquid level, and spacing), and computer graphic input (e.g. laptop mouse pad) [44]. Capacitive sensors offer applications to obstacle avoidance system for teleoperated robots. Novak et al. in [45] proposed a capacitive proximity sensor system to                                                                                                                       12 detect the proximity of human limbs to a robotic manipulator to avoid possible collisions within its workspace.  Capacitive sensing systems and their applications in seat occupancy detection has shown promising results as stated in the literature survey provided in Section 1.5.1. Capacitive sensors are extremely useful when the existence of an occupant needs to be confirmed. This can be done in a logical manner by defining a threshold on the detected mutual capacitance between the electrodes planted inside the seat. For example, if the mutual capacitance of two electrodes falls below a certain threshold (due to the existence of an occupant), then the logic is set to “1” and that signifies the presence of an occupant. This logic is normally set to “0” when there is no occupant presented on the seat. 
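As an illustrative sketch of this thresholding logic (not a description of any production system), the following Python fragment assumes a hypothetical mutual-capacitance reading in picofarads and an arbitrary 3.5 pF threshold:

def occupancy_flag(mutual_capacitance_pF, threshold_pF=3.5):
    # In shunt mode, a grounded occupant diverts field lines to ground, so the
    # measured transmitter-receiver mutual capacitance drops below its
    # empty-seat baseline. The 3.5 pF threshold is purely illustrative.
    return 1 if mutual_capacitance_pF < threshold_pF else 0

# Example readings: an empty seat might read near 4 pF, an occupied seat less.
print(occupancy_flag(4.1))  # 0 -> no occupant detected
print(occupancy_flag(2.9))  # 1 -> occupant detected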
There will be a discussion later in this thesis about the methods in which capacitive sensor arrays can be utilized to quantify the occupant’s head position in real-time within the interior space of a vehicle.  The use of in-vehicle optical and capacitive sensing techniques for occupant head sensing and OOP detection has been explained so far. In the next section, the selected head pose sensing technique for each of the optical and capacitive proximity sensing modalities will be discussed.   1.3 Selected Head Pose Sensing Technique  In this section, the two sensing techniques deemed suitable for the purpose of real-time driver head pose estimation are discussed. In particular, a ToF camera range imaging sensor was selected among the available optical sensing solutions to operate in conjunction with a capacitive proximity sensing technique inside the vehicle. The suitability of each of these sensing techniques is explained next. It must be noted that although inexpensive,                                                                                                                       13 ultrasonic sensors were not considered in this automotive research for the purpose of driver head pose estimation, as they would not operate properly in the presence of hair, head coverings, and other disturbances such as wind.     1.3.1 ToF Camera-based Range Imaging The ToF range imaging cameras comprise a fairly new technology, with predominantly European companies and organizations (such as PMD-Vision®, Mesa Imaging, and SoftKinetic) proving capable of successfully commercializing cameras that work based on the ToF principle. The use of such cameras inside vehicles has been increasing due to the following main advantages (as confirmed by the author after conducting laboratory experiments): 1) compact geometrical design of the camera chassis if used for short range applications, 2) scene depth acquisition with relatively high sampling rate (20 frame per second),  3) ability for every pixel in the image to estimate the depth value. Consequently, there is no need to process and analyze the 2D images from two (left and right) stereo cameras, and 4) the cost of a ToF camera has decreased to about $250 from about $2500 in about one year Due to the discussed drawbacks of a 3D stereo sensing and/or structured lighting system and the aforementioned advantages of the ToF camera, the ToF camera was selected as the 3D vision-based sensor to perform the required 3D measurements of the frontal side of the driver’s upper torso within the interior space of the vehicle in the current research.  Although the deployment of the ToF camera, installed in front of the driver and inside the vehicle, could lead to independent measurements of the driver’s head orientation and position, another sensing modality with a reliable and unobstructed detection field                                                                                                                       14 (covering the back of the head) is still required in the process of HR adaptive positioning since a ToF camera can be visually obstructed.  A ToF sensor could be HR-mounted. However, its detection field could also be occluded by hair, head coverings, or other possibilities. Therefore, it was deemed advisable to assign both HR-mounted capacitive sensing and a front-mounted ToF camera for the purpose of driver head pose estimation inside the vehicle. 
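As additional background on the ToF principle summarized earlier, many commercial ToF cameras (including PMD-style devices) obtain the round-trip time indirectly by measuring, at every pixel, the phase shift of a continuously modulated NIR signal. The short Python sketch below converts a phase image to depth under that continuous-wave assumption; the 20 MHz modulation frequency is an illustrative value and not a specification of the camera used in this work:

import numpy as np

C_LIGHT = 299_792_458.0  # speed of light [m/s]

def phase_to_depth(phase_rad, f_mod_hz=20e6):
    # d = c * phi / (4 * pi * f_mod); the unambiguous range is c / (2 * f_mod),
    # roughly 7.5 m at 20 MHz, which easily covers an in-cabin scene.
    return C_LIGHT * phase_rad / (4.0 * np.pi * f_mod_hz)

# Example: a small synthetic phase image (radians) mapped to per-pixel depth (m).
phase = np.full((3, 3), 0.8)
print(phase_to_depth(phase))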
In the next section, the suitability of capacitive proximity sensing with sensor electrodes to be integrated inside the HR device for the purpose of head position estimation is explained. 1.3.2 Application of Capacitive Proximity Sensing  Capacitive proximity sensing is a concept currently being evaluated and implemented in several new AHR systems [46]-[47].  Unfortunately, little to no published information (i.e. regarding their accuracy, reliability, or limitations, etc.) has been made available on these new safety systems.  Thus, considering the requirement of HR-based installation, it was deemed useful to first evaluate the applicability and functionality of capacitive proximity sensors over the optical solutions considering the following two requirements: 1) when it is desired to quantify the occupant’s relative head-to-HR position without interference from hair or other head coverings, and 2) when installation of such sensors are demanded inside or on the HR device. The existence of the confounding elements and other application requirements are then carefully considered in this evaluation, and it is shown why capacitive proximity sensing was chosen as a sensory component for the purpose of adaptive HR device positioning, where sensor integration inside the HR device was demanded.                                                                                                                       15 The use of capacitive proximity sensors inside the HR device for the purpose of proper positioning of the HR behind the driver’s head offers the advantage of being immune from the following items categorized as “disturbing elements” for the majority of optical systems.  - Head coverings and hair: Head coverings and hair could frustrate optical solutions as they can occlude their Field of View (FoV) due to the close proximity of the driver’s back of the head to the HR device. Head coverings are invisible to capacitive sensors when there are no conductive elements (e.g. buttons, clips, etc.) used within their structure. Nevertheless, additional research is required to either compensate for the effects of conductive elements used in head coverings and hair. For capacitive sensing alone, it has been assumed that head coverings and hair contain no conductive material.  - Lighting changes: Lighting changes constitute the most common disturbing element in optical systems as they can impact the visibility of the perceived objects within the acquired images. There have been numerous solutions provided in the literature to address the problem of lighting variations, such as the solutions proposed by Chen et al. and Tsuboi et al. [48]-[49]. Lighting changes and variations do not perturb electrostatic-based capacitive sensing approaches and one can confidently ignore their existence in the process of object position detection.  - Seat and HR-based implementation: Optical sensors generally require a minimum distance to the object of interest to satisfy the geometrical requirements imposed by the sensor’s FoV. Due to these geometrical requirements, optical sensors have to be installed in locations offering a clear and unobstructed view of the object of interest.                                                                                                                       16 For example, an optical sensor could be obscured by hair, head coverings, clothing, etc. if installed within the HR or seat assembly. 
- Cost efficiency: Capacitive sensors can be manufactured less expensively relative to current optical solutions, as optical systems typically employ more expensive imaging sensors and processing elements. Atmospheric variations such as pressure, temperature and relative humidity impose major problems for capacitive proximity sensors however. A detailed discussion is provided by Baxter [44] on how pressure, temperature, and relative humidity change the dielectric constant of capacitive sensors. Reference capacitors [50] are commonly used to compensate for these atmospheric variations. A compensation technique is also proposed in this thesis for the purpose of compensating for the atmospheric variations.  1.3.3 Summary Based on the discussion provided in Sections 1.3.1 and 1.3.2 and by assuming the possible effects of any metallic objects on or near the driver’s head are small, it would appear that HR-mounted capacitive proximity sensors would have an advantage over optical systems for the purpose of head position estimation when there is a demand for HR-based head proximity detection sensors. Hence, a detailed investigation of a real-time capacitive proximity sensing modality, when used in an array of capacitive sensors, has been the focus for driver head position sensing in this research.  Estimation of the driver’s head orientation using a ToF camera offers the following additional advantages: 1) improved adaptive HR positioning in the events of detecting turned heads (HR should be moved closer to the head if turned head detected), and 2) redundant approach to measure head position to recover from a failure with the capacitive                                                                                                                       17 sensing system provided that the distance to the face can still be estimated by the camera, and 3) providing useful information to be used by driver awareness and distraction detection systems.  1.4 Objectives of this Thesis  Since the prime candidates for real-time vehicle driver head pose detection were selected to be capacitive proximity and ToF camera sensing, this thesis was undertaken with the objective of employing and evaluating both sensor types, and in the process, take advantage of the synergy, similarities, and differences between these two sensing modalities to aid in achieving acceptable net system head pose accuracy for whiplash mitigation.  The desirable IIHS-related position-sensing performance for HR up/down motion, would be that the top of the driver’s head should be at the top of the HR to avoid obstruction of the driver’s view through the rear-view mirror (i.e. at 0 cm in Figure 1-1) and the maximum/minimum tolerable error would then be ±2 cm to meet the IIHS criteria (the mean absolute error has to be less than 2 cm).  For horizontal motion of the HR, a backset distance of 2-5 cm was desired (i.e. 3.5 cm in Figure 1-1) and the maximum/minimum tolerable error would then be ±1.5 cm to meet the aforementioned backset distance.  For orientation in yaw, an accuracy recommendation by [60] suggests an MAE of 5°. In order to allow a driver to maintain road visibility using eye yaw rotation while making the extreme head yaw rotations described in Chapter 5, a maximum tolerable error of 10° was experimentally observed while driving.                                                                                                                       
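These targets can be collected into a simple set of acceptance checks. The sketch below is only an illustrative encoding of the requirements stated in this section, with hypothetical error values used in the example call:

# Sensing-performance targets taken from this section (illustrative encoding only).
TARGETS = {
    "vertical_mae_cm": 2.0,    # IIHS-related up/down positioning tolerance
    "backset_error_cm": 1.5,   # to maintain a 2-5 cm backset (nominal 3.5 cm)
    "yaw_mae_deg": 5.0,        # orientation accuracy recommendation cited from [60]
    "yaw_max_deg": 10.0,       # maximum tolerable yaw error observed while driving
}

def meets_targets(vertical_mae_cm, backset_error_cm, yaw_mae_deg, yaw_max_deg):
    # True only if every sensing requirement listed above is satisfied.
    return (vertical_mae_cm <= TARGETS["vertical_mae_cm"]
            and abs(backset_error_cm) <= TARGETS["backset_error_cm"]
            and yaw_mae_deg <= TARGETS["yaw_mae_deg"]
            and yaw_max_deg <= TARGETS["yaw_max_deg"])

print(meets_targets(0.5, 1.0, 3.5, 9.0))  # True for these hypothetical errors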
It was envisioned that the synergistic operation of these sensors would aid in achieving suitable net system head pose accuracy. The different sensing modalities should provide some immunity to various types of disturbances, such as electrical and environmental disturbances. Although zero steady-state error of the mechanical positioning of the HR device is a requirement, the speed and other dynamic performance of the motion were not part of the design criteria for the sensing system itself. However, a laboratory demonstration of the HR device positioning was necessary to demonstrate the feasibility of pose sensing over a complete range of actual motion of the head. Finally, the desired speed of sensing and signal processing for position and orientation was selected to be at least 10 Hz in order to follow continuous head yaw angle changes of 10 degrees/sec, or 1 degree/frame.

1.5 Review of Previous Vehicle Studies Using the Selected Sensors

1.5.1 Capacitive Sensing Systems in Vehicles

Capacitive proximity sensors are the most common form of electric field sensing. In the automotive industry, capacitive proximity sensors offer capabilities useful to a range of applications such as occupancy detection and classification for airbag deployment, OOP occupancy detection, and human-vehicle interaction. Lu Y. et al. [51] introduced a child seat and occupant detection system based on two different principles: 1) pressure profile measurement, and 2) capacitive profile measurement. George et al. [52]-[53] used capacitive sensing by planting several transmitter and receiver electrodes in different locations in the passenger compartment for the purpose of occupancy detection. Contactless seat occupation detection systems based on capacitive sensing techniques are reported by Tumpold [54] as well. Zangl et al. [55] introduced a seat occupancy detection system using capacitive sensing technology. In their work, in addition to what was performed in [52]-[53], single-ended and differential electrode topologies were investigated and the results reported. Kithil [15] proposed the use of capacitive sensor arrays to detect the 3D position of the occupant's head. In his work, the integration, basic attributes, and advantages of capacitive proximity sensors for occupancy and position sensing in vehicles were discussed. However, there was no reported experimental implementation or discussion of the accuracy of occupant position detection in 3D in the work reported in [15].

Electric field sensing and its various applications are explained in detail in [43]. In the area of human-vehicle interaction, Togura et al. [56] developed a long-range human body capacitive sensing system prototype with applications in in-vehicle compartment lighting adjustment. The application of electric field sensing for in-vehicle hand gesture recognition is explained by Pickering in [57]. Capacitive sensing has also been used in combination with other sensing technologies. Schlegl et al. [58] introduced a bumper-mounted combined capacitive and ultrasonic distance measurement sensing system with applications to distance measurement between a vehicle and other external obstacles. Also, George et al. [59] proposed a combined inductive-capacitive proximity sensor with applications in occupancy detection.

Thus, one can see that capacitive proximity sensing has been widely studied in automotive-related applications.
Nevertheless, a sensor array configuration optimized with respect to size, shape, components, number of sensors, range, and verified through experimental studies has not yet been reported. In a capacitance-based sensing system,                                                                                                                       20 humidity, temperature, and pressure will affect the measured capacitance. As such, any developed AHRS must have robust sensors unaffected by these environmental factors. 1.5.2 Vision-based Head Pose Sensing in Vehicles  A review of the literature indicates that the use of optical sensing systems is one of the most popular techniques used to quantify a vehicle occupant’s head pose. Previous studies have proposed methodologies that have led to the development of real-time driver head pose sensing. The general problem of head pose sensing has been previously categorized by the evolution of common solution techniques in a comprehensive review in 2009 by Murphy-Chutorian et al. [60]. In the review of [60], nearly all forms of imaging sensors were considered including a reference to a research using a ToF camera (Zhu et al. [61]) which had very recently been introduced at that time. Unfortunately, no report of accuracy on head pose estimation was provided in their work. Since [60] found that tracking methods typically have the high head pose accuracy, thus the review of previous work will focus initially on the performance of head orientation trackers which utilized consecutive frames in continuous video and/or depth data sequences.  The approach examined in this research however estimates yaw and pitch angles from each single image frame which is more desirable than from an image sequence provided that accuracy comparable to tracking method can still be achieved. Consequently, methods of static head orientation estimated from “individual” video or depth images are subsequently reviewed. 1.5.2.1 Head Orientation from Video Sequences Estimation of the head pose from the analysis of video sequences relies on capturing sequences of video with a clear view of the person’s head at different head angles                                                                                                                       21 in each frame. Particle filtering and Kalman filtering are common methods to model the nonlinear motion of the head and introduce more robustness to the process of head pose estimation from the given sequences of video. All of the following methods require sequences of video to achieve head pose tracking in real-time:  a) Murphy-Chutorian et al. in [62], employed a single 2D camera positioned on the windshield and in front of the driver and an IR illuminator (for night-time vision) positioned on the left side of the windshield and generated a texture-mapped 3D model of a representative sample of a human head in order to be used in the initialization phase, and then later be used in their 3D model-fitting and texture-based tracking system phase. Although a 2D camera was used, the third dimension (depth from the camera) can be adjusted such that the subject’s image head width matches that of the model during model-fitting. The combined angular error from initialization and tracking compared to the ground truth data obtained from a Vicon optical motion capture system reported Mean Absolute Error (MAE) of 8.57°, 11.24°, and 8.29° of pitch, yaw, and roll head angles, respectively. See Table 1-1 for a summary of the results. 
b) Using a manually-initialized texture map of a 3D cylindrical model of the head from video sequences obtained from a 2D camera, Cascia et al. [63] reported a head orientation tracking system with pitch and yaw errors as a function of time. Subsequently, the authors of [63] computed the MAE performance from the data reported in [60] to be 3.3°, 6.1°, and 9.8° for yaw, pitch, roll angles, respectively.  c) Oka et al. [64], proposed a stochastic filtering framework for head pose estimation using stereo imaging with (as estimated and reported in [60]) a MAE error of 2.2°,                                                                                                                       22 2.0°, and 0.7° on yaw, pitch, and roll, respectively.  d) Using a stereo camera and the acquired video sequences, Morency et al. [65] introduced a head orientation tracking system using a linear Gaussian filter with reported MAE of 3.5°, 2.4°, and 2.6° on yaw, pitch, and roll, respectively (as reported in [60]).  The major drawback with all these tracking approaches is that they rely upon multiple images, taken sequentially, to estimate the head orientation. This introduces an unavoidable computational lag between the first image and the last image necessary for the tracking. Although these tracking methods can yield accurate results, they also have a higher potential to suffer from periods where the view is obstructed, the subject is outside the FoV, or there are errors in tracking. These periods can incur large errors and require time to reacquire accurate tracking data.   With respect to the 3D model fitting research presented in [62], and [63], both require the scaling of a standard head model for different scene depths to achieve a depth estimate, which can affect the depth accuracy for heads of various other sizes. The stereo camera-based approaches [64]-[65] require a stereo baseline separation between the two cameras in a stereo camera pair. This baseline requirement adds to the in-vehicle installation size and cost, and for depth estimation, requires a correlation time to compute disparity. 1.5.2.2 Head Orientation from Individual Images A different approach taken by researchers for head pose tracking, avoids the lag of multiple frames by making head pose estimations based on single images. Breitenstein et al. [66] introduced a real-time face pose estimation technique applied on a database                                                                                                                       23 containing depth data of the human head acquired from a proprietary 3D scanner composed of a video projector, two monochrome cameras (stereo pair) and a single color camera (see [67]). Their reported accuracy included yaw and pitch angle Mean Errors (ME) of 6.1° and 4.2°, respectively. Using the database from the same 3D scanning system created by Breitenstein et al. [66], and Fanelli et al. [68] introduced a real-time head pose estimation technique using random regression forests with mean error of 5.7° and 5.1° on yaw and pitch angles, respectively. The same group as in [69] later employed a Microsoft Kinect sensor to estimate head pose in real-time as reported in [60]. Their system accuracy measurement reported a mean error of 8.9°, 8.5°, and 7.9° on yaw, pitch, and roll, respectively. Huang et al. [70], employed a database of 3D models of human heads in [71] and introduced a nonlinear regression technique to estimate head orientation. 
They achieved their best accuracy on combined yaw and pitch angles with a reported MAE of 5.68°. Their work was not tested on real-time images captured from a camera. Ray et al. [72] introduced an automated head pose estimation technique for vehicle operators using a ToF camera and fitting a 3D line to the nose ridge. The reported accuracy for the increased frame rate was an MAE of 12.9° and 4.8° for yaw and pitch angles, respectively.

With the exception of [72], all the above structured lighting systems are baseline dependent. The research in [72] is not baseline dependent as it used a ToF camera. However, their work depends on training the system with head geometrical features measured from a Principal Component Analysis (PCA) technique along with the geometrical features obtained from a mannequin's head as ground truth training data at the various yaw angles. Consequently, to predict head yaw angle estimates, the system would have to be trained for each new driver of the vehicle, which is a significant practical disadvantage. Additionally, the yaw angle estimates reported in [72] have higher average error than reported in this thesis work.

As in Ray's work, the proposed head orientation estimation technique in this thesis required only a single camera (i.e. a ToF camera). Also, in both Ray's work and this thesis research, head orientation with respect to the fixed camera frame was estimated from a single frame for any individual depth image of the human head, and the estimations do not depend on previous estimations.

Table 1-1. Error results and range of head detection in the previous head pose estimation work

                 Analysis of video sequences       Analysis of individual images
Previous work    [62]     [63]    [64]    [65]     [66]    [68]    [69]    [70]     [72]
MAE/ME           MAE      MAE     MAE     MAE      ME      ME      ME      MAE      MAE
Pitch            8.57°    6.1°    2.0°    2.4°     4.2°    5.1°    8.5°    5.68°    4.8°
Yaw              11.24°   3.3°    2.2°    3.5°     6.1°    5.7°    8.9°    5.68°    12.9°
Roll             8.29°    9.8°    0.7°    2.6°     N.A.    N.A.    7.9°    N.A.     N.A.

Many of the above noted head measurement techniques, including those using stereo cameras, cannot be considered suitable for the proposed in-vehicle driver head pose estimation if they employ any of the following strategies: 1) a reliance on a sequence of images to estimate the head pose (inter-frame tracking [62]-[65]), due to the introduction of an estimation delay and the possibility that external disturbances or sudden head movements will interrupt the tracking and could cause false measurements; 2) a requirement for creating one or several 3D models of the head at different orientations (which could be a costly training process) and finding a match in an augmented reality framework while tracking [19] and [62]; and finally 3) the necessity to obtain sample head images from synthetically generated training sets [68], since creating a set of synthesized head images for training purposes introduces a great dependence on the particular user from which the facial features and the head 3D profile were obtained. One solution is to create several 3D head models by requiring a user to align their head at very specific positions or angles, which is very difficult without a physical aid or external sensor. Also, the performance of the system greatly depends on the precision with which the head is aligned.
The latter would be even more difficult if smaller head angle resolutions are required. However, in this thesis, reconstruction of the 3D data of the full face was not needed unlike techniques using texture mapping for repairing the 3D model of the face during video sequences.  1.6 Thesis Contributions The identified contributions of this research towards improving the technology for the purpose of driver head pose sensing with applications in whiplash mitigations and automotive safety systems are listed below. It must be noted that some of the following research contributions may also be applicable to application areas outside of vehicular safety.  1- Developed a new design process for the creation of a set of range-maximized (per unit sensor area) capacitive sensors. The term “capacitive sensor” in this thesis refers to interdigitated transmit and receive electrodes within the sensor area. The detection range was required to be large to compensate for the presence of the grounded backplane shield in point 2 below. A novel head position sensing array was designed and built comprising three (minimum) capacitive sensors. 2- Designed and evaluated the use of a grounded backplane behind the array to: a) shield the capacitance measurements from the electrostatic effects of conductive objects (i.e. appearing as disturbances) behind the sensor array, and b) provide                                                                                                                       26 detection range adjustment depending on the geometrical distance between the surfaces of the backplane and the sensor array. 3- Designed and evaluated a reference capacitance compensation technique in order to compensate electrode capacitance measurements in the array due to the effects of temperature and humidity.  4- Designed and developed a self-contained and adaptive HR electromechanical device equipped with a linear position servo system and the proposed capacitive electrode sensing array.  5- Designed and developed a novel method to adaptively compensate for the lack of 1) even projection of the NIR light pattern onto the scene of interest, and 2) enough reflectivity of the target object by adjusting the ToF camera integration time.  6- Developed a new surface curvature-based methodology for head orientation estimation (pitch and yaw angles) based on extracting the driver’s nose coordinates from a single integration time-optimized ToF camera image.  7- This simple geometric method (as defined in [60]) employed for the purpose of extracting the head yaw and pitch angles was successful due to the high quality 3D surface reconstructions from the ToF camera.                                                                                                                          27 Chapter 2. Driver Head Position Quantification Prototype 1  2.1 Introduction In order to evaluate the feasibility of 3D sensing of the head position using capacitive sensing methods, a laboratory testbed was needed for performing laboratory experiments and assessing the performance of the system. This laboratory testbed, (as shown in Figure 2-1), was designed and built and included a vehicle seat (removed from a 2002 Infiniti passenger vehicle), a set of TSLOTS™ aluminum brackets to accommodate the seat and the sensor mounts, and a customized HR device to replace the existing seat HR device (i.e. a basic HR without electromechanical components).          1 A version of Chapter 2 has been published, N. Ziraknejad, P. Lawrence, and D. 
Romilly, “Quantifying Occupant Head to Head Restraint Relative Position for use in Injury Mitigation in Rear End Impacts,” SAE International, Warrendale, PA, 2011-01-0277, Apr. 2011 Figure 2-1. The testbed accommodating the system components. A: HR device with the integrated capacitive array sensor (see Section 3.2.11), and B: The ToF camera was installed in a fixed orientation on a post (in a vehicle, a movable rear-view mirror would also be installed above it). AB                                                                                                                      28 The customized electromechanical HR device ultimately included a final capacitive proximity sensing system as shown in Figure 2-1 which required a great deal of attention to be properly designed (as described in Chapter 6) and built in-house. The main focus in this chapter has been placed on: 1) reporting the results of the feasibility study on capacitive proximity sensing for the purpose of human head position detection, and 2) the several electromechanical design aspects of the proposed driver head pose sensing system. 2.2 Electric Field Sensing Circuits To properly develop a suitable capacitive-based sensing system, it was first necessary to review some of the fundamental aspects of the electric field sensing principle.  Three different modes of electric field sensing are reported by Smith et al. [73]: 1) loading mode, 2) transmit mode, and 3) shunt mode. All three modes and their applicability to the proposed work were carefully studied in the early development stage of this research, and preliminary lab testing was performed to select the most practical method for the purpose of occupant head position quantification. Shunt mode was chosen as the electric field sensing principle for this occupant head quantification system as it offers the advantage of employing the transmitter or receiver electrodes without any physical contact with the human body. This is mainly due to the contactless architecture of the shunt mode elements.  2.3 Electronics for Capacitive Sensing To conduct a feasibility study on capacitive proximity sensing and evaluate its suitability for the purpose of human head position sensing, a capacitance-to-digital (C/D) converter chipset, AD7150, manufactured by Analog Devices (AD) was acquired. The AD7150 chipset offers two separate channels of capacitance measurements and allowed users to read the capacitance measurements via a communication interface. The AD7150 chipset is comprised of two main components: 1) the C/D converter unit, and 2) the                                                                                                                       29 capacitance acquisition module (CAPDAC) both of which are shown in Figure 2-2. Since the maximum full-scale input range of the C/D converter is 4pF, for capacitance levels higher than 4pF, the CAPDAC can be used to offset capacitance up to a range of 10pF, allowing the C/D converter to detect capacitances between 10 pF and 14 pF. In other words, the internal of AD7150 provides the capability to measure a contiguous 4 pF range of capacitance input values that are between 0 and 14 pF. Figure 2-3 depicts the transmitter and receiver electrodes arranged in the shunt mode configuration in front of a human head which is capacitively coupled to ground.        Figure 2-2. (left) The AD7150 block diagram, (right) Capacitance-to-Digital converter Figure 2-3. 
The block diagram of the system of conductors and the charge amplifier

The measured capacitance ($C_{x_i}$, $i = 1, 2$) is connected between the C/D converter and an excitation source (see Figure 2-2, right). Assuming $C_x$ is the equivalent capacitance of the system of conductors shown in Figure 2-3, each pulse of the excitation voltage, $V_{exc}$, charges the measured capacitance $C_x$ through the input terminal connected to an internal charge amplifier (see Figure 2-3). This, in turn, converts the input charge, $Q_x = C_x V_{exc}$, into the output voltage $V_{int}$. The capacitance value, $C_x = C_{int} V_{int} / V_{exc}$, as shown in Figure 2-3, is actually computed internally using a Σ-Δ (sigma-delta) modulator circuit which includes the charge amplifier and an A/D converter, resulting in the capacitance being output as a 12-bit value in hex notation.

Without a head present, the capacitance is solely produced by the field around the T and R electrodes. When a grounded head is introduced, the capacitance to ground (not measured) increases and the capacitance between the electrodes (measured) is reduced. As shown in the shunt mode model depicted in Figure 2-3, the occupant's head is coupled to ground through $C_{HG}$. Togura et al. [56] discussed that, since the human body is highly conductive and is much larger than the sensor electrodes, it can be assumed that the head is directly coupled to ground. In the electrostatic modeling provided in this work, the human head is assumed to be shorted to ground.

Preliminary experiments revealed that if small transmitter and receiver electrodes (e.g. 2×2 cm) were connected to the Cin and the Excitation channels on the AD7150 evaluation board, the system was capable of measuring the change in mutual capacitance between the transmitter and receiver electrodes in the presence of capacitively air-coupled grounded target objects (i.e. placed in a proximity of less than 10 cm) such as human fingers, hands, and heads. This mutual capacitance change was measured and provided by the board via its I2C interface.

Insight was derived from this initial test to extend the findings of this feasibility study to reading the capacitance signals, as described in the next section, from an array of transmitter and receiver electrodes, with the main goal of expanding the detection region to a larger 3D volume that could accommodate the head and the expected head motions during normal driving conditions.

2.4 Preliminary Head Position Quantification

2.4.1 Sensor Array Design

A set of concentric transmitter and receiver copper plate electrodes was designed and built based on the shunt mode principle explained in the previous section. The individual transmitter and receiver electrodes were positioned to construct a capacitive sensor array with dimensions of 25×10×1 cm (L-W-T). The geometry and dimensions of the individual capacitive sensors were designed to be potentially housed inside a HR structure with dimensions of 27×20×8 cm (L-W-T). The following simplified block diagram (see Figure 2-4) shows the main components, specifications, and connections of the experimental capacitive proximity sensor array.
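A minimal Python sketch of the kind of acquisition loop implied by this block diagram is given below. The ten electrodes, the multiplexed read-out, the 12-bit output and the roughly 10 Hz frame rate follow the description above; the read_raw_code() function and the linear code-to-picofarad scaling are hypothetical placeholders rather than the actual AD7150 driver or its exact transfer function:

import time

NUM_SENSORS = 10      # electrodes in the array
FULL_SCALE_PF = 4.0   # selected C/D converter input span (pF)
MAX_CODE = 4095       # 12-bit output code

def read_raw_code(channel):
    # Placeholder for selecting a MUX channel and reading its 12-bit C/D code.
    return 2048  # hypothetical mid-scale reading

def code_to_pF(code, capdac_offset_pF=0.0):
    # Assumed linear mapping of the 12-bit code onto the selected 4 pF span,
    # shifted by any CAPDAC offset; illustrative only.
    return capdac_offset_pF + (code / MAX_CODE) * FULL_SCALE_PF

def sample_array():
    # One frame: visit each multiplexer channel in turn and convert its reading.
    frame = [code_to_pF(read_raw_code(ch)) for ch in range(NUM_SENSORS)]
    time.sleep(0.1)  # pace the loop at roughly 10 frames per second
    return frame

print(sample_array())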
The coordinate system $(o_s, C_s)$ is the sensor array coordinate system, where $C_s = \{\hat{i}_s, \hat{j}_s, \hat{k}_s\}$ is a right-handed Cartesian frame with $o_s$ as its origin, located at the bottom center of the array.

Figure 2-4. The proposed capacitive proximity sensor array design and computer interface (16:1 MUX, 0-5 pF capacitance-to-digital converter, 10 Hz clock generator, 10 V excitation voltage source, and PC interface)

The maximum voltage of the excitation source was selected to stay well below the maximum permissible exposure of the human body to the produced electric field strength [74]. Figure 2-3 shows a side view of the capacitive sensor array and the human subject positioned in front of the sensor. Electric field sensing is performed while the occupant's head is moved within the FoV of the sensor. Figure 2-5 shows a real human subject in front of the sensing electrodes. The electronics, multiplexing mechanism, and real-time data acquisition system for the array of ten capacitive sensors allow the system to operate with a sampling frequency of 10 Hz.

Figure 2-5. The human subject positioned in the FoV of the capacitive sensor array

Figure 2-6 shows the capacitive proximity data obtained from the occupant's head positioned at two different distances relative to the coordinate system $(o_s, C_s)$ fixed to the sensor array (see Figure 2-5). In Figure 2-6, surface (a) shows the results for the occupant's head positioned 3 cm away from the sensor, while surface (b) shows the results when this distance is increased to 6 cm (i.e. 3 cm farther from the sensor). Such surfaces were fitted to the capacitive proximity data to obtain a more tangible illustration of the signal shape resulting from the array outputs. As expected, based on the shunt mode principle, the capacitance increases as the head moves farther from the sensor. An experimental setup is introduced in the next section to construct the discussed head position quantification system.
Figure 2-6. The capacitive array data for the occupant's head positioned: (a) 3 cm and (b) 6 cm from the sensor

2.4.2 Head Position Experimental Setup

One of the main steps in head position quantification is assigning a proper coordinate system to the occupant's head. The occupant head coordinate system $(o_h, C_h)$ is attached to the head center of gravity as shown in Figure 2-7. Walker et al. [75] provided a detailed discussion on the location of the center of mass, coordinate system assignment, etc. of the human head. In this section, the proximity of the occupant's head relative to the HR frontal compartment refers to the proximity of point $^hX$ (at the back of the head) relative to the coordinate system $(o_s, C_s)$ attached to the sensor array. Point $^hX$ is represented in $(o_h, C_h)$ by its Cartesian coordinates $^hX = [x_h\ y_h\ z_h]^T$. A linear homogeneous transformation was used to transfer point $^hX$ into the sensor array coordinate system, $^sX = {}^s_hT\,{}^hX$. The proximity of the occupant's head to the head restraint is now denoted as $^sX$ in this section. Point $^sX$ is represented in $(o_s, C_s)$ by its Cartesian coordinates $^sX = [x_s\ y_s\ z_s]^T$.

The region in which the occupant's head was being moved is referred to as the sensor array Region of Interest (ROI). Two ROIs were assigned to the sensor array and denoted by ROIh and ROIv. ROIh refers to the maximum allowable head movement within the $i_h$-$j_h$ horizontal plane, and ROIv refers to this movement within the $j_h$-$k_h$ vertical plane. The dimensions of the ROI were chosen based on the optimum proximity detection range of the capacitive sensor array, although the head may move outside of this range. The proximity range of the capacitive sensor array is directly affected by the amplitude of the excitation voltage and the size of the individual capacitive sensors. Based on these parameters, ROIh and ROIv (within which the subject's head moves) occupy a 6×6 cm square on the $i_h$-$j_h$ plane and a 6×6 cm square on the $j_h$-$k_h$ plane, respectively, in the presented work. The assigned ROIs are shown in Figure 2-7. The figure also indicates the movement of the occupant's head within the ROIs. The closest permissible distance between the occupant's head and the capacitive sensor array is set to be 3 cm. Based on Figure 2-7, the sensor detection range (in $(o_s, C_s)$) is +3 cm to +9 cm in the backward/forward directions, ±3 cm in the sideways directions, and ±3 cm in the up/down directions, respectively.

Figure 2-7. Coordinate system assignment, ROIs, and movements of the occupant head

The head proximity sensing system involves two main sub-systems. The organization and use of these sub-systems is described here:

1) Data collection and training: The known data collected in this process were the capacitive proximity signals and their corresponding 3D Cartesian coordinates of the occupant's head ($^sX$). The capacitive proximity signals were obtained by collecting the individual capacitive sensor outputs inside the array and organizing them into a series of column vectors, i.e. $C_i = [c_1\ c_2\ \cdots\ c_{10}]_i^T$. Since there are ten individual capacitive sensors within the array, the corresponding column vector holds ten individual capacitive proximity values. In a similar process, the collected head coordinates were organized into a second series of column vectors $^sX_i = [x_s\ y_s\ z_s]_i^T$ with $1 \le i \le n$, where $n$ is the total number of samples collected during the data collection process. A nonparametric neural network was used to find a mapping between the capacitive proximity values $C_i$ and the real-world 3D coordinates of the occupant head $^sX_i$. A one-hidden-layer feed-forward neural network training process was employed with the capacitive proximity values $C_i$ as the finite inputs and the 3D coordinates of the occupant head $^sX_i$ as the finite outputs. This machine learning process trains the neural net by incorporating the inputs and outputs to produce the required weight and bias parameters. The estimated neural net parameters and the capacitive data were then used during the occupant head position quantification process to find the unknown 3D coordinates of the occupant's head.
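As a concrete illustration of the mapping just described, the following sketch fits a one-hidden-layer feed-forward network from ten-element capacitance vectors $C_i$ to head coordinates $^sX_i$ using scikit-learn. The random training data stands in for the robot-collected point cloud, and the hidden-layer size is an arbitrary choice rather than the value used in this work:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-ins for the collected point cloud: n samples of ten capacitances (pF)
# paired with the corresponding head coordinates [x_s, y_s, z_s] in cm.
n = 500
C = rng.uniform(4.2, 5.0, size=(n, 10))       # capacitive proximity vectors C_i
X_head = rng.uniform(-3.0, 9.0, size=(n, 3))  # ground-truth positions sX_i

# One hidden layer, as in the training process described above.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(C, X_head)

# At run time, a new capacitance vector is mapped to an estimated 3D position.
C_new = rng.uniform(4.2, 5.0, size=(1, 10))
print(net.predict(C_new))  # estimated [x_s, y_s, z_s] in cm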
Two methods were considered to obtain the occupant's head position relative to the capacitive sensor array: 1) real-time 3D optical tracking of the head and finding the head position relative to the sensor array, and 2) placing the sensor on the end-effector of a robotic manipulator and moving it while the head remained stationary relative to the robot base coordinate system. Both methods could have resulted in providing the relative position of the occupant's head to the sensor array by applying the necessary linear transformations. This work employed the latter method, as it was not deemed convenient for the human subject to move his/her head continuously within the field at the small spatial intervals (0.5 cm) needed for data collection. The subject's head was maintained stationary by placing the person's chin on a chinrest (see Figure 2-8), which guaranteed that the person's head stayed stationary during the data collection process. A real-time data collection program was developed to monitor the position of the occupant's head and the corresponding capacitive proximity data. Figure 2-8 shows a block diagram of the components constituting the data collection system. Column vectors holding the occupant's head coordinates and the corresponding capacitive data were collected to generate a point cloud to be used in the training process. The nature of such a data collection process is independent of the robot speed; however, the speed of the robotic manipulator affects the speed of the data collection process.

Figure 2-8. The proposed data collection system to acquire the real-time position of the occupant's head and the corresponding capacitive proximity data (CRS robot and controller, chinrest, excitation voltage generator, and capacitance-to-digital converter)

2) Occupant's head position estimation and verification: This process required the capacitive proximity data obtained from the capacitive sensor array and the calculated feed-forward neural net to estimate the Cartesian 3D position of the occupant's head ($^sX_{est}$). The verification process refers to the comparison between the estimated and the actual positions $^sX$ of the occupant's head in real-time. In this experiment, the subject moved his head randomly within the discussed ROIs in front of the capacitive sensor array. An optical real-time 3D position tracking device and a head-mounted marker mechanism were used to provide the actual position of the head relative to the sensor coordinate system. It is important to mention that the real-time 3D position tracking device was programmed and calibrated to provide the occupant's head coordinates relative to the sensor coordinate system. The schematic below (Figure 2-9) shows the proposed occupant head position verification system. It is important to consider that the optical 3D head position tracking system was used solely for the purpose of head position verification and will not be included in the final occupant's head position estimation system. The experimental results are provided next.
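The "necessary linear transformations" referred to above are ordinary 4×4 homogeneous transforms. A minimal sketch is shown below, with an arbitrary translation standing in for the calibrated head-to-sensor (or robot-to-sensor) transform ${}^s_hT$:

import numpy as np

def transform_point(T, p):
    # Apply a 4x4 homogeneous transform T to a 3D point p and return the result.
    ph = np.append(np.asarray(p, dtype=float), 1.0)  # homogeneous coordinates
    return (T @ ph)[:3]

# Example sT_h: identity rotation with a 5 cm translation along the sensor's
# forward axis (placeholder values, not an actual calibration result).
T_s_h = np.eye(4)
T_s_h[:3, 3] = [5.0, 0.0, 0.0]

p_head = [1.0, -2.0, 0.5]              # a point expressed in the head frame (cm)
print(transform_point(T_s_h, p_head))  # the same point in the sensor frame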
Figure 2-9. The proposed occupant's head position estimation and verification system (including the Polaris optical 3D tracker and head-mounted passive markers)

2.4.3 Preliminary Experimental Results

The data collection and training process involved collecting the real-time position of the occupant's head and the corresponding capacitive proximity signals from the capacitive sensor array. The collected data was then given to the feed-forward neural network machine learning process in order to find an accurate correlation model between the occupant's head position and the capacitive proximity signals. As stated in the previous section, the occupant's head was held stationary on a chinrest and the capacitive sensor was moved by a robot manipulator behind the head. This process was equivalent to moving the occupant's head in front of the capacitive array sensor while the sensor remained stationary. Figure 2-10 shows an example of the data collected from the capacitive proximity sensor array. The graph on top shows the occupant's head trajectory in the $i_s$ direction and the graph at the bottom shows the corresponding capacitive proximity signals from each of the ten sensors in the array. As expected, based on the shunt mode principle, the capacitive signals decay as the occupant's head approaches the capacitive sensor array, with the central sensors showing the greatest change.

Figure 2-10. A portion of the occupant's head movement in the x-y plane and the corresponding capacitive proximity data

Generation of the experimental results in the evaluation process (i.e. Figure 2-11, Figure 2-12, and Table 2-1) involved passing the acquired capacitive signals to the trained neural net and comparing the output (the estimated occupant's head position) with the real-world position of the occupant's head obtained from the real-time 3D optical tracker (i.e. the measured actual occupant's head position). This process (as shown in Figure 2-9) required a human subject with a head-mounted passive tracker, an optical 3D tracker, and the capacitive sensor array. The human subject randomly moved his head in different directions while the 3D optical tracker recorded the 3D coordinates of the moving head. In parallel to this process, the capacitive sensor signals were passed to the neural net and the estimated head coordinates were calculated.
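The comparison just described reduces to a few standard error statistics. The sketch below computes the per-axis mean absolute error, standard deviation and maximum absolute error (the quantities reported later in Table 2-1) from paired estimated and ground-truth positions, using small made-up arrays in place of the recorded trajectories:

import numpy as np

def error_statistics(estimated, actual):
    # Per-axis MAE, standard deviation and maximum of the absolute error (cm).
    err = np.abs(np.asarray(estimated) - np.asarray(actual))
    return err.mean(axis=0), err.std(axis=0), err.max(axis=0)

# Made-up stand-ins for the capacitively estimated and optically tracked positions.
estimated = np.array([[5.2, 1.3, -0.1], [6.0, -1.6, 1.5], [7.4, 2.6, 1.6]])
actual    = np.array([[5.0, 1.0, -0.5], [6.2, -2.0, 1.0], [7.5, 3.0, 2.0]])

mae, sd, max_err = error_statistics(estimated, actual)
print("MAE:", mae, "SD:", sd, "Max:", max_err)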
Figure 2-11. The comparison between the x-y coordinates of the actual and the estimated occupant's head position

Figure 2-12. The comparison between the z coordinates of the actual and the estimated occupant's head position

The actual and estimated head coordinates were compared and the absolute errors between the measurements were calculated. Figure 2-11 shows the $(x_s, y_s)$ and $(x_s, y_s)_{est}$ components of the actual ($^sX$) and estimated ($^sX_{est}$) occupant's head position coordinates, along with the absolute error between these measurements. Figure 2-12 shows this comparison for $z_s$ and $(z_s)_{est}$. The ranges of the actual movements in this validation test were as follows: $4.1\ \mathrm{cm} \le x_s \le 8.1\ \mathrm{cm}$, $-4.3\ \mathrm{cm} \le y_s \le 4.1\ \mathrm{cm}$, and $-2.2\ \mathrm{cm} \le z_s \le 3.5\ \mathrm{cm}$.

Table 2-1 provides a statistical comparison between the estimated occupant's head position obtained from the capacitive proximity signals and the actual position obtained from the 3D optical tracker. It is important to note that occupant head movement outside of the aforementioned range introduced negative effects on the system accuracy. For example, the range of head movements for $y_s$ (the sideways direction) was larger than the defined range in Figure 2-7 (i.e. -3 cm to +3 cm). As expected (and as shown in Figure 2-11-d), larger absolute errors were introduced in positions where the head moved outside of the defined range.

Table 2-1. A statistical comparison between the actual and estimated occupant's head position.

                                   x       y       z
Mean Absolute Error (MAE) (cm)     0.18    0.37    0.42
Standard Deviation (SD) (cm)       0.13    0.25    0.25
Max absolute error (cm)            0.63    1.10    1.12

It must be noted that the provided experimental results were gathered from testing the system on one human subject in a laboratory environment (i.e. with a 22-24 °C temperature range and a room relative humidity of 40%).

2.5 Conclusions

In this chapter, a capacitive array sensor was designed and constructed for the research study, with head sensing capability and accuracy assessment as its primary goal. The subsequent experimental testing performed on this sensor array found it to be capable of quantifying the head spatial position to a maximum distance of 9 cm from the sensor, with a mean absolute error (combined x, y, and z coordinates) of less than 0.5 cm, as confirmed using an optical 3D tracker system as the benchmark calibration assessment tool. Although proposals for head position monitoring using capacitive sensors exist in the open literature (including patents), the author believes that this is the first experimental quantification of the accuracy of a capacitive head position sensing system in a laboratory setting.
Chapter 3. Driver Head Position Estimation with Protection from Environmental Disturbances 2

2 A version of Chapter 3 was accepted for publication: N. Ziraknejad, P. D. Lawrence, and D. P. Romilly, "Vehicle Occupant Head Position Quantification Using an Array of Capacitive Proximity Sensors," IEEE Transactions on Vehicular Technology, 10 journal pages, June 26, 2014.

Although the results of the preliminary study on head position quantification indicated that a planar array of capacitive proximity sensors can be used to estimate 3D head position, this array, if influenced by the presence of other conductive objects behind the array, would not be suitable for a final implementation in a vehicle. As such, the implementation of a grounded shield behind the array was deemed necessary. Early informal experiments indicated that a simple ground plane on the back of the PCB (i.e. the board accommodating both the transmit and receive electrodes) significantly reduced the detection range of the sensor 3D measurements. New electrode and array designs were then undertaken to: 1) increase the electrode range, 2) minimize the number of electrodes, and 3) reduce the effects of electrostatic and environmental disturbances. This design consideration is explained next.

3.1 Electric Field Sensing Equations

The integral and differential forms of Gauss's law in free space for electric fields are as follows:

$\oint_s \mathbf{D} \cdot \hat{n}\, da = Q_{free,enc}$   (3.1)

$\nabla \cdot \mathbf{D} = \rho_{free}$   (3.2)

where $\hat{n}$ is the unit vector normal to the surface and $da$ is a scalar increment of surface area in m². In both the integral and differential forms of Gauss's law, $\mathbf{D} = \epsilon_0 \epsilon_r \mathbf{E}$ is the electric flux density (a.k.a. electric displacement) in SI units of C/m², $\epsilon_0$ is the electric permittivity of free space in SI units of C/Vm, $\epsilon_r$ is the relative dielectric constant, $\mathbf{E}$ is the electric field in SI units of V/m, and $\nabla$ is the differential vector operator. The term $Q_{free,enc}$ is the enclosed charge in free space in Coulombs (C in SI units) contained within the surface, and $\rho_{free}$ refers to the scalar volume charge density in C/m³. Maxwell [45] introduced a system of conductors which deals with the general relationship between the electric charges and voltages of conductors in an electrostatic field. By calculating the electric charge on each electrode, the associated capacitances can be calculated from the definition of the Maxwell capacitance matrix [45], [76] as stated in (3.3) with suitable initial conditions. The terms $c_{jk}$ are referred to as the "coefficients of induction", where the charges on the three conductors ($1 \le j \le 3$) are induced by the voltages ($1 \le k \le 3$). The equations here are consistent with those used in the FEA software (COMSOL Multiphysics® ver. 4.3) used throughout this study.

$c_{jk} = \dfrac{Q_j}{V_k}$   (3.3)

To assist in developing a homogeneous matrix relationship for the electrodes as shown in Figure 2-3, and as a convention following [76] and explicitly used by [45], (3.3) can be written in the form of a linear system of equations. By naming the conductors T (transmitter), R (receiver), and H (occupant's head), the following capacitance matrix is created, representing the proposed capacitive sensor equivalent circuit.
$$\begin{bmatrix} Q_T \\ Q_R \\ Q_H \end{bmatrix} = \begin{bmatrix} c_{TT} & c_{TR} & c_{TH} \\ c_{RT} & c_{RR} & c_{RH} \\ c_{HT} & c_{HR} & c_{HH} \end{bmatrix} \begin{bmatrix} V_T \\ V_R \\ V_H \end{bmatrix} \qquad (3.4)$$

With $\mathbf{E} = -\nabla V_T$, (3.1) is then solved for $Q_R$ by replacing the electric field intensity component by the gradient of $V_T$ as follows:

$$Q_R = -\oint_S \epsilon_0 \epsilon_r \nabla V_T \cdot \hat{n}\, da \qquad (3.5)$$

In this work, it was desired to calculate the mutual fringe capacitance between the receiver and transmitter electrodes ($C_{RT}$, denoted as $C_{RT} = -c_{RT} = -c_{TR}$ according to Novak et al. [45]) and how it changes as the head approaches or retreats from the sensor. The transmitter electrode is the only electrode to which the excitation voltage is applied ($V_R = V_H = 0$), and as such $C_{RT}$ can be calculated as follows:

$$C_{RT} = \frac{Q_R}{V_T} \qquad (3.6)$$

Due to the electrostatic nature of the problem, the volume charge density in (3.2) is zero. Hence, the electric potential distribution, V, can be calculated from the solution of Laplace's equation, $\nabla^2 V = 0$. The FEA method was used to obtain the electric potential distribution, which led to the calculation of the charge on the receiver electrode. Once $Q_R$ was calculated, the mutual capacitance between the electrodes could be calculated. Using the above, FEA simulations were conducted to calculate $C_{RT}$ when models of either a grounded target object or an occupant's head were given. It was assumed that dry air surrounded the electrodes; thus in all simulations $\epsilon_r = 1$.

3.2 Simulations and Laboratory Experiments

This section compares the sensing range per unit sensor area for two different geometrical arrangements of the capacitive proximity sensors (see Figure 3-1 and Figure 3-4). The sensor configurations were investigated through both FEA simulations and laboratory experiments in order to select a planar sensor configuration that provided a long detection range ($d_{max}$). In this research, many sensor configurations were examined and, from these, two different sensor geometries were selected for simulation studies: a) a common ring-shaped concentric sensor, and b) an interdigitated comb-shaped sensor. For the sake of consistency in the reported simulation results, equal transmitter and receiver electrode areas were considered for both sensor geometries. In each study that follows, the mutual fringe capacitance $C_{RT}$ was quantified and reported as a function of the distance between the grounded target object and the planar sensor's surface.

3.2.1 Concentric Sensor

The most common capacitive proximity sensor geometry is a concentric ring sensor [44].

Figure 3-1. (Left) The schematic of each of two ring-shaped capacitive sensors ($A_1$ and $A_2$) of total area $A = \pi R_2^2$. (Right) A finite grounded plane as the target object positioned at distance d from the sensor. $A_1$ = 22 cm² for $R_1$ = 1.8205 cm, $R_2$ = 2.6463 cm, and s = 0.1 cm; $A_2$ = 44 cm² for $R_1$ = 2.5958 cm, $R_2$ = 3.7424 cm, and s = 0.1 cm.

The parameters s and d are the separation between the two electrodes and the relative distance between the grounded plane (i.e. the target object assumed here to be sensed) and the sensor surface, respectively.

Figure 3-2.
(Left) The COMSOL 3D geometrical schematic of the concentric sensor and the grounded target plane. (Right) The electric potential (V) distribution plot on two planes for the given geometry.

The arrangement of parameter values shown in Figure 3-1 imposed equal transmitter and receiver electrode areas for both of the overall sensor areas, i.e. $A_1$ and $A_2$. This equality was imposed for future comparison of this type of sensor with a comb-shaped sensor (also with equal transmitter and receiver electrode areas) discussed later. FEA simulations were performed inside the boundary domain housing the 3D models of the capacitive proximity sensor and the grounded target plane, using the 3D environment of the COMSOL FEA simulation software and the 3D electrostatic library. Figure 3-2 shows the 3D schematic of the modeled concentric capacitive sensor and the grounded circular target plane. The sensitivity and the detection range of the concentric capacitive sensor were computed and are shown in Figure 3-3, in which the following definitions were used:

- Fringe capacitance C [F]: For the sake of simplicity, C represents the mutual fringe capacitance between the transmitter and receiver electrodes. Using FEA, two sets of mutual fringe capacitances between the transmitter and receiver electrodes, $C_1$ and $C_2$, were calculated and plotted (see again Figure 3-3) as a function of the relative distance, d, between the grounded plane and the sensor surface for each of the selected surface areas $A_1$ and $A_2$, respectively. The closest object distance to the electrodes was d = 0.5 cm.

- Distance measurement sensitivity $\Delta d$ [cm]: Measurements of $C_1$ and $C_2$ were taken, for both $A_1$ and $A_2$, at distance measurement intervals of $\Delta d$, which represents the minimum measurable change in object distance. $\Delta d$ is a design parameter chosen here to be 0.2 cm.

- Distance measurement interval for plotting [cm]: Capacitance measurements were then interpolated for plotting at distance intervals of 0.1 cm to save simulation time.

- Sensor sensitivity S [fF/mm]: The change in mutual capacitance (C) over the distance measurement interval (0.1 cm) at each object-to-sensor distance (d) (i.e. the local slope of the left curves in Figure 3-3) was also calculated and plotted in Figure 3-3 (right) and is referred to as the (local) sensor sensitivity (S) in fF/mm.

- Detection system sensitivity $S_{min}$ [fF/mm]: The detection system sensitivity ($S_{min}$) is defined by the distance measurement sensitivity $\Delta d$ and the smallest measurable capacitance of the capacitance-to-digital (C/D) electronics. The smallest measurable capacitances that the C/D electronic circuit employed here (Analog Devices AD7150) can resolve are 1, 1.4, 1.6, and 2 fF for the selectable capacitance input ranges of 0.5, 1, 2, and 4 pF. Thus the selected object-to-sensor distance measurement sensitivity of 0.2 cm, along with the C/D electronic circuit resolution of 1 fF, results in an $S_{min}$ of 0.5 fF/mm (see Figure 3-3).

- Maximum detection range $d_{max}$: The maximum detection range ($d_{max}$) refers to the range of d over which the sensor sensitivity (S) stays above the minimum sensor sensitivity ($S_{min}$) over the whole range of the target object motion to the sensor's surface.
In other words, the maximum detection range was defined by the projection of the crossing point of the S curve and the $S_{min}$ line onto the d axis, as shown in Figure 3-3. The value of $d_{max}$ is directly affected by the choice of $\Delta d$. For the same chosen C/D range, if $\Delta d$ is decreased for a higher desired position measurement accuracy, $S_{min}$ increases and $d_{max}$ decreases.

Figure 3-3. (Left) The simulated mutual fringe capacitances for the smaller and larger ring-shaped concentric sensors. (Right) The sensor sensitivity plot and $d_{max}$ calculated for $S_{min}$ = 0.5 fF/mm.

The mutual fringe capacitance between the transmitter and receiver electrodes was calculated when the grounded plane was positioned at different distances from the sensor's surface (see Figure 3-3). The FEA calculations revealed that the sensor with the larger surface area offered a larger maximum sensor detection range ($d_{max}$ = 28.4 cm). As expected, and as verified by the FEA simulations, the maximum detection range of the larger ring-shaped sensor ($d_{max(2)}$) is ~46% greater than the maximum detection range ($d_{max(1)}$) of the smaller ring-shaped sensor (i.e. 28.4 cm vs. 19.36 cm). The second type of capacitive sensor (i.e. with comb-shaped interdigitated electrodes) is investigated next.

3.2.2 Comb-shaped Interdigitated Sensor

The use of comb-shaped sensors for various purposes has previously been proposed by several researchers. For example, Lee et al. [77] introduced a dual-mode capacitive sensor with tactile and proximity detection capabilities for robotic applications. The main objective of this portion of the investigation, however, was to examine the precise effect of varying the comb electrode geometry parameters on the maximum detection range. The electrode geometry parameters investigated are shown in Figure 3-4. The repetitive unit of this form of sensor incorporates two finger Receiver (R) and Transmitter (T) structures (shown in green and orange in Figure 3-4.b).

Figure 3-4. (a) The schematic of a comb-shaped interdigitated sensor with one block of transmitter and receiver fingers (n = 1, w = 5.4 cm, and s = 0.1 cm). (b) The schematic of the sensor with more fingers (n = 10, w = 0.45 cm, and s = 0.1 cm). (c) The side view of the sensors positioned at distance d from the grounded object plane. H = 4.0 cm; for $A_1$, L = 5.5 cm, and for $A_2$, L = 11 cm.

The total sensor
The value of the separation parameter (s) was chosen to address the practical constraint associated with the in-house PCB fabrication of the comb sensor.  An FEA simulation was performed in COMSOL to investigate the effects of the sensor total area (i.e. L×H) on the maximum detection range of the sensor. A finite grounded object plane (16×20m) was added to the system geometry to simulate a target object as a planar projection of a human head (see Figure 3-5).    Figure 3-5. (Left) The 3D geometrical schematic of the comb-shaped interdigitated sensors and the grounded planar object. (Right) The electric potential (V) distribution for the given geometry. The mutual fringe capacitances between the R and T electrodes were then calculated for the grounded plane at distances of 0.5 40cm d cm   in increments of       d =0.2 cm. Several comb-shaped sensors with different areas and number of finger blocks were examined through FEA simulations. Figure 3-6 shows the simulation results of three such combinations which include A1=22 cm2 with five blocks, A2=44 cm2 with one block (Figure 3-4.a), and A2=44 cm2 with ten blocks (Figure 3-4.b). Note that to compare only xyz [cm][cm][cm]Grounded planeComb-shaped electrodes10.80.60.40.201000100100.80.60.40.20101000010                                                                                                                      50 the effects of sensor area, the comb sensor with one block of fingers had its separation gap length equivalent to that of the concentric sensor’s separation gap length (16.03 cm). The FEA simulation of the comb-shaped sensor revealed that both configurations of the larger sensor area (i.e. n=1 and n=10) offered a longer detection range (maxd ) than the concentric sensor of the same area (33.68 cm vs. 28.4 cm for the concentric sensor). As shown in Figure 3-6 (right), the maximum detection range for the larger comb-shaped sensor configuration with n=10 was 33.68 cm. The maximum detection range of the larger sensor with ten finger blocks (3max 33.68d cm) was determined to be ~47% greater than the maximum detection range of the smaller sensor (1max 23.88d cm) (five finger blocks). As shown in Figure 3-6, although the comb-shaped sensor with fewer finger blocks (i.e. n = 1) offered the longest detection range (2max 39.91d cm), the mutual capacitance range between the minimum mutual capacitance (with an object at 0.5 cm) and the maximum mutual capacitance (with an object at 45 cm) was smaller than the mutual capacitance range of the sensor with 10 finger blocks (i.e. 1.52 pF vs. 2.21 pF).           Figure 3-6. (Left) The simulated mutual fringe capacitances for the smaller and larger comb-shaped sensors, (Right) The sensor sensitivity and the maximum detection range calculated for Smin=0.5 fF/mm. 
Since the mutual capacitance range of 2.21 pF for the comb-shaped sensor is much greater than the 1.38 pF mutual capacitance range observed for the ring-shaped sensor, the comb-shaped sensor offers a clear advantage in mutual capacitance range. This advantage matters when digitization of the capacitance signal into more quantization levels is desired. It is noted that a smaller mutual capacitance range reduces the number of digital quanta available during the capacitance conversion performed by the C/D electronics. For the same sensor area, variations in both the sensor finger width (w) and the separation gap between each pair of fingers (s) (which can lead to different numbers of finger blocks) are two important factors in the design process of interdigitated comb-shaped sensors. Selection of these parameters is explained in Section 3.2.4.

3.2.3 Use of a Grounded Backplane

In the work presented so far, FEA simulations were conducted according to the three-conductor shunt-mode configuration explained in Section 2.2. Due to the specific in-vehicle application of this research, a grounded conductor plane was required and was subsequently added behind the transmitter and receiver electrodes. The main advantage of this addition was to avoid the formation of an electric field where proximity detection is not desired (i.e. behind the sensor, or inside and behind the HR). The grounded backplane was realized by implementing the comb-shaped sensor on a double-sided PCB, where the comb-shaped sensor pattern was etched on one side of the PCB and the other side was connected to the circuit ground. FEA simulations and laboratory experiments were performed to investigate the effects of adding such a grounded backplane and the substrate insulating layer between the two copper sides of the PCB.

For this simulation, the sensor PCB was modeled in COMSOL according to its real-world geometrical dimensions and electrical specifications. The relative permittivity of the substrate layer was determined through a simple laboratory experiment performed on a parallel-plate capacitor made out of the same PCB material. A relative permittivity of 4.9 was calculated from this experiment and was used in the FEA simulations. As shown in Figure 3-7 (right), the addition of the grounded backplane reduces the sensor sensitivity (S) in comparison to the case where no grounded backplane was used (see Figure 3-6, right). The lower sensor sensitivity reduces the maximum detection range when the grounded backplane is present. To further increase the maximum detection range, the distance measurement sensitivity $\Delta d$ was increased to 0.4 cm.
Applying this value to the sensor distance measurement, and resetting the C/D electronic circuit's smallest measurable capacitance to 1.4 fF, resulted in a new detection system sensitivity ($S_{min}$) of 0.35 fF/mm for this specific sensor geometry. The C/D resolution change from 1.0 fF to 1.4 fF was accomplished by changing the C/D capacitance input range from 0.5 pF to 1 pF, to accommodate the full 0.97 pF mutual capacitance range observed in the Figure 3-7 simulation. This modification resulted in a maximum detection range of 11.09 cm, as shown in Figure 3-7.

Figure 3-7. (Left) The simulated mutual fringe capacitances for the smaller and larger comb-shaped sensors with a grounded backplane; n = 5 for $A_1$ and n = 10 for $A_2$. (Right) The mutual fringe capacitance variations and the proximity range for $S_{min}$ = 0.35 fF/mm.

In a physical experiment designed to investigate the effects of the grounded backplane addition, the double-sided PCB prototype of the comb-shaped capacitive sensor was employed, and the mutual fringe capacitances for different target object distances were measured and plotted as shown in Figure 3-8. The capacitive sensor PCB prototype was positioned at different distances, at measurement intervals of 0.4 cm, relative to a grounded plane made out of copper. Similar to the FEA process explained in earlier sections, the capacitance measurements were interpolated at distance intervals of 0.1 cm and plotted in Figure 3-8. As shown in Figure 3-8 (right), a maximum detection range of 11.72 cm was achieved in the experimental study. Note that the grounded backplane reduced the maximum mutual fringe capacitance between the transmitter and receiver electrodes (4.98 pF in simulation and 4.67 pF in experimentation) from that observed in simulation with no backplane (12.03 pF in Figure 3-6).

Figure 3-8. (Left) The physical experiment's mutual fringe capacitances for the larger comb-shaped sensor with a grounded backplane. (Right) The mutual fringe capacitance variations and the proximity range for $S_{min}$ = 0.35 fF/mm.

3.2.4 Selection of Electrode Separation (s) and Finger Width (w) Parameters

Multiple values and combinations of the finger width (w) and electrode separation (s) parameters were investigated during the FEA process, while the overall sensor area with a backplane was maintained at 44 cm² and the target object was a grounded plate. The mutual fringe capacitances between the comb-shaped sensor's transmitter and receiver electrodes ($C \equiv C_{RT}$), and the maximum detection range ($d_{max}$) for the specified target-object-to-sensor distances, are plotted in 3D in Figure 3-9.
The finger width (w) and separation (s) values are the key parameters in constructing the comb-shaped sensor geometry (as in Figure 3-4) and play an important role in defining the corresponding values of C and $d_{max}$. In this study, C at d = 20 cm is referred to as the capacitance signal bias (i.e. 4.98 pF in Figure 3-7). It was evident from the results of the analysis (as depicted in Figure 3-9.a) that C increases as the finger width and separation parameters decrease. When the object was positioned even closer to the sensor surface (i.e. 0.5 cm, not depicted in Figure 3-9.a), the base of the surface (the "bias") shifts down, but the shape of the curve is otherwise retained.

Figure 3-9. (a) The mutual fringe capacitance C with the target object at d = 20 cm. (b) The maximum detection range, for a grounded-object-to-sensor relative distance between d = 0.5 cm and d = 20 cm. For the plots, 1 mm ≤ s ≤ 4 mm and 1.43 mm ≤ w ≤ 54 mm. The sensor has a grounded backplane.

To assess the detection range of the sensor for various values of w and s, the maximum detection range ($d_{max}$) for a grounded target object located at several distances between 0.5 cm and 20.5 cm relative to the sensor plane was calculated using FEA, with the results plotted in Figure 3-9.b. Smaller values of the separation parameter, s, resulted in relatively larger values of $d_{max}$ at constant finger width values (w). For a fixed value of s, $d_{max}$ increased with increasing finger width. The largest $d_{max}$ for each finger width was achieved at s = 1 mm. The C/D circuit provides the capability to measure a contiguous 4 pF range of capacitance input values lying between 0 pF and 14 pF. Care was therefore taken when choosing the finger width and separation values to avoid exceeding 14 pF when the target object is either not present or far away. In order to obtain a reasonably large mutual capacitance range, which can potentially result in greater capacitance resolution for each distance measurement interval (see Figure 3-6, left), s = 1 mm and w = 4.5 mm were selected (note: for more details see the algorithm for maximizing the range of the comb sensor in Section 3.3.1). The selection of these parameters (even within the previously discussed practical constraint on s) still offered an acceptable proximity detection range (i.e. $d_{max}$ = 11.09 cm), as indicated by the results in Figure 3-9.b. Note that no significant difference in $d_{max}$ was observed for finger widths greater than 4.5 mm (w = 54 mm, which represents the one-block comb sensor geometry in Figure 3-4.a, led to $d_{max}$ = 12.38 cm, as shown in Figure 3-9.b).

3.2.5 Electric Field Intensity Comparison

The extent of the electric field near the target object (i.e. placed within 20 cm of the sensor surface) was investigated for both the concentric and comb-shaped sensors by employing FEA methods performed in COMSOL. In this study, the overall surface areas of both sensors were identical (i.e. 44 cm²), with a separation gap of s = 1 mm between the transmitter and receiver electrodes. The electric field intensity was calculated by taking the vector norm of the electric field distributed within the boundary domain of the FEA model (i.e. $\|\mathbf{E}\| = \sqrt{E_x^2 + E_y^2 + E_z^2}$ in V/m).
The results of this investigation are plotted in Figure 3-10, where the associated electric field intensity for each sensor type is plotted on the planes perpendicular to the x and y axes and located at the origin of the FEA boundary region. The plotted arrow field represents the direction and magnitude of the electric field in the boundary domain where the FEA was performed. For presentation purposes, a specified range of the electric field intensity (i.e. between 1.6 V/m and 10 V/m) was associated with the full color range in the provided color bar. The regions on the perpendicular planes that are not shown with any colors are either above 10 V/m or below 1.6 V/m.

Figure 3-10. The electric field norm distribution for the comb-shaped (left) and concentric (right) capacitive sensors. The color bar was adjusted to represent electric field intensities between 1.6 and 10 V/m.

It can be seen from Figure 3-10 (right) that the electric field intensity associated with the concentric sensor dropped to 1.6 V/m at approximately 8.5 cm from the sensor surface and was negligible at the grounded target object. However, the electric field intensity for the comb sensor (see Figure 3-10, left) was still above 1.6 V/m in the vicinity of the grounded target object. Thus the electric field intensity of the comb sensor exceeds that of the concentric sensor near the grounded target object (i.e. the plate or the head). At longer distances, the comb sensor is thus more effective at increasing the surface charge density on the grounded target object (or equivalently, increasing the capacitance between the transmitter and the grounded object). The comb sensor can thus be said to shunt more charge from the transmitter electrode to ground, which effectively reduces the measured mutual capacitance.

3.2.6 Human Head Grounding Effect

In this section, FEA was conducted to investigate the capacitance variations and sensor range of the comb-shaped sensor when a grounded human head was chosen as the target object.

Figure 3-11. The grounded head model and the comb-shaped sensors represented inside the 3D geometrical environment of COMSOL ver. 4.3. d is the distance between the head and the sensor's surface; the head model dimensions are a = c = 7.75 cm and b = 9.8 cm.

As such, a model of the human head based on the dimensions reported by Hubbard et al. [78] was built in the 3D geometrical environment of COMSOL to replace the grounded plane. FEA was performed on the comb-shaped sensors and the resulting 3D head model illustrated in Figure 3-11. Note that the results were obtained for a head-to-sensor distance range of 0.5 cm to 20 cm, a distance measurement sensitivity of 0.4 cm, and a C/D electronic circuit resolution of 1 fF, which together resulted in a system sensitivity ($S_{min}$) of 0.25 fF/mm. The results are plotted in Figure 3-12. In order to validate the simulation results, a laboratory experiment was also conducted in which the mutual capacitance between the transmitter and receiver electrodes was measured at different known distances of the target object (i.e. a real human head) relative to the surface of the sensor.
The mutual fringe capacitance results ($C_{exp}$) and the associated sensor sensitivity were plotted in Figure 3-13 accordingly. As shown in Figure 3-13 (right), the maximum detection range ($d_{max}$) of the capacitive sensor prototype was found to be 12.05 cm, which was within 92% of the predicted FEA result of 13.01 cm. The laboratory experiments revealed that the capacitive proximity sensor with the interdigitated comb-shaped sensor arrangement was capable of detecting the proximity of a human head if located within the detection range of the sensor (i.e. ~12 cm).

Figure 3-12. (Left) The simulated mutual fringe capacitances for the smaller and larger comb-shaped sensors with a grounded backplane. (Right) The mutual fringe capacitance variations and the proximity range for $S_{min}$ = 0.25 fF/mm.

Figure 3-13. (Left) The experimental mutual fringe capacitances for the larger comb-shaped sensor with a grounded backplane, as a function of the relative distance between the grounded head and the sensor. (Right) The mutual fringe capacitance variations and the proximity range for $S_{min}$ = 0.25 fF/mm.

To provide an overview of the various studies, the results associated with the two types of sensors and their geometrical variations are summarized in Table 3-1.

Table 3-1. Comparing the range of the concentric and comb-shaped capacitive proximity sensors.

                                                  Large concentric    Large comb-shaped sensor
                                                  sensor (no GB††)    No GB     With GB    With GB
  Grounded target object                          Plate               Plate     Plate      Head
  Min. capacitance measurement [fF]               1                   1         1.4        1
  Distance measurement sensitivity Δd [cm]        0.2                 0.2       0.4        0.4
  Detection system sensitivity S_min [fF/mm]      0.5                 0.5       0.35       0.25
  d_max, simulation* [cm]                         28.40               33.68     11.09      13.01
  d_max, experiment‡ [cm]                         -                   -         11.72      12.05

  *: Simulations, ‡: Experimental setup, ††: Grounded Backplane (GB)

The results organized in Table 3-1 show that: 1) with no backplane, the comb-shaped sensor offered a longer detection range than the concentric sensor (33.68 cm vs. 28.40 cm); 2) with a grounded backplane on the comb-shaped sensor, the maximum detection range was reduced from 33.68 cm to 11.09 cm; and 3) the maximum detection range of the comb-shaped sensor increased from 11.72 cm with a grounded plate as the target object to 12.05 cm with a grounded head as the target object. As will be considered further in the Discussion section, servoing of the HR position can be used to keep the head within the detection zone of the sensor array and retain compliance with the static HR positioning recommendations of the IIHS [2]. As such, the comb-shaped capacitive sensor was selected to form the capacitive proximity sensing element for the proposed 3D occupant head position sensing.
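To make the range analysis above concrete, the following minimal Python sketch (not part of the original study; the capacitance curve shown is an illustrative placeholder, not the thesis FEA or experimental data) shows how the local sensor sensitivity S and the maximum detection range d_max, as defined in Section 3.2.1, can be extracted from a capacitance-versus-distance curve for a given detection system sensitivity S_min.

```python
import numpy as np

def sensitivity_and_dmax(d_cm, c_pf, s_min_ff_per_mm):
    """Local sensitivity S(d) [fF/mm] and maximum detection range d_max [cm].

    d_cm : object-to-sensor distances [cm], increasing
    c_pf : mutual fringe capacitance C(d) [pF] at those distances
    """
    # Local slope |dC/dd|, converted from pF/cm to fF/mm (1 pF/cm = 100 fF/mm).
    s = np.abs(np.gradient(c_pf, d_cm)) * 100.0
    # d_max: largest distance at which S still meets or exceeds S_min.
    above = s >= s_min_ff_per_mm
    d_max = d_cm[above][-1] if above.any() else 0.0
    return s, d_max

# Illustrative placeholder curve: capacitance approaching a bias value as the
# object retreats from the sensor (example only, not measured data).
d = np.arange(0.5, 20.1, 0.1)           # cm
c = 4.98 - 0.97 * np.exp(-d / 4.0)      # pF
S, d_max = sensitivity_and_dmax(d, c, s_min_ff_per_mm=0.35)
print(f"d_max ~ {d_max:.1f} cm for S_min = 0.35 fF/mm")
```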
3.2.7 Temperature and Humidity Compensation

The performance of a capacitive sensor is susceptible to change due to atmospheric variations such as temperature and humidity. In this section, the dependence of the array capacitance on these variables was investigated by examining models of the dependence and by reporting experimental capacitance measurements as a function of temperature and humidity. To the best of our knowledge, there have been no reported experiments showing the attainable accuracy to which a physical variable (such as distance) can be measured using temperature- and/or humidity-compensated capacitance. It was therefore deemed advisable to carry out this experimental evaluation.

Physical modeling: The dielectric constant of air changes with temperature and humidity [44]. Equation (3.7) was developed by Lea [79] to calculate the permittivity of moist air:

$$\epsilon_{moist\ air} = 1 + \frac{211}{T}\left(P + \frac{48\,P_s}{T}H\right)\times 10^{-6} \qquad (3.7)$$

In this equation, $\epsilon_{moist\ air}$ is the permittivity of moist air, T is the absolute temperature in K, P is the pressure of moist air in mmHg, $P_s$ is the pressure of saturated water vapor in mmHg at temperature T (which can be calculated as stated in [80]), and H is the relative humidity in %. As reported by Ford in [81], at higher relative humidity the permittivity increases at a greater rate than indicated by the above equation. Also, according to Zahn [82], the linear relationship between the air permittivity and the relative humidity holds true only up to about 30% relative humidity; the deposition of a film of water on the capacitor plates was assumed to be a contributing factor in his research. In all the previously conducted FEA simulations, it was assumed that $\epsilon_{moist\ air} = \epsilon_r = 1$, and this formed the basis for the calculation of the mutual fringe capacitances between the transmitter and receiver electrodes. The experimental analysis in this section was conducted on the comb-shaped sensors to investigate the validity of this assumption under conditions of variable relative humidity and temperature.

Temperature/humidity experiments: The effects of temperature and humidity were studied in a controlled laboratory experiment. This experiment examined the variation of the capacitance when the comb-shaped sensor was exposed to relative humidity values greater than 30% and temperatures of 20, 30, and 45 °C. Note that no conformal coating was used on the sensor's surface.

Figure 3-14. (Left) The programmable temperature/humidity control chamber at the UBC Biomass Research Center. (Right) The sensing and reference comb-shaped capacitive sensors located inside the chamber.

A programmable temperature and humidity control chamber (as shown in Figure 3-14) was employed to conduct these experiments on the developed comb-shaped interdigitated capacitive proximity sensor. Special consideration was given to avoiding transients and ensuring that capacitance data were collected only after the chamber had reached the required temperature and humidity values. When the capacitance readings were stable, measurements were recorded.
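For orientation, the following small Python sketch (an illustration only) evaluates (3.7) over conditions similar to those used in the chamber experiments. The saturated-vapor-pressure estimate uses a standard Magnus-type approximation as a stand-in for the specific relation cited as [80], so the exact numbers are assumptions rather than thesis results.

```python
import math

def saturated_vapor_pressure_mmHg(t_celsius):
    """Approximate saturation vapor pressure of water (Magnus-type formula),
    standing in for the relation cited as [80] in the text."""
    p_hpa = 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))  # hPa
    return p_hpa * 0.750062  # 1 hPa = 0.750062 mmHg

def moist_air_permittivity(t_celsius, rh_percent, p_mmHg=760.0):
    """Relative permittivity of moist air from Lea's equation (3.7)."""
    T = t_celsius + 273.15                        # absolute temperature [K]
    Ps = saturated_vapor_pressure_mmHg(t_celsius)
    return 1.0 + (211.0 / T) * (p_mmHg + 48.0 * Ps * rh_percent / T) * 1e-6

for t in (20, 30, 45):
    for rh in (40, 70, 90):
        print(t, rh, round(moist_air_permittivity(t, rh), 6))
```

Evaluating the equation this way makes the small but non-negligible size of the effect apparent: the predicted permittivity stays within a fraction of a percent of unity, which is consistent with treating air as $\epsilon_r = 1$ in the FEA while still motivating compensation for precise capacitance ratios.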
From the plots in Figure 3-15 it can be verified that: 1) the overall capacitance decreases with a rise in temperature, and 2) the capacitance exhibits a pronounced nonlinear behavior at higher relative humidity (> 70%) for a given temperature, extending the results of Ford [81] up to 85% humidity.

Figure 3-15. The capacitance increases as relative humidity increases at a fixed temperature; however, the capacitance is lower at higher temperatures for a fixed humidity level.

Baxter [44] suggested that this change can be easily compensated by utilizing a reference capacitor made of the same materials as the sensing capacitors. The ratio of sensing to reference capacitance should be independent of the permittivity of the air, and hence the overall ratio will stay unaffected by unwanted atmospheric variations, being sensitive only to the measured variable (i.e. occupant head distance). Unfortunately, in the work reported by Baxter there was no experimental evaluation of the actual accuracy of the proposal, and preliminary tests (not reported here) of its validity using a simple concentric reference capacitor did not provide good results.

Ratio experiment: To examine the feasibility of utilizing a reference capacitor and to investigate the variation of the capacitance ratio between the sensing and reference capacitive sensors, two identical comb-shaped sensors were used in this experiment. First, a comb-shaped sensor was used as the sensing unit with a grounded "object" plate parallel to its surface. The distance between the grounded plate and the sensor surface was 5 cm (see the left-hand sensor in Figure 3-14, right). Next, a separate identical comb-shaped sensor (the right-hand sensor in Figure 3-14, right) was used as a reference sensor, with the chamber ceiling acting as a distant grounded "object". As shown in Figure 3-16, the relative capacitance ratio is nearly constant (i.e. within 3 ppt over the range of relative humidity levels between 60% and 95%). The same compensation technique should compensate for pressure variation as well (see (3.7)), since pressure will similarly affect the sensing and reference capacitors; however, this was not evaluated in our experiments.

Figure 3-16. The sensing and reference sensor capacitance plots. The ratio stayed nearly constant over the RH range between 60% and 95%. cps = capacitive proximity sensor, rcs = reference capacitive sensor.

As a result of this study, and to compensate for the effects of the discussed atmospheric variations, a fourth capacitive sensor (with the same geometry and materials as the other sensors) was mounted as shown in Figure 3-18. This sensor was used as a reference capacitor in conjunction with the other three sensors that form the head position sensing array.
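A minimal sketch of the ratio-based compensation idea follows (illustrative only; the capacitance and permittivity values are placeholders, not measured data). Because the sensing and reference capacitances scale with the same air permittivity factor, that factor cancels in the ratio, leaving a quantity that depends mainly on the head distance.

```python
def compensated_ratio(c_sense_pf, c_ref_pf):
    """Ratio of sensing to reference capacitance; a common permittivity factor
    multiplying both readings cancels out."""
    return c_sense_pf / c_ref_pf

# Placeholder example: the same geometry read under two atmospheric conditions.
eps_a, eps_b = 1.0006, 1.0011          # assumed moist-air permittivities
c_sense_dry, c_ref_dry = 4.30, 5.60    # pF, illustrative values only
for eps in (eps_a, eps_b):
    print(round(compensated_ratio(eps * c_sense_dry, eps * c_ref_dry), 6))
# Both conditions print the same ratio, while the raw readings differ.
```

In the physical experiment the ratio is not perfectly constant (the ~3 ppt variation reported above), since the two electrode pairs are not exposed to exactly identical local conditions.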
The head 3D position quantification process, which uses the three ratios obtained from the three capacitive sensors and the one reference capacitor, is explained next.

3.2.8 Method of Occupant Head Position Estimation

The objective of this portion of the research was to investigate the possibility of reducing the number of sensors to a minimum of three (i.e. three pairs of comb-shaped transmitter and receiver electrodes). Various configurations, arrangements, and numbers of capacitive sensors were examined, and the three-sensor configuration illustrated in Figure 3-17 (c) was selected to form the capacitive proximity sensor array. An analog multiplexer activates the transmitter and receiver electrodes of one sensor at a given time instant. When a capacitive sensor in the array is activated, the other two sensors "float"; this eliminates any shunting effects between the three sensors. As shown in the schematic, the three sensors were positioned in the form of an inverted "Y" with equal angular spacings of 120° between each sensor pair. This arrangement of three sensors produces the minimum of three independent capacitive proximity ratios $(c_1, c_2, c_3)$ needed to estimate the Cartesian coordinates $(x_2, y_2, z_2)^T$ of the head position (see Figure 3-17). For the sake of simplicity, the term "head position" will be used instead of "the position of the back of the head".

3.2.9 Coordinate System Assignment and Data Collection

An essential step prior to quantifying the 3D position of the occupant's head was the assignment of coordinate systems to the HR and the sensors. The back of the occupant's head is represented by the point $^2X$, as shown in Figure 3-17 (b). This point is represented in the homogeneous coordinate system $(o_2, C_2)$ by its coordinates $^2X = (x_2\ y_2\ z_2)^T$ and is estimated as the result of capacitive proximity sensing (see Section 3.2.10). A homogeneous transformation was used to transform $^2X$ into the base coordinate system $(o_0, C_0)$ by incorporating the HR joint displacement data obtained from a pair of integrated linear position sensors (see Section 3.2.11). Hence, $^0X = (x_0\ y_0\ z_0)^T$ represents the coordinates of the occupant's head in $(o_0, C_0)$. Similar to the approach utilized in [16], which involved using an optical 3D tracker, "training" and "estimation" processes were utilized to find a mathematical model and correspondingly estimate the head coordinates using this model.

Figure 3-17. (a) The data collection experimental testbed. (b) The side view of the assigned coordinate systems. (c) The front view of the coordinate system assignment for the Y-shaped sensor arrangement (Top, Left, and Right electrodes at 120° spacing).

3.2.10 Neural Network Training Process

Various linear and nonlinear regression methodologies were evaluated to approximate a mapping between the two sets of known variables (i.e. the capacitance ratio data and the head coordinates in the HR coordinate system, in cm). It was found that training a model through a backpropagation (BP) feed-forward neural network approach using the Levenberg-Marquardt algorithm offered the highest accuracy among the various training methods evaluated in this research.
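As a rough illustration of this mapping (a minimal sketch only: the layer sizes, activation choices, and random weights shown here are assumptions for illustration, not the trained MATLAB network used in the thesis), the structure of such a feed-forward network with two tanh hidden layers and a linear output layer can be written as follows.

```python
import numpy as np

def head_position_from_ratios(c, params):
    """Feed-forward pass: three capacitance ratios -> (x, y, z) head coordinates [cm].

    c      : array of shape (3,), the compensated ratios (c1, c2, c3)
    params : dict of weight matrices W1, W2, W3 and bias vectors b1, b2, b3,
             assumed to come from an offline training step.
    """
    h1 = np.tanh(params["W1"] @ c + params["b1"])    # first hidden layer
    h2 = np.tanh(params["W2"] @ h1 + params["b2"])   # second hidden layer
    return params["W3"] @ h2 + params["b3"]          # linear output: (x, y, z)

# Random parameters standing in for trained values (illustration only).
rng = np.random.default_rng(0)
params = {
    "W1": rng.normal(size=(10, 3)), "b1": rng.normal(size=10),
    "W2": rng.normal(size=(8, 10)), "b2": rng.normal(size=8),
    "W3": rng.normal(size=(3, 8)),  "b3": rng.normal(size=3),
}
print(head_position_from_ratios(np.array([0.76, 0.81, 0.79]), params))
```

The weights and biases here are placeholders; in the actual system they were obtained from the training procedure described next.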
The neural network training process involved approximating the weight and bias values through backpropagation using the Levenberg-Marquardt algorithm from the MATLAB™ Neural Network Toolbox. The collected data, which were divided into a training set (3000 points) and an estimation set (1000 points), were used to train the network and then evaluate its accuracy. The capacitive proximity data (i.e. associated with each occupant head position) were obtained by collecting the three capacitance ratio outputs at time t from the sensing array. A three-layer feed-forward neural network with two hidden layers and one output layer was utilized to associate the input and output vectors by updating the weight and bias values during the training process. The capacitive proximity ratios were used to form the input variable vector ($C_t$), and the measured occupant head coordinates ($^0X_t$) were used to form the output variable vector, with p equal to the number of input-output training points. The outputs of the network were associated with the input vectors by incorporating the weight (w) and bias (b) parameters according to (3.10) below, where f denotes a hyperbolic tangent sigmoid transfer function (a.k.a. an activation function).

$$C_t = \begin{bmatrix} c_{1t} & c_{2t} & c_{3t} \end{bmatrix}^T, \quad t = 1, 2, \ldots, p \qquad (3.8)$$

$$^0X_t = \begin{bmatrix} x_t & y_t & z_t \end{bmatrix}^T, \quad t = 1, 2, \ldots, p \qquad (3.9)$$

$$X_t(s) = f_3\!\left(\sum_{q=1}^{Q} f_2\!\left(\sum_{i=1}^{G} f_1\!\left(\sum_{j=1}^{M} c_{jt}\, w_1(j,i) + b_1(i)\right) w_2(i,q) + b_2(q)\right) w_3(q,s) + b_3(s)\right) \qquad (3.10)$$

3.2.11 Overall System and Experimental Results

After finalizing the desired capacitive sensor geometry (i.e. a comb-shaped sensor with a grounded backplane), developing the compensation method, and obtaining a neural network model, it was desired to integrate the proposed sensor array inside a mechanism suited for 3D occupant head position estimation.

3.2.11.1 Self-contained Autonomous and Adaptive HR Positioning System

As identified previously, one of the critical elements of a complete adaptive HR positioning system was the development of an apparatus to physically position the HR properly behind the occupant's head. Thus a prototype motion apparatus (Figure 3-18) was conceived that is fully contained within the separate HR unit, such that it could be easily incorporated into existing seat designs.

Figure 3-18. The self-contained HR positioning device with the customized array, containing the comb-shaped capacitive proximity sensors and the reference sensor, integrated into the frontal compartment.

The HR motion system provides two-degree-of-freedom movement, i.e. forward/backward and up/down positioning, covering the full positioning envelope deemed necessary to accommodate proper HR positioning for both the 5th percentile female and the 95th percentile male occupants. The horizontal and vertical motion ranges are 12 cm and 7 cm, respectively. Two brushless DC motors, each equipped with a gearbox mechanism, were employed to generate the required motions in the stated directions. The custom-made prototype of the electromechanical HR device is shown in Figure 3-18.
The proposed HR system was designed to accommodate three primary sub-systems within its chassis: 1) the electronic circuits in charge of capacitive proximity data collection and processing, as well as control command generation to drive the brushless DC motors; 2) the mechanical actuation system, actuator limit switches, and a pair of brushless DC motors, as well as a pair of absolute linear position sensors to provide the relative position of the HR frontal compartment in the coordinate system $(o_0, C_0)$; and 3) three pairs of transmitter and receiver interdigitated comb-shaped electrodes forming the capacitive proximity sensors.

3.2.11.2 Experimental Results

The estimated 3D head coordinates, $^0\hat{X}$, were compared against the measured coordinates, $^0X$, obtained from the aforementioned optical tracker system. Table 3-2 reports the accuracy measures associated with the neural network training method discussed in Section 3.2.10; the errors are for the position of the back of the occupant's head, $^0\hat{X}$, expressed relative to the HR origin, $o_0$. The back of the subject's head was moved inside an envelope with maximum displacements of ±7 cm along the $i_0$ axis, ±3.5 cm along $j_0$, and 7 cm along the $k_0$ axis.

Table 3-2. Estimation accuracy measures [cm] for the Levenberg-Marquardt neural network.

                                          $i_0$    $j_0$    $k_0$
  MAE (Mean Absolute Error)               0.13     0.19     0.18
  SD (Standard Deviation)                 0.20     0.28     0.27
  MSE (Mean Squared Error)                0.04     0.07     0.09
  ME (Mean Error)                         0.10     0.12     0.13
  MEDE (Mean Euclidean Distance Error)             0.33

The plot provided in Figure 3-19 shows the measured and the estimated 3D head coordinates, the latter obtained from the neural network model produced by the Levenberg-Marquardt training process. All measurements are provided in the base coordinate system $(o_0, C_0)$ (the coordinate frames are also shown in Figure 3-20). An MEDE of 0.33 cm was achieved for the proposed occupant head position quantification methodology using an array of three comb-shaped capacitive proximity sensors.

Figure 3-19. The measured and estimated head coordinates obtained from the optical tracker and the capacitive proximity sensing unit, along with the Euclidean distance error histogram.

3.3 Discussion

3.3.1 Sensor Performance Measures

For a capacitive proximity sensor electrode with a given surface area and separation gap length, and with an object distance range between 0.5 cm and 20 cm, the use of interdigitated comb-shaped sensors offered a larger mutual capacitance range between the transmitter and receiver electrodes (i.e. 2.21 pF in Figure 3-6) when compared to the mutual capacitance range obtained from the concentric sensors (i.e. 1.38 pF in Figure 3-3). While the separation parameter s was kept the same for both the comb and concentric sensors, the comb-shaped sensor offered an increase in electric field intensity at a given distance from the electrode surface compared to the concentric sensor. This resulted in a greater sensor sensitivity and maximum detection range for the comb-shaped sensor, and thus in its selection for occupant head position quantification in this research application.
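To make the single-sensor-at-a-time acquisition scheme of Section 3.2.8 (reiterated below) concrete, the following minimal sketch uses hypothetical function and channel names, not the firmware of the prototype: it cycles the analog multiplexer through the three sensing electrode pairs, reads the C/D converter, and forms the three compensated ratios that feed the position estimator.

```python
import time

SENSOR_CHANNELS = ("top", "left", "right")  # the three comb-shaped electrode pairs
REFERENCE_CHANNEL = "reference"             # fourth, identical electrode pair

def select_mux(channel):
    """Placeholder: route the excitation/readout to one electrode pair,
    leaving the other sensing pairs floating."""
    raise NotImplementedError

def read_capacitance_pf(channel):
    """Placeholder for a C/D converter read (e.g. over a serial bus); returns pF."""
    raise NotImplementedError

def acquire_ratios(settle_s=0.005):
    """One acquisition cycle: returns the compensated ratios (c1, c2, c3)."""
    select_mux(REFERENCE_CHANNEL)
    time.sleep(settle_s)                     # allow the excitation/readout to settle
    c_ref = read_capacitance_pf(REFERENCE_CHANNEL)
    ratios = []
    for ch in SENSOR_CHANNELS:
        select_mux(ch)                       # other sensing pairs are left floating
        time.sleep(settle_s)
        ratios.append(read_capacitance_pf(ch) / c_ref)
    return tuple(ratios)
```

The ratios returned here correspond to the inputs $(c_1, c_2, c_3)$ of (3.8).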
It must be noted that an analog multiplexer is in charge of activating each pair of the capacitive sensor electrodes at a given time instant. This means that when a pair of capacitive sensor electrodes in the array is activated, the other two pairs are floating, which eliminates the capacitive coupling between the three pairs of capacitive sensor electrodes within the array.

The FEA results provided in Section 3.2 and the accompanying discussion can be summarized in the form of a process dedicated to comb-shaped sensor design, as follows:

- Given the total surface area of the HR front panel ($A_{HR}$), and considering the comb-shaped sensor geometry and dimensions shown in Figure 3-4, choose a suitable comb-shaped sensor rectangular surface area ($A_{sens} = L \times H$) and the parameters L and H to accommodate an array of four capacitive sensors within $A_{HR}$ (including one comb sensor reserved for the reference capacitor). Note: in this research, L was chosen to be 11 cm and H to be 4 cm.

- Choose s considering the maximum detection range ($d_{max}$) plot derived from a set of different w and s values (see Figure 3-9.b). It is clear from the plot that one should choose the smallest feasible value for s. Here, s = 1 mm offered the longest maximum detection range for each value of w.

- Given the chosen length (L) and separation gap (s), obtain a set of finger widths (w) for each integer value of n using the relation $w = L/(2n) - s$, which follows from the sensor electrode geometry in Figure 3-4. For example, for s = 1 mm, this relation gives w = 1.75, 4.5, 10, 12.75, 26.5, and 54 mm for n = 20, 10, 5, 4, 2, and 1, respectively.

- Choose n and use the corresponding w in the simulation to obtain the mutual capacitance range and the capacitance C such that: a) the simulated full-scale mutual capacitance range is maximized but does not exceed the full-scale input range of the C/D (chosen here to be 0.5 pF), and b) the total capacitance C with no grounded target object present does not exceed the C/D's maximum allowable input capacitance (14 pF). It should be noted that the C/D used in this work digitizes capacitance values in a maximum 0.5 pF window, provided that the maximum capacitance lies below 14 pF.

In the final experimental sensor (Section 3.2.11.2), the finger width was chosen to be w = 4.5 mm (n = 10) and the separation parameter s = 1 mm for the comb sensor with a grounded backplane and a head as the target object. For this configuration, the mutual capacitance range was 0.43 pF and the maximum capacitance was 4.67 pF.

3.3.2 Contributing Factors to the FEA Results

Two main factors were identified in this research as potentially affecting the accuracy of the mutual fringe capacitance calculations in the performed FEA studies: 1)
the relative permittivity of air, and 2) the finite element size used in the FEA simulations. The dielectric constant of air was assumed unchanged in this study (i.e. $\epsilon_{r,air} = 1$). As discussed in Section 3.2.7, the relative permittivity of air is a function of the temperature and relative humidity of the environment in which the experimental study is performed. Choosing a relative permittivity that corresponds to the temperature and relative humidity of the experimental environment led to closer agreement between the FEA simulations and the experiments. Also, the smallest finite element size that could be achieved in the simulations was 0.07 cm. The simulations revealed that decreasing this parameter from 0.09 cm to 0.07 cm produced a negative offset on the capacitance signal bias, and vice versa. For example, when this value was decreased from 0.09 cm to 0.07 cm, the capacitance signal bias was reduced by 0.08 pF, bringing the FEA results closer to the experiments. We believe that if a smaller finite element size could have been achieved (i.e. less than 0.07 cm), the capacitance signal bias (i.e. 4.01 pF as in Figure 3-7) would be even closer to the capacitance signal bias calculated from the experiments (i.e. 3.63 pF as in Figure 3-8). Unfortunately, this could not be achieved due to memory limitations and COMSOL software memory management.

3.3.3 Adaptive Occupant Head Servoing and Rating

The device concept shown schematically in Figure 3-20 was created to examine the applicability of capacitive-based head position quantification in vehicles and to retain compliance with the IIHS standards [2]. This figure incorporates: 1) the custom-made electromechanical HR system with a simple PID controller for servoing the actuators to remove steady-state errors between the desired and actual outputs, 2) the primary detection zone of the comb-shaped capacitive sensing array (i.e. Zone A), 3) the extended detection zone (Zone B) resulting from HR servoing in the forward/backward and up/down directions, and 4) the 3D model of the occupant head, with two markers on the back and top of the head added for visualization purposes.

According to the accuracy measurements reported in Section 3.2.11.2, the capacitive proximity array offered a head position detection zone (i.e. a rectangular cuboid of 14×7×7 cm along $i_2$, $j_2$, and $k_2$, respectively), labeled Zone "A" in Figure 3-20 (left). Recalling the HR servoing motion range of ±3.5 cm vertically along $j_2$ and 12 cm horizontally along $k_2$ (reported in Section 3.2.9) creates Zone "B" (i.e. a rectangular cuboid of 14×14×19 cm), as shown in Figure 3-20 (left). Hence, one can safely assume that, while HR servoing is being conducted, the 3D position of the back of the occupant's head, $^2X$, can be detected if it lies within the 14×7×7 cm range of the sensor plus the 0×7×12 cm range of the servoing motion, which in total offers a head position detection Zone B of 14×14×19 cm.

Figure 3-20. (Left) The electromechanical HR, the primary detection zone (A), the extended detection zone (B), and the occupant head 3D model with the back of the head positioned inside the extended zone. All dimensions are in centimeters. (Right) The IIHS HR rating system (printed with permission of IIHS). In the figure, marker 1 denotes the top of the head and marker 2 the back of the head; $d_h$ is the backset (horizontal) distance and $d_v$ the vertical distance. The right-hand chart plots backset (cm) against the distance above/below the average man's head (cm).
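A minimal sketch of this servoing idea follows (illustrative only: the prototype used a PID loop, whereas this sketch uses a single proportional step; the gain, step limit, and the setpoints of 3.5 cm backset and 0 cm vertical offset are assumptions made for the sketch).

```python
def hr_servo_step(backset_cm, vertical_cm, kp=0.5, max_step_cm=1.0,
                  backset_setpoint_cm=3.5, vertical_setpoint_cm=0.0):
    """One proportional servo update for the 2-DOF HR positioner.

    backset_cm  : horizontal distance from the back of the head to the HR front
    vertical_cm : vertical distance from the top of the head to the top of the HR
    Returns (forward_cmd_cm, up_down_cmd_cm), each limited to max_step_cm per update.
    """
    clip = lambda v: max(-max_step_cm, min(max_step_cm, v))
    forward_cmd = clip(kp * (backset_cm - backset_setpoint_cm))   # close the backset
    up_down_cmd = clip(kp * (vertical_setpoint_cm - vertical_cm)) # drive vertical offset to 0
    return forward_cmd, up_down_cmd

# Example: head 18 cm behind the HR front and 1.5 cm below the HR top.
print(hr_servo_step(backset_cm=18.0, vertical_cm=-1.5))  # -> (1.0, 0.75)
```

Repeating such steps moves the back of the head from the extended Zone B into the primary Zone A, where the capacitive estimate is valid, as discussed next.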
Figure 3-20 (right) illustrates the IIHS HR rating system, which ranks the suitability of the HR position into four classifications (Good, Acceptable, Marginal, and Poor). The horizontal distance (or backset) is measured from the back of the head, $^2X$, to the front of the HR (with its coordinates in $(o_2, C_2)$), while the vertical distance is measured from the top of the head to the top of the HR, taking into account whether the top of the HR is above or below the occupant's head. Considering the calculated dimensions of Zone B, the Cartesian coordinates of the back of the occupant's head can be detected as long as it lies within this zone. For example, if HR repositioning is needed and the back of the head is positioned 18 cm from the HR (horizontally), HR servoing can be conducted towards the head (e.g. for another 12 cm) until the back of the head lies within Zone A and is eventually detected by the sensor array.

Considering Figure 3-20 and Table 3-2, this HR positioning process positions the HR in the vertical direction within $-2\ \mathrm{cm} \le d_v \le 2\ \mathrm{cm}$ relative to the top of the driver's head (with 0 cm as the setpoint and an MAE of 0.18 cm along the $j_2$ axis) and at a backset distance of $2\ \mathrm{cm} \le d_h \le 5\ \mathrm{cm}$ (with 3.5 cm as the setpoint and an MAE of 0.19 cm along the $k_2$ axis). This will maintain a "Good" rating according to the classification of horizontal distance reported in Figure 3-20 (right); see the backset distances associated with the green zone for a "Good" rating. Also, when the HR is fully retracted in the vertical direction, the proposed capacitive-based head position detection system will be capable of maintaining a "Good" rating if the back of the occupant's head lies inside Zone B. This arrangement will force the back of the head to be positioned inside Zone A after servoing upward ($k_0$ axis).

3.4 Conclusions

This research has developed and utilized a new design process for the creation of a capacitive head position sensing array capable of detecting and monitoring the 3D position of the back of the head, as needed for use in active head restraint positioning applications. Application of the design process resulted in the design of a comb-shaped sensor (comprised of a pair of transmitter and receiver electrodes) and the inclusion of three sensors symmetrically arranged in a sensing array. This constituted the minimum number of sensors required to effectively detect and accurately quantify the 3D position of the back of the head within the required detection zone forward of the head restraint.

Sensitivity of the capacitive sensing system to changes in temperature and humidity has been very significantly reduced through the implementation of a reference capacitance compensation technique.
This technique, which utilizes a reference electrode pair which is physically identical (in both geometry and materials) to the sensing electrode pair, was shown through analysis and experimental testing to suitably compensate for capacitance changes due to realistic variations changes in temperature (25 to 45 C°) and humidity (60 to 95%) using capacitance ratio measurements to a ratio accuracy of 3 ppt.  Although not able to be evaluated with available laboratory equipment, the same reference electrode is also subject to environmental variations in pressure and therefore should also be compensated. Since a grounded backplane was installed between the sensor array side of the HR and the space behind the front seat HR for example, the array is shielded from the electrostatic interference of other electromechanical components behind the array. This shielding came at the cost of a significant reduction in maximum detection range for the                                                                                                                       76 sensor compared to the same sensor without a grounded backplane, see Table 3-1. Ideally, to test for full robustness to all disturbances, a final vehicle-ready head position measurement system should be evaluated in a particular target vehicle with a range of seat angle adjustments and a number of human subjects wearing a variety of wet or dry clothing and personal accessories. This research study has validated the effectiveness of the proposed array of range-maximized interdigitated sensors, along with a grounded backplane shield, and an environmental compensation electrode by demonstrating that one can estimate the 3D head position of an occupant in real-time and in a detection zone (of volume 14×7×7cm) ahead of the array with a mean Euclidean distance error of 0.33 cm. The detection zone of the new sensor array is thus larger than that previously reported in [16] and has the added shielding and environmental robustness benefits. Based on the work from this study, it is concluded that, when combined with a linear position servo system, the sensor array which was created based on the design approach described here (and subsequently tested) is fully capable (in terms of accuracy and robustness) of fulfilling the IIHS requirements for proper HR positioning to effectively prevent or mitigate whiplash injury in the case of a rear-end collision. Real-time head position sensing in the laboratory setting was achieved with an estimation frequency of 25 Hz on the 1.6 GHz embedded computer integrated inside the self-contained HR device. These findings are being presented with the intention of assisting automotive seat manufacturers in the pursuit of improving occupant safety in future vehicle designs. The laboratory experiments revealed that the addition of the grounded backplane to the sensor array successfully eliminated the negative effects of other conductive objects                                                                                                                       77 (behind the array) on the capacitance signals. In those experiments, grounded conductive objects such as human hand, fingers, and small size metallic objects were used.                                                                                                                               78 Chapter 4. 
Driver Head Orientation Estimation 3 4.1 Face Template-based Head Orientation Estimation It was evident from experiments that capacitive sensing could not be used for accurate head orientation detection. Thus, a ToF camera was required to perform accurate measurements of the head orientation. As a side benefit, it also provides a redundant measurement of the head position.  The goal of this portion of the research was to examine the effect of the ToF camera integration time and its impact on the accuracy of driver head pose sensing. In achieving this goal, a method was first proposed to detect the driver’s head position and orientation, and then testing was performed both in the laboratory and in the vehicle environment. The following tasks were completed to satisfy the above goal: 1) Experiments and analyses were performed to study the effects of integration time adjustment and the projected NIR light pattern on the accuracy of the generated depth map. 2) The results of the above study were used to devise a method to select the integration time to achieve an accurate depth map of the driver’s head. 3) A method was then developed to estimate the driver’s head pose from the 3D depth map of the head by utilizing an Iterative Closest Point (ICP) algorithm.  The ToF camera was mounted inside the vehicle to acquire unobstructed depth maps of the driver’s head within the camera FoV.      3 A version of Chapter 4 has been published, N. Ziraknejad, P. D. Lawrence, and D. P. Romilly, “The effect of Time-of-Flight camera integration time on vehicle driver head pose tracking accuracy,” presented at the 2012 IEEE International Conference on Vehicular Electronics and Safety (ICVES), 2012, pp. 247 –254.                                                                                                                       79 The following sections describe how the PMD-vision® ConceptCam ToF 3D camera was installed and utilized inside the vehicle, the methods used for data acquisition, and the experimental procedures.      4.1.1 Camera Instrumentation and Data Collection The PMD-vision® ConceptCam requires a 12VDC power supply and a USB communication interface to a PC. The requirement for an experimental prototype offers a straightforward in-vehicle installation as well as a data acquisition process such that a laptop computer can be used to collect and process the generated 3D depth maps while driving. Potential installation sites for the camera included the rear-view mirror, front A-pillars, as well as various locations on the dashboard. These were investigated and a location in front of the driver and above the dashboard was selected. The camera installation mount was secured on the windshield to achieve the proposed installation. Figure 4-1 (a) shows the ToF camera mount installed on the windshield above the dashboard while Figure 4-1 (b) shows the camera FoV and the raw intensity image of the driver’s head processed simply by scaling it between 0 and 255.            Figure 4-1. (a) The ToF camera on the dashboard with its mount installed on the windshield in front of the driver. (b) The camera FoV of the driver.                                                                                                                       80 4.1.2 ROI-based Integration Time Adjustment  4.1.2.1 Integration Time vs. Accuracy   Common to all ToF cameras, the integration time defines the duration in which each pixel is exposed to the incident modulated light. 
The phase shift is calculated from the integrated samples collected over the integration time for each pixel. The object-to-camera distance is then estimated from the calculated phase shift. This process is explained in detail by Möller et al. [83].  In this process, the signal-to-noise ratio is directly affected by the integration time selected. Failure to collect sufficient modulated signal decreases the signal-to-noise ratio and leads to uncertain distance measurements. Long exposure to the incident light potentially introduces saturation of the received light at each pixel, which also distorts the estimated distance. Therefore, accurate depth measurement of the scene of interest is directly dependent upon properly setting the integration time.  Various researchers have introduced different techniques to properly adjust the ToF camera integration time depending on the proposed application. Hahne et al. [84] proposed an exposure fusion algorithm in which several exposures of a scene are taken with different integration times. Each exposure is multiplied per pixel according to a weight map and the depth images are fused together. The fusion in this process can be realized as a multi-resolution blending. Gil et al. [85] proposed an integration time adaptation for a robotic visual servoing application using offline and online processes. The offline process involved setting up the minimum and maximum integration times for near (29 cm) to far (72.5 cm) objects. The online process involved updating the integration time according to the robot end-effector velocity. May et al. [86] proposed a closed-loop proportional control technique to tune the integration time when the mean intensity of the scene was at a predetermined value.                                                                                                                         81 Unfortunately the above identified prior work does not clearly discuss or address the effects of the non-uniform projected light patterns of the NIR LEDs on the target object and its relationship to depth map accuracy. As this is deemed critical to system success, the proposed research in this chapter examines the non-uniform projected light pattern of the NIR LEDs in conjunction with camera integration time as possible sources of error in the depth map calculations.  4.1.2.2 Integration Time Experiments A white flat wall was chosen as the object of interest in the proposed integration time experiments. The ToF camera was installed on the end-effector of a 6 DOF robotic manipulator with the camera distance to the wall adjusted between 70 cm and 110 cm. The robot moved the camera towards the wall in 10 cm distance intervals. The intensity images representing the projected light pattern on the wall are plotted in Figure 4-2 (left) for camera-to-wall distances of 70 cm and 110 cm. The generated intensity images were scaled between 0 and 255 to be used as typical grayscale (intensity-based) images. A color map was added to the intensity image in Figure 4-2 for better visualization of different segments of the projected light pattern. The intensity image was segmented into three distinct ROIs as shown in Figure 4-2 (right). A circle with a 25 pixel radius located at the center, a ring with a smaller 25 pixel radius and a larger 50 pixel radius surrounding the inner circle, and a ring with a smaller 50 pixel radius and a larger 75 pixel radius surrounding the smaller ring comprises the three distinct ROIs. 
At each camera-to-wall distance sampled, as positioned by the robotic manipulator, a customized MATLAB™ data collection and processing program acquired the 3D Cartesian coordinates of the pixels located in each ROI. This process, as applied in the experiments, is described next.

The ToF camera provided the estimated Cartesian coordinates (P_(i,j) = [x y z]^T) of each pixel within the ROIs inside the intensity image. The average value of the 'z' components (camera-to-wall vertical distances) for each ROI was then calculated, and the accuracy of the measurements was obtained by comparing the average estimated values to the real depth values. The real depth values were obtained by reading the relative positions of the ToF camera (installed on the 6 DOF robotic arm) with respect to the flat wall. The following formulas were used to collect the corresponding intensity pixels and Cartesian coordinates for the three distinct ROIs within the intensity image.

    I'_{(i,j),k} = \begin{cases} I_{(i,j)} & \text{if } r_k^{\min} \le \left\| \vec{ij} - \vec{ij}_c \right\|_2 \le r_k^{\max} \\ 0 & \text{otherwise} \end{cases}    (4.1)

    P'_{(i,j),k} = \begin{cases} P_{(i,j)} & \text{if } r_k^{\min} \le \left\| \vec{ij} - \vec{ij}_c \right\|_2 \le r_k^{\max} \\ [0\ 0\ 0]^T & \text{otherwise} \end{cases}    (4.2)

In the above formulas, I and I' represent the image intensity and the selected intensity matrices, respectively, with i and j as the image 2D indices. The variable ij⃗ represents the vector formed by i and j with its origin attached to the image bottom-left corner, while ij⃗_c is the vector with the same origin but with its vertex at the center of the image. The variable P'_(i,j) = [x y z]^T represents the selected Cartesian coordinate matrix for each of the ROIs, indexed by k = 1, 2, 3 for the center, inner ring, and outer ring ROIs, respectively. The ROIs were defined by two sets of radii values, r_k^min ∈ {0, 25, 50} and r_k^max ∈ {25, 50, 75} (in pixels).

Figure 4-2. (left) ToF camera intensity image representing the received light pattern, (top) cam-to-wall = 70 cm, (bottom) cam-to-wall = 110 cm, tint = 0.3 ms; (right) the allocated ROIs for performing depth accuracy measurements.
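As an illustration of how the ROI masks of equations (4.1) and (4.2) can be realized on a discrete image grid, a minimal MATLAB sketch is given below. The function name, image sizes, and calling pattern are illustrative assumptions rather than the experimental code used in the thesis.

    % Illustrative sketch of the ring-shaped ROI selection of equations (4.1) and (4.2).
    % I is an HxW intensity image and Z an HxW array of per-pixel depth (z) values from
    % the ToF camera; rMin and rMax are the ROI radii in pixels.
    function [Iroi, zMean] = roi_select_sketch(I, Z, rMin, rMax)
        [H, W] = size(I);
        [jj, ii] = meshgrid(1:W, 1:H);            % pixel index grids (columns, rows)
        r = hypot(ii - H/2, jj - W/2);            % distance of each pixel from the image centre
        mask = (r >= rMin) & (r <= rMax);         % annulus (a disk when rMin = 0)
        Iroi = I;  Iroi(~mask) = 0;               % eq. (4.1): zero the intensities outside the ROI
        zMean = mean(Z(mask));                    % eq. (4.2): average depth over the ROI pixels
    end

    % Example usage for the three ROIs used in the experiments (radii in pixels):
    %   rMin = [0 25 50];  rMax = [25 50 75];
    %   for k = 1:3, [~, dEst(k)] = roi_select_sketch(I, Z, rMin(k), rMax(k)); end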
For each known camera-to-wall distance, curves of the estimated distance values (determined by taking the average of the distance values within each ROI) vs. integration time, along with the corresponding image mean intensity value vs. integration time for each of the ROIs, were plotted as shown in Figure 4-3.

Figure 4-3. The estimated average distance and image mean intensity vs. integration time for: (top) the center circle ROI, (middle) the inner ring ROI, (bottom) the outer ring ROI. Note: parameter d is defined in the top curves and represents the actual camera-to-wall distance set by the robot.

The top, middle, and bottom graphs represent the data acquired from measurements for the center, inner ring, and outer ring ROIs, respectively. For each of the ROIs, the left graph reports the average estimated camera-to-wall distance measurement for different camera distances vs. the integration time, and the right graph reports the image mean intensity corresponding to each distance vs. the integration time.

One might expect that the image mean intensity would increase, saturate, and stay at a maximum value (e.g., 240) as the integration time is increased; however, that was not seen in the measurements, as illustrated in the right-hand plots of Figure 4-3. For example, the image mean intensity for the pixels inside the center ROI for the distance measurement at d = 90 cm increased from tint = 0.2 ms to tint = 0.4 ms (as expected) but decreased afterwards. This decrease in image mean intensity (starting after the saturation) occurred due to the introduction of random artifacts as a result of pixel saturation. Such random artifacts are clearly visible on the chest of the driver shown in the intensity image in Figure 4-5 (a). The ToF camera saturation process has been explained previously (see [83] and [87]). This rise-and-fall behavior of the image mean intensity was found in the measurements for all three of the ROIs, as shown in Figure 4-3.

A set of integration times corresponding to the peak image intensities (right plots of Figure 4-3) for the center, middle, and outer ROIs were selected, which produced a small distance error at each distance. The error in the distance estimations is reported in Table 4-1. A star-shaped marker ('*') indicates the selected integration time for each distance in the left plots. The image peak intensity values that resulted in the selection of these integration times are also marked with a '*' in the right plots. In other words, the camera reports the most accurate distance measurement just before entering the pixel saturation mode caused by too large an integration time. Table 4-1 reports the performed measurements and the calculated errors associated with the estimated distances using the selected integration time settings.

Table 4-1. The integration time, maximum mean intensity, and distance error values for each ROI and each camera-to-wall distance.

  d [cm] | Circle ROI (0-25 pix): tint [ms], Imax, err [cm] | Ring ROI (25-50 pix): tint [ms], Imax, err [cm] | Ring ROI (50-75 pix): tint [ms], Imax, err [cm]
  110    | 0.6, 239.8, 0.25                                 | 0.8, 235.8, 0.22                                 | 1.1, 230.2, 0.04
  100    | 0.5, 239.6, 0.31                                 | 0.7, 236.1, 0.23                                 | 1.0, 229.9, 0.09
  90     | 0.4, 240.0, 0.32                                 | 0.6, 236.4, 0.39                                 | 0.9, 229.8, 0.19
  80     | 0.3, 239.2, 0.41                                 | 0.5, 234.9, 0.49                                 | 0.8, 229.6, 0.24
  70     | 0.2, 237.5, 0.52                                 | 0.4, 236.4, 0.65                                 | 0.7, 230.2, 0.41

In Table 4-1, d is the actual camera-to-wall distance in centimeters and tint is the integration time in milliseconds. If I is the mean intensity of the pixels in the ROI, then Imax denotes the maximum mean intensity value of I over the range of integration times (see the '*' markers on the right side of Figure 4-3).
The err is the absolute error between the actual camera-to-wall distance and the average ToF-measured distance, in centimeters. For each ROI, the maximum mean intensity values for all distances d are relatively close to each other. The average over all distances d of the maximum mean intensity values Imax is denoted μ, and its standard deviation is denoted σ. These are reported in Table 4-2. Based on the error results shown, the ToF camera needs a predetermined image mean intensity value for each of the defined ROIs to compensate for the non-uniform projected light pattern.

Table 4-2. Average maximum mean intensity values and standard deviation for each region of interest (d = 70 cm to 110 cm).

            | Circle ROI (0-25 pix) | Ring ROI (25-50 pix) | Ring ROI (50-75 pix)
  μ (Imax)  | 239.22                | 235.92               | 229.94
  σ (Imax)  | 1.00                  | 0.62                 | 0.26

Another study was conducted to investigate the effects of target reflectivity variations on the accuracy of the employed ToF distance measurements. For this purpose, six sheets of paper with different reflectivity values were placed at distances of 80 cm and 90 cm from the ToF camera. Note that only the center ROI was utilized in this study. Figure 4-4 plots the average estimated distance values within the center ROI vs. the integration time, as well as the image mean intensity values vs. the integration time. As shown in Figure 4-4, the maximum image mean intensity for all six sheets is near 240, thus indicating that this peak value does not vary with limited changes in reflectivity.

Figure 4-4. The average estimated distance and image mean intensity values vs. the integration time for the six paper sheets with different reflectivity values. (top) camera-to-target = 90 cm, (bottom) camera-to-target = 80 cm.

However, the peak value does occur at a different integration time for each individual sheet as the reflectivity is changed. This parallels the previous results (see Table 4-1), which showed that the maximum image mean intensity values did not change as the camera-to-target distances were varied over the tested range. Hence, it can be concluded that the proper integration time values for the ToF camera should be determined based on a criterion of peak intensity values occurring just before saturation in order to achieve the best distance calculation, and that this criterion is independent of the camera-to-target distance used (within the range tested).

4.1.3 Driver's Head Depth Map Generation
Based on the previous findings, an algorithm was proposed to construct the driver's head depth map by properly adjusting the integration time within the three allocated ROIs. The proposed algorithm includes the following steps, with example results shown in Figure 4-5:

1) A temporary depth map containing the 3D Cartesian coordinates of the points within the FoV of the camera (including the driver's head) is acquired by defining a trial tint (1.0 ms).
2) Assuming the closest parts to the camera belong to the driver's body, and by defining a proper distance threshold (e.g. 100 cm), the background pixels are eliminated from the intensity image.
3) By monitoring the center ROI, and by assuming it covers some part of the driver's head and upper torso, the integration time is increased from 0.2 ms to 0.6 ms in 0.1 ms intervals. The mean intensity of the pixels with values greater than zero and located inside this ROI is monitored during this increment. The set of 3D Cartesian coordinates associated with each of the integration times is also acquired and stored temporarily in the system memory. The last integration time before a decay in the image mean intensity is observed is chosen as the integration time required for this region.
4) The set of 3D Cartesian coordinates of the driver's body that appear in the ROI and are associated with the selected integration time is used to construct the partial depth map of the driver's body in the ROI.
5) The previous two steps are repeated for the smaller and larger ring-shaped ROIs until the integration times are selected and the corresponding depth maps are constructed.
6) All the constructed depth maps are merged into a single depth map of the driver's head and shoulders.

Figure 4-5. (a) The intensity image captured with the 1.0 ms trial integration time, (b) background pixel elimination, (c) the constructed depth map, (d) the constructed depth map from the selected tint for the three ROIs.

4.1.4 Template-based Driver's Head Pose Estimation
The core part of the proposed algorithm in the present work employed the ICP method, which was originally introduced by Besl and McKay [88] and Chen and Medioni [89] for the registration of rigid 3D shapes. The proposed driver's head position and orientation algorithm includes two main parts: i) generating reference 3D template data (i.e. the driver's facial mask), obtained from the driver's head while the driver is requested to be positioned as Forward Looking and Normally Positioned (FLNP), during an offline process; and ii) using an implementation of the ICP algorithm provided in [90], correlating the reference 3D template data with the obtained 3D depth maps of the driver's head (in real-time) and determining the homogeneous transformation representing the relative angular and translational relationships between the reference and the correlated template data. This real-time process is referred to as the online process in the present work. The proposed driver's head position and orientation algorithm includes the following processes (which are illustrated in Figure 4-6).

4.1.5 Reference 3D Template Data Generation
The offline process devised to obtain the 3D template data of the driver's face includes the following steps:
1) Construction of a 3D depth map which includes the upper torso body parts of the FLNP driver template position.
2) Processing of the obtained upper torso depth map to obtain the 3D template data, or the driver's facial mask, containing the nose, chin, etc. Note that the term "template data" refers to the matrix holding the coordinates of the reference extracted mask throughout this chapter.
The reference template data is stored permanently so it can be used in the correlation part of the proposed algorithm (see next section).

4.1.6 3D Registration of Data Points
The online process devised to correlate the template data points with the driver's upper torso depth map generated in real-time includes the following steps:
1) Construction of the real-time depth map of the driver's upper torso using the algorithm explained in Section 4.1.2.2. The 3D Cartesian data points of the depth map are used as the "model" data points.
2) Insertion of the template (Pᵀ) and the model (Pᴹ) data sets into the ICP algorithm to minimize the distance measure between the template and the model [91]:

    \min_{R,T} \sum_i \left\| P_i^M - \left( R\,P_i^T + T \right) \right\|^2    (4.3)

The Euler angles representing the orientation of the driver's head relative to the FLNP driver's head template are obtained by applying the rotation matrix (R) to the "matrix to Euler angles" formula in [92]. Vector T represents the translation of the correlated template data points in 3D space relative to the FLNP driver's head template originally obtained and stored during the offline process.

Figure 4-6. (a) The reference template data (facial mask) extracted from the driver head (FLNP) as the result of the offline process, (b) the depth map (model data set) expressed with the 3D Cartesian coordinates of the driver's upper torso obtained by choosing a proper integration time during the online process, (c) the template data (in white) correlated into the model data set using the homogeneous transformation matrices obtained as the result of the ICP process, (d) the relative angular relationship between the reference template data and the correlated template data sets, expressed only by the rotation around the k̂ axis.
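To make the registration step concrete, a minimal point-to-point ICP sketch in MATLAB is given below (nearest-neighbour association followed by a closed-form SVD alignment per iteration). It illustrates the general technique behind equation (4.3) only; it is not the implementation from [90], all names are hypothetical, and no outlier rejection or convergence test is included.

    % Minimal point-to-point ICP sketch. PT and PM are 3xN template and 3xM model point sets.
    % Returns R and T such that R*PT + T is aligned with the matched model points.
    function [R, T] = icp_sketch(PT, PM, nIter)
        R = eye(3);  T = zeros(3,1);
        for it = 1:nIter
            Q = bsxfun(@plus, R*PT, T);                       % current transformed template
            % brute-force nearest-neighbour association (for illustration only)
            d2 = bsxfun(@plus, sum(Q.^2,1)', sum(PM.^2,1)) - 2*(Q'*PM);
            [~, idx] = min(d2, [], 2);
            M = PM(:, idx);                                   % matched model points (3xN)
            % closed-form least-squares alignment (SVD / Kabsch) of PT onto the matches
            muT = mean(PT, 2);   muM = mean(M, 2);
            C = bsxfun(@minus, PT, muT) * bsxfun(@minus, M, muM)';
            [U, ~, V] = svd(C);
            D = diag([1 1 sign(det(V*U'))]);                  % guard against reflections
            R = V*D*U';
            T = muM - R*muT;
        end
    end

The rotation R returned by such a routine is what the "matrix to Euler angles" step above operates on, and T gives the head translation relative to the FLNP template.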
4.1.7 Position / Orientation Measurements in the Laboratory
A set of ToF camera images and depth maps were taken using a Styrofoam™ head model mounted at the end-effector of the robotic arm described earlier. The camera was positioned in such a way that the horizontal distance between the tip of the nose and the camera was measured. A total of 20 ToF camera measurements were taken and processed (as described in Section 4.1.3) at camera-to-nose distances of 70 to 90 cm using distance intervals of 1 cm. The estimated positions as calculated using the proposed algorithm are reported in Table 4-3.

Table 4-3. The estimated positions of the Styrofoam™ head model.

  Min error [cm] | Max error [cm] | MAE [cm] | σ [cm]
  0.32           | 0.85           | 0.57     | 0.19

In a related experiment, images were acquired from the head model while it was mounted on the robot end-effector and rotated to specific angles between −15 and +15 degrees from camera-facing, and a template was created. Next, the accuracy of the driver's head orientation was examined and processed using the methods previously reported in Section 4.1.4. The results are shown in Table 4-4. The reported orientation is about the k̂ axis of the coordinate frame shown in Figure 4-6.

Table 4-4. The orientation error estimates of the Styrofoam™ head model.

  Act. Orien. [deg] | -15    | -10   | -5    | 0     | 5    | 10   | 15
  Est. Orien. [deg] | -14.22 | -9.40 | -4.67 | -0.22 | 4.57 | 9.38 | 14.12
  ABS Error [deg]   | 0.78   | 0.60  | 0.33  | 0.22  | 0.43 | 0.62 | 0.88
  MAE [deg]         | 0.55

The measurements reported in Table 4-4 indicate a relatively small MAE for the estimated head rotation angles.

4.1.8 Position and Orientation Measurements in a Vehicle
In this section, the measurement processes were repeated for a driver seated inside a vehicle, and the experimental results, including the transformation data (i.e. position and orientation) relative to the FLNP, are reported in Table 4-5 and Table 4-6. The horizontal distance between the tip of the driver's nose and the camera (while forward looking and normally positioned relative to the vehicle) was measured to be 70 cm, and the head was moved 10 cm towards the camera in 2 cm intervals (i.e. a total of 6 measurements). In this process, the distance between the head and the HR was measured for each interval using a ruler (estimated measurement error of approximately 1 mm). The second row of Table 4-5 reports measurements taken during an experiment in which the trial integration time was set to 1.0 ms for all three ROIs. This added testing was performed to allow a comparison of measurement accuracy between the measurements taken when the integration time was adjusted as proposed in this research and when the integration time was left at the trial value.

Table 4-5. The estimated positions of the human head in the vehicle.

                      | Min error [cm] | Max error [cm] | MAE [cm] | σ [cm]
  Proposed Algorithm  | 0.54           | 1.02           | 0.77     | 0.19
  tint = 1.0 ms       | 2.79           | 3.94           | 3.39     | 0.42

The measurements conducted inside the vehicle introduced a relatively larger error than the case where the Styrofoam™ head model was used. The next experiment was conducted inside the vehicle to examine the accuracy of the proposed driver's head orientation estimation methodology. An optical tracker (NDI Polaris®) with a position accuracy of 0.35 mm was employed to determine the driver's head orientation from a set of head-mounted, IR-light-sensitive passive markers. The head orientations measured by the optical tracker were utilized as the ground-truth head orientations. Similar to the laboratory head orientation measurements, the driver's head was rotated in 5 degree increments, and for each rotation the estimated orientation about the k̂ axis was measured (see Table 4-6).

Table 4-6. The orientation error estimates of the human head in the vehicle.

  Act. Orien. [deg] | -15    | -10   | -5    | 0    | 5    | 10   | 15
  Est. Orien. [deg] | -13.15 | -8.22 | -4.06 | 0.37 | 4.14 | 8.87 | 13.08
  ABS Error [deg]   | 1.85   | 1.78  | 0.94  | 0.37 | 0.86 | 1.13 | 1.92
  MAE [deg]         | 1.26

As confirmed by the results reported in the above table, the driver's head rotation angle estimation offered a relatively larger mean absolute error for the measurements conducted in the vehicle.

4.1.9 Summary - Results of Laboratory and in-Vehicle Studies
The laboratory results offered a MAE of 0.57 cm for the measurements of the horizontal distance between the camera and the tip of the nose of a Styrofoam™ head model, while the rotation measurements reported a MAE of 0.55 degrees.
In contrast, the measurements performed inside the vehicle offered a relatively larger MAE of 0.77 cm for the driver head position estimation (35% larger) and a MAE of 1.26 degrees for the orientation estimation (130% higher). From the measurements reported in the second row of Table 4-5, it was evident that the lack of suitable adjustment of the integration time introduced an approximate 2.6 cm larger mean absolute error in comparison to the measurements conducted with an integration time selected by the proposed algorithm. A discussion on the conducted measurements in the laboratory and inside the vehicle is provided next.                                                                                                                         94 4.2 Discussion The position and orientation accuracy of the head measurements were observed to be notably higher in the laboratory than in the vehicle. These errors may be due to the fact that measurements in the laboratory were made using a homogeneously white smooth head model in a normally illuminated lab, whereas the measurements in the vehicle were taken from a human driver in a vehicle during daytime (although not in direct sunlight) – i.e. two extremely different surfaces and different lighting conditions.  Although position accuracy measurements were taken at different depths, the head model in the laboratory, and the driver’s head were not translated horizontally or vertically and thus were subsequently tested again. Similarly, the face template was obtained for one 3D horizontal and vertical position only and the rotation estimation measurements (obtained as the result of the ICP algorithm) were conducted exactly at that same location. More measurements will be undertaken in the future at different locations to gain additional knowledge of potential errors as a function of driver horizontal location and height.  Accurate head rotation measurements for closer or farther head proximities to the camera than studied here require the obtained template to be scaled larger and smaller respectively. This process is not presented in this chapter and will be addressed in future work. ToF camera depth measurement accuracy may be affected by synthetic and reflective clothing. To avoid this, the FoV of the camera and ROI can be adjusted to include the driver’s head and neck only. 4.3 Conclusions The goal in Chapter 4 was to examine the role of integration time in improving the accuracy of head pose estimation. A new methodology was introduced and tested and success was achieved in fulfilling this objective.                                                                                                                       95 It was clear from the conducted experiments that a suitable intensity level was required for accurate depth measurements. The performed and reported laboratory experiments reveal that the adjustment of the ToF camera integration time was a key element in acquisition of accurate depth maps. Hence, a method was proposed to adjust the camera integration time to achieve the required maximum image intensity for a defined ROI of the image in order to achieve accurate depth measurements within that ROI. The performed experimental results indicated that the required image intensity to achieve accurate depth estimates was relatively constant. 
The results also indicated that, depending on the illuminance distribution of the NIR light within the image, different regions within the image may require different integration times to achieve maximum accuracy. Similarly, it was shown that accuracy can be improved if surface reflectance is taken into account. It was shown in Table 4-5 that the distance errors produced when using the proposed integration time selection algorithm were reduced significantly. The proposed ToF camera-based driver head position and orientation measurement methodology showed promising results from the testing both in the laboratory and in the vehicle. The accuracy measurements in this chapter report a MAE of less than 1 cm for driver head position estimation and less than 2 degrees for orientation estimation (within the range of ±15 degrees) when employing the proposed integration time adjustment technique. For head orientations greater than ±15°, additional face templates would be required.

Chapter 5. Driver Head Pose Sensing by Combined Electrostatic and Time-of-Flight Sensing⁴

⁴ A version of this section will be submitted for publication.

As pointed out in both [93] and Chapter 4, to achieve an accurate depth map of a driver's face in real-time using a ToF camera, optimal integration times for the ROI in the scene must be obtained. Two goals were pursued in this chapter: 1) to develop a new integration time adjustment process that would improve real-time operation compared with the previous depth map estimates reported in [93], while retaining sufficient depth fidelity for use in pose estimation; and 2) to define and extract an invariant human facial feature (i.e. the tip of the driver's nose) from a ToF camera depth map by calculating facial surface curvatures, from which pitch and yaw angles could be rapidly and accurately estimated without requiring per-user calibration. The proposed integration time adjustment technique in this chapter differs from the method introduced in Chapter 4 in that it requires only a single image of the ROI in which the driver's head is located.

5.1 Description of the Pose Estimation Process
The following outlines the steps involved in the process of head orientation sensing when fusion of capacitive proximity sensing and range imaging is utilized to provide driver head orientation measurements.

Figure 5-1. The flowchart of the proposed algorithm for the purpose of real-time driver head orientation estimation. (Steps: A. Estimate ToF camera-head distance (capacitive-based); B. Select optimal ToF camera integration time; C. Acquire the accurate driver's head 3D point cloud; D. Calculate curvature equations; E. Apply HK curvature analysis to find the nose tip; F. Assign the nose coordinate system; G. Determine the driver's head orientation.)

A laboratory testbed was designed and built to accommodate the head position and orientation sensors. The capacitive proximity sensor array was installed on the frontal compartment of the HR device of the testbed. The ToF camera was mounted in front of the driver with an offset to the right side of the driver to simulate ToF camera installation underneath the rear-view mirror (see Figure 5-2).
The capacitive and ToF camera sensor locations, along with the coordinate system assignments, are illustrated in Figure 5-2. For better illustration purposes, the ToF camera is shown in its intended installation location (i.e. underneath the rear-view mirror compartment).

Figure 5-2. The coordinate system assignment for the HR unit and the ToF camera.

According to the in-vehicle experiments conducted, and considering the FoV of the ToF camera (60° horizontally), this location can appropriately include the driver's head when the driver is properly seated. The description of the notation in Figure 5-2 (approximating the driver's head as a sphere) is as follows, using (o, C) to represent a Cartesian frame C with origin o:
- (o₀, C₀) is the base coordinate system, where C₀ = [î₀ ĵ₀ k̂₀] is a right-handed Cartesian frame with o₀ as its origin. This coordinate system is fixed to the HR device.
- (o₀, C₀) and (o₁, C₁) are the HR device coordinate systems assigned to the device joints in charge of the vertical and horizontal motions.
- (o₂, C₂) is the coordinate system attached to the HR device frontal plane, which moves in the vertical and horizontal directions.
- Point ²X represents the back of the driver's head detected by the capacitive sensor array. This point is represented in (o₂, C₂) by a vector in frame C₂ as ²X = [x₂ y₂ z₂]ᵀ.
- (o_c, C_c) is the 3D ToF camera coordinate system, where C_c = [î_c ĵ_c k̂_c] is a right-handed Cartesian frame with o_c as its optical origin.
- Point ᶜX is assigned to the same location as ²X (representing the back of the driver's head) but is represented in (o_c, C_c) with its coordinates ᶜX = [x_c y_c z_c]ᵀ.
- Point ᶜY is a point on the driver's face, represented in (o_c, C_c) with its coordinates ᶜY = [x′_c y′_c z′_c]ᵀ.
- (o_s, C_s) is the 2D ToF camera sensor image plane coordinate system, where the subscript "s" refers to "sensor".
- Point (x_s, y_s), in pixels, is the projection of point ᶜX onto the 2D image plane of the ToF camera, with (o_s, C_s) as its coordinate system.
- Vector d_BH2C represents the back of the driver's head, ²X, originating from (o_c, C_c), with its Euclidean distance denoted d_BH2C = d.
- Vector d_F2C represents a point, ᶜY, on the driver's face originating from (o_c, C_c), with its Euclidean distance denoted d_F2C = d̂. Note that d̂ = d − b̂.
- (o_n, C_n) is the tip-of-the-nose coordinate system, where C_n = [î_n ĵ_n k̂_n] is a right-handed Cartesian frame with o_n as its origin, located at the tip of the nose.

5.1.1 ToF Camera-to-Head Distance Calculation
The use of a capacitive proximity sensor array, as discussed in [16], allows the estimation of the position of the back of the driver's head in the HR coordinate system in order to adjust the head restraint to a safe position behind the head. To do this, a homogeneous transformation matrix between the HR and the ToF camera (ᶜT₀) was estimated by means of an iterative least-squares calibration technique.
This matrix was then used to transform the coordinates of the object of interest (i.e. the back of the driver's head) from the HR coordinate system, (o₀, C₀), onto the ToF camera coordinate system, (o_c, C_c). The following equation represents such a transformation when the back of the driver's head (i.e. represented by ²X = [x₂ y₂ z₂]ᵀ) is originally expressed in the HR coordinate system (o₂, C₂) (see Figure 5-2). Note that the homogeneous transformations ⁰T₁ and ¹T₂ were already calculated for the HR device.

    {}^{c}X = {}^{c}T_0 \, {}^{0}T_1 \, {}^{1}T_2 \, {}^{2}X = [x_c \;\; y_c \;\; z_c]^T    (5.1)

Assuming the driver's head to be a complete sphere, the Euclidean distance, d, between the back of the driver's head and the ToF camera is calculated by taking the norm of the ᶜX vector (i.e. d = d_BH2C = ‖ᶜX‖ = √(x_c² + y_c² + z_c²)). Considering the average human head breadth, b̂, of 15.5 ± 0.25 cm (as reported in [78]), the relative Euclidean distance between a point on the driver's face and the ToF camera, d̂, can be calculated or "calibrated" by subtracting the aforementioned quantities (i.e. d̂ = d_F2C = d − b̂). Parameter e inside the relation b̂ = 15.5 ± e (see Figure 5-2) represents the inevitable amount of error in assuming an average human head breadth, which differs from driver to driver.

5.1.2 ToF Camera Integration Time Selection
As explained in Section 4.1.2.1, proper adjustment of the ToF camera integration time is a key factor in collecting accurate 3D points. The relative distance between the driver's face and the ToF camera was identified as an essential element in selecting the proper integration time for each ROI. However, this distance was an unknown parameter in the previously proposed algorithm. The experiments in [93] revealed that higher integration times were required for objects at farther distances from the camera. Even though the proposed algorithm in [93] led to accurate depth maps of the driver's head, the system's real-time operation was significantly affected by the number of ROIs assigned to each image. For example, finding the distance from the ToF camera to a wall with three concentric ROIs designated in the image required adjusting the integration time and capturing separate images for each ROI (three times), which could result in a considerable drop in system sampling frequency. Recalling the experiments conducted in [93], the new approach in this chapter involves selecting specific camera integration times for known camera-to-wall distances for each concentric ROI. The corresponding ROI mean intensity values vs. integration time for each ROI, and the estimated average distance vs. the integration time, were plotted in Figure 4-3. By comparing the "estimated average distance" with the actual distance (d) for each curve (d = 70, 80, …, 110 cm), shown in the left-hand plots of Figure 4-3, it is clear that for a given camera-to-wall distance the optimal integration time (marked with a star on each left-hand curve) achieves the best distance estimation accuracy and the maximum mean intensity.
In the present work, the set of known camera-to-wall distances in the center circle ROI (Figure 4-3, top) was used as input to fit a first-order polynomial with the optimal integration times (the peaks on the right-hand curves that correspond to the marked stars on the left-hand curves). This process was repeated for each of the subsequent inner and outer annulus ROIs. The following three first-order polynomial equations were obtained to estimate the optimal integration time for each of the center circle (T_c), inner ring (T_i), and outer ring (T_o) ROIs.

    T_c = t_{c1}\,\hat{d} + t_{c0}, \qquad T_i = t_{i1}\,\hat{d} + t_{i0}, \qquad T_o = t_{o1}\,\hat{d} + t_{o0}    (5.2)

As explained previously, the distance between the driver's face and the ToF camera (d̂) was estimated from the measurements conducted by the capacitive proximity sensor array. The linearity of these equations can be justified by the straight-line trends in the left-hand plots of Figure 4-3. This accurate and independently acquired distance was then given as an input to the above first-order polynomials, and a set of integration times (T_c, T_i, T_o) was obtained accordingly. These were then utilized to maximize the accuracy of the captured point clouds of the driver's head and, via (5.2), require only one calibration image taken by the sensor.

In order to achieve real-time processing of driver head pose estimation, it is important to note that only a single ROI out of the aforementioned three regions of interest was selected for the purpose of head pose sensing in this research. The area in this ROI offers the largest coverage of the driver's face. This was achieved by a 3D-to-2D inverse transformation of the ROI along the vector d_F2C = [x′_c y′_c z′_c]ᵀ when both d_BH2C and the scalar value of the driver's head breadth, b̂, were given. This resulted in finding a point on the driver's face in the 2D intensity image space of the ToF camera and consequently identifying the desired ROI. Note that the driver's head was assumed to be a complete sphere during the aforementioned calculations.

5.1.3 Driver Head Point Cloud Generation and Smoothing
Utilizing the aforementioned ToF camera integration time adjustment technique, three point clouds corresponding to the three defined ROIs were obtained and then merged into a single depth map of the driver's head. A 3D mesh, constructed from an array of points ᶜX_i = [x_{c_i} y_{c_i} z_{c_i}]ᵀ with 1 ≤ i ≤ n, was obtained within the 3D ROI. A 2D spatial low-pass filter was applied to the z-array component (i.e. the depth values of the resulting surface) of the 3D mesh in order to discard the high-frequency components and reconstruct a geometrically smoother surface. Three sample head rotation configurations and their corresponding raw 3D point clouds and smoothed surfaces are presented in Figure 5-3.

Figure 5-3. Three sample head orientations while the head is rotated 90° to the left (left), forward looking (middle), and fully rotated to the right (right); each row: the smoothed point cloud (top), the generated 3D mesh from the point cloud (middle), and the ToF camera intensity image (bottom).
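The data flow of Sections 5.1.1-5.1.3 can be summarized in a short MATLAB sketch. The calibration distances and integration times below are taken from Table 4-1, but the transformation matrices, the capacitive estimate, and the raw depth array are placeholders, so the listing illustrates the sequence of operations rather than reproducing the thesis implementation.

    % Illustrative data-flow sketch for Sections 5.1.1-5.1.3 (distances in centimetres).
    % Placeholder inputs (in practice these come from the capacitive array and calibration):
    X2  = [0; 0; 0; 1];                          % back-of-head position, homogeneous, frame C2
    cT0 = eye(4);  T01 = eye(4);  T12 = eye(4);  % calibrated homogeneous transforms
    Zraw = zeros(200, 200);                      % merged raw z-array of the head ROI

    % One-time calibration: fit the first-order polynomials of equation (5.2)
    d_cal = [70 80 90 100 110];                              % camera-to-wall distances (Table 4-1)
    pc  = polyfit(d_cal, [0.2 0.3 0.4 0.5 0.6], 1);          % centre-circle ROI -> [t_c1 t_c0]
    pin = polyfit(d_cal, [0.4 0.5 0.6 0.7 0.8], 1);          % inner-ring ROI    -> [t_i1 t_i0]
    pou = polyfit(d_cal, [0.7 0.8 0.9 1.0 1.1], 1);          % outer-ring ROI    -> [t_o1 t_o0]

    % Per-frame operation
    Xc   = cT0 * T01 * T12 * X2;                 % eq. (5.1): back of head in the camera frame
    d    = norm(Xc(1:3));                        % camera-to-back-of-head distance d
    bhat = 15.5;                                 % average head breadth [cm], per [78]
    dhat = d - bhat;                             % camera-to-face distance, d_hat = d - b_hat

    Tc = polyval(pc,  dhat);                     % eq. (5.2): optimal integration times [ms]
    Ti = polyval(pin, dhat);
    To = polyval(pou, dhat);

    % Smoothing of the merged z-array (Section 5.1.3)
    h       = fspecial('disk', 2);               % circular averaging filter, radius 2
    Zsmooth = imfilter(Zraw, h, 'replicate');

Because the polynomial evaluation replaces the per-frame integration-time sweep of Chapter 4, only the capacitive distance estimate and one image capture per ROI are needed at run time.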
5.1.4 HK Curvature Analysis for 3D Nose Detection
Due to the specific in-vehicle application of the proposed research, it was deemed necessary to select a facial feature that remains geometrically invariant for nearly all drivers. Among facial features such as the eyes, cheeks, chin, forehead, etc., the tip of the nose is rarely covered or occluded by head and face coverings such as hats, sunglasses, facial hair, etc. Also, the curvature characteristics of the nose make it a more distinctive "feature" among the other facial features. Even though every driver has a different nose, its unique curvature characteristics distinguish it from the other facial features. Hence, the tip of the nose was selected as the facial feature to be detected and used as a basis to reconstruct the transformation matrix of the driver's head relative to the camera base coordinate system. The selection of the nose as an important facial feature has also been recommended by other researchers, and the tip of the nose has been used for different face recognition applications in two- and three-dimensional spaces, as stated in [94]-[95]. The problem of nose detection has been tackled by many researchers for various applications other than head pose estimation. Using an ordinary USB camera, Gorodnichy [95] employed a local template matching technique to track the nose as an important facial feature. According to Yin et al. [96], the nostril and the side of the nose are two important features that can be used to determine different facial expressions. In their research, geometrical template models of nostrils and nose sides were used to extract their shapes from samples of 2D images. Over the last ten years, the technological advancement of 3D vision sensors based on stereo cameras and structured light scanners has made them a reliable 3D data acquisition platform in many research and industrial fields. These technologies have enabled the real-time extraction of facial features such as the nose. Curvature analysis applied to three-dimensional data of a human face has offered promising results in facial feature tracking and recognition. Colombo et al. [97] employed a laser range scanner to extract a facial triangle with the eyes and the tip of the nose as its vertices. The goal of their feature-based classification technique was to recognize faces in frontal 3D acquisitions. With the nose introduced as an important facial feature, Chang et al. [98] developed a face recognition technique that worked under varying facial expressions. A facial tracking technique was introduced by Haker et al. [94] to determine the position of the tip of the nose from both the 3D data and the amplitude image acquired from a ToF camera. The mathematical concepts of mean curvature and Gaussian curvature have formed the basis of all the aforementioned research in which three-dimensional data acquisition was conducted. To the best of the authors' knowledge, extracting the driver's nose in real-time, experimentally tested on real human subjects using a ToF camera and utilizing the HK curvature analysis method for the purpose of driver head orientation sensing, has not been reported previously.
The extraction of HK parameters begins with a discrete surface map acquired from a fixed-location range sensor, referred to as a 2.5-Dimensional (2.5D) range image or Monge patch [99]. Considering the definition of the Monge patch [100] and the notation in Figure 5-2, every pixel (x_s, y_s) in the two-dimensional sensor image space U ⊂ ℝ² can be mapped to a three-dimensional ToF camera coordinate (in meters) through the mapping U → ℝ³ below, where x_c = k·x_s and y_c = k·y_s with k in m/pixel, and z_c = f(x_s, y_s):

    (x_s, y_s) \mapsto \left( x_c,\; y_c,\; f(x_s, y_s) \right)    (5.3)

The simplified expressions for the Gaussian curvature (K) and the mean curvature (H) of 2.5D surfaces, respectively, can be calculated as follows [99]:

    K = \frac{\det\!\left( \nabla \nabla^{T} f \right)}{\left( 1 + f_x^2 + f_y^2 \right)^2} = \frac{f_{xx} f_{yy} - f_{xy}^2}{\left( 1 + f_x^2 + f_y^2 \right)^2}    (5.4)

    H = \frac{f_{xx} + f_{yy} + f_{xx} f_y^2 + f_{yy} f_x^2 - 2 f_x f_y f_{xy}}{2 \left( 1 + f_x^2 + f_y^2 \right)^{3/2}}    (5.5)

The quantities f_x, f_y, f_xx, f_yy, and f_xy are the five partial derivatives of f, ∇∇ᵀ is the Hessian matrix operator, and ∇ is the gradient operator in the two-dimensional space U. Among the applicable properties of Gaussian and mean curvature quantified by Besl et al. [99], the following are well related to the proposed head orientation sensing methodology: 1) arbitrary linear (homogeneous) transformations (i.e. rotation and translation) of the (x_s, y_s) parameters of the surface do not affect the Gaussian and mean curvature results; 2) Gaussian and mean curvature are local surface properties, which allows K and H to be determined in regions where occlusions are present; and 3) Gaussian curvature together with mean curvature lead to the HK curvature analysis process, revealing important geometrical information about the surface. The latter has been organized in the table below.

Table 5-1. Curvature forms for extrema and zero values of H and K.

           | K > 0  | K = 0  | K < 0
  H < 0    | Peak   | Ridge  | Saddle ridge
  H = 0    | (none) | Flat   | Minimal surface
  H > 0    | Pit    | Valley | Saddle valley

5.1.5 Applying the Gaussian and Mean Curvature
A function was implemented to numerically calculate the partial derivative quantities leading to the calculation of the H and K quantities, as indicated in equations (5.4) and (5.5). The inputs to this function were the (equally dimensioned) 3D points of the driver's head point cloud, and the Gaussian curvature (K) and mean curvature (H) formed its outputs. Since the numerical values of K and H (for a given surface) are held in two-dimensional arrays, their values can be correlated as color-coded quantities on the given surface. This method of representation of the Gaussian and mean curvature is plotted in Figure 5-4. Three different head orientations, with the head forward looking (top set), rotated 70° to the subject's left (middle set), and rotated 70° to the subject's right (bottom set), are plotted in Figure 5-4. In the left-hand-side plots, the maximum positive value of K occurs at the tip of the nose, while the root of the nose (i.e. the beginning of the nasal ridge) offers a negative K for all head orientations. In the right-hand-side plots, however, the minimum negative value of H occurs at the tip of the nose for all head orientations, while the root of the nose offers a negative H.
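For illustration, the K and H arrays of equations (5.4) and (5.5) can be evaluated from the smoothed z-array with finite differences, as sketched below. The thesis used a dedicated curvature routine (see Section 5.1.8); the grid spacings and function name here are placeholder assumptions.

    % Illustrative finite-difference evaluation of equations (5.4) and (5.5).
    % Z is the smoothed depth (z) array of the face ROI; dx, dy are the metric grid
    % spacings of the x_c / y_c sampling. Returns per-pixel K and H arrays.
    function [K, H] = hk_curvature_sketch(Z, dx, dy)
        [fx,  fy ] = gradient(Z, dx, dy);        % first partial derivatives
        [fxx, fxy] = gradient(fx, dx, dy);       % second partial derivatives
        [~,   fyy] = gradient(fy, dx, dy);
        denom = 1 + fx.^2 + fy.^2;
        K = (fxx.*fyy - fxy.^2) ./ denom.^2;                              % eq. (5.4)
        H = ((1 + fy.^2).*fxx + (1 + fx.^2).*fyy - 2*fx.*fy.*fxy) ...
            ./ (2*denom.^(3/2));                                          % eq. (5.5)
    end

Because both K and H are computed per pixel, the HK classification of Table 5-1 can then be applied to every point of the face surface.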
Figure 5-4. HK head curvatures for three sample head orientations: forward-looking head (top plots, i.e. not oriented to left or right) and 70° oriented to the subject's left (middle plots) and right (bottom plots) sides. Left-hand-side plots represent the K arrays and right-hand-side plots represent the H arrays.

For the three sample head orientations in Figure 5-4, the tip of the nose offers the maximum value in the K array and the minimum negative value in the H array (K_max > 0, H_min < 0). This represents a "peak" on the surface according to Table 5-1. It must be noted that these "extrema" quantities relate to the largest peak, which corresponds to the tip of the nose. Also, considering the K array, the root of the nose offers the smallest negative value (K_min < 0) together with negative values in the H array (H < 0), which represents a "saddle ridge" according to Table 5-1. In the proposed work, the tip of the nose was determined by searching the K and H surface arrays for the 3D point where (K_max > 0, H_min < 0) occurs, and the nose root was located at the 3D point where (K_min < 0 and H < 0) occurs. Depending on the head orientation, the driver's chin and lips are additional facial features that could also represent a "peak" as stated in Table 5-1. These can cause prediction errors for nose tip detection due to the introduction of unwanted surface peaks. A simple technique was employed to avoid confusing these facial features with the tip of the nose. The center of mass (CoM_3Dface) of the points in the depth data (i.e. residing in the z-array) was calculated, as shown in the figure below. The nose detection algorithm was configured to only accept the first detected "peak" below this CoM_3Dface point (along the y axis), while any other facial features recognized as "peaks" are ignored. Figure 5-5 shows a 2D image reconstructed from the head z-array, with the CoM_3Dface point shown with a '*'. This will also reject the forehead as a possible "nose tip" when the driver is looking down.

Figure 5-5. The two-dimensional z-array of the driver's face with the CoM and three other facial features (nose tip, upper lip, chin) highlighted.

The 3D position of the tip of the nose is a critical element for extracting all the coordinates (including the nose root) associated with the subject's nose. The nose coordinates can be extracted from the point cloud of the head by defining a virtual sphere, with a certain radius (r = 3 cm was selected), centered at the tip of the nose. The coordinates that reside within this virtual sphere are labeled as "nose coordinates". The plurality of nose coordinates represents the driver's nose, which could be located at any orientation relative to the ToF camera coordinate frame. Figure 5-6 shows the extracted nose coordinates in three different head orientations, as discussed previously.

Figure 5-6. The extracted nose coordinates from the face oriented in the three sample orientations.
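A compact sketch of this selection logic is given below. It assumes the K and H arrays from the previous sketch and per-pixel camera coordinates Xc, Yc, Zc of the face ROI; the function name, the sign convention for "below" the centre of mass, and the handling of ties are illustrative assumptions.

    % Illustrative nose-tip selection and nose-point extraction (Section 5.1.5).
    % K, H        : curvature arrays of the face ROI
    % Xc, Yc, Zc  : per-pixel camera coordinates of the face ROI (same size as K, H)
    function [noseTip, noseMask] = nose_extract_sketch(K, H, Xc, Yc, Zc)
        yCoM = mean(Yc(:), 'omitnan');                 % vertical coordinate of the face CoM
        % candidate "peaks": K > 0 and H < 0 (Table 5-1), restricted to below the CoM
        cand = (K > 0) & (H < 0) & (Yc > yCoM);        % "below" depends on the axis convention
        Kc = K;  Kc(~cand) = -Inf;
        [~, idx] = max(Kc(:));                         % strongest peak among the candidates
        noseTip = [Xc(idx); Yc(idx); Zc(idx)];
        % virtual sphere of radius 3 cm centred at the nose tip
        r = 0.03;                                      % metres
        d = sqrt((Xc - noseTip(1)).^2 + (Yc - noseTip(2)).^2 + (Zc - noseTip(3)).^2);
        noseMask = d <= r;
    end

The logical mask noseMask then selects the "nose coordinates" used in the coordinate-frame assignment of the next section.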
5.1.6 Nose Coordinate Assignment

The next step involves assigning a "local" Cartesian frame to the nose, with the tip of the nose coinciding with the origin of this coordinate frame. As stated earlier, this coordinate system is referred to as $(o_n, C_n)$, and the assignment is illustrated in Figure 5-7. Point $o_n$ represents the tip of the nose, i.e. the origin of the nose coordinate system (measured in the camera frame, where it is represented by the vector $\overline{o_c o_n}$). In addition to the tip of the nose, two more points were assigned to the nose 3D point cloud: 1) the center of mass of the nose point cloud (represented by point $P_1$ in Figure 5-7, with respect to frame $C_n$), and 2) the nose root (represented by point $P_2$ in Figure 5-7, also with respect to frame $C_n$).

Points $o_n$, $P_1$, and $P_2$ are the minimum number of points required to construct the plane P, which passes through the nose along $\overline{o_n P_1}$ (or $v_1$) and which contains point $P_2$. For simplicity, the vector $\overline{o_n P_2}$ is labeled $v_2$. Frame $C_n$ is an orthogonal right-handed frame with unit vectors $i_n$, $j_n$, $k_n$. Since both $v_1$ and $v_2$ reside on plane P, $v_3 = v_1 \times v_2$ is perpendicular to P. The unit vector $i_n$, normal to plane P, was therefore calculated as $i_n = v_3/\|v_3\|$. Also, as shown in Figure 5-7, $k_n = v_1/\|v_1\|$. Finally, the cross product of $k_n$ and $i_n$ gives $j_n = k_n \times i_n$; equivalently, $j_n$ is the unit vector along $v_4 = v_1 \times v_3$. The nose coordinate frame $C_n$ and origin $o_n$ obtained in this way together form the coordinate system attached to the tip of the nose, so that as the head rotates, the nose frame rotates with it.

Figure 5-7. Coordinate frame assignment to the nose as the main facial feature.

5.1.7 Determining Head Orientation using a Geometric Approach

Now that a coordinate system has been assigned to the tip of the nose and the translation vector $\overline{o_c o_n}$ from the camera to the nose origin is available, the camera coordinate frame $C_c$ can be homogeneously transformed to $C_n$, and the following rotation operation can be formulated:

$R^{nc}\,C_c = C_n$    (5.6)

$R^{nc} = \begin{bmatrix} R_1 & R_2 & R_3 \\ R_4 & R_5 & R_6 \\ R_7 & R_8 & R_9 \end{bmatrix}, \qquad
R^{nc}\,C_c = R^{nc}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
= \begin{bmatrix} v_3(x)/\|v_3\| & v_4(x)/\|v_4\| & v_1(x)/\|v_1\| \\ v_3(y)/\|v_3\| & v_4(y)/\|v_4\| & v_1(y)/\|v_1\| \\ v_3(z)/\|v_3\| & v_4(z)/\|v_4\| & v_1(z)/\|v_1\| \end{bmatrix}
= \begin{bmatrix} i_n(x) & j_n(x) & k_n(x) \\ i_n(y) & j_n(y) & k_n(y) \\ i_n(z) & j_n(z) & k_n(z) \end{bmatrix} = C_n$    (5.7)

Based on the definition of roll-pitch-yaw angles provided in [101], the driver head orientation angles were calculated as follows:

$\mathrm{Roll} = \tan^{-1}(R_{21}/R_{11}), \qquad \mathrm{Pitch} = \tan^{-1}\!\Big(-R_{31}\Big/\sqrt{R_{32}^{2}+R_{33}^{2}}\Big), \qquad \mathrm{Yaw} = \tan^{-1}(R_{32}/R_{33})$    (5.8)

Among the three components of head orientation (yaw, roll, and pitch), the head yaw and pitch angles are the most critical measurements for driver distraction detection and assistance systems (see [102], [103]). Hence, head yaw and pitch angle measurement accuracy was investigated in the laboratory experiments reported next.
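As a concrete illustration of equations (5.6) through (5.8), the short MATLAB sketch below builds the nose frame from the three nose points and extracts the orientation angles. The function name, the use of atan2 in place of tan⁻¹ (to preserve the quadrant), and the assumption that the three input points are expressed in the camera frame are mine for illustration; this is a sketch, not the thesis implementation.

function [yawDeg, pitchDeg, rollDeg] = nose_frame_angles(noseTip, noseCoM, noseRoot)
% noseTip, noseCoM, noseRoot: 1x3 points in the ToF camera frame [m]

v1 = (noseCoM  - noseTip)';     % o_n -> P1, along the nose
v2 = (noseRoot - noseTip)';     % o_n -> P2, also lying in plane P
v3 = cross(v1, v2);             % normal to plane P
v4 = cross(v1, v3);             % completes the right-handed set

i_n = v3 / norm(v3);            % unit vectors of the nose frame C_n,
j_n = v4 / norm(v4);            % expressed in camera coordinates
k_n = v1 / norm(v1);

R = [i_n, j_n, k_n];            % columns = nose axes in the camera frame, eq. (5.7)

% Roll-pitch-yaw extraction following eq. (5.8).
rollDeg  = atan2( R(2,1), R(1,1)) * 180/pi;
pitchDeg = atan2(-R(3,1), sqrt(R(3,2)^2 + R(3,3)^2)) * 180/pi;
yawDeg   = atan2( R(3,2), R(3,3)) * 180/pi;
end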
The term "driver head orientation" refers to the pitch and yaw angle calculations in the following.

5.1.8 Algorithm

The main steps of the proposed driver head pose sensing algorithm are outlined as follows:

a) Estimate the ToF camera-to-head distance
- Using the capacitive sensor array installed on the HR device, obtain the three capacitive proximity ratios $(c_1, c_2, c_3)$ as stated in Section 3.2.8.
- Using the neural network model obtained from the training process explained in Section 3.2.10, with the above capacitive proximity ratios as its inputs, estimate the position of the back of the driver's head, $X_2 = [x_2\ y_2\ z_2]^T$.
- Using the previously calibrated homogeneous transformation between the driver's head and the optical origin of the ToF camera (i.e. ${}^{c}T_{2}$, the compound transformation expressed in equation (5.1)), transform the back-of-head position into the camera coordinate system, resulting in $X_c = [x_c\ y_c\ z_c]^T$.
- Find the Euclidean distance between the camera optical origin and the back of the head, $d_{BH\text{-}C} = d_2 = \|X_c\| = \sqrt{x_c^2 + y_c^2 + z_c^2}$.
- Considering the average human head breadth $\hat b$, calculate the Euclidean distance between the camera optical origin and a point on the driver's face, $d_{F\text{-}C} = \hat d = d_2 - \hat b$ (see Figure 5-2).

b) ToF camera integration time selection
- Calculate a set of first-order polynomial equations for the center, inner, and outer ROIs ($T_c = t_{c1}\hat d + t_{c0}$, $T_i = t_{i1}\hat d + t_{i0}$, $T_o = t_{o1}\hat d + t_{o0}$) using the method explained in Section 5.1.2.
- Using $\hat d$ (the Euclidean distance between the camera origin and the driver's face) and the above set of polynomial equations, calculate an optimal integration time ($T_c$, $T_i$, or $T_o$) for each of the ROIs.
- Find the single ROI containing the largest coverage of the driver's face using a simple 3D-to-2D inverse transformation that yields the 2D representation of the driver's face inside the intensity image.

c) Acquire the driver's head 3D point cloud
- Once the integration time is selected, capture an accurate 3D point cloud for the ROI selected in step (b) above.
- Assign the 3D ROI boundaries to eliminate background objects behind the head (e.g. the HR device).
- Apply a 2D spatial low-pass filter to the z-axis data (depth coordinates) to remove high-frequency components and effectively smooth the 3D structure. The MATLAB function fspecial(type, parameter) was used to generate the 2D filter, with type set to "disk" (a circular averaging filter) and parameter = 2 (the radius of the disk). The MATLAB function imfilter was then used to apply this filter to the z-axis data (organized as a 2D array).

d) Calculate the curvature equations to be applied to the driver's face point cloud
- Calculate the necessary partial derivatives as explained in Section 5.1.4 and find the Gaussian (K) and mean (H) curvatures associated with the driver's face. The MATLAB function surfature [104] was used to evaluate the partial derivative equations.

e) Apply HK curvature analysis and find the nose tip and root
- Considering the curvature forms stated in Table 5-1, analyze the curvature data and identify the nose tip and root using the HK curvature analysis approach explained in Section 5.1.5.
- At the nose tip, extract all 3D points within a certain Euclidean distance (the nose coordinates) and then find the nose Center of Mass (CoM) from the obtained nose point cloud by applying the MATLAB mean function to each of the three columns of the nose coordinates (i.e. $CoM = [x_c\ y_c\ z_c]^T_n$, where the subscript n indicates that these coordinates belong to the driver's nose).

f) Assign a coordinate system to the nose
- Use the three calculated points (nose tip, root, and CoM) to obtain the nose plane P, as shown in Figure 5-7.
- With the plane identified, calculate the unit vectors of the nose frame ($i_n = v_3/\|v_3\|$, $j_n = v_4/\|v_4\|$, $k_n = v_1/\|v_1\|$), which together with the tip of the driver's nose ($o_n$) construct the nose coordinate frame $(o_n, C_n)$. This process was explained in Section 5.1.6.

g) Determine the driver's head orientation (pitch and yaw angles)
- Calculate the rotation matrix between the camera and the nose coordinate systems using equations (5.6) and (5.7), and find the pitch and yaw angles from equation (5.8).

5.1.9 Experimental Verification

5.1.10 Experimental Methods

An experimental testbed was designed and built in the laboratory to perform data collection and conduct the head orientation sensing experiments in real-time. The testbed was integrated with the self-contained electromechanical HR developed in the previous research work. It consisted of four main components, as shown in Figure 5-8: 1) the aforementioned capacitive proximity sensor array embedded inside the frontal compartment of the HR device, 2) a motorized head restraint device, 3) a head-mounted 3D orientation sensor, and 4) the ToF camera, mounted at a height and distance relative to the HR device that simulated an in-vehicle installation of the camera below the rear-view mirror.

The self-contained HR device incorporates the following sub-systems: 1) a scissor mechanism to actively position the head restraint in a safe position behind the driver's head, and 2) an embedded PC with integrated limit switches and displacement sensors to control the onboard motors using the motion commands generated by the driver's head position sensing system implemented in previous research work.

A MotionFit™ wireless orientation sensor manufactured by InvenSense Inc. was employed, first to make head angle measurements in a vehicle and later, as part of the head-mounted 3D orientation tracking device, to provide real-time measurements of head orientation in the laboratory. The MotionFit™ sensor employs a set of calibrated accelerometer, gyroscope, and compass sensors. The device was programmed to provide real-time Euler angle measurements of the object of interest (i.e. the driver's head). The MotionFit™ sensor is equipped with Bluetooth wireless serial communication as well as a miniature rechargeable battery. These features made the sensor convenient for the experiments, since no data or power cables needed to be attached to the head-mounted orientation tracking device. The MotionFit™ sensor has been experimentally found to be capable of providing pitch and yaw angles within ±1° accuracy when the pitch and yaw angle measurements are maintained within ±90° [105].
Figure 5-8. The experimental testbed customized for head pose tracking, comprising the capacitive proximity sensor array, the ToF camera, the head-mounted 3D orientation sensor, and the motorized head restraint device.

The accuracy of the proposed driver head orientation sensing was evaluated by conducting a set of experiments. Using the aforementioned experimental testbed, a total of five human subjects (3 males, 2 females) participated in this study. The laboratory experiments involved determining the accuracy of the head yaw and pitch estimates during normal driving conditions, in which the driver's head angles are constrained to direction vectors that pass through some point on the windshield surface (which still allows the driver to view any significant world point through the front and side windows via eye rotation; see Discussion). By initially wearing the head-mounted 3D orientation tracking device inside a real vehicle (in this case, a Honda Civic), the actual head orientations associated with the windshield points (Figure 5-9) were recorded in a separate recording session, and the yaw/pitch angles relative to point G (representing the driver's head in a forward-looking orientation) were calculated and recorded.

Figure 5-9 illustrates a virtual image of the windshield on an LCD screen in the laboratory, with an array of 15 points spread over its surface. To acquire a full range of representative pitch and yaw angles, the test subjects were asked to point their heads towards specific points as shown in Figure 5-9. To obtain consistent measurements for all human subjects during the experiments on the vehicular testbed, each subject (wearing the head-mounted orientation tracking device) was asked to hold his/her head in a forward-looking pose while the back of the head was in contact with a small horizontal bar, as shown in Figure 5-9. This process ensured that the position of the back of the head relative to the HR device did not change between human subjects during the experiments.

The head yaw and pitch angles corresponding to the forward-looking head orientation from the head-mounted sensor were recorded and automatically associated with point "G" shown in Figure 5-9 (using a customized tracking algorithm designed for this experimental setup). Each test subject was then asked to look at each of the same 15 points (shown as yellow circles on the screen) and to rotate their head while observing the screen, so that their current head orientation (displayed as a red circle on the screen) sequentially matched each of the yellow circles (each of which had a head orientation in pitch and yaw previously recorded from inside the vehicle). The subject's head orientation at the moment the red circle overlaid a yellow circle was then approximately the same as the orientation of the subject's head in the vehicle when the head was directed at the corresponding point on the vehicle windscreen. The system automatically rendered a small green circle inside each yellow circle as a visual confirmation signal that a recording had been triggered by the red circle overlaying that yellow circle. This process enabled the test subjects to closely match, in the lab, the same head orientations as were recorded in the vehicle.
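For illustration only, the overlay/trigger logic of the interactive display might be implemented along the lines of the MATLAB sketch below. The 2° tolerance, the example reference angles, and all variable names are assumptions made for this sketch; the thesis does not state the exact overlay criterion used.

% Illustrative overlay/trigger logic for the lab GUI (assumed tolerance and names).
tolDeg   = 2;                                   % assumed overlay tolerance [deg]
yawRef   = [25.92 35.70 47.84];                 % example in-vehicle reference yaws, rel. to G [deg]
pitchRef = [10.36 10.59 10.51];                 % corresponding reference pitches [deg]
matched  = false(1, numel(yawRef));
rec      = zeros(numel(yawRef), 6);             % [refYaw refPitch curYaw curPitch estYaw estPitch]
k = 1;                                          % point currently being matched
yawNow = 25.0;  pitchNow = 11.0;                % current MotionFit reading, rel. to G [deg]
yawEst = 24.1;  pitchEst = 11.4;                % current ToF-based estimate [deg]

if abs(yawNow - yawRef(k)) < tolDeg && abs(pitchNow - pitchRef(k)) < tolDeg && ~matched(k)
    rec(k,:)   = [yawRef(k) pitchRef(k) yawNow pitchNow yawEst pitchEst];
    matched(k) = true;                          % render the green confirmation circle
end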
With the head orientations observed in the lab by the ToF camera mounted on the lab-based testbed, the ToF camera image analysis system then estimated the head pose and computed the error between this estimate and the "ground-truth" measurements from the wireless orientation sensor.

Figure 5-10 shows a snapshot of the Graphical User Interface (GUI) of the head tracking and simulation program. The snapshot was taken after the subject's head orientation had been successfully matched with the six points indicated by yellow circles filled with green. All head orientation matches were recorded relative to point "G", which is shown as a white circle in Figure 5-10.

A real-time Visual C++ software program was designed and implemented to collect the actual and estimated head orientations (pitch and yaw angles only), as shown in the flowchart of Figure 5-11. The ToF camera integration time adjustment process (as discussed previously) was conducted during the tracking process for the purpose of collecting a more accurate 3D point cloud of the driver's upper torso.

Figure 5-9. Head orientation measurement limited to yaw and pitch angles for a set of predetermined points within the vehicle windshield. Point G refers to the driver's head orientation when looking forward with a straight gaze. (The 15 points are labeled A through N and P; the small horizontal bar used to position the back of the head is also indicated.)

Figure 5-10. A snapshot of the GUI of the real-time interactive program for head orientation tracking. The white circle indicates the forward-looking head pose (i.e. point "G"), the red circle indicates the current head orientation, the yellow circles indicate the predetermined head orientations, and those filled with green indicate that the head orientation was successfully matched with the recorded orientations.

The head orientations for one of the human subjects are shown in Figure 5-12 for visualization purposes, and the statistical information for all human subjects is organized in Table 5-2. Note that the angular measurements shown in Figure 5-12 are relative to point G.

Figure 5-11. The flowchart of the process for collecting the actual and estimated head orientations. (In the car: collect 15 reference data points (yaw/pitch) from the MotionFit™ sensor worn inside the car. In the testbed: display the 15-circle grid and map the collected data points into pixel values; read the head angles from the head-mounted device and display the red circle within the grid; run the proposed head orientation tracking algorithm and mark each yellow circle with an inner green circle when a match is found; repeat until all 15 points are collected; finally, collect and report the actual and estimated head orientation data for comparison.)
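The comparison between the actual and estimated angles is summarized in Table 5-2 as mean absolute errors (MAE), standard deviations (STD), and maximum errors. A minimal MATLAB sketch of how such per-subject statistics can be computed is shown below; the example values are taken from the first five points of Figure 5-12, the variable names are illustrative, and since the thesis does not state whether the standard deviation is taken over the signed or the absolute errors, the absolute errors are used here.

% Per-subject error statistics over the windshield points (illustrative).
aYaw   = [25.92 35.70 47.84 -2.07 -17.67]';   % actual yaw [deg] (example values from Fig. 5-12)
eYaw   = [23.43 32.25 46.31 -1.42 -21.56]';   % estimated yaw [deg]
aPitch = [10.36 10.59 10.51 10.48  10.39]';   % actual pitch [deg]
ePitch = [11.45  9.08 10.04 11.10  11.49]';   % estimated pitch [deg]

absYawErr   = abs(eYaw   - aYaw);
absPitchErr = abs(ePitch - aPitch);

statsYaw   = [mean(absYawErr),   std(absYawErr),   max(absYawErr)];    % MAE, STD, Max_Err(y)
statsPitch = [mean(absPitchErr), std(absPitchErr), max(absPitchErr)];  % MAE, STD, Max_Err(p)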
Figure 5-12. The estimated and actual head yaw and pitch angle measurements (relative to point G) and the error between each pair of measurements, obtained from the proposed algorithm and from the head-mounted 3D orientation tracking sensor. "a", "P", "Y", and "e" refer to actual, pitch, yaw, and estimated measurements, respectively. The values annotated in the figure are:

Actual (P, Y)         Estimated (P, Y)      Err(P)    Err(Y)
10.36°,  25.92°       11.45°,  23.43°        1.09°    -2.49°
10.59°,  35.70°        9.08°,  32.25°       -1.51°    -3.45°
10.51°,  47.84°       10.04°,  46.31°       -0.47°    -1.53°
10.48°,  -2.07°       11.10°,  -1.42°        0.62°     0.65°
10.39°, -17.67°       11.49°, -21.56°        1.10°    -3.89°
 0.18°, -14.10°        1.45°, -19.49°        1.27°    -5.39°
-8.11°, -13.78°       -5.34°, -17.66°        2.77°    -3.88°
-8.42°,  -2.57°       -9.32°,   1.61°       -0.90°    -0.96°
-7.93°,  22.68°       -8.37°,  20.80°       -0.44°    -1.88°
-8.19°,  32.09°       -7.89°,  34.55°        0.30°     2.46°
 0.48°,  37.91°        1.45°,  34.16°        0.97°    -3.75°
-0.21°,  17.83°        0.56°,  19.12°        0.77°     1.29°
-0.79°,  29.64°       -1.16°,  32.65°       -0.37°     3.01°
-7.74°,   9.77°       -8.44°,  12.23°       -0.70°     2.46°
(All values are relative to point G.)

Table 5-2. The estimated head pitch/yaw angle errors for the proposed head orientation estimation algorithm

Subject                      MAE(pitch)   STD(pitch)   MAE(yaw)    STD(yaw)   Max_Err(p)   Max_Err(y)
Sub 1, male                  0.95°        0.61°        2.65°       1.29°      2.77°        5.39°
Sub 2, male                  1.49°        0.93°        3.35°       1.24°      2.82°        5.83°
Sub 3, male (darker skin)    1.76°        1.23°        3.68°       2.63°      3.47°        6.32°
Sub 4, female                1.52°        0.93°        3.43°       2.30°      3.78°        8.73°
Sub 5, female                1.08°        0.55°        3.47°       1.82°      2.76°        6.98°
Sub 1, male (W/G)            1.45°        1.30°        3.11°       1.51°      4.70°        6.02°
Sub 2, male (W/G)            2.02°        1.62°        3.59°       1.30°      5.31°        5.10°
Sub 3, male (W/G)            2.29°        0.97°        3.96°       1.06°      4.16°        5.92°
Sub 1, male (W/SG)           1.94°        1.56°        3.87°       1.21°      5.65°        6.14°
Overall                      Mean=1.61°                Mean=3.45°             Max=5.65°    Max=8.73°
W/G: With Glasses, W/SG: With Sunglasses

5.2 Discussion

It was decided that for this study the head angle pointing directions should be constrained within the bounds of the average automobile windshield, as shown in Figure 5-12. At head yaw angles between -17.67° and 47.84° (a total range of approximately 65°), as reported for the two top corner points in Figure 5-12, the side mirrors are easily viewable. Since shoulder checks are hazardous in the event that a vehicle in front stops during the shoulder check, existing blind spot warning systems or the development of other driver warning systems would be advised. Similarly, the head pitch angle pointing directions were constrained to lie within the windshield area, since this still permitted rear-view mirror and visor adjustment, and viewing of the upper control panel, using eye motions alone. The departure of head angle pointing directions outside of the windshield region could then still be detected by this system and used to trigger appropriate automated vehicle safety provisions.

The capacitive sensor array was found to be capable of providing a reliable and accurate head position estimate. The head position was also utilized to choose the optimal integration time for the ToF camera. This ensured the highest possible accuracy in all head orientation measurements.
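To illustrate how the capacitive head-position estimate drives the camera's integration-time selection (steps (a) and (b) of Section 5.1.8), a short MATLAB sketch is given below. The transformation cT2, the head-breadth value of 15.5 cm, and the first-order coefficients are placeholders assumed for illustration; they are not the calibrated values from the thesis.

% Sketch of the capacitive-assisted ToF integration-time selection.
X2   = [0.02; 0.00; 0.05];            % back-of-head position from the capacitive array, HR frame [m]
cT2  = [eye(3), [0.05; -0.10; 0.60];  % placeholder HR-to-camera homogeneous transform (eq. 5.1)
        zeros(1,3), 1];
Xc   = cT2 * [X2; 1];                 % back of head in the camera frame
d2   = norm(Xc(1:3));                 % camera optical origin to back of head [m]
bHat = 0.155;                         % assumed average head breadth (~15.5 cm)
dHat = d2 - bHat;                     % camera optical origin to a point on the face

% First-order integration-time models for the centre, inner and outer ROIs
% (Section 5.1.2); the coefficients below are placeholders only.
tc = [0.9e-3, 0.2e-3];  ti = [1.0e-3, 0.25e-3];  to = [1.1e-3, 0.3e-3];   % [s/m, s]
Tc = tc(1)*dHat + tc(2);              % centre-ROI integration time [s]
Ti = ti(1)*dHat + ti(2);              % inner-ROI integration time [s]
To = to(1)*dHat + to(2);              % outer-ROI integration time [s]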
Also, the utilization of the ToF camera provides a very reliable measurement of the 3D position of the tip and root of the nose. For rotations of the head that permit the tip of the nose to be visible in the acquired 3D depth maps, the use of the ToF camera allows an estimate of head yaw and pitch to be obtained for every image frame without resorting to tracking methods.

The use of a ToF camera offered important advantages over a stereo camera, as discussed in Section 1.3.1. Those advantages, as well as the lack of reliance on a sequence of images, distinguish the methodology proposed in this research from other reported head measurement techniques in which stereo cameras and sequences of frames are used for head orientation measurement (e.g. the work reported in [65], which achieved better accuracy in yaw measurements). Also, the ToF camera-based head pose measurement approach reported in [72] does not offer better accuracy than that reported in this chapter, and it requires per-user training.

The method proposed in this chapter generates one depth image for each set of calculated nose coordinates, resulting in an estimate of the driver's head yaw and pitch angles relative to the ToF camera coordinate system. Although the transformation from the camera origin $o_c$ to the HR frontal-compartment origin $o_2$ needs to be calibrated at the time of vehicle manufacture, no per-user training or calibration is required.

During the experimental analysis, it was observed that the errors in the head pose estimation results for the subject with darker skin color (i.e. subject #3) were higher in comparison to subjects #1 and #2 with lighter skin color. It is known that the reflectivity of the human face is affected by the color of the skin, and as reported in [93], a slightly higher integration time is required and recommended for objects with lower reflectivity (e.g. a driver's face with darker skin color).

The work proposed in this chapter succeeds in meeting four of the five relevant design criteria for a head pose tracking system as proposed by Murphy-Chutorian in the survey paper [60]. The requirements that have been met are as follows:

- Accurate: The system offers a MAE of less than 5° for each of the head yaw and pitch angles.
- Monocular: It is based on monocular imagery, with a single ToF camera as the only optical sensor used in the system.
- Autonomous: The system is autonomous, as there are no requirements for manual actions, tracking, or prior knowledge of the driver's head pose.
- Real-Time: The system is capable of estimating the head pose from individual frames in real-time (16 Hz).

However, the system does not meet the requirement to be identity and lighting invariant. At this point in its development, the system has not been tested in environments with dynamic lighting variations. However, it must be noted that the ToF camera is only sensitive to the NIR lighting generated by its own NIR light source, and it may be reasonably robust to dynamic natural lighting changes.
The remaining three requirements are not relevant to vehicle drivers and are listed as follows:

- Multiperson: Since the current system was intended for driver head pose detection only, it does not currently estimate the pose of all the other occupants in the vehicle simultaneously.
- Resolution independent: Since the driver is in the near field, the resolution offered by the ToF camera (currently 200×200 pixels) suffices to estimate the driver's head pose accurately. Unlike stereo camera depth estimates, ToF distance estimation accuracy does not degrade steeply as the object moves to a farther range.
- Full range of head motion: Since the two applications of head pose estimation here are, firstly, to adjust the head restraint to a suitable position behind the driver's head and, secondly, to sense driver lack of visual attention to the road, a greater range of head motion may not be required.

5.3 Conclusions

In this chapter, a novel driver head orientation estimation methodology based on HK-classification was introduced and implemented. The head orientation tracking was performed in such a way that the estimates did not depend on previous estimates, while the whole process could still be conducted in real-time. To the best of our knowledge, real-time detection of the tip of the nose and of the head pose of real human test subjects, using a ToF camera and the HK curvature analysis method, has not been reported previously. Laboratory experiments were conducted corresponding to real-world driver head orientations, which were limited to pitch and yaw angles for which the nasal vector $v_1$ could only pass through some point on the real-world vehicle's windshield. For head orientations within this constraint, head orientation accuracies with MAEs of 3.45° and 1.61° were achieved for the head yaw and pitch angles, respectively.

In addition to the advantages of the capacitive-based head position estimation system for mitigating whiplash injuries through adaptive adjustment of HR devices, the head orientation measurement can be used for three main purposes: 1) improved adaptive HR positioning in the event that a turned head is detected, 2) redundancy in the event of failure of the capacitive sensing system, provided that the distance to the face can still be estimated, and 3) providing useful information to driver awareness and distraction detection systems. This strategy makes the proposed work novel in terms of introducing a combined use of capacitive and range imaging sensors for the purpose of measuring head orientation and position in real-time.

During the experiments, the position of the ToF camera was maintained in a location corresponding to the installation of the camera under the rear-view mirror. The driver head orientation estimation methodology takes advantage of the synergistic nature of the sensory system comprising the array of capacitive proximity sensors and the illumination-compensated ToF camera. The ToF camera integration time can be adjusted properly using this synergistic sensing system, which leads to faster, more accurate, and more robust head orientation measurements.
Chapter 6. Development of System Prototypes

A description of the research presented thus far, i.e. the electronic, algorithmic, and software aspects of the proposed driver head pose estimation methodology, was provided in Chapters 1 through 5. In addition, an important engineering requirement of this research dealt with the design and fabrication of a reliable, self-contained electromechanical HR device that could serve the following main goals:

- Accommodate the electronics components into a single HR unit,
- Provide a real-time data acquisition and computation platform for the purpose of head pose estimation,
- Generate the required servoing motions and demonstrate zero steady-state error in positioning of the HR behind the occupant's head,
- Experimentally verify that head pose sensing provided the desired accuracy over the full targeted range of head poses.

To achieve the above main goals, several prototypes, including a final fully integrated and self-contained HR device, were designed and built in-house. This process and the details related to the electromechanical aspects of the developed prototype are provided below.

6.1 Overview of Main System Components

The current 3rd-generation electromechanical device has evolved from two previously designed system prototypes (the 1st- and 2nd-generation prototypes) built in-house. A system prototype and its installation in a vehicle are shown in Figure 6-1.

Figure 6-1. The second system prototype of the electromechanical HR device and its installation as a replacement for the existing HR device.

The device was designed and fabricated in such a way that it can directly replace the existing HR device already installed in the seat. The final prototype of the electromechanical HR device is composed of four main subsystems that, together, provided this research study with the capability of demonstrating the feasibility of a full driver head pose estimation system in a laboratory environment. These four main subsystems are identified below:

1) Electric field sensing subsystem, comprising the capacitive sensing array, analog data acquisition and multiplexer circuits, and a signal acquisition and processing software program
2) Illumination-compensated ToF camera subsystem, comprising a PMD-vision® ToF camera and a set of NIR LEDs
3) Driver head pose estimation processing subsystem, comprising a single-board industrial PC with on-board storage, digital data acquisition, and a data communication interface for communicating with the above subsystems
4) Mechanical and motion control subsystem, comprising the mechanical actuators, position sensors, and closed-loop adaptive motion control electronics and software

The capacitance and 2D+range (a.k.a. 2.5D) vision data associated with the driver's head are acquired in real-time by the driver head pose estimation processing subsystem, and the position and orientation of the driver's head are estimated accordingly. The mechanical and motion control subsystem receives the estimated head position and converts it into appropriate motion control set-point commands to be used as inputs to the closed-loop motion control subsystem.
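For illustration, the set-point conversion can be sketched as follows. The coordinate convention, variable names, and the way the vertical reference is obtained are assumptions made for this sketch, not the embedded controller code; the backset value follows the set point quoted in Chapter 7.

% Illustrative HR set-point generation from the estimated head position.
headPos   = [0.045; 0.00; 0.02];   % back-of-head position in the HR frame [m] (placeholder)
backsetSP = 0.035;                 % 3.5 cm backset set point (see Section 7.1.2)
% Horizontal (in/out) command: move the frontal compartment so that the gap
% to the back of the head equals the backset set point (x assumed to be the
% horizontal backset axis).
horizCmd = headPos(1) - backsetSP;
% Vertical (up/down) command: align the top of the HR with the top of the
% head (0 cm vertical set point); headTopOffset is the assumed estimated
% offset of the head top above the current HR top position.
headTopOffset = 0.01;              % [m] (placeholder)
vertCmd = headTopOffset;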
The closed-loop motion control system maintains a target backset offset of 2-3 cm behind the driver's head while the top of the frontal compartment is aligned with the top of the driver's head (based on IIHS requirements). The HR device motions are constrained to two degrees of freedom (DOF), enabling horizontal (in/out) actuation of the frontal compartment and vertical (up/down) actuation of the entire HR device, respectively. It is important to note that vertical motion commands result in moving the entire HR device in the up or down direction. The system block diagram associated with the proposed head pose estimation system is discussed next.

6.2 System Block Diagram

6.2.1 Adaptive Head Pose Tracking Block Diagram

As discussed in the previous section, there are four main subsystems in charge of performing the proposed head pose tracking process in the interior space of the vehicle. These subsystem blocks are shown in Figure 6-2. The capacitive sensor array is installed on the frontal compartment of the HR device, with its field of detection covering the back of the driver's head. The illumination-compensated ToF camera is installed underneath the rear-view mirror, with its FoV covering the frontal view of the driver's upper torso. The FoV of each sensor is shown in Figure 6-2 accordingly. In the laboratory, the illumination-compensated ToF camera is installed on a frame in the testbed corresponding to the proposed location underneath the rear-view mirror inside the vehicle cabin.

Figure 6-2. Adaptive head pose tracking system and the proposed sensor position configuration. (Subsystem #1: capacitive sensor array, measuring the mutual capacitance ratios C1, C2, C3. Subsystem #2: illumination-compensated ToF camera, generating the driver's head and upper-torso 3D point cloud and 2D/3D intensity data. Subsystem #3: data acquisition, head pose tracking, and HR position set-point generation. Subsystem #4: motion controller, position sensors, and mechanical actuators.)

6.2.2 Electromechanical System Block Diagram

The system block diagram in Figure 6-3 shows the main electrical and mechanical components of each block of the driver head pose tracking system explained in the previous section. The system was designed to operate from 12 VDC, which is typically already available at the vehicle seat. The different sections in the block diagram are labeled with the corresponding sub-systems previously explained. USB 2.0 communication was utilized for data and motion-command communication between the embedded PC and the peripheral motor motion control and capacitive proximity sensing units.

Figure 6-3. The proposed driver head pose tracking electromechanical system block diagram. (Main components: the capacitive sensor array with reference capacitor (C1, C2, C3, C(ref)) connected through coaxial cables to the Capacitance-to-Digital (C/D) converter and multiplexer circuits; an STM32F2 (ARM Cortex-M3, 120 MHz) microcontroller with I2C interface and an FT2232D USB-serial (JTAG + UART) interface; the PCM-3362 embedded PC104 (Intel 1.66 GHz) with user interface (mouse/keyboard, display, network); the ToF camera with illumination-compensated NIR LEDs; a Maxon EC-Max 22 BDC motor (12 VDC, Hall sensors) with a Maxon 1-Q-EC DEC motor driver for the HR horizontal axis; a Firgelli L12 linear actuator (12 VDC, potentiometer) with an Arduino UNO, Ardumoto driver, and DC-DC converter for the HR vertical axis; SoftPot position sensors for the horizontal and vertical axes; and linear potentiometers and pushbuttons for manual HR adjustment.)

The organization of the components and the main computation operations of the four subsystems in the block diagram of Figure 6-3 are as follows. The four comb-shaped capacitive sensors inside the array (including the reference capacitive sensor) were connected via a set of coaxial cables to the Capacitance-to-Digital (C/D) converter and multiplexer circuits unit.
An STM32F2 microcontroller was in charge of commanding the analog multiplexer and receiving the digital capacitance data via I2C communication. A USB-serial interface chip transfers the capacitance data to the embedded PC104 unit (i.e. the Advantech PCM-3362), labeled as subsystem #3 in the block diagram. As mentioned previously (Section 3.4), the frequency of sampling and processing in subsystem #1 was 25 Hz. The ToF camera in subsystem #2 was also connected to the embedded PC via USB, with a frame acquisition frequency of 20 Hz. Computation operations such as the neural network training and head position estimation explained in Section 3.2.10, the head pose estimation using the hybrid system explained in Chapter 5, and the generation of the HR device motion commands were performed on the embedded PC104 platform of subsystem #3 with a processing frequency of 16 Hz. An Arduino microcontroller unit inside subsystem #4 receives the motion commands from the embedded PC and performs the steady-state positioning of the HR device. Due to the delays associated with the dynamics of the HR device, the aforementioned processing frequency of 16 Hz does not apply to the mechanical actuators of the HR device; however, zero steady-state positioning error of the HR device was achieved. For a better pictorial representation of the reported sub-systems and components, Figure 6-4 illustrates the final prototype (i.e. the 3rd-generation prototype) of the electromechanical HR device.

6.3 Conclusions

In this chapter, the overall design of the capacitive and ToF camera-based sensing system and the integration of the electromechanical components into a self-contained HR device have been explained. It was also reported that the system, excluding the mechanical actuators, ran at 16 Hz with zero steady-state error. These results met the design goals stated at the beginning of this chapter. Considering the high-quality parts chosen for the final prototype, the net cost of the system, dominated primarily by the DC brushless motor/actuator and the embedded PC, was approximately $2500. The cost for final manufacturing would depend on the selected final design features and volume production costs.
Figure 6-4. The self-contained HR device and its main electromechanical components: A) mechanical scissor arms for horizontal motion; B) 3D-printed back cover; C) BDC motor for horizontal motion; D) customized capacitive sensor array; E) linear actuator for vertical motion; F) embedded PC104 single-board computer; G) motor and linear actuator driver circuits; H) Capacitance-to-Digital converter and board; I) push buttons/sliding potentiometers for manual control; J) external device interface (mouse/keyboard, display, etc.); K) reference capacitor electrode; L) back plane behind the reference capacitor.

Chapter 7. Conclusion

7.1 Thesis Conclusions

7.1.1 Re: Achievement of Accurate Head Position Estimation using an Array of Concentric Capacitive Sensors

This research work initially produced a new 3D head position sensing system (see Chapter 2), which was designed, implemented, and experimentally tested. Comprised of a single planar array of concentric capacitive sensors, this array was designed to be mechanically suitable for inclusion in a head restraint and well suited to detecting head-to-head-restraint horizontal position, with a positioning error of less than 0.5 cm within a detection range of 9 cm. The system was shown to be suitable to meet the IIHS guidelines when linked with an HR electromechanical system that could be servoed to maintain a safe distance behind the occupant's head at all times. Although the initial system as described in Chapter 2 was robust neither to changing atmospheric conditions nor to electrical disturbances behind the HR, it is believed that this was the first experimental quantification of the accuracy of a capacitive-based head position sensing system in a laboratory setting.

7.1.2 Re: Achievement of Accurate and Reasonably Robust Head Position Estimation using an Array of Interdigitated Capacitive Sensors

To address the sensor system shortcomings listed above, a second 3D head position sensing system (as described in Chapter 3) was designed, implemented, and experimentally tested. This system offered four new features:

a) The back of this array was shielded from rear-seat electrical interference through the incorporation of a grounded backplane.

b) As this shielding reduced the effective range of the simple electrode array (e.g. circular concentric electrodes), a novel range-optimized interdigitated electrode was designed and tested (both analytically through FEA and experimentally) to provide an acceptable electrode sensing range with the grounded backplane. A complete electrode design algorithm was also reported in this work.

c) A novel reference electrode with identical geometry to the sensing electrodes was developed to compensate for potential capacitance measurement errors due to temperature and humidity changes, producing an output ratio error of 3 ppt.

d) In order to detect the 3D position of the head with respect to the HR, a novel array comprised of the minimum number of the new interdigitated sensor electrodes (i.e. three sensors) was then designed and experimentally tested. This sensing array was found to offer a mean Euclidean error of 0.33 cm over a volumetric range of 14×7×7 cm (W×L×H). This improved system was also found to meet the IIHS guidelines when linked to the electromechanical HR positioning system.
e) A process demonstrating zero steady-state error in positioning of the HR in the vertical direction, with a maximum/minimum tolerable error of ±2 cm relative to the top of the driver's head (with 0 cm as the set point and a MAE of 0.18 cm along the HR vertical axis), was achieved. HR horizontal positioning was also achieved for a backset distance of 2-5 cm relative to the back of the head (with 3.5 cm as the set point and a MAE of 0.19 cm along the HR horizontal axis). This process confirmed that a "Good" rating according to the IIHS criteria could be achieved.

7.1.3 Re: Achievement of Accurate Head Pose Measurement using a ToF Camera Sensor

Initial use of a ToF camera with a fixed integration time parameter provided inaccurate 3D measurement results. Thus, a novel methodology was introduced to adjust the integration time for the various regions of interest in the scene. As a result, the Mean Absolute Error of the in-vehicle distance measurements improved by a factor of 4.4 when the adjusted integration time was used. The MAE position error of 0.77 cm and the MAE orientation error of 1.26 degrees (over a range of ±15 degrees) demonstrated the feasibility of using this ToF camera for both head position and orientation sensing in this vehicle safety system application. For a final system, however, the speed of image acquisition and the orientation estimation range needed to be improved.

7.1.4 Re: Achievement of Fast and Accurate Driver Head Pose Estimation using a Hybrid Sensing System

The developed hybrid system, comprised of the capacitive sensing array and the ToF camera, offers a set of unique features as identified below:

a) The novel use of the capacitive sensor to estimate the position of the head allowed the ROI to be immediately identified and thus sped up the camera pose estimation from 5 Hz to 16 Hz, i.e. by approximately a factor of 3.

b) The use of high-quality distance measurements by the hybrid system also permitted the 3D point cloud of the facial surface to be very accurate, which in turn allowed a relatively simple geometric method based on nose detection (using HK curvature analysis) to be employed to provide relatively accurate pose estimates. To the best of the author's knowledge, the use of a ToF camera sensor with HK curvature analysis for nose detection and real-time estimation of head position and angles (yaw and pitch) from human subjects has not been reported previously.

c) Head yaw and pitch angles were estimated with MAEs of 3.45° and 1.61°, respectively, compared to the work reported by Ray et al. [72], which reported head yaw and pitch angle measurement errors of 12.9° and 4.8°, respectively.

d) The proposed hybrid head pose estimation methodology achieved an MAE of 3.45° and a maximum error of 8.73° in yaw (see Table 5-2). Thus the criteria defined in Section 1.4 for acceptable driver head pose estimation, which required an average head yaw angle error of 5° and a maximum error of 10°, were met.

e) A system processing frequency of 16 Hz was achieved for the hybrid driver head pose estimation system. Although faster than the design processing frequency of 10 Hz, this frequency is lower than the frame rate of the ToF camera, due to the use of the MATLAB™ Engine for some of the calculations in the developed Visual C++ code (e.g.
the computation of the HK curvatures).

7.1.5 Re: Achievement of Improving System Robustness and Reliability through Synergistic Operation

The synergistic operation of the two sensor types offers the following advantages:

a) Head position measurements are redundant, since each sensor is capable of detecting the head position in two directions.

b) Accurate range estimates (and consequently 3D facial surfaces) are collected from the ToF camera, since the integration time is optimally adjusted using the distance measurements from the capacitive array.

7.2 Strengths and Weaknesses

The greatest strength of the performed research and the resulting technology is that it eliminates the need to rely on the occupants to properly position the head restraint device. This is anticipated to have a direct and positive impact on public health. Successful commercialization and adoption of such a system by Canadian seat and vehicle manufacturers should greatly benefit Canada's automotive manufacturing sectors by providing new seat technology within future vehicles. Adoption also has the potential to greatly increase the rate of proper head restraint use in future vehicles to nearly 100%, from the 44% currently measured on BC Provincial roadways as documented in [4], providing a potential savings of over 50% of current whiplash costs in newly equipped vehicles. While the year-to-year cost savings would be gradual as newly equipped vehicles entered the traffic fleet, the cost savings for whiplash-related injuries would be significant, as whiplash costs in BC alone are over $850 million/year, in addition to the pain and suffering of those affected.

Additional strengths of the system are listed below:

a) Driver attentiveness measurement while driving.

b) The technology is extensible to all passengers (in the case of whiplash injury mitigation).

c) The proposed hybrid system does not require the capture or storage of template images of the driver's face or head for use during the head pose estimation process.

d) Considering the national and international societal cost of whiplash and related neck injuries, there is a possibility for insurance companies to reduce the insurance premium for those vehicles equipped with the introduced adaptive HR device.

Although the proposed technology possesses the aforementioned strengths, the following weaknesses should also be considered:

a) Although desirable, the system is not entirely HR-based, since the installation of the ToF camera was proposed to be underneath the rear-view mirror. However, this location is simpler than other locations, since newer rear-view mirrors are already electrically powered. Data could then be wirelessly transmitted to a self-contained HR device to avoid additional wiring, which can add to the cost and system complexity.

b) The most significant form of disturbance to the ToF camera is occlusion (e.g. a hand, head coverings, hats, etc.), which can be detected since the occluding object appears closer to the camera. Although this prevents orientation measurements from being acquired, position measurements are still available from the capacitive sensing array.

c) Due to the lack of a degree of freedom on the HR device to properly adjust its tilt angle, extreme seatback angles (e.g.
a seatback leaned far backward) would affect the desired, nearly vertical alignment of the HR frontal compartment shown in Figure 6-1.

d) The electrostatic sensor can be affected by the presence of other objects in its detection range, such as another human or a dog's head, as well as by wet curly hair.

7.3 Thesis Contributions

The contributions of the thesis are listed in Section 1.6.

7.4 Future Work

The ToF camera uses a modulated beam of NIR light produced by integrated NIR LEDs. However, the performance and accuracy of the ToF camera can be affected, as the system is not immune to solar NIR. Also, static and dynamic changes of solar light (e.g. going in and out of a tunnel) can negatively affect the accuracy of the system. Additional research and experiments need to be performed to further investigate this problem.

It was discussed in Section 1.1.6 that occupants with their heads turned (thus Out-of-Position, OOP) are at higher risk of whiplash injuries in the event of rear-end crashes. One solution proposed in that section to mitigate such injuries was to rapidly move the HR device closer to the back of the head, or even to touch the head. This would require high dynamic performance from the HR servo system. Another realignment system could also be considered; for example, the HR device could be equipped with a set of side airbags to be deployed during a rear-end crash to reorient a turned head if an OOP event is detected.

A flexible electrode array could be designed (if necessary) and calibrated to follow the contours of a curved HR front compartment. This would benefit the system in that the sensor array would no longer need to be installed behind the foam inside the frontal compartment of the HR device, which would add to the detection range of the sensor array.

To further strengthen the possibility of technology adoption by major vehicle and seat manufacturers, the functionality of the proposed adaptive HR positioning system could be slightly modified to form an intermediate whiplash mitigation system that guides the occupants to properly position their HRs, either manually or under motor power. This would significantly reduce the liability associated with such safety systems in the case of possible system operation failure. The development of such a whiplash mitigation warning system would be well suited to integration as part of Advanced Driver Assistance Systems (ADAS).

References

[1] D. C. Viano and S. Olsen, "The Effectiveness of Active Head Restraint in Preventing Whiplash," J. Trauma Inj. Infect. Crit. Care, vol. 51, pp. 959–969, Nov. 2001.
[2] IIHS, "Procedures for Rating Seat/Head Restraints," IIHS.org, http://www.iihs.org/ratings/head_restraints/head_restraint_info.html.
[3] B. Wright, T. Maffey, and K. Macaulay, "Head Restraint Positioning in Canada," Proc. CMRSC-XIX, Jun. 2009.
[4] D. P. Romilly, P.Eng., H. Luk, M. Chien, K. Poon, M. White, and E. Desapriya, "Whiplash Prevention Campaign Initiative: Further Development and Implementation of an Observational Study Protocol for Assessing Proper Head Restraint Use," Nov. 2012.
[5] S.
G. Klauer, T. A. Dingus, V. L. Neale, J. D. Sudweeks, and D. J. Ramsey, “The Impact of Driver Inattention on Near-Crash/Crash Risk: An Analysis Using the 100-Car Naturalistic Driving Study Data,” Apr. 2006.
[6] T. A. Dingus, S. G. Klauer, V. L. Neale, A. Petersen, S. E. Lee, J. D. Sudweeks, M. A. Perez, J. Hankey, D. J. Ramsey, S. Gupta, C. Bucher, Z. R. Doerzaph, J. Jermeland, and R. R. Knipling, “The 100-Car Naturalistic Driving Study, Phase II - Results of the 100-Car Field Experiment,” Apr. 2006.
[7] J. C. Stutts, D. W. Reinfurt, L. Staplin, and E. A. Rodgman, The Role of Driver Distraction in Traffic Crashes. AAA Foundation for Traffic Safety, Washington, DC, 2001.
[8] Y. Dong, Z. Hu, K. Uchimura, and N. Murayama, “Driver Inattention Monitoring System for Intelligent Vehicles: A Review,” IEEE Trans. Intell. Transp. Syst., vol. 12, no. 2, pp. 596–614, 2011.
[9] “ICBC Traffic Collision Statistics,” Insurance Corporation of British Columbia, 2007.
[10] F. Navin, S. Zein, and E. Felipe, “Road Safety Engineering: An Effective Tool in the Fight against Whiplash Injuries,” Accid. Anal. Prev., vol. 32, no. 2, pp. 271–275, Mar. 2000.
[11] D. S. Zuby and A. K. Lund, “Preventing Minor Neck Injuries in Rear Crashes—Forty Years of Progress,” J. Occup. Environ. Med., vol. 52, no. 4, pp. 428–433, Apr. 2010.
[12] Insurance Bureau of Canada, “Car Insurance - Rest Up! Save Your Neck,” IBC.ca. [Online]. Available: http://www.ibc.ca/en/besmartbesafe/documents/brochure/headrest_brochure_eng.pdf; http://www.ibc.ca/en/in_the_community/road_safety/rest_up_save_your_neck.asp.
[13] “National Highway Traffic Safety Administration,” 2009.
[14] “National Highway Traffic Safety Administration,” 2010.
[15] P. W. Kithil, “Capacitive Occupant Sensing,” SAE International, Warrendale, PA, 982292, Sep. 1998.
[16] N. Ziraknejad, P. Lawrence, and D. Romilly, “Quantifying Occupant Head to Head Restraint Relative Position for Use in Injury Mitigation in Rear End Impacts,” SAE International, Warrendale, PA, 2011-01-0277, Apr. 2011.
[17] M. Wollmer, C. Blaschke, T. Schindl, B. Schuller, B. Farber, S. Mayer, and B. Trefflich, “Online Driver Distraction Detection Using Long Short-Term Memory,” IEEE Trans. Intell. Transp. Syst., vol. 12, no. 2, pp. 574–582, Jun. 2011.
[18] B. Metz and H.-P. Krueger, “Measuring Visual Distraction in Driving: The Potential of Head Movement Analysis,” IET Intell. Transp. Syst., vol. 4, no. 4, pp. 289–297, Dec. 2010.
[19] E. Murphy-Chutorian and M. M. Trivedi, “Head Pose Estimation and Augmented Reality Tracking: An Integrated System and Evaluation for Monitoring Driver Awareness,” IEEE Trans. Intell. Transp. Syst., vol. 11, no. 2, pp. 300–311, Jun. 2010.
[20] Q. Ji, Z. Zhu, and P. Lan, “Real-Time Nonintrusive Monitoring and Prediction of Driver Fatigue,” IEEE Trans. Veh. Technol., vol. 53, no. 4, pp. 1052–1068, Jul. 2004.
[21] A. Doshi and M. M. Trivedi, “On the Roles of Eye Gaze and Head Dynamics in Predicting Driver’s Intent to Change Lanes,” IEEE Trans. Intell. Transp. Syst., vol. 10, no. 3, pp. 453–462, Sep. 2009.
[22] L. Li, Y. Chen, and Z. Li, “Yawning Detection for Monitoring Driver Fatigue Based on Two Cameras,” in Proc. 12th International IEEE Conference on Intelligent Transportation Systems (ITSC ’09), 2009, pp. 1–6.
[23] A. Doshi, S. Y. Cheng, and M. M. Trivedi, “A Novel Active Heads-Up Display for Driver Assistance,” IEEE Trans. Syst. Man Cybern. B Cybern., vol. 39, no. 1, pp. 85–93, Feb. 2009.
[24] L. Fletcher, G. Loy, N. Barnes, and A. Zelinsky, “Correlating Driver Gaze with the Road Scene for Driver Assistance Systems,” Robot. Auton. Syst., vol. 52, no. 1, pp. 71–84, Jul. 2005.
[25] G. P. Siegmund, M. B. M. Davis, K. P. B. Quinn, E. Hines, B. S. Myers, S. Ejima, K. Ono, K. B. Kamiji, T. M. Yasuki, and B. A. Winkelstein, “Head-Turned Postures Increase the Risk of Cervical Facet Capsule Injury During Whiplash,” Spine, vol. 33, no. 15, pp. 1643–1649, Jul. 2008.
[26] M. M. Panjabi, P. C. Ivancic, T. G. Maak, Y. Tominaga, and W. Rubin, “Multiplanar Cervical Spine Injury Due to Head-Turned Rear Impact,” Spine, vol. 31, no. 4, p. 420, 2006.
[27] P. Jimenez, J. Nuevo, L. M. Bergasa, and M. A. Sotelo, “Face Tracking and Pose Estimation with Automatic Three-Dimensional Model Construction,” IET Comput. Vis., vol. 3, no. 2, pp. 93–102, 2009.
[28] Y. Ebisawa, “Head Pose Detection with One Camera Based on Pupil and Nostril Detection Technique,” in Proc. IEEE Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems (VECIMS), 2008, pp. 172–177.
[29] M. Miyaji, H. Kawanaka, and K. Oguri, “Driver’s Cognitive Distraction Detection Using Physiological Features by the AdaBoost,” in Proc. 12th International IEEE Conference on Intelligent Transportation Systems (ITSC ’09), 2009, pp. 1–6.
[30] D. Chen, W. Gao, and X. Chen, “A New Approach of Recovering 3-D Shape from Structure-Lighting,” in Proc. 3rd International Conference on Signal Processing, 1996, vol. 2, pp. 839–842.
[31] J. Park, G. N. DeSouza, and A. C. Kak, “Dual-Beam Structured-Light Scanning for 3-D Object Modeling,” in Proc. Third International Conference on 3-D Digital Imaging and Modeling, 2001, pp. 65–72.
[32] Z. Hu, T. Kawamura, and K. Uchimura, “Grayscale Correlation Based 3D Model Fitting for Occupant Head Detection and Tracking,” in Proc. IEEE Intelligent Vehicles Symposium, 2007, pp. 1252–1257.
[33] S. J. Krotosky, S. Y. Cheng, and M. M. Trivedi, “Face Detection and Head Tracking Using Stereo and Thermal Infrared Cameras for ‘Smart’ Airbags: A Comparative Analysis,” in Proc. 7th International IEEE Conference on Intelligent Transportation Systems, 2004, pp. 17–22.
[34] S. J. Krotosky, S. Y. Cheng, and M. M. Trivedi, “Real-Time Stereo-Based Head Detection Using Size, Shape and Disparity Constraints,” in Proc. IEEE Intelligent Vehicles Symposium, 2005, pp. 550–556.
[35] B. Hagebeuker, “A 3D Time of Flight Camera for Object Detection,” 2007.
[36] P. R. Devarakota, M. Castillo-Franco, R. Ginhoux, B. Mirbach, and B. Ottersten, “Occupant Classification Using Range Images,” IEEE Trans. Veh. Technol., vol. 56, no. 4, pp. 1983–1993, 2007.
[37] P. R. Devarakota, M. Castillo-Franco, R. Ginhoux, B. Mirbach, S. Kater, and B. Ottersten, “3-D-Skeleton-Based Head Detection and Tracking Using Range Images,” IEEE Trans. Veh. Technol., vol. 58, no. 8, pp. 4064–4077, Oct. 2009.
[38] D. Demirdjian and C. Varri, “Driver Pose Estimation with 3D Time-of-Flight Sensor,” in Proc. IEEE Workshop on Computational Intelligence in Vehicles and Vehicular Systems (CIVVS ’09), 2009, pp. 16–22.
[39] E. Kollorz, J. Penne, J. Hornegger, and A. Barke, “Gesture Recognition with a Time-of-Flight Camera,” Int. J. Intell. Syst. Technol. Appl., vol. 5, no. 3/4, pp. 334–343, 2008.
[40] M. Bohme, M. Haker, T. Martinetz, and E. Barth, “Head Tracking with Combined Face and Nose Detection,” in Proc. International Symposium on Signals, Circuits and Systems (ISSCS), 2009, pp. 1–4.
[41] R. Bostelman, P. Russo, J. Albus, T. Hong, and R. Madhavan, “Applications of a 3D Range Camera Towards Healthcare Mobility Aids,” in Proc. 2006 IEEE International Conference on Networking, Sensing and Control (ICNSC ’06), 2006, pp. 416–421.
[42] O. Gallo, R. Manduchi, and A. Rafii, “Robust Curb and Ramp Detection for Safe Parking Using the Canesta TOF Camera,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW ’08), 2008, pp. 1–8.
[43] J. R. Smith, “Electric Field Imaging,” Citeseer, 1998.
[44] L. K. Baxter, Capacitive Sensors: Design and Applications. John Wiley & Sons, 1996.
[45] J. L. Novak and I. T. Feddema, “A Capacitance-Based Proximity Sensor for Whole Arm Obstacle Avoidance,” in Proc. 1992 IEEE International Conference on Robotics and Automation, 1992, vol. 2, pp. 1307–1314.
[46] M. H. Barron and P. W. Kithil, “Integral Capacitive Sensor Array,” US Patent 5844486 A, 01-Dec-1998.
[47] D. S. Breed, “System and Method for Moving a Headrest Based on Anticipatory Sensing,” US Patent 6746078 B2, 08-Jun-2004.
[48] T. Chen, W. Yin, X. S. Zhou, D. Comaniciu, and T. S. Huang, “Total Variation Models for Variable Lighting Face Recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 9, pp. 1519–1524, 2006.
[49] H. Tsuboi and H. Saji, “Robust Facial Element Extraction Under Lighting Variation,” in Proc. SICE Annual Conference, 2007, pp. 2064–2067.
[50] F. N. Toth and G. C. M. Meijer, “A Low-Cost, Smart Capacitive Position Sensor,” IEEE Trans. Instrum. Meas., vol. 41, no. 6, pp. 1041–1044, Dec. 1992.
[51] Y. Lu, C. Marschner, L. Eisenmann, and S. Sauer, “The New Generation of BMW Child Seat and Occupant Detection System SBE2,” SAE International, Warrendale, PA, 2000-05-0274, Jun. 2000.
[52] B. George, H. Zangl, T. Bretterklieber, and G. Brasseur, “Seat Occupancy Detection Based on Capacitive Sensing,” IEEE Trans. Instrum. Meas., vol. 58, no. 5, pp. 1487–1494, 2009.
[53] B. George, H. Zangl, T. Bretterklieber, and G. Brasseur, “A Novel Seat Occupancy Detection System Based on Capacitive Sensing,” in Proc. IEEE Instrumentation and Measurement Technology Conference (IMTC 2008), 2008, pp. 1515–1519.
[54] D. Tumpold and A. Satz, “Contactless Seat Occupation Detection System Based on Electric Field Sensing,” in Proc. 35th Annual Conference of IEEE Industrial Electronics (IECON ’09), 2009, pp. 1823–1828.
[55] H. Zangl, T. Bretterklieber, D. Hammerschmidt, and T. Werth, “Seat Occupancy Detection Using Capacitive Sensing Technology,” SAE International, Warrendale, PA, 2008-01-0908, Apr. 2008.
[56] T. Togura, Y. Nakamura, and K. Akashi, “Long-Range Human Body Sensing Modules with Electric Field Sensor,” SAE International, Warrendale, PA, 2008-01-0909, Apr. 2008.
[57] C. A. Pickering, “Human Vehicle Interaction Based on Electric Field Sensing,” in Advanced Microsystems for Automotive Applications 2008, J. Valldorf and W. Gessner, Eds. Berlin, Heidelberg: Springer, 2011, pp. 141–154.
[58] T. Schlegl, T. Bretterklieber, M. Neumayer, and H. Zangl, “Combined Capacitive and Ultrasonic Distance Measurement for Automotive Applications,” IEEE Sens. J., vol. 11, no. 11, pp. 2636–2642, Nov. 2011.
[59] B. George, H. Zangl, T. Bretterklieber, and G. Brasseur, “A Combined Inductive-Capacitive Proximity Sensor and Its Application to Seat Occupancy Sensing,” in Proc. IEEE Instrumentation and Measurement Technology Conference (I2MTC ’09), 2009, pp. 13–17.
[60] E. Murphy-Chutorian and M. M. Trivedi, “Head Pose Estimation in Computer Vision: A Survey,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 4, pp. 607–626, 2009.
[61] Y. Zhu and K. Fujimura, “3D Head Pose Estimation with Optical Flow and Depth Constraints,” in Proc. Fourth International Conference on 3-D Digital Imaging and Modeling (3DIM 2003), 2003, pp. 211–216.
[62] E. Murphy-Chutorian and M. M. Trivedi, “HyHOPE: Hybrid Head Orientation and Position Estimation for Vision-Based Driver Head Tracking,” in Proc. 2008 IEEE Intelligent Vehicles Symposium, 2008, pp. 512–517.
[63] M. La Cascia, S. Sclaroff, and V. Athitsos, “Fast, Reliable Head Tracking under Varying Illumination: An Approach Based on Registration of Texture-Mapped 3D Models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 4, pp. 322–336, 2000.
[64] K. Oka, Y. Sato, Y. Nakanishi, and H. Koike, “Head Pose Estimation System Based on Particle Filtering with Adaptive Diffusion Control,” 2005, pp. 586–589.
[65] L. Morency, A. Rahimi, and T. Darrell, “Adaptive View-Based Appearance Models,” in Proc. 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003, vol. 1, pp. I-803–I-810.
[66] M. D. Breitenstein, D. Kuettel, T. Weise, L. Van Gool, and H. Pfister, “Real-Time Face Pose Estimation from Single Range Images,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008, pp. 1–8.
[67] T. Weise, B. Leibe, and L. Van Gool, “Fast 3D Scanning with Automatic Motion Compensation,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’07), 2007, pp. 1–8.
[68] G. Fanelli, J. Gall, and L. Van Gool, “Real Time Head Pose Estimation with Random Regression Forests,” in Proc. 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 617–624.
[69] G. Fanelli, T. Weise, J. Gall, and L. Van Gool, “Real Time Head Pose Estimation from Consumer Depth Cameras,” in Pattern Recognition, R. Mester and M. Felsberg, Eds. Springer Berlin Heidelberg, 2011, pp. 101–110.
[70] D. Huang, M. Storer, F. De la Torre, and H. Bischof, “Supervised Local Subspace Learning for Continuous Head Pose Estimation,” 2011, pp. 2921–2928.
[71] L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato, “A 3D Facial Expression Database for Facial Behavior Research,” in Proc. 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), 2006, pp. 211–216.
[72] S. J. Ray and J. Teizer, “Automated Head Pose Estimation of Vehicle Operators,” 2011.
[73] J. Smith, T. White, C. Dodge, J. Paradiso, N. Gershenfeld, and D. Allport, “Electric Field Sensing for Graphical Interfaces,” IEEE Comput. Graph. Appl., vol. 18, no. 3, pp. 54–60, 1998.
[74] “IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields, 3 kHz to 300 GHz,” IEEE Std C95.1, 1999 Edition, 1999, doi: 10.1109/IEEESTD.1999.89423. [Accessed: 27-Sep-2010].
[75] L. B. Walker Jr., E. H. Harris, and U. R. Pontius, “Mass, Volume, Center of Mass and Mass Moment of Inertia of Head and Head and Neck of the Human Body,” Mar. 1973.
[76] J. C. Maxwell, A Treatise on Electricity and Magnetism, vol. 1. Clarendon Press, 1873.
[77] H.-K. Lee, S.-I. Chang, and E. Yoon, “Dual-Mode Capacitive Proximity Sensor for Robot Application: Implementation of Tactile and Proximity Sensing Capability on a Single Polymer Platform Using Shared Electrodes,” IEEE Sens. J., vol. 9, no. 12, pp. 1748–1755, Dec. 2009.
[78] R. P. Hubbard and D. G. McLeod, “Definition and Development of a Crash Dummy Head,” SAE International, Warrendale, PA, 741193, Feb. 1974.
[79] N. Lea, “Notes on the Stability of LC Oscillators,” J. Inst. Electr. Eng. - Part III: Radio Commun. Eng., vol. 92, no. 20, pp. 261–274, Dec. 1945.
[80] B. Hardy, “ITS-90 Formulations for Vapor Pressure, Frostpoint Temperature, Dewpoint Temperature, and Enhancement Factors in the Range −100 to +100 °C,” 1998.
[81] L. H. Ford, “The Effect of Humidity on the Calibration of Precision Air Capacitors,” Proc. IEE - Part III: Radio Commun. Eng., vol. 96, no. 39, pp. 13–16, Jan. 1949.
[82] C. T. Zahn, “Association, Adsorption, and Dielectric Constant,” Phys. Rev., vol. 27, no. 3, p. 329, 1926.
[83] T. Möller, H. Kraft, J. Frey, M. Albrecht, and R. Lange, “Robust 3D Measurement with PMD Sensors,” Range Imaging Day, Zürich, 2005.
[84] U. Hahne and M. Alexa, “Exposure Fusion for Time-of-Flight Imaging,” Comput. Graph. Forum, vol. 30, no. 7, pp. 1887–1894, Sep. 2011.
[85] P. Gil, J. Pomares, and F. Torres, “Analysis and Adaptation of Integration Time in PMD Camera for Visual Servoing,” in Proc. 20th International Conference on Pattern Recognition (ICPR), 2010, pp. 311–315.
[86] S. May, B. Werner, H. Surmann, and K. Pervolz, “3D Time-of-Flight Cameras for Mobile Robotics,” in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006, pp. 790–795.
[87] T. Oggier, F. Lustenberger, and N. Blanc, “Miniature 3D TOF Camera for Real-Time Imaging,” Perception and Interactive Technologies, 2006. [Online]. Available: http://dx.doi.org/10.1007/11768029_26. [Accessed: 16-Nov-2009].
[88] P. J. Besl and H. D. McKay, “A Method for Registration of 3-D Shapes,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239–256, Feb. 1992.
[89] C. Yang and G. Medioni, “Object Modelling by Registration of Multiple Range Images,” Image Vis. Comput., vol. 10, no. 3, pp. 145–155, Apr. 1992.
[90] MATLAB Central File Exchange, “Iterative Closest Point.” [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/27804-iterative-closest-point.
[91] S. Malassiotis and M. G. Strintzis, “Robust Real-Time 3D Head Pose Estimation from Range Data,” Pattern Recognit., vol. 38, no. 8, pp. 1153–1165, Aug. 2005.
[92] K. Shoemake, “Animating Rotation with Quaternion Curves,” ACM SIGGRAPH Comput. Graph., vol. 19, no. 3, pp. 245–254, 1985.
[93] N. Ziraknejad, P. D. Lawrence, and D. P. Romilly, “The Effect of Time-of-Flight Camera Integration Time on Vehicle Driver Head Pose Tracking Accuracy,” in Proc. 2012 IEEE International Conference on Vehicular Electronics and Safety (ICVES), 2012, pp. 247–254.
[94] M. Haker, M. Bohme, T. Martinetz, and E. Barth, “Geometric Invariants for Facial Feature Tracking with 3D TOF Cameras,” in Proc. International Symposium on Signals, Circuits and Systems (ISSCS 2007), 2007, vol. 1, pp. 1–4.
[95] D. O. Gorodnichy, “On Importance of Nose for Face Tracking,” in Proc. Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002, pp. 181–186.
[96] L. Yin and A. Basu, “Nose Shape Estimation and Tracking for Model-Based Coding,” in Proc. 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’01), 2001, vol. 3, pp. 1477–1480.
[97] A. Colombo, C. Cusano, and R. Schettini, “3D Face Detection Using Curvature Analysis,” Pattern Recognit., vol. 39, no. 3, pp. 444–455, 2006.
[98] K. I. Chang, K. W. Bowyer, and P. J. Flynn, “Adaptive Rigid Multi-Region Selection for Handling Expression Variation in 3D Face Recognition,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), 2005, pp. 157–157.
[99] P. J. Besl and R. C. Jain, “Invariant Surface Characteristics for 3D Object Recognition in Range Images,” Comput. Vis. Graph. Image Process., vol. 33, no. 1, pp. 33–80, 1986.
[100] E. W. Weisstein, CRC Concise Encyclopedia of Mathematics, 2nd ed. CRC Press, 2010.
[101] L. Sciavicco and B. Siciliano, Modelling and Control of Robot Manipulators, 2nd ed. London; New York: CreateSpace Independent Publishing Platform, 2000.
[102] D. Slieter, M. Gebhard, and P. Levi, “Evaluation of Robust Pose Estimation Methods within Automotive Environments,” in Proc. 2012 IEEE International Conference on Vehicular Electronics and Safety (ICVES), 2012, pp. 241–246.
[103] J. Jo, H. G. Jung, K. R. Park, J. Kim, and S. J. Lee, “Vision-Based Method for Detecting Driver Drowsiness and Distraction in Driver Monitoring System,” Opt. Eng., vol. 50, no. 12, 127202, 2011.
[104] MATLAB Central File Exchange, “Surface Curvature (surfature.m).” [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/11168-surface-curvature/content/surfature.m.
[105] Motion Metrics International Corp., personal communication.
