"Science, Faculty of"@en . "Computer Science, Department of"@en . "DSpace"@en . "UBCV"@en . "Viswanathan, Pooja"@en . "2012-08-16T18:08:26Z"@en . "2012"@en . "Doctor of Philosophy - PhD"@en . "University of British Columbia"@en . "Cognitive impairments prevent older adults from using powered wheelchairs because of safety concerns, thus reducing mobility and resulting in increased dependence on caregivers. An intelligent powered wheelchair system (NOAH) is proposed to help restore mobility, while ensuring safety. Machine vision and learning techniques are described to help prevent collisions with obstacles, and provide reminders and navigation assistance through adaptive audio prompts. The intelligent wheelchair is initially tested in various controlled environments and simulated scenarios. Finally, the system is tested with older adults with mild-to-moderate cognitive impairment through a single-subject research design. Results demonstrate the high diversity of the target population, and highlight the need for customizable assistive technologies that account for the varying capabilities and requirements of the intended users. We show that the collision avoidance module is able to improve safety for all users by lowering the number of frontal collisions. In addition, the wayfinding module assists users in navigating along shorter routes to the destination. Prompting accuracy is found to be quite high during the study. While compliance with correct prompts is high across all users, we notice a distinct difference in the rates of compliance with incorrect prompts. Results show that users who are unsure about the optimal route rely more highly on system prompts for assistance, and thus are able to improve their wayfinding performance by following correct prompts. Improvements in wheelchair position estimation accuracy and joystick usability will help improve user performance and satisfaction. Further user studies will help refine user needs and hopefully allow us to increase mobility and independence of several elderly residents."@en . "https://circle.library.ubc.ca/rest/handle/2429/42950?expand=metadata"@en . "NAVIGATION AND OBSTACLE AVOIDANCE HELP (NOAH) FOR ELDERLY WHEELCHAIR USERS WITH COGNITIVE IMPAIRMENT IN LONG-TERM CARE by Pooja Viswanathan B.Math., The University of Waterloo, 2006 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Computer Science) THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) August 2012 \u00C2\u00A9 Pooja Viswanathan, 2012 ii Abstract Cognitive impairments prevent older adults from using powered wheelchairs because of safety concerns, thus reducing mobility and resulting in increased dependence on caregivers. An intelligent powered wheelchair system (NOAH) is proposed to help restore mobility, while ensuring safety. Machine vision and learning techniques are described to help prevent collisions with obstacles, and provide reminders and navigation assistance through adaptive audio prompts. The intelligent wheelchair is initially tested in various controlled environments and simulated scenarios. Finally, the system is tested with older adults with mild-to-moderate cognitive impairment through a single-subject research design. Results demonstrate the high diversity of the target population, and highlight the need for customizable assistive technologies that account for the varying capabilities and requirements of the intended users. 
We show that the collision avoidance module is able to improve safety for all users by lowering the number of frontal collisions. In addition, the wayfinding module assists users in navigating along shorter routes to the destination. Prompting accuracy is found to be quite high during the study. While compliance with correct prompts is high across all users, we notice a distinct difference in the rates of compliance with incorrect prompts. Results show that users who are unsure about the optimal route rely more heavily on system prompts for assistance, and are thus able to improve their wayfinding performance by following correct prompts. Improvements in wheelchair position estimation accuracy and joystick usability will help improve user performance and satisfaction. Further user studies will help refine our understanding of user needs and, we hope, allow us to increase the mobility and independence of elderly residents.

Preface

The methods described in Section 3.6 and the experiments in Sections 4.2-4.4 were partly published in:

P. Viswanathan, J. Boger, J. Hoey and A. Mihailidis, "A Comparison of Stereovision and Infrared as Sensors for an Anti-Collision Powered Wheelchair for Older Adults with Cognitive Impairments," Proceedings of the 2nd International Conference on Technology and Aging (ICTA), Toronto, 2007. I was responsible for all development, testing and writing of the manuscript. A full-length version of this paper (with the same title) was also published in Technology and Aging - Selected Papers from the 2007 International Conference on Technology and Aging, A. Mihailidis, J. Boger, H. Kautz, L. Normie (Eds.), vol. 21, pp. 165-172, Assistive Technology Research Series, IOS Press, 2008.

P. Viswanathan, J. Boger, J. Hoey, P. Elinas, A. Mihailidis, "The Future of Wheelchairs: Intelligent Collision Avoidance and Navigation Assistance," Geriatrics and Aging, vol. 10, no. 4, pp. 253-256, 2007.

P. Viswanathan, A. Mackworth, J. J. Little, J. Hoey, and A. Mihailidis, "NOAH for wheelchair users with cognitive impairment: Navigation and Obstacle Avoidance Help," Proceedings of the AAAI Fall Symposium on AI in Eldercare: New Solutions to Old Problems, pp. 150-152, Washington, D.C., 2008.

I was mainly responsible for writing the above manuscripts. I was also responsible for integrating and modifying algorithms implemented by J. Hoey and P. Elinas for the collision avoidance module.

P. Viswanathan, D. Meger, T. Southey, J. J. Little, and A. K. Mackworth, "Automated Spatial-Semantic Modeling with Applications to Place Labeling and Informed Search," Proceedings of the Canadian Conference on Computer and Robot Vision, Kelowna, Canada, 2009.

P. Viswanathan, T. Southey, J. J. Little, and A. Mackworth, "Automated place classification using object detection," Proceedings of the Canadian Conference on Computer and Robot Vision, pp. 324-330, Ottawa, Canada, 2010.

P. Viswanathan, T. Southey, J. J. Little, and A. Mackworth, "Place Classification Using Visual Object Categorization and Global Information," Proceedings of the Canadian Conference on Computer and Robot Vision, Halifax, Canada, 2011.

The above papers report on collaborative work completed with D. Meger and T. Southey. The results from these papers are briefly mentioned; however, the details are omitted from this dissertation.

P. Viswanathan, P. Alimi, J. Little, A. Mackworth, A. Mihailidis, "Navigation Assistance for Intelligent Wheelchairs," Proceedings of the International Conference on Technology and Aging (ICTA), Toronto, Canada, 2011. I was mainly responsible for the development and testing. P. Alimi assisted with data collection as well as installation of software and debugging. I wrote the manuscript.

P. Viswanathan, J. Little, A. Mackworth, A. Mihailidis, "Adaptive Navigation Assistance for Visually-Impaired Wheelchair Users," Proceedings of the IEEE/RSJ IROS Workshop on New and Emerging Technologies in Assistive Robotics, San Francisco, California, 2011. I was responsible for development, testing, and writing the manuscript.

Chapter 5 has also been partly published:

P. Viswanathan, J. Little, A. Mackworth, A. Mihailidis, "Navigation and Obstacle Avoidance Help (NOAH) for Older Adults with Cognitive Impairment: A Pilot Study," Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), Dundee, Scotland, 2011. I was responsible for the development and testing of the system, and wrote the manuscript.

P. Viswanathan, R. H. Wang, T. How, G. R. Fernie, J. Little, A. Mackworth, A. Mihailidis, "Driving Forward: The Next Steps in Intelligent Wheelchair Research," presented at Toronto Rehabilitation Institute Research Day, Toronto, Canada, 2011. I was responsible for the write-up of results from my Ph.D. research and for editing the poster. R. Wang and T. How summarized results from their dissertations. T. How created the first draft of the poster. I presented the poster at the conference.

Further details on the contributions of the various authors/collaborators can be found in Appendix D.

The efficacy study described in Chapter 5 was approved by the Health Sciences Research Ethics Board at the University of Toronto (Ethics Protocol Reference #: 23377) and by the Clinical Research Ethics Board at the University of British Columbia (H12-01289).

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
Dedication
Chapter 1: Introduction
1.1 Motivation
1.2 Objectives
1.3 Research Questions
1.4 Hypotheses
1.5 Contributions
1.5.1 System Development
1.5.2 System Testing
1.5.3 Efficacy Study
1.6 Thesis Overview
Chapter 2: Background and Literature Survey
2.1 Long-term Care Setting
2.2 Mobility and Independence
2.3 Prevalence and Implications of Cognitive Impairments
2.4 Safety Requirements and Interventions
2.5 Intelligent Navigation/Prompting Systems for the Elderly and/or Cognitively-Impaired
2.5.1 Intelligent Wheelchairs
2.5.1.1 Anti-collision Wheelchair with Bumper Skirt
2.5.1.2 IWS
2.5.1.3 OMNI
2.5.1.4 Hephaestus Smart Wheelchair
2.5.1.5 CALL Centre Smart Wheelchair
2.5.1.6 PALMA
2.5.1.7 Collaborative Wheelchair Assistant (CWA)
2.5.1.8 Intelligent Wheelchair (University of Zaragoza, Spain)
2.5.1.9 Limitations of Previous Intelligent Wheelchairs
2.5.2 Intelligent Wayfinding Devices
2.5.3 Prompting Devices for ADL
2.6 Design Considerations for NOAH and User Studies
Chapter 3: System Development
3.1 System Objectives
3.2 Key System Functionalities and Criteria
3.3 Overview of Design Process
3.4 System Design
3.5 Hardware
3.5.1 Wheelchairs
3.5.2 Direction Control Logic Module
3.5.3 Camera
3.5.4 Laptop and Speakers
3.6 Software
3.6.1 Collision Detector
3.6.1.1 Collision Avoidance
3.6.1.2 Free Space Detection
3.6.2 Path Planner
3.6.2.1 Mapping
3.6.2.2 Map Annotation
3.6.2.3 Localization
3.6.2.4 Trajectory Generation and Analysis
3.6.3 Prompter
3.6.3.1 User Model Specification
3.6.3.2 Policy Generation
3.6.3.3 Prompt Generation
3.6.4 System Integration
Chapter 4: System Testing
4.1 Introduction
4.2 Collision Avoidance
4.2.1 Experimental Setup
4.2.2 Results
4.2.3 Discussion
4.3 Trajectory Analysis
4.3.1 Experimental Setup
4.3.2 Results
4.3.3 Discussion
4.4 Full System Testing in Simulated Scenarios
4.4.1 Results
4.4.2 Discussion
Chapter 5: Efficacy Study
5.1 Introduction
5.2 Ethics and Informed Consent
5.3 Inclusion Criteria
5.4 Exclusion Criteria
5.5 Participants
5.6 Apparatus and Setup
5.7 Method
5.7.1 Single Subject Research Design
5.7.2 Case Study Design
5.7.3 Procedure
5.7.4 Outcome Measures
5.7.5 Data Collection and Measurement
5.8 Data Analysis
5.8.1 Quantitative Analysis of Subject Performance
5.8.2 Qualitative Analysis of Participant Observations
5.8.3 System Performance Analysis
5.9 Efficacy Study Results
5.9.1 Subject Performance
5.9.1.1 Participant 1
5.9.1.2 Participant 2
5.9.1.3 Participant 3
5.9.1.4 Participant 4
5.9.1.5 Participant 5
5.9.1.6 Participant 6
5.9.2 Custom Questionnaire Results
5.9.3 System Performance
5.9.3.1 Participant 1
5.9.3.2 Participant 2
5.9.3.3 Participant 3
5.9.3.4 Participant 4
5.9.3.5 Participant 5
5.9.3.6 Participant 6
5.9.3.7 User Model Results
5.10 Efficacy Study Discussion
5.10.1 Subject Performance
5.10.1.1 Collision Avoidance
5.10.1.2 Wayfinding
5.10.1.3 Completion Time
5.10.2 Thematic Analysis of Qualitative Data
5.10.2.1 Prior Driving Experience
5.10.2.2 Attentiveness and Mood
5.10.2.3 Perceptions of Safety
5.10.2.4 Social Acceptance
5.10.2.5 User Confidence and Intent
5.10.2.6 Memory and Wayfinding Abilities
5.10.2.7 Wheelchair Speed
5.10.2.8 Decrease of Confusion and Anxiety
5.10.2.9 Need for Powered Mobility and Control
5.10.2.10 Shared Decision-Making
5.10.2.11 Justification of Prompts
5.10.2.12 Independent Operation of the System
5.10.3 System Components Analysis and Refinement
5.10.3.1 System Set-up
5.10.3.2 Hardware
5.10.3.3 Mapping
5.10.3.4 Localization
5.10.3.5 Trajectory Generation and Analysis
5.10.3.6 Collision Detection
5.10.3.7 Prompting
5.10.3.7.1 User Model
5.10.3.7.2 Prompting Response
5.10.4 Limitations of Efficacy Study
Chapter 6: Challenges and Future Work
6.1 Collision Detection
6.2 Path Planning
6.3 Prompting
6.4 User Studies
Chapter 7: Conclusion
Appendices
Appendix A Questionnaires
A.1 NASA-TLX
A.2 QUEST 2.0
A.3 Custom Questionnaire
Appendix B Data Collection Form
Appendix C NASA-TLX Raw Data
Appendix D Research Process
Appendix E Information and Consent Form

List of Tables

Table 2.1 Sensor comparison.
Table 4.1 Performance of the Collision Avoidance module for each test condition using the infrared (IR) and stereovision (SV) sensors. Trials per condition = 20. [PBHM08]
Table 4.2 Mean stopping distances (with standard deviation) for the infrared and stereovision sensors when the wheelchair was moving. The stopping distance threshold was set to 700 mm, velocity = 0.16 m/s. Trials per condition = 20. [PBHM08]
Table 4.3 Mean detection distances for when the object and wheelchair were stationary for the infrared (IR) and stereovision (SV) sensors. Trials per condition = 10. [PBHM08]
Table 4.4 Free space detection performance for the infrared (IR) and stereovision (SV) sensors. Trials per condition = 20. [PBHM08]
Table 4.5 Trajectory analysis results. [VALMM11]
Table 5.1 Participant information.
Table 5.2 Collision avoidance performance. Statistically significant results are bolded.
Table 5.3 Wayfinding performance. Statistically significant results are bolded.
Table 5.4 Completion times. Statistically significant results are bolded.

List of Figures

Figure 3.1 NOAH wheelchair system prototype. The system is made up of a commercially available powered wheelchair equipped with a stereovision camera (a). It also consists of a custom-made directional control logic module (DCLM) [MEBH07] and a laptop placed under the seat (b).
Figure 3.2 Architecture of the intelligent wheelchair system and its modules. Offline processes are indicated using dotted lines.
Figure 3.3 Pride Mobility wheelchair with Bumblebee camera (a), laptop and newer DCLM (b).
Figure 3.4 The Collision Detector module in NOAH.
Figure 3.5 Images of a person with a cane captured using the stereovision camera: (a) original image, (b) depth image (brighter pixels correspond to closer objects), and (c) occupancy grid (the solid grey region denotes the area outside the camera's field of view).
Figure 3.6 The Path Planner module in NOAH. Offline processes are indicated using dotted lines.
Figure 3.7 The Prompter module in NOAH. Offline processes are indicated using dotted lines.
Figure 3.8 Diagram of the user (POMDP) model used for prompting.
Figure 4.1 Collision avoidance test conditions. Wall, walker, cane and standing person were positioned …
Figure 4.2 Free space detection test conditions. Objects were placed to the left (a) and right (b) of the Target Location.
Figure 4.3 Original images of a room with windows (a). Occupancy grids produced by stereovision (b) and infrared (c) sensors with blinds closed and opened. The noise generated by the IR sensor is circled. [PBHM08]
Figure 4.4 Map of laboratory created by the mapping component. Locations chosen as start and end positions are numbered 1-4. Blue arrows denote wheelchair position and heading as estimated by the Localization component while driving along route "1-3". [VALMM11]
Figure 4.5 Example of system actions (prompts and stops) performed to assist the user. Arrows indicate system estimates of wheelchair position and orientation. Note that duplicate actions are omitted for visual display purposes. [VLMM11]
Figure 5.1 Maze (a) and obstacles (b) constructed using foam boards.
Figure 5.2 Example of system prompts for Participant 5 during phase B.
Figure 5.3 Total frontal collisions for participant 1. Without NOAH (μ=8.0; σ=2.62), with NOAH (μ=1.38; σ=0.92).
Figure 5.4 Total length of route taken by participant 1. Without NOAH (μ=18.21 m; σ=1.88 m), with NOAH (μ=11.31 m; σ=0.0 m).
Figure 5.5 Total time to reach destination for participant 1. Without NOAH (μ=1125.88 s; σ=216.49 s), with NOAH (μ=702.38 s; σ=71.48 s).
Figure 5.6 NASA-TLX average ratings for participant 1. Possible ratings were low, medium or high demand.
Figure 5.7 Total frontal collisions for participant 2. Without NOAH (μ=1.13; σ=1.89), with NOAH (μ=0.0; σ=0.0).
Figure 5.8 Total length of route taken by participant 2. Without NOAH (μ=11.31 m; σ=0.0 m), with NOAH (μ=11.31 m; σ=0.0 m).
Figure 5.9 Total time to reach destination for participant 2. Without NOAH (μ=434.75 s; σ=199.04 s), with NOAH (μ=327.38 s; σ=130.22 s).
Figure 5.10 Total frontal collisions for participant 3. With NOAH (μ=0.0; σ=0.0), without NOAH (μ=0.13; σ=0.35).
Figure 5.11 Total length of route taken by participant 3. With NOAH (μ=11.31 m; σ=0.0 m), without NOAH (μ=13.92 m; σ=2.31 m).
Figure 5.12 Total time to reach destination for participant 3. With NOAH (μ=381.0 s; σ=69.90 s), without NOAH (μ=252.13 s; σ=34.58 s).
Figure 5.13 NASA-TLX average ratings for participant 3. Possible ratings were 0-20, where 0 indicates low demand and 20 indicates high demand.
Figure 5.14 Total frontal collisions for participant 4. With NOAH (μ=0.13; σ=0.35), without NOAH (μ=0.25; σ=0.46).
Figure 5.15 Total length of route taken by participant 4. With NOAH (μ=11.31 m; σ=0.0 m), without NOAH (μ=11.68 m; σ=1.06 m).
Figure 5.16 Total time to reach destination for participant 4. With NOAH (μ=252.25 s; σ=94.24 s), without NOAH (μ=155.63 s; σ=43.55 s).
Figure 5.17 NASA-TLX average ratings for participant 4. Possible ratings were 0-20, where 0 indicates low demand and 20 indicates high demand.
Figure 5.18 Total frontal collisions for participant 5. Without NOAH (μ=0.5; σ=0.93), with NOAH (μ=0.13; σ=0.35).
Figure 5.19 Total length of route taken by participant 5. Without NOAH (μ=18.91 m; σ=4.27 m), with NOAH (μ=11.94 m; σ=1.17 m).
Figure 5.20 Total time to reach destination for participant 5. Without NOAH (μ=422.75 s; σ=115.46 s), with NOAH (μ=350.75 s; σ=187.15 s).
Figure 5.21 NASA-TLX average ratings for participant 5. Possible ratings were 0-20, where 0 indicates low demand and 20 indicates high demand.
Figure 5.22 Total frontal collisions for participant 6. With NOAH (μ=0.25; σ=0.46), without NOAH (μ=3.13; σ=2.90).
Figure 5.23 Total length of route taken by participant 6. With NOAH (μ=11.31 m; σ=0.0 m), without NOAH (μ=11.31 m; σ=0.0 m).
Figure 5.24 Total time to reach destination for participant 6. With NOAH (μ=513.38 s; σ=126.62 s), without NOAH (μ=252.13 s; σ=74.13 s).
Figure 5.25 NASA-TLX average ratings for participant 6. Possible ratings were 0-20, where 0 indicates low demand and 20 indicates high demand.
Figure 5.26 Prompts issued to participant 1.
Figure 5.27 Responses to correct prompts by participant 1.
Figure 5.28 Responses to incorrect prompts by participant 1.
Figure 5.29 Prompts issued to participant 2.
Figure 5.30 Responses to correct prompts by participant 2.
Figure 5.31 Responses to incorrect prompts by participant 2.
Figure 5.32 Prompts issued to participant 3.
Figure 5.33 Responses to correct prompts by participant 3.
Figure 5.34 Responses to incorrect prompts by participant 3.
Figure 5.35 Prompts issued to participant 4.
Figure 5.36 Responses to correct prompts by participant 4.
Figure 5.37 Responses to incorrect prompts by participant 4.
Figure 5.38 Prompts issued to participant 5.
Figure 5.39 Responses to correct prompts by participant 5.
Figure 5.40 Responses to incorrect prompts by participant 5.
Figure 5.41 Prompts issued to participant 6.
Figure 5.42 Responses to correct prompts by participant 6.
Figure 5.43 Responses to incorrect prompts by participant 6.

List of Abbreviations

2D - Two Dimensional
3D - Three Dimensional
ADL - Activities of Daily Living
AI - Artificial Intelligence
ASCII - American Standard Code for Information Interchange
CALL - Communication Aids for Language and Learning
CMOS - Complementary Metal-Oxide Semiconductor
COACH - Cognitive Orthosis for Assisting with aCtivities in the Home
CPS - Cognitive Performance Scale
CWA - Collaborative Wheelchair Assistant
DB9 - D-subminiature 9
DC - Direct Current
DCLM - Directional Control Logic Module
FMM - Fast Marching Method
GB - Gigabyte
GHz - Gigahertz
GPS - Global Positioning System
GUI - Graphical User Interface
IR - Infrared
IWS - Intelligent Wheelchair System
LTC - Long-Term Care
MDP - Markov Decision Process
MMSE - Mini Mental State Examination
NASA-TLX - National Aeronautics and Space Administration Task Load Index
NOAH - Navigation and Obstacle Avoidance Help
OMNI - Office Wheelchair High Maneuverability and Navigational Intelligence for People with Severe Handicap
PALMA - Assistive Platform for Alternate Mobility
PGM - Portable Graymap
POMDP - Partially Observable Markov Decision Process
QUEST - Quebec User Evaluation of Satisfaction with assistive Technology
RFID - Radio-Frequency Identification
ROS - Robot Operating System
RS-232 - Recommended Standard 232
SDM - Substitute Decision Maker
SLAM - Simultaneous Localization and Mapping
SSRD - Single-Subject Research Design
TBI - Traumatic Brain Injury
USB - Universal Serial Bus

Acknowledgements

I would like to thank my supervisors, Drs. Alan Mackworth and James Little, for all their advice and support throughout my Ph.D. I would also like to thank the other members of my supervisory committee, Drs. Ian Mitchell and Cristina Conati, for their valuable suggestions, and Dr. Alex Mihailidis for introducing me to the world of assistive technology! In addition, I would like to acknowledge Drs. Bill Miller and Laura Clarke, as well as Jen Boger, for their enthusiasm and encouragement. I would like to thank Ken Alton for providing me with his path planning code and Parnian Alimi for her assistance in software installation and integration. I would like to thank all of my lab members, especially Tristram Southey, for all the interesting discussions and collaborations that made my graduate experience truly exciting. I am thankful to Amanda Calvin for all her effort and time during the clinical trials, as well as Tuck Voon How, Rosalie Wang and Tammy Craig for their helpful suggestions and timely aid in data collection. I would like to thank Kath Imhiran for processing all my travel claims and expenses in a timely fashion.
I am grateful to Bruce Dow for all his technical assistance. I am also thankful to the staff at The Harold and Grace Baker Centre for their cooperation and support. Last but not least, I would like to thank my family for their unconditional support.

The research in this thesis was supported by NSERC Postgraduate Scholarships (CGS M and CGS D), by NSERC Discovery Grants to James Little and Alan Mackworth, by CanWheel (the Canadian Institutes of Health Research (CIHR) Emerging Team in Wheeled Mobility for Older Adults), by a UBC Four Year Fellowship (4YF), by Precarn scholarships and by Google Scholar awards. In addition, I received travel support from ICICS, FOGS, and ACM-W.

Dedication

I would like to dedicate this thesis to my family members, who are probably more interested in seeing a 'Mrs.' rather than a 'Dr.' title right now ☺. On a more serious note, I would like to dedicate this work to the six participants in the efficacy study, without whom this research would not have been possible. They have touched my life in more ways than I ever imagined. I hope to devote the rest of my academic career to improving the quality of healthcare provided to cognitively-impaired older adults, all over the world. Here's to a lifetime of learning!

Chapter 1: Introduction

1.1 Motivation

As the aging population continues to grow, there is a greater need for technology that ensures continued mobility and independence, while being accessible and adaptive to user needs. Older adults commonly use powered wheelchairs for enhanced mobility, since they lack the strength to propel themselves in manual wheelchairs. However, operation of these devices requires significant cognitive capacity. Among the 1.5 million nursing home residents in America in 1995, 60-80% were diagnosed with dementia, primarily Alzheimer's disease [Mar00]. These residents lack the cognitive abilities to safely maneuver powered wheelchairs, and are thus not permitted to use them. In addition, feelings of disorientation caused by dementia might be further increased by loss of vision and visuoperceptual difficulties related to old age and/or certain types of dementia, making independent mobility difficult and, in some cases, impossible. Cognitive impairments in the elderly population thus lead to reduced independence and mobility and, in turn, to depression, social isolation, and an increased dependence on caregivers.

We propose an intelligent wheelchair that provides navigation and obstacle avoidance help (NOAH) to enhance mobility and help improve the quality of life of older adults with cognitive impairment in long-term care facilities, while simultaneously reducing the burden on caregivers. Our system uses vision-based anti-collision technology to prevent collisions with obstacles, overcoming challenges presented by the active sensors used in existing intelligent wheelchairs [Sim05]. In addition, audio prompts that suit the user's needs and capabilities (estimated by a user model) are provided to ensure improved safety and timeliness during wheelchair navigation. Specifically, the wheelchair can issue reminders and assist in navigation to the destination, while accounting for the user's capabilities as well as obstacles in his/her path. The use of audio prompts also ensures accessibility to cognitively-impaired users with visual impairment.
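To make the idea of capability-adaptive prompting more concrete, the sketch below shows one minimal way such adaptation could work. It is illustrative only and is not the POMDP-based Prompter implemented in NOAH (described in Chapter 3); the names UserBelief, choose_prompt and heading_error_deg, the thresholds, and the prompt wording are all assumptions made for the example.

```python
# Minimal illustrative sketch only -- NOT the POMDP Prompter implemented in NOAH.
# It assumes a hypothetical scalar belief about the user's wayfinding ability,
# updated from observed behaviour and used to pick how explicit a prompt should be.
from typing import Optional


class UserBelief:
    """Tracks an estimate of how likely the user is to find the route unaided."""

    def __init__(self, p_capable: float = 0.5) -> None:
        self.p_capable = p_capable  # uninformative prior

    def update(self, deviated: bool, complied_with_prompt: bool) -> None:
        # Crude update: deviating from the planned route lowers the estimate,
        # driving correctly (or complying with prompts) raises it slightly.
        if deviated and not complied_with_prompt:
            self.p_capable = max(0.05, self.p_capable - 0.10)
        else:
            self.p_capable = min(0.95, self.p_capable + 0.05)


def choose_prompt(belief: UserBelief, heading_error_deg: float) -> Optional[str]:
    """Return an audio prompt string, or None if the user is on course."""
    if heading_error_deg < 15.0:
        return None  # on course: issue no prompt
    if belief.p_capable > 0.7:
        return "You may want to turn left up ahead."  # gentle reminder
    return "Please turn left now."  # explicit instruction for users needing more help


if __name__ == "__main__":
    belief = UserBelief()
    belief.update(deviated=True, complied_with_prompt=False)
    print(choose_prompt(belief, heading_error_deg=40.0))
```

In NOAH itself, prompt selection is driven by a policy generated from the POMDP user model (Sections 3.6.3.1 and 3.6.3.2) rather than by hand-tuned thresholds of this kind.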
The efficacy study presented in this dissertation is, to our knowledge, the first study that tests a collision avoidance and wayfinding system on a powered wheelchair with cognitively-impaired older adults. An intelligent wheelchair was recently developed and tested with older adults with dementia by collaborators at the University of Toronto [HWM11]. This system implements a vision-based collision avoidance module that modifies our design described in [VBHM08] and in this dissertation. Specifically, their research focused on creating an embedded system with improved computation speed, and provided comparisons to our initial prototype with respect to accuracy and runtime. It should be noted that audio prompts in their system only provided free-space information and ignored the complex task of route planning to specific locations, which is an additional functionality in our system.

1.2 Objectives

The objectives of our research are to:
1) Develop an intelligent wheelchair system for cognitively-impaired older adults in long-term care facilities that
• improves safety by reducing the number of frontal collisions;
• ensures timely and effective navigation by providing audio prompts;
• automatically determines the frequency and type of prompts based on the user's navigation ability (which might be affected by his/her cognitive and visual abilities).
2) Test components of the system in different controlled scenarios to evaluate each component's performance, and test the entire system to identify potential weaknesses.
3) Test the entire system with cognitively-impaired older adults, through an efficacy study, to evaluate system performance and usability.

1.3 Research Questions

The primary questions we aim to answer in our research are:
1) How does NOAH impact safety during navigation with a powered wheelchair by the user, through vision-based collision detection?
2) How does NOAH impact the users' ability to navigate to a specified location, with respect to time and distance travelled, through adaptive audio prompts?
3) How well does NOAH meet users' needs in terms of satisfaction and usability?

In addition, we ask secondary questions to identify weaknesses and future improvements to the system:
4) What types of errors occur while detecting and avoiding collisions?
5) What types of errors occur while providing navigation prompts?
6) What future improvements need to be made to increase system performance?
7) What future improvements need to be made to increase user satisfaction?

1.4 Hypotheses

Our hypotheses for the primary research questions are as follows:
1) NOAH will increase safety during navigation with a powered wheelchair by reducing the number of frontal collisions.
2) NOAH will allow users to navigate to desired destinations successfully. It will reduce the distance travelled to reach the goal, thus possibly reducing driving times. However, NOAH might increase driving times in the presence of obstacles, due to the stopping action of the wheelchair.
3) NOAH will increase users' perceived levels of safety and decrease anxiety due to the collision avoidance feature. It will also decrease mental/physical demands and increase perceived levels of performance through the navigation assistance feature.
1.5 Contributions

We describe the contributions of our research by dividing them into three main areas: development, testing, and the efficacy study.

1.5.1 System Development

Our research involves designing, modifying, and integrating AI and robotics methods to develop a collision avoidance and navigation assistance wheelchair system that adapts automatically to cognitively-impaired older adults in an indoor environment. A new vision-based collision avoidance module is developed. Existing mapping, localization and path planning methods are modified and integrated to determine the optimal route to desired locations. Trajectory generation and analysis are performed and used to determine the wheelchair heading and position relative to the optimal route. A new probabilistic user model is also developed in order to provide adaptive audio prompts that aid in navigation.

1.5.2 System Testing

We conduct controlled testing of NOAH and its individual components as follows:
- Collision avoidance: in a lab environment with specific objects (Section 4.2)
- Path planning: in a realistic environment (Section 4.3)
- Full system testing: in a realistic environment in simulated scenarios (Section 4.4)

1.5.3 Efficacy Study

Our research includes an efficacy study (Chapter 5) to determine the effectiveness of the system with the target user population. Qualitative and quantitative analyses are conducted to evaluate the effectiveness, acceptance and usefulness of this technology. This study provides key insights into the needs of the intended users, and highlights areas for further research in order to enable independent and safe navigation of wheelchairs by cognitively-impaired older adults.

1.6 Thesis Overview

The rest of the dissertation is organized as follows. Chapter 2 provides a literature survey of work completed in the field. Chapter 3 provides details on the implementation of the system, and Chapter 4 reports on the experiments conducted to test the system under controlled and simulated conditions. Chapter 5 discusses an efficacy study with the target user population. We discuss future work and challenges in Chapter 6. Finally, we highlight the main conclusions of the research in Chapter 7.

Chapter 2: Background and Literature Survey

2.1 Long-term Care Setting

In Canada, long-term care facilities typically offer 24-hour supervision, health services, and personal care [Hea11]. Provincial and territorial legislation governs long-term facility-based care, and the Canada Health Act does not provide insurance coverage for such care. The terms used to refer to these facilities, such as nursing home, personal care facility, and residential continuing care facility, vary by jurisdiction. Differences are also found in the specific services and care levels, financial coverage, as well as facility management. The research in this dissertation was carried out in a long-term institutional care setting in Ontario, Canada, where homes overseen by the Ontario Ministry of Health and Long-Term Care provide 24-hour nursing, care and support [Ont11]. It should be noted that the terminology used to refer to different types of long-term care facilities tends to vary by country. In the U.S., nursing homes are generally paid for by residents themselves; however, Medicare covers some skilled nursing support if medically required [Med11]. The reader is referred to [Sen12] for other terms used in the United States.
2.2 Mobility and Independence

As the older adult population in Canada continues to grow, there is an increased need for improved health care and new assistive technologies to ensure continued independence and a high quality of life. Independent mobility has been identified as a key component of physical well-being and happiness, enabling people to interact with their surroundings [BBCK02]. Unfortunately, the mobility and independence of many older adults are often reduced due to physical disabilities. Reduced mobility often results in decreased opportunities to explore and socialize, leading to social isolation and depression. For example, one study reported that among non-institutionalized U.S. adults, 31% of people with major mobility difficulties were frequently depressed or anxious, versus only 4% of those without mobility difficulties [IMD+01]. Loss of mobility also results in increased dependence on caregivers in order to fulfill daily tasks. A National Population Health Survey was conducted by Statistics Canada in 1995 with more than 2000 residents from 232 long-term care facilities. According to the survey results, "half the residents spent most of the day in a bed or chair" [TM95].

Wheelchairs have been found to positively enhance the mobility of many long-term care (LTC) residents [PGK86]. However, independent propulsion of manual wheelchairs within the facility is often an unmet goal for wheelchair users [FG03]. Wheelchairs are commonly used by staff for seating and transporting residents; however, only 4-14% of residents use wheelchairs to increase self-mobility [BL99]. Despite the above evidence of inadequate wheelchair self-mobility, there is often minimal recognition of these issues by caregiving staff [PGK86]. Thus, steps need to be taken to address these concerns and to develop methods to increase the independent wheelchair mobility of long-term care residents.

2.3 Prevalence and Implications of Cognitive Impairments

Powered wheelchairs can enable independent mobility and are typically prescribed by clinicians to residents who lack the strength to propel themselves in manual wheelchairs. However, safe operation of powered wheelchairs requires a significant level of cognitive function, including decision-making, memory, judgment and self-awareness [Bri03]. It is estimated that 60-80% of residents in long-term care facilities have dementia [PSS+02]. Impaired attention, agitation, and poor impulse control [BR97, MMS96] are known symptoms related to Alzheimer's disease (the most common form of dementia in older adults) and severe traumatic brain injury (TBI). Elderly residents with cognitive impairment may thus be excluded from powered wheelchair use due to safety risks [FLS00, Har04], making them highly dependent on caregivers to porter them around. Other symptoms of dementia include loss of memory and disorientation, which also cause difficulties in remembering how to navigate to specific locations, thus resulting in wandering behaviors [SC08]. Visuoperceptual difficulties have been reported for several types of dementia including Alzheimer's disease, dementia related to Parkinson's disease, Lewy body dementia and vascular dementia (in cases where stroke-type damage is on or near the visual pathways in the brain) [MSF+00, MMW+04, RN98, RKJ94].
These visuoperceptual difficulties present further challenges in seeing and avoiding obstacles, perceiving depth and motion, and recognizing visual cues in the environment, thus making independent navigation challenging and, in some cases, impossible.

2.4 Safety Requirements and Interventions

The safety of elderly residents in the communal living environment is of utmost concern due to their high vulnerability to falls. It has been reported that 73-80% of older adults trip or fall after being hit by a wheelchair [CCD+01]. Even a minor collision can lead to a fall, and 5-10% of these falls result in a fracture, particularly a hip fracture, in the older adult population [NCK+89]. Hip fractures have serious consequences for this population, usually leading to a severe reduction in mobility and up to a 40% mortality rate within 6 months as a result of complications [JSS96]. Clinicians, who have the responsibility of prescribing powered wheelchair use, need to consider the trade-off between the residents' need for independent mobility and the safety of drivers and others in the environment [MMB+06].

Perceptions of powered mobility safety in three long-term care facilities, two of which had a predominantly older adult population, are explored in [MMB+05, MMB+06]. The authors found that some residents "were able to drive safely despite dementia, poor motor function, and/or legal blindness" [MMB+06]. They suggested that eligibility for powered wheelchair use should be determined based on driving ability rather than the drivers' clinical diagnosis. In addition, the study results suggested that clinicians excluded residents who were unable to avoid collisions and to learn from their experience. Property damage was also stated to be a major concern. However, the authors pointed out that excluding these residents overlooks the alternative solution of modifying the wheelchair and the environment to ensure safe and independent powered mobility.
In addition to safety, wheelchairs that provide activity reminders and wayfinding assistance would also improve user independence and social engagement. Due to the high variability in abilities of the target user population, a system that is able to automatically determine the type of assistance required would potentially improve user satisfaction.

2.5 Intelligent Navigation/Prompting Systems for the Elderly and/or Cognitively-Impaired

Several intelligent devices have been developed for the elderly and/or cognitively impaired. In this section, we specifically review related work in three areas that are of relevance to the design of NOAH: intelligent wheelchairs, intelligent wayfinding devices, and prompting devices to assist in activities of daily living (ADL). We further limit our review to devices that have been designed for and tested with the elderly and/or those with cognitive impairments. We review the above areas separately, since to our knowledge, NOAH is the first intelligent wheelchair that encompasses all three areas and has been tested with cognitively-impaired older adults.

2.5.1 Intelligent Wheelchairs

Several intelligent wheelchairs have been developed recently. A literature review by Richard Simpson [Sim05] discusses work in the field until 2005, and compares various intelligent wheelchairs with respect to their functionality, sensing devices, level of autonomy, user interface, and form factor. The wheelchairs described in the above review are capable of various functionalities including collision avoidance, autonomous navigation to locations, wall following and virtual path following, using various active sensors (acoustic, sonar, infrared, laser, etc.). In addition to common joystick interfaces, some wheelchairs have also used brain-computer and voice recognition interfaces [JHLY07, HAB+10]. The above wheelchairs have been developed for users with various disabilities, and thus use different implementation approaches. The authors in [SLC08] suggest that intelligent wheelchairs capable of collision avoidance and path planning would greatly benefit wheelchair users with cognitive and visual impairment. Only a small proportion of existing intelligent wheelchairs, however, has been tested with cognitively-impaired individuals. Thus, usability issues faced by our target population are poorly understood and documented. Before discussing various intelligent wheelchairs in detail, we provide a high-level summary of the most common sensors used in these wheelchairs in Table 2.1, highlighting the advantages and disadvantages of each as noted by collaborators and other researchers in [Sim05, DF05, HF05, LSMN02, WGHF11].

Table 2.1 Sensor comparison.

Bump
- Advantages: low cost; low power.
- Disadvantages: can only detect obstacles that make contact with the sensor; increasing coverage (e.g. by using bumper skirt-like sensors) can lead to increased form factor and bulkiness; the force required to stop the wheelchair must be minimal to avoid harm to drivers and other residents.

Ultrasonic
- Advantages: low power; low cost.
- Disadvantages: sensitive to obstacle properties (such as sound-absorbing material); cannot detect very small/thin or concave obstacles; cross-talk issues with other sounds or multiple ultrasonic sensors.

Laser Range Finder
- Advantages: high-precision depth estimation.
- Disadvantages: high cost (~$1000-$5000); possible eye-safety issues; only detects obstacles on a plane parallel to the floor; high power.

Infrared (IR)
- Advantages: low power.
- Disadvantages: false positives in natural light (due to IR interference); sensitive to flooring materials; high power.

Stereovision Camera
- Advantages: low power; medium cost ($500-$2000); wide horizontal and vertical viewing angle; can be used for high-level scene understanding (e.g. object and location recognition).
- Disadvantages: cannot detect textureless objects; cannot perform in poorly lit conditions; challenges posed by reflective and transparent surfaces.

A specific comparison between time-of-flight infrared laser range and stereovision sensors can be found in section 4.2. Based on research carried out by collaborators and the performance of several of the above sensors in realistic environments, the stereovision sensor was chosen. The main reasons for this choice were the camera's relatively low power requirements, its ability to perform well in natural environments, and the decreasing cost of cameras. In addition, the usefulness of stereovision cameras for localization and high-level scene understanding makes them a good choice for the wayfinding task. We expect that future work will require integration with additional cheap sensors to overcome challenges faced by stereovision sensors (such as low lighting). Next, a discussion of two intelligent wheelchairs that have been developed for and tested with the target user population (by collaborators at the University of Toronto) is provided, along with the key lessons learned from these studies. Other intelligent wheelchairs that have been developed for younger individuals with cognitive impairment are also explored. Since these wheelchairs were tested with younger users, reports on system usability by the test subjects might not be useful for the intended target population. These systems are instead discussed with respect to choice of sensors, implementation methods and reported system performance. Brief comments on the (in)appropriateness of the system design for our target population are also provided.

2.5.1.1 Anti-collision Wheelchair with Bumper Skirt

The anti-collision wheelchair consists of a bumper skirt attached around the base of the Nimble Rocket™ powered wheelchair (Nimble Inc., Toronto, Ontario). The bumper skirt stops the wheelchair automatically upon contact with an obstacle. Joystick movement towards the obstacle is blocked, thus only allowing users to steer away from the obstacle. Indicator lights are also positioned in front of the joystick to display possible directions of motion. The anti-collision wheelchair was tested with six nursing home residents with dementia [WGHF11]. The authors measured and compared the distances traveled by residents in manual as well as anti-collision wheelchairs. In addition, safety observations were tracked during device use. Two residents were found to be capable of using the device.
Improved mobility and well-being were reported for one of these residents, while the other resident thought the device was "bulky and unhelpful" [WGHF11]. One of the recruited residents withdrew because of usability and aesthetic issues. The device was unable to make up for the inadequate driving skills of two other residents. In addition, the bumper skirt could not prevent all collisions during the study due to its lack of complete coverage. The bulky appearance of the wheelchair with the addition of the bumper skirt was highlighted as a key issue. Thus, sensors that are more compact and require little to no modification of regular powered wheelchairs would potentially increase user satisfaction. Wheelchair speed was also found to be a concern for one participant, suggesting the need for systems that are able to perform effectively at higher wheelchair speeds. The authors also suggested that vision-based proximity sensors might improve safety by increasing the limited coverage provided by the bumper skirt. Finally, the need for ongoing prompting from the researcher and the inability of participants to understand the purpose of the indicator lights led the authors to suggest more advanced control systems, including mixed-initiative or semi-autonomous driving modes, and automated verbal prompting strategies to encourage mobility. The authors suggested that a powered wheelchair capable of preventing collisions "could improve the well-being and mobility of some nursing home residents with complex physical and cognitive impairments" [Wan11].

2.5.1.2 IWS

The Intelligent Wheelchair System (IWS) from the University of Toronto [HWM11] is designed to be an add-on component to commercially available wheelchairs. A stereovision camera is used to detect obstacles, and a semi-autonomous control strategy is implemented as follows. When an imminent collision is detected, the wheelchair is stopped and the user is not permitted to move in the direction of the obstacle(s). A verbal prompt indicating the area with the most free space is provided to the user, thus aiding the user's safe control of the wheelchair. Note that this system only provides navigation assistance in order to avoid collisions, and does not provide any high-level path planning assistance. The methods used in this system built on our previous work [VBHM08], and focused on building an embedded system with improved computational speed. Additionally, the system was recently tested with three cognitively-impaired older adults through two phases (baseline and intervention) [HWM11]. Each phase consisted of at most five trials, and the ordering of phases was randomized. Participants were required to navigate through an obstacle course. Results showed that the system was able to increase safety by decreasing the number of frontal collisions, although reliance on the anti-collision system was found to vary among participants. In addition, adherence to verbal prompts was found to be low. A possible reason was suggested to be the high error rate in prompting. A higher number of trials was recommended in future studies, in order to determine the statistical significance of results.

2.5.1.3 OMNI

OMNI (Office Wheelchair with High Maneuverability and Navigational Intelligence for People with Severe Handicap) is designed for people with severe or multiple physical disabilities [HBJ99].
It consists of an omnidirectional base with an elevating seat system, and uses ultrasonic, infrared and bumper sensors for obstacle detection. Various types of functionality are available through a human-machine interface, including collision avoidance, movements in response to the environment (e.g. wall following), recording and play-back of complex maneuvers, and landmark-guided movements. The wheelchair was tested with several users; however, details regarding the testing protocol are unavailable. The system was reported to be configurable and flexible. The large number of sensors required in this system (infrared, ultrasound, bump, and encoders) reduces cost-effectiveness. In addition, the mode needs to be chosen by the user, which is undesirable, since cognitively-impaired users might not be able to switch modes or remember how to do so. Mode selection would need to be carried out by the caregiver. Since testing methods were not disclosed, system usability by cognitively-impaired older adults is unknown.

2.5.1.4 Hephaestus Smart Wheelchair

The Hephaestus system is a module that can be added on to commercial powered wheelchairs [SPB02]. It acts as an aid that increases safe and independent mobility, and as a training tool that allows users to safely acquire the ability to operate a powered wheelchair independently. Ultrasound and bump sensors are used for collision avoidance. When the user approaches an obstacle, the system slows down and eventually stops the wheelchair in front of the obstacle. The system also attempts to drive around obstacles in the wheelchair's path by modifying the user's joystick input. This system was evaluated with four able-bodied and four disabled users (three with cerebral palsy and one with post-polio syndrome) [SPB99]. Participants were required to complete three distinct tasks in baseline (without navigation assistance) and intervention (with navigation assistance) phases, with four consecutive trials per phase. Results showed that able-bodied participants preferred the baseline condition, and found the intelligent system "intrusive rather than helpful" [SPB99]. In contrast, disabled participants preferred the intervention condition since the system increased their perceived level of security. However, the system did not directly enhance the level of user performance. The experiments reported in this study suggest that a collision avoidance system could improve the perceived level of safety for users with cognitive impairment (although the level of impairment and age of users were not reported). However, due to the simplicity and short duration of the task, no significant differences were found in the performance of users with and without the system. The authors suggested that future studies should involve more complex and realistic tasks. In addition, since two subjects were found to demonstrate improved performance during the trials, the authors suggested longer training periods prior to actual trials to achieve a stable baseline.

2.5.1.5 CALL Centre Smart Wheelchair

The CALL (Communication Aids for Language and Learning) Centre smart wheelchair is a commercial augmentative mobility aid for children with severe and multiple disabilities [OWNC00]. Upon collision with an object, bumpers are used to stop the wheelchair. The system then either steers away from the obstacle on its own, or allows the user to do so.
The smart wheelchair thus provides the user with varying levels of control, and its design is individualized to each child to meet his/her needs. The system is also capable of line following and of helping the user navigate between rooms and through doorways. The system can confirm instructions and report events back to the user using a speech synthesizer. The Centre for Cerebral Palsy in the UK conducted a study to evaluate the impact of the smart wheelchair on the driving skills and psychosocial outcomes of four children (aged between four and fourteen) with cerebral palsy [MMGT09]. The smart wheelchair was tested over six weeks (two one-hour sessions per week). Mixed methods (quantitative and qualitative) were used to collect and analyze the data. Study results showed that "three out of four children gained independence in at least three driving skills or more" [MMGT09]. Some of the reported psychosocial benefits included increased positive affect and independence. The study findings highlight the psychosocial benefits of increased mobility in children with cognitive impairment, which might be applicable to the older population as well, where independent mobility has been correlated with higher quality of life [BBCK02]. Since collision avoidance was not a tested skill, the performance of the collision avoidance module is unknown.

2.5.1.6 PALMA

PALMA (assistive platform for alternative mobility) is designed to aid in the mobility and mental development of children with cerebral palsy [CPCJA05]. The system uses ultrasonic sensors for obstacle avoidance. A user board is used to control the motion of the wheelchair. This wheelchair also includes an interface that enables the educator to select the desired level of autonomy of the wheelchair. The wheelchair has six levels of autonomy, ranging from autonomous driving with obstacle avoidance to fully user-controlled operation in which the sensors are de-activated. The educator decides on the rate of advancement through the above levels of autonomy based on his/her assessment of the child's driving ability. The system was evaluated with five children (aged between three and seven) with cerebral palsy. Six trials were completed by each child over two months. The children were required to drive in a large test environment with specific obstacles and to complete multiple tasks. Results showed that the children were enthusiastic about the wheelchair, and acceptance was generally high. Children were able to progress to higher driving levels at varying rates over the course of the user study. The effectiveness of the collision avoidance module using ultrasound sensors is unknown. This system might increase caregiver burden, since it requires the level of autonomy to be adjusted with continuous monitoring of the user's performance. A system that is able to adapt automatically to the user's capabilities would eliminate the need for caregiver monitoring and input.

2.5.1.7 Collaborative Wheelchair Assistant (CWA)

The Collaborative Wheelchair Assistant (CWA) is intended for people who are unable to operate a standard powered wheelchair, but are aware of their desired destination and have the ability to avoid collisions [ZTRB08]. The wheelchair uses wheel encoders and a barcode scanner for wheelchair localization. The wheelchair is pushed through the environment, and software is used to create virtual guide paths between the start and end locations.
The desired path for driving is selected by users through a graphical user interface (GUI). The user is allowed to steer away from the guide path, while feeling a passive attraction toward the path. The system was tested with five participants (aged between sixteen and forty-eight) with cerebral palsy and traumatic brain injury (TBI) who had been excluded from powered wheelchair use [ZBT09]. The system was tested in two different modes: free mode (no navigation assistance was provided) and guided mode (navigation assistance was provided). After training with both modes, the participants were asked to navigate from one room to another, around obstacles (tables). The participants completed the task ten times, alternating between the two modes of assistance. Completion times, collisions, user interaction and intervention level were measured. Results showed that all collisions were eliminated by the guided mode. Analysis of users' joystick movements showed that the guided mode also reduced the number of joystick movements. While no participant was able to drive a wheelchair independently prior to the trials, all participants were able to operate the powered wheelchair in the guided mode, thus gaining mobility. Four out of five participants also successfully completed the navigation task in free mode after sufficient training. Wheelchair localization requires the addition of barcodes to the environment in this system. Wheel encoders are additionally used to measure distance and would need to be added to a commercial wheelchair. Users are required to specify the desired path. Although this interface would be appropriate for drivers who are aware of their route and simply lack the physical ability to operate the wheelchair, it would be ineffective for drivers who are unaware of the navigation route (such as those with memory loss).

2.5.1.8 Intelligent Wheelchair (University of Zaragoza, Spain)

The intelligent wheelchair from the University of Zaragoza is intended for users with cognitive disabilities and mobility impairment [MDBM10]. The wheelchair uses a planar laser to detect obstacles, and uses wheel encoders for odometry measurements. The intelligent wheelchair uses a touch screen as the primary input device. This interface was found to be more robust than the previously-used voice interface, which presented challenges in speech recognition and training [MMAM06]. Users select desired destinations through the visual display. The display provides a 3-D environment visualization and is constructed in real time by the autonomous navigation system, which drives to the selected destination while avoiding static and dynamic obstacles. The use of an online rather than a pre-constructed map allows the system to deal with dynamic obstacles and unknown scenarios more effectively. The system was tested with four students (aged between eleven and sixteen) with cerebral palsy. In the training phase, participants used a game simulator to learn to use the navigation interface. In the test phase, participants were required to navigate along an established route using the autonomous system (baseline performance was not tested). All participants were able to complete the navigation task. Six collisions were reported; however, these errors were considered to be acceptable since the experiment was carried out in a realistic and dynamic environment. The use of a planar laser leads to collisions with obstacles that are above or below the laser, resulting in lowered safety.
Once again, the system requires wheel encoders to be installed on the wheelchair. The system assumes that the user is aware of his/her destination, which might be an invalid assumption for cognitively-impaired users. Finally, automatic movement of the wheelchair might take away feelings of control and independence, and possibly lead to confusion and frustration for the target user population.

2.5.1.9 Limitations of Previous Intelligent Wheelchairs

Several existing intelligent wheelchairs and walkers have used various non-contact active sensors (laser, acoustic, sonar, etc.) [Sim05, MDK+03]. However, these sensors are often large, expensive, power-hungry, unsafe, and prone to cross-talk issues, as seen in Table 2.1. A 3D infrared sensor was used in another intelligent wheelchair to detect and prevent imminent collisions with objects, and was found to operate effectively in a controlled environment [MEBH07]. However, the high false alarm rate in the presence of sunlight limited the system's operating environment. We therefore use a passive vision-based sensor to detect obstacles and prevent collisions in more natural settings. In addition, through the use of vision-based technology, we eliminate the need for environmental and additional wheelchair modifications, which are required by several wayfinding components described above. Although stereovision sensors are known to perform poorly in environments that lack texture, methods that use projected light (artificial texture) are able to overcome these challenges [SS03], and are thus left for future integration. In addition, most of the wheelchairs above leave planning and navigation to the user and only provide collision avoidance support. This assistance is not sufficient for users with memory deficits and/or poor decision-making capabilities. Wheelchairs that do assist in high-level navigation tend to be autonomous and require little or no supervision by the driver. This type of assistance might lead to confusion and frustration among users with cognitive impairments, particularly if they do not realize that the wheelchair is moving on its own, or if the wheelchair's actions are not consistent with the user's intent. NOAH is the first system we are aware of that has been tested with cognitively-impaired older adults in the tasks of both collision avoidance and high-level navigation to a specified location with a powered wheelchair. The anti-collision wheelchair with the bumper skirt [WGHF11] provides collision avoidance through a contact sensor, but does not provide any other form of navigation assistance. In addition, this skirt consists of sensors requiring physical contact with an obstacle in order to detect it. Although the contact force is maintained below 100 g, it is sufficient to startle vulnerable elderly residents in the path of the wheelchair and potentially result in a trip or fall; thus a non-contact sensor is preferred. The IWS [HWM11] provides audio prompts to aid only in collision avoidance, and study results showed that adherence to prompts was low due to errors in prompting. In addition to collision avoidance support, NOAH provides passive wayfinding assistance through adaptive, customized audio prompts using a user model. An adaptive method is used in order to increase effectiveness and acceptance by the target population, as suggested by [CB88].
Many intelligent wheelchairs, such as NavChair [LBJ+99], Wheelesley [Yan98] and MAid [PSF01], remain untested with their target populations, possibly due to the difficulties of study recruitment. In addition, some of these systems might not be robust enough for clinical use. Thus, there is a lack of understanding of the user acceptance of these technologies. Further clinical evaluation is imperative in building systems that will help the intended user population. Although the specific wheelchairs described above (OMNI, PALMA, etc.) have been tested with children with cognitive impairment, only the IWS and the bumper skirt have been tested with cognitively-impaired older adults. It is not clear whether systems that have been tested only with children will be able to achieve the same level of efficacy with the older adult population, especially since older adults generally have multiple medical problems that might affect intelligent wheelchair use. In 1999, 24% of Medicare beneficiaries aged 65 years or older had four or more chronic conditions [WSA02]. In addition, intra-individual variability in latency is greater in individuals diagnosed with mild dementia than in adults who are neurologically intact, regardless of their health status [HMHLS00]. Thus, it is imperative that systems are tested with cognitively-impaired older adults in order to ensure that they are appropriate for this population. We hope that the insights provided by the research in this dissertation will further our understanding of the needs of older adults with cognitive impairment, and thus help us in restoring their mobility and independence.

2.5.2 Intelligent Wayfinding Devices

Recently, a hierarchical shortest path planning algorithm was developed for wheelchair users [YM07]. However, this approach relies on the user to specify preferences and constraints (such as dynamic obstacles); the intelligent wheelchair must instead be able to detect obstacles automatically and avoid them. Other wayfinding systems designed for older adults include the Nursebot Project [PMPRT03], which provides reminders and assists the elderly in navigating their environment. Baus et al. [BWA+02] developed a system for people with visual impairments and older adults that uses auditory perceptible landmarks to assist in pedestrian navigation. The system was tested in a field experiment on a university campus. A study in [GBG05] showed that an electronic, image-based pedestrian navigation device based around landmarks was more effective for older adults than an analogous paper version. A feasibility study [LHK+06] of user interfaces was conducted by the University of Washington using a "Wizard of Oz" approach with cognitively-impaired older adults during indoor navigation. Users preferred image-based cues over speech and text cues; however, only one of the participants used a mobility device (powered wheelchair). Another system uses radio-frequency identification (RFID) to provide wayfinding assistance at decision points [CPW+10]. This system was tested with six cognitively-impaired users and was found to be effective. Although the above systems use AI techniques for planning and/or reminding, they do not incorporate user modeling. Opportunity Knocks [PLG+04] and another project at the University of Washington [LFK04] provide text-based wayfinding directions during public transportation for users with GPS-enabled cellular phones as part of the Assisted Cognition Project.
The system learns user behaviors to determine when assistance is required. However, this system, like most outdoor wayfinding systems, relies on GPS, which is unreliable in indoor settings. The indoor wayfinding systems mentioned above typically use beacon and RFID technologies, which require modifications to the environment. Assistive wayfinding systems have also been implemented on walkers. GUIDO is an advanced walker for people with visual and/or mental deficiency [RMJL05]. This system integrates multiple sensors for map construction and navigation, localization, and obstacle detection, and presents audio prompts to the user. The MIT robotic walker also provides collision avoidance support and wayfinding assistance through a visual display of the desired direction of travel [MDK+03]. These systems, however, do not contain a user model and require manual selection of the operation mode (such as manual or automatic). In addition, powered wheelchair driving, and specifically joystick operation, might require additional effort and cognitive abilities in comparison to walker usage.

2.5.3 Prompting Devices for ADL

A vision-based system called COACH, which assists persons with dementia during the handwashing task, is described in [HVCPA10]. This system contains a user model that estimates the cognitive state of the user and issues adaptive prompts. The system was tested in an efficacy study conducted with six older adults with moderate-to-severe dementia [MBCH08]. Results showed that participants with moderate-level dementia were able to complete an average of 11% more handwashing steps independently and required 60% fewer interactions with a human caregiver when the COACH system was used. Four of the participants achieved complete or very close to complete independence. With regard to system performance, the majority (78%) of COACH's actions were considered clinically correct. Thus, adaptive prompts were found to be effective for older adults with moderate dementia. Similar user models are also found in a system designed to aid users with dementia in making a cup of tea [HPJ+11], as well as in an art therapy system for older adults with dementia [MBB+10]; however, these systems have not been formally tested with their target users. Archipel recognizes user intent during the task of cooking, and provides adaptive prompting (audio, video, and strategic lighting) based on a pre-determined cognitive impairment level [PLBGL08]. Autominder uses artificial intelligence (AI) techniques and sensors to schedule daily events and to detect the status of activities [Pol06]. If required, the system provides the user with context-aware reminders regarding unattended activities. The Gator Tech Smart House uses sensors distributed throughout the house to recognize user activity and context [HME+05]. It can also provide medication reminders and automatically order soap and toilet paper refills. A Wizard-of-Oz study of powered wheelchair use with five cognitively-impaired older adults evaluated the effectiveness of a multi-modal feedback interface for a simulated collision-avoidance system [WMDF11]. Although this wheelchair was not "intelligent", since collision avoidance was performed by the researcher, the study allowed investigation of the feedback interface and preferences for specific modalities. Movement of the wheelchair was stopped by the researcher when the user approached nearby obstacles, and audio, haptic and visual feedback was provided.
Results suggested that the system was effective in assisting residents with basic driving tasks. It allowed residents to achieve their personal objectives for indoor mobility. In addition, high levels of user satisfaction were reported. Residents found the additional feedback useful in avoiding obstacles. Three out of five residents found all feedback modes helpful. Audio feedback was the preferred modality for all participants. Haptic feedback was also found to be effective in guiding most users around obstacles, although one resident found this modality "too controlling" and expressed a desire for warning prompts before the wheelchair is stopped.

2.6 Design Considerations for NOAH and User Studies

The limitations observed in previous intelligent wheelchairs, wayfinding devices, and prompting aids need to be addressed in several ways. Firstly, the needs of the target user population with respect to powered mobility are poorly understood due to the small number of user studies carried out with this population. Studies reported in [HWM11, WGHF11] do, however, suggest that collision avoidance is an important feature in a powered wheelchair for the target users and can result in increased independence. Prompting devices for the target population, such as [WMDF11, MBCH08], also show that audio prompting is an effective means of assistance for older adults with cognitive impairment. In tasks relating to both handwashing and collision avoidance, older adults with mild-to-moderate impairment are found to follow instructions that they hear and/or see delivered by an adaptive system. We thus employ a similar prompting approach and conduct user studies to evaluate the effectiveness of these techniques in the task of wheelchair navigation to desired destinations. Several of these authors also indicate the need for a higher number of trials to effectively compare performance. We therefore perform multiple baseline and intervention runs in order to produce useful comparisons. Cost-effectiveness and the ability to perform reliably in various indoor environments with little or no modification are desirable characteristics of the system, thus leading to a stereovision camera as the choice of sensor over the others described in Table 2.1. Compactness and more high-level assistance are also prioritized as important improvements suggested by [WGHF11, HWM11]. High prompting accuracy is also recommended by [HWM11] in order to possibly overcome the low level of prompting adherence found in their study. Finally, the system should be able to compute the optimal route automatically, unlike some of the wayfinding systems mentioned above, since the user might not be aware of his/her destination. The system should also provide passive navigation assistance through audio prompts to maximize user independence and prevent any frustration that might be caused by wheelchairs that move on their own. A system that adapts to the users' needs through a user model would also help to promote user independence while increasing effective navigation.

Chapter 3: System Development

3.1 System Objectives

In order to address the limitations of existing devices, we propose an intelligent wheelchair that provides supportive, semi-autonomous navigation assistance in order to increase independence, while ensuring safety. We provide assistance not only in avoiding collisions, but also in high-level path planning to navigate to specific locations.
We choose audio prompting to allow target users with visual impairment to benefit from the navigation assistance provided, and to minimize distractions that might be caused by the use of visual cueing. In addition, we seek to build a system that is portable, cost-effective, performs reliably in real-world settings and requires minimal or no modifications to the environment, in order to facilitate large-scale deployment of the system. To this end, we develop a system that relies on a stereovision camera for sensing, due to its low power consumption, ability to perform in natural environments, and relatively low cost. In addition, cameras capture a richer dataset that can be used for high-level scene understanding to build maps and determine what type of room the wheelchair is in (e.g., kitchen). Abandonment is a common issue faced by developers of assistive devices. Since older adults with dementia might differ in their individual needs for navigation assistance, an adaptive system that adjusts automatically to the user's specific needs is more likely to be accepted. We thus implement an adaptive prompting strategy. Specific objectives of the system are:
• Reducing frontal collisions by preventing the user from moving into obstacles.
• Building a map of the wheelchair's environment and automatically determining the current location at any time with respect to the map.
• Reminding the user about scheduled activities and goals.
• Providing adaptive navigation assistance:
  - Determining an optimal route to the desired location and prompting appropriately (upon deviation from the shortest path, failure to move, etc.).
  - Modifying the high-level route and prompts to avoid obstacles encountered as the wheelchair moves.
  - Automatically choosing the type and timing of prompts to suit user needs and capabilities, and to minimize user frustration.

3.2 Key System Functionalities and Criteria

Although NOAH is required to achieve all of the objectives above, it will be specifically assessed in an efficacy study in terms of its ability to do the following:
1. Reduce frontal collisions using a vision-based sensor
2. Issue adaptive wayfinding audio prompts based on a user model
Note that we only focus on frontal collisions in this work. Future work will involve using cameras with wider viewing angles (i.e. 360 degrees), installing more than one camera, or adding other types of sensors to prevent side and rear collisions. An optimal navigation assistance strategy will accomplish the following (possibly conflicting) goals, listed in order of priority:
1. Improve safety (through collision avoidance)
2. Maximize effective completion of scheduled tasks (through directions and reminders)
3. Minimize user frustration (by minimizing incorrect and excessive prompting)

3.3 Overview of Design Process

First, a thorough literature review of intelligent wheelchairs and other assistive devices for the elderly, as well as for other populations with cognitive impairment, was conducted. Limitations of existing devices for the target population were identified. Criteria and objectives for the intelligent wheelchair were outlined. The system was broken up into key components/functionalities. Objectives for each component were specified, and existing methods available to fulfill the objectives were investigated.
Software decisions were made based on several criteria:
1) Availability of existing code and ease of integration
2) Ability to achieve close to real-time performance
3) Generalizability to more complex environments/situations/models
Existing methods were then modified, or new methods were created, as necessary, to fulfill the objectives. Each component was tested separately in simulated or controlled environments. Finally, all necessary components were integrated and tested with real users. Further details on the research process and constraints can be found in Appendix D.

3.4 System Design

The NOAH system consists of a commercially available powered wheelchair, a stereovision camera mounted on the front of the wheelchair, and a laptop computer (see Figure 3.1). The wheelchair is modified with a customized directional control logic module (DCLM) [MEBH07], which sends signals from the laptop to the wheelchair and enables/disables motion of the wheelchair in specific directions. Details regarding the hardware and software can be found in sections 3.5 and 3.6 respectively. The stereovision camera is used as the main sensor for the collision avoidance, mapping and path planning subsystems. In addition, it provides the visual input required for the prompting subsystem. The laptop (placed under the wheelchair seat) is responsible for all computation, and its speakers (or external speakers) are used to play all audio prompts. Following is a discussion of the system architecture and the interactions between the various subsystems. The system architecture is illustrated in Figure 3.2.

Figure 3.1 NOAH wheelchair system prototype. The system is made up of a commercially available powered wheelchair equipped with a stereovision camera (a). It also consists of a custom-made directional control logic module (DCLM) [MEBH07] and a laptop placed under the seat (b).

Figure 3.2 Architecture of the intelligent wheelchair system and its modules. Offline processes are indicated using dotted lines.

We perform Mapping and Map Annotation offline, once for a specific long-term-care facility. We also complete the User Model Specification step once and compute the optimal policy in the Policy Generation module offline. The Collision Detector components, as well as the Localization, Trajectory Generation and Analysis, and Prompt Generation modules, run in real-time. The Trajectory Generation and Analysis module receives position estimates from the Localization module at a pre-specified time interval, and reports the wheelchair's heading relative to the optimal path (on route, off route, stopped, upcoming turn). Obstacles are handled by the Collision Detector module in a reactive manner, until the user successfully avoids the obstacle. The Prompter module determines the system action (prompt direction, call caregiver/issue reminder, do nothing) based on the learned policy as it acquires noisy observations of the wheelchair's position. The Prompter module contains the user model. It also uses collision and free space information to ensure that wayfinding prompts do not direct the user into obstacles. The main sub-systems mentioned above are described in further detail below.
These sub-systems are developed independently, tested for accuracy, and subsequently integrated for full system testing.

3.5 Hardware

3.5.1 Wheelchairs

Most of the experiments in this dissertation were conducted with the Nimble Rocket™ powered wheelchair (Nimble Inc., Toronto, Canada) seen earlier in Figure 3.1. However, the final efficacy study was conducted with the Pride Mobility Quantum 6000z powered wheelchair (seen in Figure 3.3). The only software change required for the system to work with the Pride wheelchair was to allow the code to communicate with a serial interface rather than the parallel port interface used by the old DCLM (more information on the DCLM and controllers is provided in the next section). Thus, the system can be easily ported to Nimble wheelchairs and other wheelchairs that use a similar drive control system.

Figure 3.3 Pride Mobility wheelchair with Bumblebee camera (a), laptop and newer DCLM (b).

3.5.2 Direction Control Logic Module

The DCLM used with the Nimble Rocket wheelchair is a programmable PICSTK-2k chip with 2 analog input lines, 2 outputs, 8 digital inputs, and 8 digital outputs [MEBH07]. This module was designed and developed at the Centre for Studies in Aging, Sunnybrook Health Sciences Centre. It acts as a filter for the control signals from the joystick to the wheelchair motors by preventing motion of the wheelchair in the forward direction upon receiving the "stop" command (sent by the collision avoidance software module through a parallel port interface). The parallel port interface was constructed using an ExpressCard Parallel Adapter. The newer DCLM is designed for use with a proportional joystick and Pride Mobility's Quantum Q-Logic (a third-party interface device) [Pri09], and is attached to them through two DB9 connections. Additionally, it uses a serial RS-232 communication line to receive commands from the laptop that are encoded as 8-bit ASCII characters. Based on the commands received, the DCLM filters the joystick's analog signals and sends them to the Quantum Q-Logic device. This interface device then converts the filtered joystick signals into proprietary digital signals that are used by the digital motor controllers in Pride Mobility's powered wheelchairs. The DCLM allows six different regions of motion to be enabled or disabled (forward, forward-left, forward-right, backward, backward-left and backward-right). Power to the DCLM is drawn directly from the Q-Logic (i.e., from the powered wheelchair) through the DB9 connection. Although the DCLM is not currently commercially available, it can be reproduced using the schematics and additional hardware details in [How11]. The DCLM has been custom-built for use with the Quantum Q-Logic device. Use of the DCLM with powered wheelchairs that use alternate controllers will require modifications to the DCLM; however, the software will remain largely unchanged. Development of a custom interface device that is able to convert joystick signals into the many different proprietary signals in the market would allow NOAH to be used easily with other wheelchair brands.

3.5.3 Camera

We used a Bumblebee 3D stereovision camera built by Point Grey Research, Inc., Vancouver, BC (www.ptgrey.com). The camera is able to grab 640x480 resolution images at approximately 30 frames per second. The camera has a 12 cm baseline, a 3.8 mm focal length and a 66° horizontal field-of-view.
It is pre-calibrated to within 0.1 pixel root mean square (RMS) error. In addition, it includes software to grab images and provide depth estimates (explained further in the software section). Note that more recent releases of this camera (e.g. Bumblebee2) can grab images at faster rates and also capture a wider field-of-view (100°). These cameras can be investigated for future use.

3.5.4 Laptop and Speakers

We used an IBM Lenovo ThinkPad W700ds laptop for all computation. The laptop specifications are as follows: 2 GHz Intel Core 2 Q9000 processor, 4 GB (2x2 GB) RAM, 640 GB (2x320 GB, RAID 1) 7200 rpm hard drive, and nVIDIA Quadro FX3700M 1 GB graphics. The laptop was running Ubuntu 10.04 (Lucid) and had a FireWire port (IEEE 1394) and five USB ports. It also contained two ExpressCard slots (34 and 54 mm). Laptop speakers were used to deliver prompts in most cases. During the efficacy study, external speakers were used to ensure audibility for participants with hearing impairments.

3.6 Software

3.6.1 Collision Detector

Safety of wheelchair users and of those sharing the environment is a key consideration in the design of an intelligent wheelchair. We thus use a non-contact method of collision avoidance to ensure the safety of residents in long-term-care (LTC) facilities, who are particularly vulnerable to falls, and use a vision-based sensor to overcome the challenges presented by active sensors. The Collision Detector module consists of two components, Collision Avoidance and Free Space Detection (see Figure 3.4), both of which we describe next.

Figure 3.4 The Collision Detector module in NOAH.

3.6.1.1 Collision Avoidance

The Bumblebee camera mounted on the wheelchair acquires rectified images from the left and right lenses at 640x480 (full resolution). Point Grey's software (http://www.ptgrey.com/products/triclopsSDK) is used to generate depth maps at approximately 30 Hz from half-resolution images (320x240) for computation speedup. This software uses a fast patch-based normalized cross correlation technique, which generates depth estimates by computing the horizontal shift between corresponding pixels in the left and right images.

Figure 3.5 Images of a person with a cane captured using the stereovision camera: (a) original image, (b) depth image (brighter pixels correspond to closer objects), and (c) occupancy grid (the solid grey region denotes the area outside the camera's field of view).

An example depth map is shown in Figure 3.5(b), computed from the image in Figure 3.5(a). Brighter pixels in the depth map represent closer objects. Depth maps are used to construct the local occupancy grid in Figure 3.5(c), which is a dynamic 2-D top-view representation of obstacles in the camera's current field-of-view. The minimum depth value (corresponding to the closest object) in each column of the depth map is stored in a 1-D vector. Then, ray tracing methods are used to convert the 1-D vector into a 2-D horizontal plane. The occupancy grid is then continually updated based on the positions of objects in the most recent 2-D horizontal plane as the wheelchair moves through its environment, as in [ML00]. Each cell in the occupancy grid represents a 1 cm x 1 cm space and contains a grey-scale value between 0 (black - known obstacle) and 255 (white - free space) that represents the belief that an object occupies the cell.
Initially, all cells are set to the default value of 128 (grey - unknown). The value of each cell is then updated by a constant K. At any given time, if a cell corresponds to an occupied region in the current 2-D horizontal plane, the cell value is decremented by K; if the cell corresponds to a free space region, the cell value is incremented by K. Thus, as a cell is repeatedly decremented or incremented, the belief that the cell is occupied or free increases. A larger value of K corresponds to a faster update rate of the occupancy grid, as well as increased sensitivity to random noise. The K value was set to 40, since this value was empirically determined to enable fast grid updates while minimizing random noise. A stopping distance threshold is manually set (700 mm), and when an obstacle is detected within the threshold on the occupancy grid, the wheelchair is stopped to avoid a collision. This distance was chosen since it allows the wheelchair driver to safely get out of the wheelchair without hitting the obstacle in front of the wheelchair (for example, in a scenario where the driver needs to transfer out of the wheelchair and into his/her bed). This threshold can be adjusted as required. Future work could involve adapting this threshold according to the specific scenario at a given time. For example, while a driver might want to stop further away from a bed to allow a safe transfer, he/she might want to move closer to a table to allow docking under it during meal times.

3.6.1.2 Free Space Detection

Two types of enabling/disabling mechanisms are used in the different wheelchairs (Nimble and Pride). With the Nimble wheelchair, all forward motion is stopped, and free space detection is carried out by computing the sum of occupancy grid cell values in the left and right halves of the occupancy grid beyond the distance threshold. The region with the highest sum (most free space) is suggested to the driver through an audio prompt (e.g. "try turning left" or "try turning right"). Prompts are issued at pre-specified intervals until the driver successfully avoids and passes the obstacle. In the Pride wheelchair, since the DCLM allows for finer directional control, three regions of blocked motion are defined: forward-left, middle, and forward-right. Incoming depth maps are divided into three columns that correspond to the above regions. Motion towards regions that contain detected obstacles is then blocked, while other regions are determined to be safe. This allows users to, for example, move diagonally to the right of an obstacle that is located in the forward-left region of the wheelchair. Since an accurate motion model of the wheelchair was not available, we approximated these safe regions through experiments. If an accurate wheelchair motion model is acquired in the future, this model, along with the camera model, can be used to determine safe motion regions automatically. The second version was only used in the efficacy study with older adults, since the Pride wheelchair was located in Toronto (where the efficacy study was conducted). In addition, since adherence to collision avoidance prompts by cognitively-impaired older adults was found to be low in a previous study [HWM11], collision avoidance prompts were turned off in this version (only wayfinding prompts were used, as described in the Prompter section).
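The following short Python sketch is included only to make the occupancy-grid update and stopping logic above concrete; it is not NOAH's implementation. The constant K = 40 and the 700 mm stopping threshold come from the text, while the array layout, the assumption that grid rows are ordered by distance from the wheelchair, and the belief cutoff used to call a cell "occupied" are illustrative assumptions.

import numpy as np

# Grid cells hold values in [0, 255]: 0 = known obstacle, 128 = unknown, 255 = free space.
K = 40                 # update constant reported in the text
STOP_THRESHOLD_MM = 700
CELL_SIZE_MM = 10      # each cell represents 1 cm x 1 cm

def update_grid(grid, observed_occupied, observed_free):
    """Decrement cells observed as occupied and increment cells observed as free.

    observed_occupied / observed_free: boolean masks derived from the most recent
    depth map (column minima projected onto the horizontal plane).
    """
    grid = grid.astype(np.int16)
    grid[observed_occupied] -= K
    grid[observed_free] += K
    return np.clip(grid, 0, 255).astype(np.uint8)

def should_stop(grid, occupied_belief=64):
    """Stop if a sufficiently 'occupied' cell lies within the stopping distance.

    Assumes (for this sketch) that row index times cell size gives the range
    from the wheelchair to the cell.
    """
    near_rows = STOP_THRESHOLD_MM // CELL_SIZE_MM
    return bool((grid[:near_rows, :] <= occupied_belief).any())

def freer_side(grid):
    """Suggest the side with more free space beyond the stopping threshold,
    mirroring the Nimble-wheelchair prompt ('try turning left/right')."""
    near_rows = STOP_THRESHOLD_MM // CELL_SIZE_MM
    far_field = grid[near_rows:, :]
    mid = far_field.shape[1] // 2
    return "left" if far_field[:, :mid].sum() >= far_field[:, mid:].sum() else "right"

Under these assumptions, larger cell values correspond to a stronger belief in free space, so summing the left and right halves and choosing the larger sum reproduces the free-space suggestion described above.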
3.6.2 Path Planner

In this section, we describe the components of the Path Planner module, which consists of Mapping, Map Annotation, Localization, and Trajectory Generation and Analysis (see Figure 3.6). The Mapping module constructs a map of the environment. This map is annotated in the Map Annotation module. The Localization module determines the position of the wheelchair with respect to the map as the user drives through the environment. Finally, the Trajectory Generation and Analysis module computes the optimal route to the destination and determines the heading of the wheelchair with respect to the route.

Figure 3.6 The Path Planner module in NOAH. Offline processes are indicated using dotted lines.

3.6.2.1 Mapping

In order to assist in navigation, the wheelchair must construct a map of its environment and be able to determine its location relative to this map (simultaneous localization and mapping - SLAM) [TBF00]. SLAM can be thought of as a chicken-and-egg problem: an accurate map is needed for localization, and an accurate pose estimate is needed to build that map. Many statistical techniques that use Monte Carlo methods and scan matching of range data have been used to solve this problem [APSL08]. Open-source code for several methods can be found on openSLAM.org. We use the GMapping package provided by the Robot Operating System (ROS) (http://www.ros.org/wiki/gmapping) to build the initial map [GSB06], since it has been well tested in various environments, is able to compute accurate maps quickly, and is easily integrated with the rest of our system. This package implements a particle filter, which represents the posterior distribution of the robot's trajectory using a set of samples or "particles". Each particle carries an individual map of the environment. At each time step, particles are extended according to a motion model in a prediction step, and maps are updated based on sensor observations in an update step. Particles are then weighted according to the likelihood of the observations given the sampled poses and previous observations, and are re-sampled based on these weights in order to give a higher presence to highly-likely trajectories. The authors use adaptive techniques to reduce the number of particles in the particle filter for learning grid maps. They are also able to drastically decrease the uncertainty about the robot's pose at each prediction step by taking into account not only the movement of the robot but also the most recent observation. They apply an approach to selectively re-sample particles, thus reducing the problem of particle depletion (the elimination of particles with low weights). With this approach, fewer than 80 particles are required to build accurate maps (up to 1 cm resolution) of areas as large as 250 m by 250 m. We construct this map once in every test environment offline, using a robot equipped with a SICK laser range finder and the GMapping software. This allows us to create an accurate and dense map that can be used by the rest of the system. We built the map for our lab environment (part of which was used for the trial experiments described later) overnight, autonomously, using an ActivMedia Powerbot.
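To clarify the predict/weight/resample cycle summarized above, the following generic particle-filter skeleton is provided as an illustration only; it is not the GMapping implementation. The functions motion_model and measurement_likelihood are hypothetical placeholders standing in for the odometry model and the scan-matching observation likelihood, and the effective-sample-size test stands in for GMapping's selective resampling.

import numpy as np

def particle_filter_step(particles, weights, control, observation,
                         motion_model, measurement_likelihood, rng=np.random):
    # Prediction: propagate each particle's pose with the motion model.
    particles = [motion_model(p, control) for p in particles]

    # Update: weight each particle by the likelihood of the observation given
    # its pose (in full SLAM, also given its per-particle map).
    weights = np.array([w * measurement_likelihood(observation, p)
                        for p, w in zip(particles, weights)])
    weights /= weights.sum()

    # Selective resampling: resample only when the effective sample size is low,
    # which reduces particle depletion as noted in the text.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = [particles[i] for i in idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights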
The test environment in the efficacy study with cognitively-impaired users was mapped by a small Pioneer 3-AT robot driven manually, since the region that needed to be explored by the robot was quite small (the map was built in less than 15 minutes). The final map is an image file in portable graymap (PGM) format, and we used a resolution of 0.05 m.

3.6.2.2 Map Annotation

After the construction of the metric map, we need to identify potential destinations on this map, such as "lounge", "bathroom", and other known areas of activity. Several methods have been proposed for labeling maps automatically by detecting features or objects in images or laser scans of the environment [RD07, Kui00, PCJC06, VS08]. In the NOAH system tested in the trial experiments and with older adult users, the metric map is annotated with desired destinations manually. However, we have presented methods to enable automated map annotation, which will facilitate large-scale deployment of intelligent wheelchairs that can adapt to their environments automatically. In [VMSLM09], we showed that objects in environments can be used in place recognition and map annotation. We have also reported on place recognition results using real object detections in images [VSLM10], as well as results combining object detections with global image descriptors [VSLM11]. Preliminary results have shown that these object-centric methods are more generalizable to previously unseen environments and might thus prove to be useful in the NOAH system. Future work involves integrating these place recognition and map annotation methods with NOAH.

3.6.2.3 Localization

For localization of the wheelchair during testing, we feed the map constructed by the robot to a vision-based SLAM package provided by ROS, vSLAM [KBC+09], running on the wheelchair. This technique uses incoming camera images to estimate the position and orientation of the wheelchair (a process known as visual odometry). The initial position and orientation of the wheelchair are specified through a graphical user interface (GUI) provided by ROS, Rviz, by loading the map and clicking (and dragging) on the starting location. Although other vision-based SLAM methods [ESL06, EHE12] were also investigated, the vSLAM algorithm was selected since it has been tested in realistic environments and has demonstrated real-time performance with high accuracy and robustness. Its implementation in ROS also allows for easy integration with other wheelchair components. Unlike the GMapping technique, which uses a particle filter approach, vSLAM uses a constraint graph of relative pose information between frames. In addition, vSLAM uses an online place recognition technique to perform re-localization and loop closure.

3.6.2.4 Trajectory Generation and Analysis

Before a trajectory can be computed, we must determine the user's goal location and provide a reminder to the user. Autominder [MP02] uses a list of tasks that the client needs to perform, provides an optimal schedule, and offers reminders. The ESI Planner II [MMPK05] from the Aphasia project also provides a daily planner and reminder system. The Activity Compass learns previous routes taken by the user and estimates the destination based on the user's recent motion, as in [LFK04].
This approach, however, requires a large amount of training data corresponding to \u00E2\u0080\u009Ccorrect\u00E2\u0080\u009D routes followed by the user to various goal locations. For this project, we bypass the scheduling problem and assume the availability of the user\u00E2\u0080\u0099s daily schedule, which can be provided by the user\u00E2\u0080\u0099s caregiver, for example. Goal locations can be directly inferred from the user\u00E2\u0080\u0099s schedule and the current time of day. In all experiments in this dissertation, goal locations are specified through Rviz (similar to the manner in which starting positions are selected, through mouse clicks on the map). Given the map, destination, and current location, the optimal path to the goal can be computed using Dijkstra's algorithm, which is an easy-to-program, efficient and accurate method for solving shortest path problems on a discrete graph using dynamic programming. In this project, however, we use existing code (provided by Ken Alton) that implements a variant of Dijkstra\u00E2\u0080\u0099s algorithm, the fast marching method (FMM), since it more accurately approximates the underlying continuous space by using orthogonal grid discretization. FMM is also shown to often produce shorter paths than Dijkstra's algorithm, which can produce paths that are not optimal because they follow grid lines, as was noted in simulation experiments. Details regarding the FMM method implemented can be found in [AM06]. The value function is computed offline and the optimal trajectory can quickly be computed from the wheelchair's current position using gradient descent. We use a simple forward Euler scheme implemented by Alton [Alt10] to compute trajectories. In order to increase 50 computational speed, the trajectory is only computed for a short distance in front of the wheelchair, since we provide just-in-time prompts in NOAH. We first determine whether the user's heading is correct by comparing the orientation of the wheelchair to the orientation required to follow the optimal trajectory. If the heading is incorrect (the difference between the current and required orientation is greater than 50 o ), we report a detour. If the user's heading is correct, we also analyze the trajectory for upcoming turns. Upcoming turns are detected based on the cosine of the angle between the starting direction vector (at the beginning of the trajectory) and direction vectors along the trajectory up to a few meters ahead of the wheelchair. If the cosine value falls below a threshold, then an upcoming turn is reported. In addition, the system determines whether the wheelchair is progressing towards the goal or regressing (or stopped) if the heading is correct by comparing the current path length to that in the previous step [Vis11]. The output of the route planner is referred to as the observed (wheelchair) status. 3.6.3 Prompter Audio prompting is an effective technique in assisting cognitively-impaired adults with activities of daily living [LM06]. A study of users with cognitive impairment reports that speech-based prompts are more effective than image-based and text prompts in route finding, and are also preferred by users [SFHF07], due to difficulties faced by users in reading screens. In addition, we hope that providing audio rather than visual prompts will minimize distractions and prevent an overload of visual information. 51 Figure 3.7 The Prompter module in NOAH. Offline processes are indicated using dotted lines. 
We now describe the components of the Prompter module: User Model Specification, Policy Generation, and Prompt Generation (see Figure 3.7). The User Model Specification module involves the encoding of information regarding the user's cognitive state and behavior, the heading of the wheelchair, as well as the costs associated with various system actions. The Policy Generation module computes the optimal strategy (policy) for system actions based on the user model. Finally, audio prompts are selected and delivered to the user by the Prompt Generation module.

3.6.3.1 User Model Specification

Although we can acquire estimates regarding the position of the user and his/her behaviors through the camera images, these observations are often noisy due to the presence of occlusions, motion blur, glare and textureless surfaces, which cause insufficient feature matches. We thus need a method that can account for these noisy observations, while also being robust to stochastic user behaviors. In addition, we require a system that is able to automatically adapt to the users' needs and capabilities. For example, a user with only mild cognitive impairment might only need assistance in avoiding collisions, while one with more severe dementia and vision loss might also require directional prompts that assist in navigation to a specific destination.

Markov Decision Processes (MDPs) provide a mathematical framework for decision-making under uncertainty [Bel57]. At each time step, the process is in some state. The decision-maker can choose any action available in the current state (at a pre-defined cost), causing a transition to a new state according to a state transition probability function and resulting in a specified reward. In NOAH, we use an extension of an MDP called a Partially Observable Markov Decision Process (POMDP) [Lov91], since the state of the wheelchair is not directly observable. This framework enables us to determine the optimal prompting strategy while accounting for noisy observations and stochastic user behaviors. It also allows us to balance the trade-offs mentioned in the criteria of the required system (e.g., maximizing independence while minimizing frustration), and can adapt to specific users and scenarios.

A discrete-time POMDP consists of: a finite set S of states, a finite set A of actions, a finite set O of observations, a stochastic transition model T(s, a, s') = Pr(s' | s, a) that specifies the probability of moving from state s to s' when action a is taken, an observation model Z(s, o) = Pr(o | s) that specifies the probability of observing o in state s, and a reward function R(s, a) that assigns a reward when action a is taken in s. Since the state is not known with certainty, a probability distribution is maintained over all possible values of the state; this probability distribution is referred to as a belief state. Given a POMDP, our goal is to find a policy, mapping belief states to actions, that maximizes the expected discounted sum of rewards attained.

Since the target users of NOAH are similar to those of the COACH system [HVCPA10], which provides handwashing assistance to users with dementia, we specify a user model similar to theirs. We modify their model to include the states and observations of NOAH's navigation task.
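Before describing the model variables in detail, the belief-state bookkeeping that any POMDP controller performs can be sketched as follows. This is a toy illustration only; NOAH's model is solved offline with the Symbolic Perseus solver described in Section 3.6.3.2, and the state names, transition and observation numbers below are illustrative, not the values used in the user model.

```python
import numpy as np

# Generic POMDP belief update: b'(s') is proportional to
# Pr(o | s') * sum_s Pr(s' | s, a) * b(s).

def update_belief(belief, action, observation, T, Z):
    """belief: |S| vector; T: |A| x |S| x |S| transition; Z: |S| x |O| observation."""
    predicted = T[action].T @ belief          # sum_s T(s, a, s') b(s)
    new_belief = Z[:, observation] * predicted
    return new_belief / new_belief.sum()      # normalize

# Toy example with two states ("on_route", "off_route"), two actions
# ("nothing", "prompt"), and two observations; all numbers are made up.
T = np.array([[[0.8, 0.2], [0.4, 0.6]],      # action 0: nothing
              [[0.9, 0.1], [0.7, 0.3]]])     # action 1: prompt
Z = np.array([[0.9, 0.1],                    # 10% observation noise
              [0.1, 0.9]])
b = np.array([0.5, 0.5])                     # uniform initial belief
b = update_belief(b, action=1, observation=0, T=T, Z=Z)
print(b)
```

The same update is what allows the system to maintain a distribution over hidden variables such as the user's independence and responsiveness as observations arrive.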
Specifically, the planstep state that they use to describe the user's current step in the handwashing process is referred to in our model as status, and describes the status of the user with respect to the optimal route. This state is partially observable, and the noise in the observations, ob_status, is specified using an observation function. We also model the user's cognitive state using two variables, independent (called aware in COACH) and responsive, as in [HVCPA10]. These variables describe whether the user is able to perform the task independently, and whether he/she will respond if a prompt is issued. Although there might be other factors that influence navigation performance in addition to the ones specified in COACH, we initially choose to model the same variables. Future work will extend the model as necessary based on efficacy study observations and results. We now provide a detailed description of the POMDP model seen in Figure 3.8.

Figure 3.8 Diagram of the user (POMDP) model used for prompting.

States

responsive – a binary variable that describes whether or not the user follows prompts. This variable is assumed to stay constant throughout a route-following task (to one goal), and is initialized as a uniform probability distribution over both possible values. When a task reminder is issued, however, contextual information might lead to a change in responsiveness, so the state is reset.

independent – a binary variable that describes whether or not the user can navigate to the goal (move towards the goal along the correct path) without assistance. This variable is assumed to stay constant throughout a route-following task (to one goal). It is initialized as a uniform probability distribution over both possible values and is reset when a task reminder is issued.

status – describes the status of the chair along the route (on_route, off_right, off_left, off_u, turn_left, turn_right, stopped) and is partially observable. Turn_left and turn_right correspond to upcoming turns, while off_right, off_left and off_u correspond to immediate turns (corrections) required in order to navigate along the optimal route.

behavior – describes the user's action. Possible values for this variable are: nothing, forward, slight_left, slight_right, hard_left, hard_right, u_turn. This hidden variable typically starts at nothing. Based on the user's responsiveness and independence, the user will perform the correct/incorrect behavior given the current status and system action, with some probability (estimated using domain knowledge), thus inducing changes in status. For example, the user is highly likely to perform the correct behavior without any prompts if he/she is independent. However, if the user is not independent and is responsive, he/she is most likely to perform the correct behavior when an appropriate prompt is issued.

Observation

ob_status – the observed wheelchair status (output from the Route Planner). It is generated by the status variable through an observation function and thus takes on the same values.
The observation function encodes the sensor noise (we assume 10% noise based on our observations during controlled experiments). For example, the probability that the wheelchair is actually on route when it is observed to be (according to the Route Planner) is 90%, and the probability that it is in one of the other states is 10%. The observation function can also be specified based on recall and precision values (determined empirically) of the Route Planner module. 56 Actions System actions are nothing, prompt_fix_right, prompt_turn_right, prompt_forward, prompt_fix_left, prompt_turn left, prompt_u, issue_reminder. They induce changes in the user behavior, and lead to rewards. Examples of audio direction prompts are: \u00E2\u0080\u009CMove slightly to the right\u00E2\u0080\u009D, \u00E2\u0080\u009COff route \u00E2\u0080\u0093 turn left\u00E2\u0080\u009D, and \u00E2\u0080\u009CMove forward\u00E2\u0080\u009D. The reminder prompt was as follows: \u00E2\u0080\u009C[Name], try finding the [goal]\u00E2\u0080\u009D or \u00E2\u0080\u009C[Name, let\u00E2\u0080\u0099s go to the [goal]\u00E2\u0080\u009D based on the specified task. Rewards Directional prompts cost more than nothing, and less than issue_reminder. Since we wanted to encourage the user to follow directional prompts in our study, we set the cost of the issue_reminder action to be very high (50) to discourage its selection, while the cost of nothing is 0 and prompt is 5. High rewards are received when the user is on route (+10), while negative rewards (-10) are received when the user is off route. In addition, when the user is independent, costs are increased for all prompts (since they might lead to higher frustration). This cost can be customized for each user if certain characteristics such as agitation levels are known. 3.6.3.2 Policy Generation An optimal policy for the model specified above is computed offline using the Symbolic Perseus [Pou05] package with the default parameters (http://www.cs.uwaterloo.ca/~ppoupart/software/symbolicPerseus/). The optimal policy can be thought of as a decision tree that provides the optimal action (the action that maximizes rewards/minimizes cost) based on the observations. This policy only takes a few minutes to 57 compute. A text file specifying the user model above can be found in the online code repository [Vis11] and is in the format required by the Symbolic Perseus software. We used this software since input files are easy to specify, and future work could even allow someone without technical expertise to specify the model (such as the caregiver). 3.6.3.3 Prompt Generation The optimal policy provided by the Policy Generation module is queried in real-time for an appropriate system action. If the optimal action is to play a specific direction or reminder audio prompt, the suggested prompt is selected from a list of pre-recorded prompts (recorded by the researcher) and is played to the user. If the optimal action is to do \u00E2\u0080\u0098nothing\u00E2\u0080\u0099, no prompt is played. If the prompt generated is deemed to be unsafe due to the detection of an imminent collision by the Collision Detector, the Prompt Generation module is suspended until the obstacle is successfully avoided. 3.6.4 System Integration We use ROS (Robot Operating System) (http://www.ros.org) to grab images and perform collision avoidance as well as visual SLAM. We choose this framework since it allows us to run multiple processes in a distributed fashion. 
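To illustrate the distributed, publish/subscribe style of integration that ROS provides, here is a toy rospy node. The node and topic names, and the placeholder collision check, are assumptions for illustration; they are not NOAH's actual interfaces or code.

```python
#!/usr/bin/env python
# Toy illustration of a ROS node: subscribe to camera images, publish a
# collision status for other modules to consume. Names are hypothetical.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

def image_callback(msg):
    # A real node would run stereo processing and occupancy-grid checks here.
    status_pub.publish(String(data="no_imminent_collision"))

if __name__ == "__main__":
    rospy.init_node("collision_checker_example")
    status_pub = rospy.Publisher("/noah/collision_status", String, queue_size=1)
    rospy.Subscriber("/camera/image_raw", Image, image_callback)
    rospy.spin()  # each module runs as its own node, possibly on another machine
```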
The ROS software we use is mostly implemented in C++ and python. The path planning and prompting code is implemented separately and is integrated with the ROS software through output files. The visual SLAM module outputs the most recent wheelchair position (2-D map coordinates) and orientation to a file. This information is read by the Path Planner module (implemented in C++) in order to compute the optimal route. In addition, when an imminent collision is detected, a file is 58 written by the ROS software. The Prompt Generator module constantly checks for this file to ensure that the optimal driving direction prompted is safe. If an imminent collision is detected along the optimal route, the Prompt Generator is suspended until the user successfully avoids the obstacle. The status of the wheelchair (on-route, off-route, etc.) is written to a file read by the prompting module (implemented in Java and MATLAB), which uses this information as input to the POMDP model and generates an audio prompt. Note that we use lock files to ensure the serialization of updates to any given file. Future work involves re-implementing the path planning and prompting modules in ROS to eliminate the need for file reading/writing/locking. 59 Chapter 4: System Testing 4.1 Introduction In this chapter, we test several individual components of the system: collision avoidance, map annotation and trajectory analysis. We provide information on the experimental setup, the results obtained, and discussions that highlight strengths and weaknesses of the system, as well as areas for future work. Finally, we test the entire system in a realistic environment to determine system performance. We discuss the results of this experiment as they relate to our research questions. 4.2 Collision Avoidance We conducted controlled experiments to assess the performance of the vision-based anti- collision sensor. We also compared the results to those achieved using a 3D time-of-flight infrared (IR) sensor (built by Canesta Inc., San Jose, CA), which was used in [MEBH07]. This sensor uses a pulsed laser and measures the phase shift of the pulse in the reflected light over a complementary metal-oxide semiconductor (CMOS) chip, thus allowing depth maps to be generated in hardware. The following experiments and results can be found in [VBHM08]. 4.2.1 Experimental Setup We tested collision avoidance and free space detection capabilities for both sensors in the same environment, which had fixed fluorescent lighting and no natural light. We chose this setup in order to control for differences that might be caused by variations in lighting rather 60 than sensor choice. Note that we also conducted preliminary experiments in the presence of large amounts of sunlight in order to assess lighting effects. However, due to the unacceptable performance of the infrared sensor seen in those settings, we simply provide an example of the results achieved by both sensors in indoor lighting conditions and compare performance. Figure 4.1 Collision avoidance test conditions. Wall, walker, cane and standing person were positioned at the target location directly in front of the wheelchair (a). The moving person moved from the left and stopped at the Target Location when the wheelchair was within the 700 mm range (b). (a) (b) We assessed collision avoidance performance with objects commonly found in a long-term care facility, namely a wall, a four-legged walker, a cane, a stationary person, and a moving person. 
An experienced driver drove the wheelchair straight towards each object, initially positioned 3 m directly in front of the wheelchair. This initial location of the test objects is referred to as the Target Location. The wheelchair was driven at a constant velocity (0.16 m/s) until the wheelchair stopped upon detecting an imminent collision or until a collision occurred. We used a value of 700 mm as the stopping distance threshold. Refer to Figure 4.1(a) for the setup. In the 'moving person' condition, the person started at the left of the Target Location, walking towards it and stopping at the Target Location when the wheelchair reached the 700 mm range, as in Figure 4.1(b).

In order to determine the effects of wheelchair movement, we recorded distances at which the system detected imminent collisions with one of the objects (the cane) while the wheelchair was stationary. We moved the cane straight towards the stationary wheelchair in 10 mm increments at 5 second intervals until the system detected a possible collision. We repeated this procedure 10 times with both sensors.

Figure 4.2 Free space detection test conditions. Objects were placed to the left (a) and right (b) of the Target Location.

In addition to collision avoidance tests, we conducted experiments to determine the efficacy of the Free Space Detection module. A third of the trials had a four-legged walker placed to the left of the Target Location and another third of the trials had the walker placed to the right of the Target Location (see Figure 4.2). The last third of trials had no object present (to determine false positives). Audio prompts issued by the system indicating the direction with the greatest amount of free space (left or right) were compared to ground truth.

4.2.2 Results

For the collision avoidance tests, we conducted a total of 120 trials with each sensor, out of which 100 trials were with an object present, and 20 were with no object present (to determine a false positive rate). The anti-collision results and average stopping distances are presented in Table 4.1 and Table 4.2 respectively. The results can be interpreted as follows:

True positive - object present, object detected/prompt issued
False negative - object present, no object detected/no or incorrect prompt issued
False positive - no object present, object detected/prompt issued
True negative - no object present, no object detected/no prompt issued

In addition, we provide total precision and recall rates computed as follows:

Precision = #True positives / (#True positives + #False positives)   (1)
Recall = #True positives / (#True positives + #False negatives)   (2)

Table 4.1 Performance of the Collision Avoidance module for each test condition using the infrared (IR) and stereovision (SV) sensors. Trials per condition = 20. [PBHM08]

Test Condition   True Positive   False Negative   False Positive   True Negative
                 IR    SV        IR    SV         IR    SV         IR    SV
No Object        -     -         -     -          0     0          20    20
Wall             20    18        0     2          -     -          -     -
Walker           20    20        0     0          -     -          -     -
Cane             15    18        5     2          -     -          -     -
Person Stand     20    20        0     0          -     -          -     -
Person Walk      20    20        0     0          -     -          -     -
Totals           95    96        5     4          0     0          20    20

Overall precision rates of the Collision Avoidance module are found to be 100% with both sensors. Recall rates are found to be 95% and 96% with the IR and SV sensors respectively.
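As a quick check, the totals in Table 4.1 reproduce these rates when substituted into Equations (1) and (2):

```python
# Worked check of Equations (1) and (2) using the totals from Table 4.1.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

for name, tp, fn, fp in [("IR", 95, 5, 0), ("SV", 96, 4, 0)]:
    print(name, round(precision(tp, fp), 2), round(recall(tp, fn), 2))
# IR: precision 1.0, recall 0.95; SV: precision 1.0, recall 0.96
```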
The only test conditions that resulted in missed detections for the stereovision sensor were the wall and the cane, although the stereovision sensor outperformed the infrared sensor in the cane condition. Walls are difficult to detect with stereovision sensors because their lack of texture yields insufficient visual features for depth estimation. The cane was also missed by both sensors due to its thin profile and its reflective surface, the latter of which is known to be problematic for IR sensors.

Table 4.2 Mean stopping distances (with standard deviation) for the infrared and stereovision sensors when the wheelchair was moving. The stopping distance threshold was set to 700 mm, velocity = 0.16 m/s. Trials per condition = 20. [PBHM08]
(Presented as a bar chart of distance from wheelchair to object (mm), 0-800 mm, for the Wall, Walker, Cane, Person Stand and Person Walk conditions, with infrared and stereovision bars and the 700 mm stopping threshold marked.)

The largest differences in mean stopping distances are noted in the 'wall' (due to lack of texture), 'walker' and 'person walking' conditions. Although the stereovision sensor was always able to detect the walker, it often underestimated its distance from the walker. This was because a large basket at the bottom of the walker was hidden from the stereovision sensor's view as the wheelchair moved towards the walker. The performance of the IR sensor was not affected by the basket since it has a larger vertical field of view. Finally, since the experiments for the IR and stereovision sensors were conducted on different days, the differences in the 'person moving' condition are most likely attributable to day-to-day variations in lighting or in the walking speed of the person, which was difficult to control.

Table 4.3 Mean detection distances when the object and wheelchair were stationary for the infrared (IR) and stereovision (SV) sensors. Trials per condition = 10. [PBHM08]

Test Condition   Detection Distance (mm)   Std. Deviation (mm)
                 IR     SV                 IR     SV
Cane             627    599                113    36

The average distance at which the system detected an imminent collision with the cane when the wheelchair was stationary is presented in Table 4.3. As seen, detection distances are much higher (closer to the threshold distance) in the cane condition when the wheelchair is stationary as opposed to moving (refer to the 'cane' condition in Table 4.2). These results suggest that motion of the wheelchair has an effect on system performance.

Table 4.4 Free space detection performance for the infrared (IR) and stereovision (SV) sensors. Trials per condition = 20. [PBHM08]

Test Condition   True Positive   False Negative   False Positive   True Negative
                 IR    SV        IR    SV         IR    SV         IR    SV
No Object        0     0         0     0          0     0          20    20
Object - left    12    20        8     0          0     0          0     0
Object - right   20    20        0     0          8     0          0     0
Totals           32    40        8     0          8     0          20    20

Results of the Free Space Detection module using the four-legged walker are presented in Table 4.4. As can be seen, free space detection performance is higher with the stereovision sensor (100% precision and recall). The difference in performance is caused by the difference in free space detection algorithms used by the two prototypes rather than by the sensor itself.
While the earlier prototype with the IR sensor calculated the amount of free space by simply counting the number of occupancy grid cells with grey-scale values above a specific threshold, the new system assigns higher weights to cells with higher grey-scale values. Thus, lighter cells contribute more towards free space than darker (shaded) cells.

Figure 4.3 Original images of a room with windows (a). Occupancy grids produced by stereovision (b) and infrared (c) sensors with blinds closed and opened. The noise generated by the IR sensor is circled. [PBHM08]
(Panels show the original image and the stereovision and infrared occupancy grids under controlled lighting (blinds closed) and bright, natural lighting (blinds opened).)

Finally, Figure 4.3 provides an example of the ability of both sensors to perform in the presence of natural/outdoor light, in order to determine their usefulness in natural settings. Notice that the 'blinds opened' condition leads to the generation of noise in the occupancy grid when the infrared sensor is used, but does not seem to affect the occupancy grids produced by the stereovision camera. This can be explained by the difference in the techniques used to calculate depth. The infrared sensor uses a modulated infrared laser and calculates the depth of an object by detecting the phase shift of the modulated light reflected from the target. Sunlight leads to infrared contamination (since it contains infrared light), thus resulting in noisy depth maps when the infrared sensor is used. The stereovision camera calculates depth by measuring the horizontal shift (disparity) between pixels in the images acquired by the left and right lenses, automatically adjusting for variations in illumination in the environment. Thus, its performance is less affected by the presence of sunlight. Moreover, we also note the difference in appearance of the occupancy grids produced by the two sensors in the 'blinds closed' condition. One of the legs of the walker is completely missed by the IR sensor due to its reflective surface, while the stereovision sensor is able to detect the edges of the leg, further highlighting the benefits of using a stereovision sensor.

4.2.3 Discussion

Results indicate that the stereovision camera performs as well as the infrared sensor in detecting objects (96% accuracy with the stereovision sensor versus 95% with the infrared sensor). We also achieved perfect accuracy in providing an appropriate prompt to the driver with the vision-based sensor.

All stopping distances with both sensors were shorter than the stopping threshold. This outcome is mostly attributed to delays in grabbing images, as well as in updating the occupancy grid, as the wheelchair moves towards an object. These explanations are supported by the much longer detection distances observed when the wheelchair was stationary. Possible solutions to this problem would be to use sensors with higher frame rates (such as Bumblebee2 cameras, which can grab images at 48 frames per second) and to increase the rate at which the occupancy grid adapts to environmental changes (the parameter K), although this might increase the amount of noise in the occupancy grid. Although stopping distances for some objects were shorter using stereovision, less variability was found with this sensor. Overall, we found that the stereovision sensor was able to avoid collisions in most cases, thus potentially improving safety for drivers who are unable to do so.
In addition, the absence of false positives would help minimize user frustration and improve usability. It is also important to note that while the distance threshold was set to 700 mm, the wheelchair continued to move a distance of 70-75 mm after the stop command was sent to the DCLM. Thus, stopping distances were also increased by delays in the process required to actually stop the wheelchair (the DCLM sending filtered signals to the joystick/ controller and the mechanical process of applying the brakes to stop the wheelchair). Additional delays might have also been caused by changes in lighting and shadowing on the object as the wheelchair moved towards it as well as motion blur in images acquired by the stereovision sensor. As delayed detection and/or misses are unacceptable in a clinical environment, greater sensitivity is required in detecting obstacles, particularly walls and thin objects such as canes. Detection of walls becomes a challenge when the stereovision camera is used. Since the construction of depth maps using stereovision relies on features in the images, objects with a homogenous surface lacking in features, such as a long unadorned wall, result in poor depth 69 maps. This issue can be resolved by adding markers to the surface (i.e. mounting paintings or small, textured objects on the wall). However, this is not an ideal solution. Structured light can be used to create artificial texture on plain surfaces to generate more accurate depth maps [SS03]. Although the performance of both sensors is similar, the stereovision sensor has significantly lower power consumption. It is easily powered by the laptop\u00E2\u0080\u0099s USB hub (which provides 5 volts and up to 0.5 amperes), while the Canesta sensor requires 3 amperes at 5 volts DC. In addition, the stereovision camera captures and provides a richer dataset, is lower in cost (approximately $2000 USD vs. $5000 USD for the Canesta sensor), and can perform in bright natural light/sunlight. Changes in lighting conditions and, specifically, infrared interference do not affect the maps generated by the stereovision sensor. It is also able to detect objects with reflective surfaces, which are problematic for IR sensors. This makes stereovision the more promising of the two sensors for collision avoidance. The Microsoft Kinect sensor is a cheap sensor that has stirred significant interest in the robotics community (www.xbox.com/kinect). This sensor includes an IR and RGB camera and produces depth maps from a projected IR pattern, thus showing high accuracy even with textureless surfaces. However, these depth maps are found to contain holes in the presence of reflective or transparent surfaces. On the other hand, stereo vision is able to detect disparities at edges of these surfaces. Recent work has shown that cross-modal stereo pairs of IR and RGB images can be used to improve the reliability of built-in depth maps generated by Kinect by combining the strengths of IR and stereovision sensors [CBF11]. Future work 70 could involve using the Kinect sensor in NOAH, although safety issues arising from the use of projected IR need to be investigated. Finally, although there was a high accuracy for free space detection in the trials, cluttered environments might require a wider field of view for free space detection. The camera can be mounted on a pan-tilt unit, so that regions around the wheelchair can be scanned after the chair stops and before a prompt is provided. 
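To make the grey-scale-weighted free-space scoring described in Section 4.2.2 concrete, the following is a small illustrative sketch. The grid convention (lighter cells are more likely to be free), the 0-255 grey-scale range, and the simple left/right split are assumptions for illustration, not the NOAH implementation.

```python
import numpy as np

# Illustrative sketch of the two free-space scoring schemes compared above.

def free_space_thresholded(grid, threshold=200):
    """Earlier prototype: count cells brighter than a fixed threshold."""
    return np.sum(grid > threshold)

def free_space_weighted(grid):
    """Newer scheme: brighter (more confidently free) cells weigh more."""
    return np.sum(grid / 255.0)

def suggest_direction(grid):
    """Compare weighted free space in the left and right halves of the grid."""
    mid = grid.shape[1] // 2
    left = free_space_weighted(grid[:, :mid])
    right = free_space_weighted(grid[:, mid:])
    return "left" if left > right else "right"

rng = np.random.default_rng(0)
grid = rng.integers(0, 256, size=(40, 60))   # toy occupancy grid
print(suggest_direction(grid))
```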
4.3 Trajectory Analysis

In these experiments, we test the accuracy of the mapping, localization, path planning and trajectory analysis modules. Specifically, we show the performance of the planning and prompting system on twelve unique routes traveled by the wheelchair, each containing several deviations, stops and turns.

4.3.1 Experimental Setup

First, a map of the environment (an image file in PGM format with a resolution of 0.05 m) was constructed autonomously overnight using a Powerbot equipped with a SICK laser range finder and the GMapping software. The map was then divided into four regions, and locations from each region were selected as start and end locations using the GUI. Twelve unique routes were then constructed using these start and end locations, and fed as input to the Localization and Path Planning modules, respectively, at the beginning of every run (route). The map constructed by the mapping component using the laser range finder readings is shown in Figure 4.4, along with estimates made by the Localization module for one run.

Figure 4.4 Map of laboratory created by the mapping component. Locations chosen as start and end positions are numbered 1-4. Blue arrows denote wheelchair position and heading as estimated by the Localization component while driving along route "1-3". [VALMM11]

The Nimble wheelchair was driven by an able-bodied user at a speed of 0.15 m/s, which was determined to be a safe driving speed for the intended user population. The environment chosen was realistic (a computer science lab), containing dynamic obstacles (people walking around in the lab). In addition, the experiments were conducted during varying times of the day (morning, evening, and night) in order to test the robustness of the system to different lighting conditions. The output of the Trajectory Generation and Analysis module during each run was recorded for subsequent analysis.

Table 4.5 Trajectory analysis results. [VALMM11]

Route   Left deviation   Right deviation   U-turn       Upcoming left turn   Upcoming right turn   Stop
        TP  FP  FN       TP  FP  FN        TP  FP  FN   TP  FP  FN           TP  FP  FN            TP  FP  FN
1 - 4   3   0   0        2   0   0         3   0   0    1   0   0            0   0   0             3   0   0
1 - 3   5   1   0        6   2   0         4   4   0    3   0   0            3   0   0             5   0   0
1 - 2   5   0   0        7   0   0         4   0   0    3   0   0            3   0   0             0   0   0
2 - 1   3   0   0        8   1   0         2   2   0    4   0   0            0   0   0             2   0   0
2 - 3   3   0   0        0   0   0         1   0   0    0   0   0            2   0   0             2   0   0
2 - 4   4   0   0        4   1   0         2   0   0    2   0   0            4   0   0             2   0   1
3 - 1   5   0   0        4   0   0         2   0   0    3   0   1            5   0   0             3   1   0
3 - 2   2   0   0        0   0   0         0   0   0    0   0   0            0   0   0             0   0   1
3 - 4   9   1   0        3   1   0         1   2   0    4   0   0            2   0   0             3   0   0
4 - 1   4   0   0        0   1   0         2   0   0    0   0   0            1   0   0             2   0   0
4 - 2   4   1   0        8   3   0         4   4   0    2   0   0            3   0   0             3   0   0
4 - 3   3   0   0        2   2   0         2   0   0    4   0   0            1   0   0             3   0   0
Totals  50  3   0        44  11  0         27  12  0    26  0   1            24  0   0             28  1   2

Avg Recall (TP / (TP+FN))      1.0    1.0    1.0    0.96    1.0    0.93
Avg Precision (TP / (TP+FP))   0.94   0.80   0.69   1.0     1.0    0.97

4.3.2 Results

Results of the Trajectory Generation and Analysis module are shown in Table 4.5. Left and right deviations (of more than 50°) from the optimal route, 180-degree deviations (u-turns), upcoming turns, as well as lack of motion (stops) are identified by the system. We determined true positives (TP), false positives (FP), and false negatives/missed detections (FN) for each of the different types of detections.
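The detections summarized in Table 4.5 come from the threshold tests described in Section 3.6.2.4 (a heading difference of more than 50° and a cosine test on direction vectors along the upcoming trajectory). The sketch below illustrates that style of check; the cosine cutoff, lookahead, example path and function names are assumptions rather than the thesis implementation.

```python
import numpy as np

# Sketch of the heading/turn checks described in Section 3.6.2.4.
HEADING_THRESHOLD_DEG = 50.0
TURN_COS_THRESHOLD = 0.9       # cosine cutoff for reporting an upcoming turn

def angle_diff_deg(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def analyze(chair_heading_deg, trajectory):
    """trajectory: (x, y) waypoints ahead of the chair on the optimal path."""
    p0, p1 = np.array(trajectory[0]), np.array(trajectory[1])
    required = np.degrees(np.arctan2(p1[1] - p0[1], p1[0] - p0[0]))
    # Deviation check: is the chair pointing far from the required direction?
    if angle_diff_deg(chair_heading_deg, required) > HEADING_THRESHOLD_DEG:
        return "deviation"
    # Upcoming-turn check: compare the starting direction with directions
    # farther along the trajectory via the cosine of the angle between them.
    start_dir = (p1 - p0) / np.linalg.norm(p1 - p0)
    for a, b in zip(trajectory[1:-1], trajectory[2:]):
        d = np.array(b) - np.array(a)
        if np.dot(start_dir, d / np.linalg.norm(d)) < TURN_COS_THRESHOLD:
            return "upcoming_turn"
    return "on_route"

# Example: a path that bends to the left a few waypoints ahead.
path = [(0, 0), (0, 1), (0, 2), (-1, 3), (-2, 3)]
print(analyze(chair_heading_deg=90.0, trajectory=path))   # -> "upcoming_turn"
```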
4.3.3 Discussion As seen in Table 4.5, most deviations and turns are detected with high accuracy and precision. Most of the errors (specifically for right deviations and u-turns) noted during the experiments were caused by the following: 1) Errors made by the localization module due to lack of texture or reflective surfaces. 2) Inaccurate starting position estimates. 3) Obstacles in the map that did not exist in the test environment. The first error is a common pitfall of vision-based systems, since estimating camera motion requires the matching of landmarks between incoming images. Untextured areas such as blank walls, as well as reflective surfaces such as windows, result in few, incorrect or no matches, thus causing localization errors that persist until the system is able to re-localize using previously-seen landmarks. These errors could be corrected by integrating an inertial measurement unit, which measures orientation and velocity, and can be used to determine wheelchair motion in the absence of visual landmarks. 74 In these experiments, approximate starting positions were specified by clicking on the map through a graphical user interface by the researcher. Since localization estimates are relative to the starting position, any errors in the starting position propagated throughout the route. Future work can involve verification of these positions using landmarks on the map, or acquiring these positions directly during the mapping process. Alternatively, positions of known landmarks in the environment can be used to correct localization estimates when these landmarks are detected by the camera. Since the map was generated once several months prior to these experiments, the positions of many movable obstacles such as chairs and boxes had changed since the mapping process. Thus, the planning and prompting modules occasionally instructed the user to move around \u00E2\u0080\u0098invisible\u00E2\u0080\u0099 obstacles that did not exist in the test environment. Unlike the errors described previously, invisible objects resulted in temporary errors that the system was able to recover from when the user moved away from them. However, these errors can be reduced by reconstructing the global map when the environment changes significantly. Alternatively, a more feasible solution would be to update the global map when persistent changes in the environment are detected during real-time localization. For example, if free space is consistently found by the camera in a region corresponding to an obstacle in the global map, the obstacle can be removed from the global map. 4.4 Full System Testing in Simulated Scenarios Several experiments were conducted to test the entire system, including adaptive navigation prompts (generated by the user model). The objective of these trial experiments was to 75 provide answers to secondary questions regarding system performance in a more realistic setting than previous experiments, and identify necessary future improvements pertaining to system performance. Due to the difficulties in recruiting the target users (older adults with cognitive impairment) for multiple experiments, these trial experiments were conducted in order to provide insights on system performance in a realistic indoor environment, which could lead to increased robustness in real-world settings. 
Since user behaviors were simulated in these experiments, system actions were evaluated qualitatively to ensure reasonable policies were computed by the user model, assuming that the specified user model was correct. P. Viswanathan and P. Alimi simulated various user cognitive states (independent and responsive, independent and unresponsive, etc.) while navigating through a pre-specified route (shown in Figure 4.6). Any missed collision events as well as false alarms (wheelchair stopped in the absence of an obstacle) were noted. System actions were logged and checked for correctness. Although we do not report on all trial experiments conducted, we summarize key findings and provide some examples of system behaviour. 4.4.1 Results We do not provide any quantitative results (due to the lack of a baseline and repeatability in this experiment) in this section. However, we illustrate an example simulation when the user is not independent but is responsive. In this simulation, the user is unsure of the route and does not move the joystick. The system detects that the user is not independent and starts prompting the user. As the user responds to the prompt, the system correctly recognizes that 76 the user is responsive and continues to prompt the user throughout the task until she reaches the destination. Figure 4.5 shows some examples of the system actions (prompts and stops). The arrows in the figure indicate system estimates of the wheelchair\u00E2\u0080\u0099s position and heading obtained from the Localization module. As seen, the system delivers appropriate prompts and at reasonable times, thus allowing the participant to drive along the optimal route to the destination. Also note that one false positive collision event was detected due to glare in the window in front of the wheelchair. Figure 4.5 Example of system actions (prompts and stops) performed to assist the user. Arrows indicate system estimates of wheelchair position and orientation. Note that duplicate actions are omitted for visual display purposes. [VLMM11] 77 4.4.2 Discussion 1) What types of errors occur while detecting and avoiding collisions? Only a few false positive collisions were detected during the trial experiments due to glares from one of the windows in the test environment. Specular reflection is viewpoint dependent and can cause large intensity differences at corresponding points in stereo images, thus resulting in significant depth errors. No false positives were noticed when the blinds were closed. 2) What types of errors occur while providing navigation prompts? No errors were noted in navigation prompts. The accuracy of the Prompter module is largely dependent on that of the Localization module. Since no major localization errors were made, the system was able to provide accurate navigation prompts. 3) What future improvements need to be made to increase system performance? In order to ensure robust performance in the presence of windows, which are quite common in home- and office-like environments, a possible solution is to detect windows in incoming camera images using object detection techniques and ignore depth values contained inside these regions. 78 Chapter 5: Efficacy Study 5.1 Introduction In order to assess the effectiveness of the system with the intended users, older adults with cognitive impairment, we designed an efficacy study described in this chapter. The objectives of this study were to answer all the research questions outlined earlier with the target population. 
To this end, we tested the entire system (on the Pride wheelchair) in a controlled environment to allow us to evaluate system efficacy and usability. In addition, we solicited feedback from the users to gain a further understanding of their needs and preferences relating to powered mobility.

5.2 Ethics and Informed Consent

Ethics approval for this study was granted by the University of Toronto Research Ethics Board in February 2011 as a major amendment to the protocol found in [How11] (details regarding the amendment can be found in Appendix D). The study was conducted at the Harold and Grace Baker Centre in Toronto (the collaborating institution). The institution acknowledged the ethics approval process from the University of Toronto and offered a letter of support for this study. The substitute decision makers (SDMs) of potential participants were contacted for informed consent prior to the screening process. Participation was on a voluntary basis and no compensation was given for this study. In addition, participants were informed of their right to withdraw from the study at any time and that the study had no effect on their level of care at the Harold and Grace Baker Centre. The consent form can be found in Appendix E.

5.3 Inclusion Criteria

To be included in the study, participants had to:
• be over the age of 65;
• have a mild-to-moderate cognitive impairment (assessed by the Mini Mental State Exam (MMSE) or the Cognitive Performance Scale (CPS), described below);
• provide written consent from his/her substitute decision maker;
• be able to sit in a powered wheelchair for an hour per day;
• be able to follow prompts and have basic communication skills;
• be able to operate a joystick and identify directions; and
• typically use a walker or a manual wheelchair for mobility.

Priority was given to participants who, in addition to meeting the inclusion criteria, experienced feelings of disorientation and/or had visual impairments. MMSE and CPS scores were used in screening because recent CPS scores were available for most residents, and an MMSE exam was easy to conduct in cases where recent results/diagnoses were not available. Residents who score in the moderate range (10 to 25 out of 30) on the MMSE are especially likely to be assessed for powered mobility by clinicians [Wan11]. As noted, these residents are often denied access to powered wheelchairs on safety grounds. However, we hypothesize that they could use an intelligent powered wheelchair safely, thus making them ideal candidates for the efficacy study.

Residents with mild-to-moderate cognitive impairment (as defined by MMSE/CPS scores) were included without restricting the specific diagnoses, because the system is targeted toward users with multiple and complex cognitive impairments, who form a heterogeneous population. In addition, recruiting residents with only a certain diagnosis, for example Alzheimer's disease, would have resulted in too few participants.

The MMSE is a brief questionnaire introduced by Folstein et al. in 1975 [FFM75] that tests an individual's memory, orientation and arithmetic. It is commonly used to screen for dementia and to estimate the severity of cognitive impairment at a given time. It is also used to monitor cognitive changes in an individual over time, thus making it an effective documentation tool for an individual's response to treatment.
MMSE scores range from 0 (severe) to 30 (intact). The Cognitive Performance Scale (CPS) combines information on memory impairment, level of consciousness, and executive function, with scores ranging from 0 (intact) to 6 (very severe impairment) [MFM+94]. The CPS has been shown to be highly correlated with the MMSE in many validation studies. Thus, in cases where only participants' CPS scores were available, the corresponding MMSE scores, as described in [MFM+94], were used (Table 5.1).

5.4 Exclusion Criteria

Participants were excluded if they had a history of aggression or significant prior experience with a powered wheelchair, due to potential past experience-dependent effects on the validity of the outcome measures.

5.5 Participants

A purposive sampling method was used. We first identified potential candidates based on recommendations by caregiving staff for residents with mild-to-moderate cognitive impairments that would restrict their potential for powered wheelchair use, and with the ability to follow basic instructions. We also included recommendations by a researcher who had experience working with residents at this facility in other studies for the COACH project. We then sought informed consent from the SDMs of all identified candidates, successfully recruiting six participants who had informed consent (from their SDMs) and met the inclusion criteria. Note that a minimum of four single subjects is suggested to give preliminary evidence that the initial findings did not occur by chance [BH84]. According to their quarterly assessments, three of the selected participants had short-term memory deficits (participants 1, 3 and 5), and participant 1 also had a severe visual impairment. Refer to Table 5.1 for information on each participant's age, gender and dementia level.

Table 5.1 Participant information

Participant ID   Age   Gender   Dementia Level (MMSE score)
1                97    Female   Moderate (15)
2                71    Male     Mild (19)
3                66    Male     Moderate (15)
4                86    Female   Moderate (15)
5                91    Female   Mild/Intact (25)
6                80    Female   Mild (19)

Figure 5.1 Maze (a) and obstacles (b) constructed using foam boards.

5.6 Apparatus and Setup

The study was conducted in a dedicated research room (approximately 50 m x 50 m in size) of the long-term care facility. A maze was assembled in this room out of Styrofoam boards (see Figure 5.1), with a stop sign placed on a board at the end of the maze. The use of Styrofoam for obstacles ensured that collisions did not harm the participants. Since one side of the foam boards was plain and untextured, newspapers and colored tape were used to create artificial texture on these surfaces (required for obstacle detection and visual SLAM). The course included five types of movements: 90° right turn(s), 90° left turn(s), entering a narrow straight-line path, weaving motion (around maneuverability obstacles along the route), and stopping. These movements were based on existing tests used to assess powered wheelchair mobility [DKC06, Kir08]. The maximum speed of the wheelchair was set to 0.25 m/s to ensure safety. In order to reduce learning effects, we alternated between two different layouts of maneuverability obstacles (the smaller foam obstacles seen in Figure 5.1 b), so that subsequent runs contained slightly different positions of obstacles.
The maze (the positions of the larger wall-like foam obstacles) was the same throughout the study, although it had to be re-assembled and stacked away every day, as per the staff's request. In addition, we constructed a random ordering of five different starting orientations, such that the participant typically started every run facing in a different direction than in the previous run. This ordering was repeated in both phases.

5.7 Method

The efficacy study was an exploratory study using a concurrent mixed methods design. Both quantitative and qualitative approaches for data collection and analysis were used in order to gain a holistic understanding of the research questions [Cre08]. The rationale behind this mixed approach is that the combination of quantitative and qualitative methods allows a better understanding of the problem than either method alone. A single-subject research design is used to acquire and analyze quantitative data. Qualitative data collection methods, including participant observations as well as standardized and custom questionnaires, are employed using a case study design.

5.7.1 Single Subject Research Design

The single subject research design (SSRD) is typically used to study the behavioral change an individual demonstrates as a result of some treatment [Dom05]. In single-subject designs, each participant serves as her or his own control. The participant is exposed to a baseline and an intervention phase, and performance is measured during each phase. The behavior of the individual is observed repeatedly during each phase, allowing the researcher to identify patterns in performance within each phase, as well as to compare performance patterns across phases. The SSRD was chosen for our efficacy study for multiple reasons, including its ability to demonstrate individual differences and treatment effectiveness, the difficulty of recruiting a large and homogeneous group of target users, and the relative ease with which it can be carried out [Mil11].

The efficacy study consisted of two phases: A and B. In phase A (baseline), the automated collision avoidance and wayfinding system was deactivated, while phase B (intervention) was conducted with the system in use. Each participant completed both phases, as required by SSRD. We used a counterbalanced study design where we randomly chose half of the participants for A-B phase ordering, and assigned the other half B-A ordering. Each phase consisted of one training session and eight driving sessions (runs). Participants completed only one session a day, and a total of sixteen runs over a period of a month.

5.7.2 Case Study Design

Case studies are generally used to study "complex social phenomena", and are selected to answer "how" or "why" questions [Yin03]. The focus is on a "contemporary phenomenon within its real-life context", where the "boundaries between the phenomenon and its context are not clearly evident" [Yin03]. The objective of case studies is to arrive at generalized theoretical propositions. In addition, due to the depth of the analysis and the high time requirements, a small number of case study units is expected [Mey01].
85 5.7.3 Procedure At the beginning of each phase, a training session was conducted for every participant, where he/she was taught how to operate the powered wheelchair (with or without the anti-collision and wayfinding system depending on the phase being conducted) in an open area. Participants were taught how to navigate around sample obstacles in both phases, and were also taught to steer backwards when blocked by obstacles in front of them to create more free space. In the phase B training session, the researcher additionally explained the stopping mechanism of the collision avoidance and taught them how to operate the wheelchair when wheelchair motion in specific directions was blocked. The various audio prompts delivered by the system were also played to the participants during the training session in phase B while they were stationary. Participants were asked to adhere to the prompts and their responses were noted in order to ensure that they were able to hear, understand and follow all prompts. At the end of both training sessions, the researcher escorted the participants in their manual wheelchair or walker along the optimal route to the specified goal (the stop sign) at the end of the maze. They were informed that they had to follow this route during subsequent runs. These training sessions were conducted for at most twenty minutes. At the beginning of each run, the user was asked to report on whether they were confident in navigating along the specified route using learning transference acquired from the training session and/or previous runs. The participant was then asked to navigate to the stop sign by following the route specified during the training session. At the end of each run (that lasted from two to twenty-two minutes, depending on the driving abilities of the participant), the participant completed a survey regarding wheelchair usability. At the end of each phase, 86 participants were asked questions regarding their level of satisfaction, as well as open-ended questions regarding the device. A video camera was mounted above the wheelchair to capture joystick motion while the user was driving, and an additional camera was used by the research assistant to capture the scene view. All participants provided consent to videotape their sessions and to log any verbal feedback or observations during the period of the study. During the trials, the researcher followed each participant closely in order to provide assistance in case the participant was confused or anxious, or to stop the wheelchair in the case of an emergency. 5.7.4 Outcome Measures The outcome measures related to subject performance in the study were: 1. The number of frontal collisions encountered with obstacles by the participant. A frontal collision was defined as a single point of impact between the front of the powered wheelchair and an obstacle. If several impacts occurred in succession, each impact was considered a collision. However, only a single collision was recorded if the wheelchair hit an obstacle and dragged/pushed it without changing the point of impact. 2. The length of the route navigated by the participant. The length of the route navigated was determined by using a measuring wheel. Adjusting maneuvers (e.g. back and forth motions of less than 0.5 m) were ignored. Ideally, the system should enable participants to reach the goal by traveling a shorter distance, thus decreasing 87 participant fatigue. 3. The amount of time taken to reach the goal. 
The total time to complete the course was measured after every run. Ideally, the system should enable participants to reach the goal faster, increasing usability of the system. The outcome measures related to user satisfaction were: 1. NASA-TLX (Task Load Index) scores (see Appendix A.1). The NASA task load index is a subjective measure of workload imposed by a given task [HS88]. NASA-TLX was found to be a reliable and sensitive measure of perceived workload in an analysis of its psychometric properties [Nyg91]. It has been used to study adults (including older adults) with traumatic brain injury and their response to driving tasks [CSG + 09]. A total workload score is composed of six dimensions: 1) mental demands, 2) physical demands, 3) temporal demands, 4) perceived performance, 5) effort, and 6) frustration (see Appendix A.1 for questionnaire). Each dimension is described as follows: 1) Mental demand \u00E2\u0080\u0093 the perceived amount of mental and perceptual activity required for the task. 2) Physical demand \u00E2\u0080\u0093 the perceived amount of physical activity required for the task. 88 3) Temporal demand \u00E2\u0080\u0093 perceived time pressure related to the task. 4) Performance \u00E2\u0080\u0093 perceived success at accomplishing the goals of the task. 5) Effort \u00E2\u0080\u0093 perceived amount of work (mental and physical) put in to achieve the level of performance demonstrated. 6) Frustration \u00E2\u0080\u0093 perceived levels of insecurity, discouragement, irritation, stress, or annoyance during the task. For this study, the task was defined as: navigating a powered wheelchair along a specified route in a maze with as few collisions as possible. Each of the dimensions was self-graded through a questionnaire process on a scale that ranged from 0 to 20, with 0 corresponding to minimal workload and 20 corresponding to high workload. Scores from the various dimensions can be added together and weighted to form a total workload score. Dimension weighting was not used in this study because it has a negligible impact and would complicate the questioning process [Nyg91]. 2. Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0) (see Appendix A.2). QUEST 2.0, or the Quebec User Evaluation of Satisfaction with Assistive Technology (version 2.0), is an outcome measure related to user satisfaction of their assistive devices [DWS02]. It has been validated for test re-test reliability, interrater reproducibility [DSGW99], and content validity [DWWSW99]. Reliability of the QUEST 2.0 was also validated with adults with multiple sclerosis [DMLAW02]. This questionnaire has been used 89 in a satisfaction survey related to wheelchairs for older adults in nursing homes and community dwelling settings [KCHC09]. Satisfaction of assistive technology is an important measure since it is likely that users with low satisfaction will abandon the assistive device. For this study, only eight questionnaire items out of the twelve were considered, since the other four questions were related to service of the assistive technology. The items related to the device were: 1) dimensions, 2) weight, 3) adjustments, 4) safety, 5) durability, 6) simplicity of use, 7) comfort, and 8) effectiveness of the device. Each item was graded by the user through a 5-point Likert scale ranging from not satisfied at all (1), to very satisfied (5). 3. User\u00E2\u0080\u0099s report of self-confidence in following the route specified during training. 
The user was asked before each trial if he/she remembered the task (finding the stop sign) and the specified route. This report represents users' perceived levels of independence during the navigation task.

4. General feedback regarding the device, obtained using the custom questionnaire (see Appendix A.3). The user was asked specific questions about NOAH and how they would use it. Examples of questions asked are: "Did you like the system?", "Would you use the system if it was available to you?", and "What would you do with it?". Responses to these questions were expected to provide more qualitative feedback and be useful in designing future prototypes.

5. Verbal comments and visual observations relating to user interactions with the device. Verbal comments and visual observations providing cues about the user's satisfaction and frustration were used to supplement information acquired through the questionnaires. Since some of the users had memory impairments preventing them from remembering details regarding system usability, this real-time feedback was expected to be useful in improving the system.

5.7.5 Data Collection and Measurement

During each run, the researcher recorded the number of collision events that occurred, the time taken to reach the goal, as well as the length of the route navigated by the participant on the log in Appendix B. At the end of each run, the participant answered questions on perceived ease of use of the powered wheelchair, using the standardized NASA-TLX questionnaire. At the end of each phase, the researcher administered a QUEST 2.0 questionnaire regarding the participant's perceived satisfaction, as well as a custom questionnaire to solicit general feedback from the user regarding the device and their mobility needs. Collision information and time taken were verified through video recordings of the trials. All runs were videotaped (by a research assistant) in order to capture participant observations during the navigation task. These observations, noted during the use of the device, were hypothesized to be valuable and less accessible during interviews at the end of the session, since participants were expected to have memory impairments or other cognitive limitations. Key observations were documented during playback of video recordings.

5.8 Data Analysis

5.8.1 Quantitative Analysis of Subject Performance

Visual analysis is often the primary method of analysis for SSRDs. Thus, frontal collision, wayfinding and completion time data are analyzed visually through comparison of the sample mean (μ), standard deviation (σ), and trend. The C-statistic [Try82] is used to determine the effectiveness of the treatment by determining whether there is a trend in sequential evaluation measures in terms of slope and magnitude of change. This method is chosen since it requires a minimum of only eight data points per phase, can be used with serially dependent data, and is relatively easy to compute. The logic underlying the C-statistic is similar to that underlying visual analysis, since variability in successive data points is evaluated relative to changes in slope from one phase to another [Try82]. A trend is identified when the C-statistic is high, and a negative C-statistic implies a lack of trend. The standard error (SE) of C is calculated, and C/SE is assumed to be normally distributed, with p-values based on the Z distribution.
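As a concrete illustration, the following is a minimal sketch (in Python) of how the C-statistic, its standard error and the corresponding Z score can be computed, following the formulation commonly attributed to [Try82]; the function name and the example series below are ours and purely illustrative, not study data.

    import math

    def c_statistic(series):
        # Compute the C-statistic, its standard error, and Z = C/SE for a
        # series of sequential measures (one phase, or baseline + intervention
        # data concatenated). Assumes the series is not constant.
        n = len(series)
        mean = sum(series) / n
        # Sum of squared differences between successive observations
        successive = sum((series[i] - series[i + 1]) ** 2 for i in range(n - 1))
        # Sum of squared deviations from the series mean
        deviations = sum((x - mean) ** 2 for x in series)
        c = 1.0 - successive / (2.0 * deviations)
        se = math.sqrt((n - 2) / ((n - 1) * (n + 1)))
        return c, se, c / se

    # Hypothetical example: eight baseline runs, then eight intervention runs
    baseline = [8, 5, 10, 9, 7, 12, 6, 7]
    combined = baseline + [2, 1, 1, 0, 2, 1, 3, 1]
    print(c_statistic(baseline))   # test for a trend within the baseline alone
    print(c_statistic(combined))   # re-test with intervention data appended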
The baseline data is first analyzed with the C-statistic to detect a significant trend (p<0.05). If a significant trend does not exist, the baseline data is combined with the intervention data, and the C-statistic is re-computed for the combined data. A statistically significant C-statistic value for the entire series might provide evidence for a shift in level and/or trend; however, it cannot establish that the change was caused by the intervention. The key advantage of the C-statistic is that it can be computed on relatively small data sets without loss of power; however, it is vulnerable to autocorrelation and can overestimate treatment effects [Blu84]. The C-statistic is thus used to supplement visual analysis of the data. NASA-TLX score averages are illustrated in bar graphs, although we do not show data for the individual categories (raw scores can be found in Appendix C).

5.8.2 Qualitative Analysis of Participant Observations

Participant observations (visual and audio) from video recordings of study runs and interviews were examined by the researcher with techniques similar to those used in thematic analysis [Pat02, Wan11]. Documented observations were first annotated with a representative code. Subsequently, themes or categories were created using these codes, and observations with similar codes were grouped together. Categories were then inductively and iteratively explored for inter-relationships and merged into fewer categories. Formal validation of the themes used was not carried out during analysis, but will be carried out in the future to help eliminate possible researcher bias.

5.8.3 System Performance Analysis

Missed and false detections of obstacles were noted during the trials. Video recordings and system logs were then used to identify the reasons for system errors. Incorrect wayfinding prompts and localization errors were also found through video recordings and logs and analyzed. We provide graphs showing the accuracy of the prompting system during each trial. In addition, we provide information on user responses to the prompting system, since these responses ultimately determine the effectiveness of the wayfinding system.

5.9 Efficacy Study Results

In this section, we provide details regarding the results of the efficacy study described previously for each participant. We analyze quantitative results obtained through the data collected throughout the study relating to the outcome measures. In addition, we provide quantitative and qualitative feedback obtained from participants through surveys and questionnaires, as well as verbal comments made by the participants during the study. We also provide information regarding system performance.

5.9.1 Subject Performance

For each participant, we report on the number of collisions, the distance traveled to the goal, as well as the time taken to reach the goal. We also discuss feedback provided by each participant through the questionnaires/surveys. Figure 5.2 shows sample system output for a participant (5) during a run in the intervention phase (B). As seen, the system estimates that the user is not independent (since the user does not move forward at the beginning of the run), and thus continues to prompt the user throughout the trial, since it also estimates that she is responsive (based on the observed wheelchair status after prompts are issued).
Figure 5.2 Example of system prompts for Participant 5 during phase B.

5.9.1.1 Participant 1

Participant 1 had a severe visual impairment. In addition, she could not understand some of the audio prompts during the training session, so the recordings were slightly simplified and modified to include words translated into her native language. She had severe mood swings, as indicated in her assessment, and thus her participation in the trials was highly inconsistent. While the other participants completed all trials in approximately three weeks, participant 1 completed her trials over the course of a month. She was able to propel herself in her manual wheelchair and did not have prior experience driving a powered wheelchair. Participant 1 had A-B phase ordering.

Figure 5.3 Total frontal collisions for participant 1. Without NOAH (μ=8.0; σ=2.62), with NOAH (μ=1.38; σ=0.92).

Figure 5.3 shows the frontal collisions for participant 1. Visually, there is a large discontinuity in performance between the last baseline run and the start of the intervention phase (which is a criterion for acknowledging that a mean change occurred because of the intervention [Ott86]). The mean number of collisions is lower with the intervention. Specifically, the minimum number of collisions in the baseline phase is greater than the maximum number of collisions in the intervention phase. The variance in the number of collisions also appears to be lower in the intervention phase. The C-statistic reveals that although no significant trend is found in the baseline data (Z=1.41), a significant trend is found when the intervention data is appended to the baseline data (Z=2.53, p<0.01), suggesting that the magnitude of change when the intervention is introduced is unlikely to have occurred by chance.

The results suggest that the system increased safety for participant 1. Due to her severe visual impairment, participant 1 could not see obstacles in front of the wheelchair and often drove through them when NOAH was not activated. The stopping mechanism decreased her frontal collisions. However, we found she was often unable to detect free space herself (due to her poor vision), and thus might have benefited from additional audio prompts that provided free space information. In addition, the participant initially did not understand how to drive backwards to maneuver away from obstacles (in cases where forward and sideways motions were restricted by the system or obstacles) and needed to be told by the researcher to pull the joystick towards her. However, her ability to drive backwards improved over time as she learnt how to operate the joystick. Participant 1 was also generally confused about joystick operation at times, or did not push the joystick hard enough to initiate wheelchair motion. In these cases, the researcher asked the participant which direction she wanted to move in and assisted her in operating the joystick (by telling her to push harder or pushing her hand on the joystick towards her desired direction for a few seconds). This suggests that further training or an alternate feedback mechanism (in addition to just audio prompts) might be required by some users. Additionally, the usability of the joystick interface on the wheelchair could be improved, or other interfaces could be explored.
Also, although NOAH was able to reduce the number of frontal collisions, it did not completely eliminate them, due to the presence or appearance of obstacles in the camera's blind spots.

Figure 5.4 Total length of route taken by participant 1. Without NOAH (μ=18.21m; σ=1.88m), with NOAH (μ=11.31m; σ=0.0m).

Figure 5.4 shows the length of the route taken by participant 1. There is a large discontinuity in performance between the last baseline run and the start of the intervention phase. The mean route length is lower with the intervention. The minimum route length in the baseline phase is much greater than the maximum route length in the intervention phase. The variance in the distance travelled also appears to be lower in the intervention phase. From inspection it appears that the intervention (NOAH) has an impact on the distance travelled by participant 1. A statistically significant change is found with the C-statistic (Z=2.93, p<0.01).

Without the system, the participant was found to wander in the maze, since she could not remember the specified route due to memory impairment (she also needed to be reminded of the task before every run), often revisiting previous locations. However, when the system was in use, participant 1 was found to be very responsive to prompts, often responding to instructions by echoing them or saying "yeah". On one occasion, the system issued an incorrect prompt due to a localization error (the wheelchair was estimated to be closer to a turn than it really was). It prompted her to turn left into an obstacle outside the camera's view. The participant responded by saying "No sense!" and correctly ignored the prompt. This interaction suggests that the participant saw the system as a collaborator that helped her but was also likely to make mistakes, and was thus able to engage in a shared decision-making process. The participant was also found to laugh and respond positively to prompts that contained her native language, suggesting that language can help to improve usability of the system.

Figure 5.5 Total time to reach destination for participant 1. Without NOAH (μ=1125.88s; σ=216.49s), with NOAH (μ=702.38s; σ=71.48s).

Figure 5.5 shows the completion time for participant 1. There is a large discontinuity in performance between the last baseline run and the start of the intervention phase. The mean completion time and the variance are lower with the intervention. From inspection it appears that the intervention (NOAH) has an impact on the completion time for participant 1. A statistically significant change is found with the C-statistic (Z=1.70, p<0.05). Results indicate that driving times tended to be lower in the intervention phase (except for run six in the baseline phase, when the participant tended to stop less often). This was mainly due to the fact that the participant was taking the shortest route to the destination when the system was in use, rather than wandering (as mentioned previously). In addition, by encouraging the participant to stay away from obstacles, the system was able to help the participant navigate in open spaces, thus saving time otherwise spent maneuvering out of major collisions (which the participant found difficult to do).
Due to language barriers, we were unable to get QUEST 2.0 ratings or feedback through the custom questionnaire from the participant (she said she did not understand the questions). However, the participant was able to respond to NASA-TLX questions (possibly because the questions contained simpler words that she could understand). Although she was unable to provide us with the usual ratings (0-20), she was able to provide "low/good" (0), "medium/OK" (10) and "high/bad" (20) ratings. Figure 5.6 shows her NASA-TLX average ratings.

Figure 5.6 NASA-TLX average ratings for participant 1. Possible ratings were low, medium or high demand.

Results indicate that her average ratings related to mental, physical and temporal demand were higher with the system. However, the participant's perceived performance was much better (supported by her frequent utterance of the word "good!" as she avoided obstacles with the system), and she also provided lower ratings for effort and frustration in the intervention phase. It is interesting to note that when the system was in use, the participant repeatedly said "more!" at the end of the trial, indicating through gestures that she wanted more driving time, presumably because she was less fatigued due to shorter driving times. In contrast, the participant would say "enough!" as she neared the destination when the system was not in use. Thus overall, the system possibly lowered her fatigue (effort) by ensuring safety and shorter driving times. The NASA-TLX item related to frustration included information regarding anxiety. We noticed the participant was less anxious regarding collisions with the intervention, but this might have also been due to increased familiarity with the task. It is possible that the prompts increased mental and temporal demand, since she was observed to pay close attention to prompts, often repeating after them. She did not understand what "feeling rushed" meant, and so could not provide temporal demand ratings. Instead, she provided ratings to describe how fast she felt she completed the task. The increased ratings are explained by the fact that she finished the task much faster in the intervention phase. Also, it is important to note that the participant said on many occasions that "medium is good" for items related to physical, mental and temporal demand. The fact that the majority of her ratings for those items were either low or medium implies that this participant was fairly satisfied in both phases with respect to the first three NASA-TLX items.

5.9.1.2 Participant 2

Participant 2 had used a similar wheelchair in a few previous studies, and used a manual wheelchair on a regular basis, mainly propelling himself backwards. He had A-B phase ordering.

Figure 5.7 shows the frontal collisions for participant 2. There is no visual discontinuity in performance between the phases. The mean number of frontal collisions and the variance are slightly lower with the intervention. However, the trend in the data suggests a learning effect (the participant had eliminated all collisions by the end of the baseline phase). As the participant drove the wheelchair more, he became more comfortable with the joystick operation and was found to improve his performance.
The C-statistic shows a statistically significant trend in the baseline data (Z=2.06, p<0.05); thus its usefulness in determining the effectiveness of the treatment is limited in this case.

Figure 5.7 Total frontal collisions for participant 2. Without NOAH (μ=1.13; σ=1.89), with NOAH (μ=0.0; σ=0.0).

Figure 5.8 Total length of route taken by participant 2. Without NOAH (μ=11.31m; σ=0.0m), with NOAH (μ=11.31m; σ=0.0m).

Figure 5.8 shows the length of the route taken by participant 2. No visual discontinuity is found between the phases. The mean and variance are the same in both phases. Thus, the wayfinding module did not appear to help the participant, especially since his baseline wayfinding performance was quite high (he was able to identify the goal and said that he remembered the route before every run). This shows that although the collision avoidance module might benefit all users, the need for wayfinding assistance varies between users.

Figure 5.9 Total time to reach destination for participant 2. Without NOAH (μ=434.75s; σ=199.04s), with NOAH (μ=327.38s; σ=130.22s).

Figure 5.9 shows the completion time for participant 2. There is a visual discontinuity in performance between the phases, with the intervention initially causing an increase in completion time. However, the mean completion time appears to be lower with the intervention. Also, there appears to be a trend in both phases, indicating that the participant is able to complete the task faster over time. This learning behavior is also seen in his collision avoidance performance, further suggesting that the participant is able to improve performance (in terms of safety and completion rate) over time. No statistically significant trend is found with the C-statistic in the baseline phase (Z=1.27). In addition, no statistically significant trend is found when the intervention data is appended to the baseline data (Z=1.34).

It was found that participant 2 was very motivated to learn and improve his own driving ability. He was also very enthusiastic about the trials and wanted to offer only positive feedback. He thus chose the lowest (best) score (0) for all NASA-TLX items in every trial (in both phases), so we do not analyze his ratings. His perceived level of safety in the QUEST 2.0 survey was also the same ("very satisfied") in both phases. However, we were able to acquire more informative feedback during the custom questionnaire session, in which he mentioned that he did not trust himself to drive safely and felt that he needed the anti-collision system. This is likely because participant 2 had experienced minor collisions in his manual wheelchair in the long-term care facility (while propelling himself backwards) and was thus more concerned with safety than other participants. The participant indicated during the trials and during the questionnaire session that he wanted to be able to drive faster (sometimes yawning or projecting a bored appearance), thus suggesting that acceptable driving speeds might vary between users.

5.9.1.3 Participant 3

Participant 3 had also used a similar wheelchair in a few previous studies, and used a manual wheelchair on a regular basis. He had B-A phase ordering.

Figure 5.10 shows the frontal collisions for participant 3. No visual discontinuity is found between the phases.
Although the magnitude of collisions is lower with the intervention, there was only one collision in the baseline phase (possibly a "bad driving day"). No statistically significant change is found with the C-statistic (Z=-0.25).

Figure 5.10 Total frontal collisions for participant 3. With NOAH (μ=0.0; σ=0.0), without NOAH (μ=0.13; σ=0.35).

Although the system does not seem to have a significant impact on safety, it is able to completely eliminate frontal collisions for participant 3. In a realistic environment where even one collision can be harmful, these results still suggest that the collision avoidance module can enable safer driving.

Figure 5.11 shows the length of the route taken by participant 3. There is a large discontinuity in performance between phases. The mean route length and variance are lower with the intervention. There also appears to be a learning trend in the baseline phase. From visual inspection it appears that the intervention (NOAH) has an impact on the distance travelled by participant 3. No statistically significant change is found with the C-statistic (Z=1.02).

Figure 5.11 Total length of route taken by participant 3. With NOAH (μ=11.31m; σ=0.0m), without NOAH (μ=13.92m; σ=2.31m).

Due to short-term memory impairment, participant 3 said he could not remember the specified route. His increased route lengths during the baseline phase were mainly due to a detour made at the beginning of the task. He had sufficient short-term memory to remember paths he had already traversed during a run, and thus was able to plan his route accurately as he approached the destination, without revisiting previous locations. He did learn the objective of the task over time (i.e., finding the stop sign). It is unclear whether he learnt the route over time, since his self-reports of confidence indicated that he did not, suggesting that any learning might have occurred sub-consciously. It is also possible that the apparent learning trend would have disappeared with more trials (i.e., he may simply have guessed the right direction at the first decision point in the last two good runs).

Participant 3 was found to correctly ignore incorrect prompts when he was able to see that the suggested direction led to a dead-end that did not contain the stop sign. He did appear to rely on the wayfinding prompts when he was at a decision point (a 'T', 'L' or 'Y' intersection) and felt that either direction could lead to the stop sign. When asked how he felt about the wayfinding assistance, he mentioned that he liked the just-in-time method of prompting and was happy to receive directions as long as they were not excessive and distracting.

Figure 5.12 Total time to reach destination for participant 3. With NOAH (μ=381.0s; σ=69.90s), without NOAH (μ=252.13s; σ=34.58s).

Figure 5.12 shows the completion time for participant 3. There appears to be a discontinuity between phases. The mean completion time and variance are found to be higher with the intervention. A statistically significant change is found with the C-statistic (Z=2.13, p<0.05). Although participant 3 was found to travel shorter distances with the system, the stopping mechanism was found to slow down the participant and thus caused frustration.
He said a few times, "it's not doing what I want it to do". This suggests the need for a better control mechanism, possibly providing automatic steering correction rather than the stopping behavior, although it is unclear how users would react to wheelchair motion that is different from what they expect.

Figure 5.13 NASA-TLX average ratings for participant 3. Possible ratings were 0-20, where 0 indicates low demand and 20 indicates high demand.

Figure 5.13 shows the NASA-TLX ratings (averaged over all trials in the phase) for participant 3. Although the average ratings indicate that the intervention led to an increase in task load (across all dimensions except temporal demand), no visual discontinuities were seen in the data. Thus, it is unclear whether the difference in results occurred due to the intervention or due to increased comfort with the task/system. In addition, it is important to note that these ratings were out of 20. Thus, even though the load is higher with the intervention, the overall ratings are all less than 1/3 of the highest rating. The participant did show signs of frustration (through his body language and comments during the trials) when the system was activated, however. He mentioned that he wanted justification as to why the system was preventing motion. In some cases, he was frustrated that the system would not let him move closer to objects, even when he perceived the motion to be safe. This suggests that the distance threshold should also be adaptive rather than fixed, to allow safer drivers to drive closer to obstacles, or that the system should implement a time-to-collision approach so that users are allowed to move closer to obstacles if they are moving slowly.
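As an illustration of this suggestion only (the system evaluated in this study used the fixed stopping behavior described earlier), the following sketch contrasts a fixed distance threshold with a simple time-to-collision check; the function names and threshold values are hypothetical.

    def should_stop_fixed(distance_m, threshold_m=0.7):
        # Evaluated behavior: stop whenever an obstacle is closer than a fixed distance.
        return distance_m < threshold_m

    def should_stop_ttc(distance_m, speed_mps, ttc_threshold_s=2.0, min_distance_m=0.2):
        # Suggested alternative: stop based on estimated time-to-collision, so that
        # slow drivers may approach obstacles more closely, while keeping a small
        # absolute safety margin.
        if distance_m < min_distance_m:
            return True
        if speed_mps <= 0.0:   # stationary or moving away: no frontal risk
            return False
        return distance_m / speed_mps < ttc_threshold_s

    # Example: at 0.5 m from an obstacle, a user creeping forward at 0.1 m/s would be
    # allowed to continue (time-to-collision = 5 s), whereas the fixed 0.7 m threshold
    # would already have stopped the wheelchair.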
The participant's perceived level of safety in the QUEST 2.0 survey was found to be the same ("quite satisfied") in both phases. Due to the participant's high baseline collision avoidance performance, he did not seem to be concerned about safety.

5.9.1.4 Participant 4

Participant 4 was unable to propel herself in her manual wheelchair and required total assistance to complete activities of daily living, according to her assessment. She had B-A phase ordering.

Figure 5.14 shows the frontal collisions for participant 4. There appears to be a small discontinuity between phases; however, the mean number of frontal collisions and the variance are only slightly lower with the intervention. In addition, the total magnitude of collisions in both phases is quite low, although the system maintains a lower overall number of frontal collisions. No statistically significant change is found with the C-statistic (Z=-0.87).

Figure 5.14 Total frontal collisions for participant 4. With NOAH (μ=0.13; σ=0.35), without NOAH (μ=0.25; σ=0.46).

The system was able to eliminate all but one frontal collision. The missed detection occurred due to interference by the participant, who tilted the camera slightly upwards. Although the magnitude of collisions in the baseline phase was found to be low, the participant often looked away from the direction she was driving in. This behavior can lead to more, and more dangerous, collisions in a realistic environment, thus suggesting the need for a collision avoidance module. The participant mentioned that a collision avoidance module "wouldn't hurt", and she also differentiated between the wheelchairs in the baseline and intervention phases as being "more responsive" and "more regulated", respectively.

Figure 5.15 Total length of route taken by participant 4. With NOAH (μ=11.31m; σ=0.0m), without NOAH (μ=11.68m; σ=1.06m).

Figure 5.15 shows the length of the route taken by participant 4. No visual discontinuity is found between the phases. The mean is similar in both phases; however, the system helps maintain a lower magnitude of route length with no variance. In the baseline phase, there is one run with a large route length. No statistically significant change is found with the C-statistic (Z=-0.25). The wayfinding module does not appear to significantly impact the participant's performance. However, the participant was found to be very disoriented during the run with the larger route length in the baseline phase (she said she forgot where she was going), indicating that the participant can occasionally benefit from wayfinding assistance. The system corrected the participant when she deviated from the optimal path during the intervention phase, thus ensuring shorter route lengths. However, at one point, the participant mentioned that she wanted to try a different route, and was hesitant to do so since the system was prompting her to choose the pre-specified route. This suggests the need to differentiate between errors and intentional deviations. A possible solution is to use a speech-based interface to confirm the user's intention. She also mentioned that justifications would be useful to inform her about why a direction was being prompted in the presence of alternatives, e.g., "turn left to reach the kitchen faster".

Figure 5.16 Total time to reach destination for participant 4. With NOAH (μ=252.25s; σ=94.24s), without NOAH (μ=155.63s; σ=43.55s).

Figure 5.16 shows the completion time for participant 4. There is no visual discontinuity between the phases. The mean and variance appear to be slightly lower in the baseline phase; however, there is an overall trend that suggests that driving time decreases as the participant completes more runs. No statistically significant trend is detected in the baseline phase (Z=-0.81). In addition, no statistically significant change is found when the intervention data is prepended to the baseline data (Z=0.75). The stopping behavior was found to slow down the participant in the intervention phase; however, as she drove the wheelchair more, she was able to decrease her completion time.

Figure 5.17 NASA-TLX average ratings for participant 4. Possible ratings were 0-20, where 0 indicates low demand and 20 indicates high demand.

Figure 5.17 shows the NASA-TLX ratings (averaged over all trials in the phase) for participant 4. Although the average ratings indicate that the intervention led to an increase in task load (across all dimensions), no visual discontinuities were seen in the data. Visual analysis of the raw data shows a learning trend in all dimensions during the intervention (first) phase, thus suggesting that the difference in results might have occurred due to increased comfort with the task/system.
Once again, the overall ratings are quite low (less than 1/4 of the highest rating). The participant's perceived level of safety in the QUEST 2.0 survey was found to be the same ("quite satisfied") in both phases. Similar to the previous participant, participant 4 demonstrated high baseline collision avoidance performance, and thus did not feel that safety was a concern in the test environment, although she said that she could understand how the system might help her in more hazardous environments.

5.9.1.5 Participant 5

Participant 5 used a walker and was highly mobile, but tended to wander because of the memory deficits and high disorientation noted in her cognitive assessment. She completed all sixteen runs with the same starting orientation (facing the entrance of the maze), since any other initial orientation was found to increase her anxiety. Participant 5 had A-B phase ordering.

Figure 5.18 Total frontal collisions for participant 5. Without NOAH (μ=0.5; σ=0.93), with NOAH (μ=0.13; σ=0.35).

Figure 5.18 shows the frontal collisions for participant 5. There appears to be a slight discontinuity between phases; however, the mean number of frontal collisions and the variance are only slightly lower with the intervention. In addition, the total magnitude of collisions in both phases is quite low, although the system maintains a lower overall number of frontal collisions. No statistically significant change is found with the C-statistic (Z=0.22). Note that the missed detection in the intervention phase occurred when the participant covered a lens with her hand.

Figure 5.19 Total length of route taken by participant 5. Without NOAH (μ=18.91m; σ=4.27m), with NOAH (μ=11.94m; σ=1.17m).

Figure 5.19 shows the length of the route taken by participant 5. There is a discontinuity in performance between phases. The mean route length and variance are lower with the intervention. From inspection it appears that the intervention (NOAH) has an impact on the distance travelled by participant 5. A statistically significant change is found with the C-statistic (Z=2.02, p<0.05).

Without the system, the participant wandered and revisited previous locations in the maze, since she could not remember the specified route due to memory impairment. When the system was in use, participant 5 was found to be very responsive to prompts and would often respond to instructions by clarifying (e.g., "left?") or saying "yeah". When she did not hear prompts, she would often ask "where am I going?", suggesting that the system decreased her confusion. During runs 9 and 16, errors in the prompting system resulted in detours that were corrected by subsequent prompts.

Figure 5.20 Total time to reach destination for participant 5. Without NOAH (μ=422.75s; σ=115.46s), with NOAH (μ=350.75s; σ=187.15s).

Figure 5.20 shows the completion time for participant 5. There is a visual discontinuity in performance between the phases, with the intervention initially causing an increase in completion time, possibly due to unfamiliarity with the collision avoidance system, specifically the stopping mechanism. However, the mean completion time appears to be slightly lower with the intervention.
Also, completion time appears to drop after the first run in the intervention phase. This indicates that the participant, with more experience with the system in the maze, is able to complete the task as fast as in the baseline phase. No statistically significant change is found with the C-statistic (Z=-0.43).

Figure 5.21 NASA-TLX average ratings for participant 5. Possible ratings were 0-20, where 0 indicates low demand and 20 indicates high demand.

Figure 5.21 shows the NASA-TLX ratings (averaged over all trials in the phase) for participant 5. Although the intervention appeared to increase load in four out of six dimensions, a large discontinuity was only observed in temporal demand upon visual analysis of the raw data. Smaller discontinuities were observed in all other dimensions, except frustration, which showed no visual discontinuity. These ratings make sense based on the participant's comments during the questionnaire session. While she thought of the baseline runs as simply driving tasks, she viewed the intervention runs as tasks that involved getting to a specific location within a specific time. She recognized that she was being guided to a destination. This difference in perception might have directly led to the increased levels of perceived temporal demand, and indirectly led to the increases observed in some of the other dimensions. In addition, due to her impaired short-term memory, it was unclear whether she could remember enough details regarding the completed run to accurately provide the above ratings.

Her QUEST 2.0 ratings with regard to safety were found to be the same in both phases ("quite satisfied"), once again possibly due to her high baseline collision avoidance ability.

5.9.1.6 Participant 6

Participant 6 used a walker regularly and was able to navigate around the facility independently. She had left-right confusion, and was thus provided with markers on her hands to help her identify directions. She had B-A phase ordering.

Figure 5.22 Total frontal collisions for participant 6. With NOAH (μ=0.25; σ=0.46), without NOAH (μ=3.13; σ=2.90).

Figure 5.22 shows the frontal collisions for participant 6. Visually, there is a large discontinuity in performance between phases. The mean number of collisions is lower with the intervention. The variance in the number of collisions also appears to be lower in the intervention phase. There also appears to be a decreasing trend during the baseline phase, suggesting that the participant might be improving her collision avoidance performance over time. From visual inspection it appears that the intervention (NOAH) reduces the mean number of frontal collisions for participant 6. However, no statistically significant change is found with the C-statistic (Z=1.27), possibly due to the large trend seen within the baseline phase.

The results suggest that the system increased safety for participant 6. The high number of collisions at the start of the baseline phase also suggests that the system might be creating user dependence on automated collision avoidance. Over time, the participant learnt how to avoid collisions in the baseline phase by focusing more on the task, and stated that she had to "think a lot" while driving around obstacles.
The data also suggests that NOAH might not be useful as a training tool for powered wheelchair use, since users do not actually learn how to avoid obstacles while using the system. The participant mentioned that she would want to use the anti-collision system, since she thought driving in the facility would be dangerous otherwise.

Figure 5.23 shows the length of the route taken by participant 6. No visual discontinuity is found between the phases. The mean and variance are the same in both phases. Thus, the wayfinding module did not appear to help participant 6, possibly because her baseline wayfinding performance was quite high (she was confident that she remembered the route before every run).

Figure 5.23 Total length of route taken by participant 6. With NOAH (μ=11.31m; σ=0.0m), without NOAH (μ=11.31m; σ=0.0m).

Figure 5.24 Total time to reach destination for participant 6. With NOAH (μ=513.38s; σ=126.62s), without NOAH (μ=252.13s; σ=74.13s).

Figure 5.24 shows the completion time for participant 6. The mean completion time and variance in the baseline phase appear to be lower than in the intervention phase, and there appears to be a slight visual discontinuity between phases. The longest completion time in the intervention phase is much higher than that in the baseline phase. A statistically significant change in completion time is found with the C-statistic (Z=2.90, p<0.01). We noticed faster and more consistent completion times in the baseline phase, since the participant had a tendency to drive over obstacles. On the other hand, the stopping mechanism required her to perform more joystick operations to avoid obstacles, thus slowing her down.

Figure 5.25 NASA-TLX average ratings for participant 6. Possible ratings were 0-20, where 0 indicates low demand and 20 indicates high demand.

Figure 5.25 shows the NASA-TLX ratings (averaged over all trials in the phase) for participant 6. Average ratings indicate that the mental demand, temporal demand and effort ratings were lower with the intervention, and these differences are supported by visual discontinuities in the raw data. Thus, the intervention was responsible for the mean differences. These ratings were further supported by verbal feedback from the participant. She said she had to "think too much" and "try hard" to avoid obstacles when the intervention was taken away, corresponding to higher mental demand and effort. The temporal demand can also be explained by the participant's anxiety regarding speed. She expressed that she wanted the wheelchair to slow down, possibly because she was worried about collisions when the system was not active. We also found a higher mean rating for physical demand when the system was activated, supported by a small visual discontinuity in the data. The participant did find it difficult and frustrating to operate the joystick when forward motion was stopped by the system, resulting in increased physical demand to maneuver away from obstacles during the intervention phase. Although a lower average frustration rating is found in the baseline phase, a decreasing trend is observed in the intervention phase, and an increase in frustration is seen when the intervention is taken away.
In addition, it was found that the reasons offered by the participant for her frustration were often unrelated to the study (e.g., "my pants are too big", "my diaper is too small"). Participant 6 was "very satisfied" with the safety of the wheelchair in both phases, although she expressed a high fear of collisions in the real world when she talked about car and plane accidents.

5.9.2 Custom Questionnaire Results

All participants liked the system, commenting that others might find the system useful. Except for participants 3 and 5, all said they would use the powered wheelchair (with and without the system activated) if they were given one. Thus, there was a strong desire for powered mobility. Although participant 6 was found to be quite mobile with her walker, she said she would like to be able to use the wheelchair when she was too tired to walk. Participant 3 was satisfied with his manual wheelchair, and said he would only use a powered wheelchair if it allowed him to move faster than his current mobility device. Participant 5 did not see herself as "handicapped" enough for a powered wheelchair, and was satisfied with her walker. Participant 4 expressed that she would be able to "go to all the places [she] can't currently go to" if she had a powered wheelchair. Since she is completely reliant on her caregiver to transport her around the facility, she expressed that she would like the independence that the powered wheelchair would offer her.

When asked about the effectiveness of the collision avoidance and wayfinding system, all participants were satisfied, with participant 4 stating, "it seems to be doing what it's supposed to be doing". When asked what they liked least about the wheelchair system, most responses were found to be hardware-related (relating to the commercial wheelchair) rather than software-related. Some participants expressed that they did not like the need to charge batteries. While participants 2 and 3 wanted to be able to drive faster, participants 1, 4 and 5 were satisfied with the speed, and participant 6 wanted the chair to be slowed down. This suggests that speed needs to be customized for each user. We also found that users were often frustrated by the lack of wheelchair motion when the joystick was not pushed to its furthest position. The slow speed setting of the wheelchair resulted in less power, leading to reduced sensitivity to smaller joystick movements. Participants 2 and 4 found the chair to be bulky and preferred a smaller and lower chair, while participant 3 preferred a bigger chair.

We solicited feedback to gain insight into participants' reactions to a completely autonomous wheelchair that would take them to their desired locations. Participant 5 emphatically stated, "I want to be in control!". Due to her high levels of anxiety, it is highly likely that an autonomous system would frustrate her. However, her willingness to follow instructions suggests that a prompting system that allows her to make her own decisions (such as the system described in this thesis) is well-suited to her needs and cognitive abilities. Participants 2, 4 and 6 said they would like to use an autonomous chair as long as it functioned correctly, suggesting that high system reliability is a crucial requirement of an autonomous wheelchair. Participant 3 was open to using an autonomous wheelchair, but preferred to be in control, only receiving assistance when required. We could not gain any feedback from participant 1 on this topic.

5.9.3 System Performance

For each participant, we report on system performance with respect to prompting accuracy as well as responses to system prompts. Compliance refers to user actions that agree with the system prompt, while non-compliance refers to user actions that disagree with the system prompt. No response is used to refer to situations where the user does not perform any action upon receiving a prompt. In addition, we also briefly comment on the user model's estimates of the users' independence and responsiveness.
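To make these metrics concrete, the following is a minimal sketch (not the system's actual logging or analysis code) of how prompting accuracy and response rates could be tallied from a log of prompts; the record format, labels and function name are assumptions for illustration.

    # Each logged prompt is a (was_correct, response) pair, where response is one of
    # "comply", "non-comply", or "none".
    def summarize_prompts(log):
        total = len(log)
        correct = sum(1 for was_correct, _ in log if was_correct)
        accuracy = 100.0 * correct / total if total else 0.0
        # Response rates are reported separately for correct and incorrect prompts.
        rates = {}
        for label, subset in (("correct", [r for c, r in log if c]),
                              ("incorrect", [r for c, r in log if not c])):
            n = len(subset)
            rates[label] = {resp: (100.0 * subset.count(resp) / n if n else 0.0)
                            for resp in ("comply", "non-comply", "none")}
        return accuracy, rates

    # Example: two correct prompts complied with, one correct prompt with no response,
    # and one incorrect prompt that was (correctly) not complied with.
    example_log = [(True, "comply"), (True, "comply"), (True, "none"), (False, "non-comply")]
    print(summarize_prompts(example_log))

Per-trial accuracies computed this way can then be averaged to give the mean accuracy over the intervention trials, which is reported below alongside the pooled overall accuracy.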
5.9.3.1 Participant 1

Figure 5.26 shows the prompts issued by the system to participant 1. The overall system accuracy in trials with participant 1 was 89.36%, while the mean accuracy over the 8 intervention trials was 88.98%. The minimum and maximum accuracy seen in the intervention trials were 70.15% and 100%, respectively. The highest number of errors occurred after multiple consecutive turns while entering the maze. These errors were corrected in intervention runs 2 and 3, when the localization component was re-initialized as soon as the wheelchair (driven by the user) arrived at a specified location in the maze (roughly mid-way to the destination). The errors in intervention run 6 occurred towards the end of the route due to accumulated localization error.

Figure 5.26 Prompts issued to participant 1.

Responses to all correct and incorrect prompts by participant 1 are shown in Figure 5.27 and Figure 5.28, respectively. While compliance with correct prompts is quite high, compliance with incorrect prompts is lower. Analysis of the video data reveals that participant 1 ignored or failed to respond to incorrect prompts that suggested motion in the direction of obstacles (hidden from the camera's view). In contrast, she tended to comply with incorrect prompts when she did not see obstacles blocking her path. In one run, she wanted to move forward as some correct prompts were suggesting; however, she expressed anxiety because she saw a painted black line on the floor in front of her that she thought was a crack she might fall into (she pointed to the line and gestured the action of falling down). This led to the acts of No Response observed in Figure 5.27. She only moved forward when she saw the researcher walk across the painted line to demonstrate that the floor was even.

Figure 5.27 Responses to correct prompts by participant 1.

Figure 5.28 Responses to incorrect prompts by participant 1.

5.9.3.2 Participant 2

Figure 5.29 shows the prompts issued by the system to participant 2. The overall system accuracy in trials with participant 2 was 92.96%, while the mean accuracy over 7 intervention trials was 94.51% (no prompts were issued in the last intervention run). The minimum and maximum accuracy seen in the intervention trials were 84.48% and 100%, respectively. Most errors were seen close to the end of the route due to accumulated localization error.

Figure 5.29 Prompts issued to participant 2.
Responses to all correct and incorrect prompts by participant 2 are shown in Figure 5.30 and Figure 5.31, respectively. While compliance with correct prompts was quite high, all incorrect prompts were correctly ignored. This could be due to the fact that participant 2 already had high baseline wayfinding performance, and did not actually need prompts to determine in which direction to drive. Moreover, the total number of incorrect prompts was quite low, with persistent errors occurring close to the destination (in intervention run 1) and other isolated prompting errors occurring due to time lags and small localization errors.

Figure 5.30 Responses to correct prompts by participant 2.

Figure 5.31 Responses to incorrect prompts by participant 2.

5.9.3.3 Participant 3

Figure 5.32 shows the prompts issued by the system to participant 3. The overall system accuracy in trials with participant 3 was 87.27%, while the mean accuracy over the 8 intervention trials was 87.64%. The minimum and maximum accuracy seen in the intervention trials were 63.33% and 100%, respectively. Errors in intervention run 2 occurred due to consecutive fast turns close to obstacles and were corrected by a manual re-start at the mid-way point. Errors in the last intervention run, as well as most errors in other runs, occurred at the end of the route due to accumulated localization error. Time lags in computation also caused some isolated errors.

Figure 5.32 Prompts issued to participant 3.

Responses to all correct and incorrect prompts by participant 3 are shown in Figure 5.33 and Figure 5.34, respectively. While compliance with correct prompts is quite high, compliance with incorrect prompts is lower. Similar to participant 1, participant 3 ignored or avoided responding to incorrect prompts that suggested motion in the direction of obstacles (hidden from the camera's view) or dead-ends. In contrast, he tended to comply with incorrect prompts at junction points, where he was often uncertain in which direction to drive, likely due to his poor memory.

Figure 5.33 Responses to correct prompts by participant 3.

Figure 5.34 Responses to incorrect prompts by participant 3.

5.9.3.4 Participant 4

Figure 5.35 shows the prompts issued by the system to participant 4. The overall system accuracy in trials with participant 4 was 88.68%, while the mean accuracy over the 8 intervention trials was 93.28%. The minimum and maximum accuracy seen in the intervention trials were 56.76% and 100%, respectively. Errors in intervention run 2 occurred at the end of the route due to accumulated localization error. A manual restart was required in intervention run 5 at the mid-way point. Other isolated errors occurred due to delayed prompts.

Figure 5.35 Prompts issued to participant 4.

Responses to all correct and incorrect prompts by participant 4 are shown in Figure 5.36 and Figure 5.37, respectively. Once again, while compliance with correct prompts is quite high, compliance with incorrect prompts is lower. We also note that participant 4 demonstrates compliance with incorrect prompts as often as non-compliance. Similar to participant 1, participant 4 ignored or avoided responding to incorrect prompts that suggested motion in the direction of obstacles (hidden from the camera's view) or dead-ends. She did, however, comply with incorrect prompts leading to free space.

Figure 5.36 Responses to correct prompts by participant 4.

Figure 5.37 Responses to incorrect prompts by participant 4.
5.9.3.5 Participant 5

Figure 5.38 shows the prompts issued by the system to participant 5. The overall system accuracy in trials with participant 5 was 84.46%, while the mean accuracy over the 8 intervention trials was 85.12%. The minimum and maximum accuracy seen in the intervention trials were 66.67% and 97.06%, respectively. Errors at the beginning of intervention run 3 occurred due to consecutive fast turns close to obstacles and were corrected by a manual re-start at the mid-way point. Most other errors occurred close to the end of the route due to accumulated localization error.

Figure 5.38 Prompts issued to participant 5.

Responses to all correct and incorrect prompts by participant 5 are shown in Figure 5.39 and Figure 5.40, respectively. Compliance with correct prompts is quite high, while compliance with incorrect prompts is lower. However, there were more cases where participant 5 complied with incorrect prompts than cases where she was found to be non-compliant or unresponsive. Participant 5 only ignored or avoided responding to incorrect prompts that suggested motion in the direction of obstacles (hidden from the camera's view) or dead-ends. She complied with incorrect prompts at junction points, resulting in detours during the first and last intervention runs (leading to longer route lengths) that were corrected by subsequent correct system prompts. Since most incorrect prompts issued to participant 5 were at junctions, her overall compliance with incorrect prompts was found to be very high.

Figure 5.39 Responses to correct prompts by participant 5.

Figure 5.40 Responses to incorrect prompts by participant 5.

5.9.3.6 Participant 6

Figure 5.41 shows the prompts issued by the system to participant 6. The overall system accuracy in trials with participant 6 was 78.71%, while the mean accuracy over the 8 intervention trials was 84.72%. The minimum and maximum accuracy seen in the intervention trials were 53.66% and 100%, respectively. Errors in intervention run 4 occurred due to consecutive fast turns close to obstacles and were corrected by a manual re-start at the mid-way point. Most other errors occurred close to the end of the route due to accumulated localization error.

Figure 5.41 Prompts issued to participant 6.

Responses to all correct and incorrect prompts by participant 6 are shown in Figure 5.42 and Figure 5.43, respectively. While compliance with correct prompts was quite high, almost all incorrect prompts were correctly ignored, similar to participant 2. Participant 6 had high baseline wayfinding performance, and might not have required direction prompts, thus correctly ignoring incorrect prompts even at junction points.

Figure 5.42 Responses to correct prompts by participant 6.

Figure 5.43 Responses to incorrect prompts by participant 6.

5.9.3.7 User Model Results

We found errors made by the user model in estimating the users' level of independence, resulting in prompts being issued even when they might not have been necessary (i.e., when the user was able to navigate independently). For example, in most runs, the system estimated that participants 2 and 6 were not independent; however, they reported high levels of confidence in their memory of the specified route during every run, supported by their high baseline wayfinding performance.
We also found that the system sometimes estimated users to be unresponsive in scenarios where the users were responding correctly to prompts but were not pushing the joystick far or long enough to initiate wheelchair motion.

5.10 Efficacy Study Discussion

In this section, we discuss the efficacy study results reported previously, both quantitatively and qualitatively. We discuss the implications of these analyses with respect to suggested modifications to the system and its components. We also discuss limitations of the efficacy study and provide suggestions for future work.

5.10.1 Subject Performance

Study results show that there was a decrease in the mean number of frontal collisions with the system, regardless of phase ordering. All participants performed at least as well in the intervention phase as they did in the baseline phase in the wayfinding task. In some cases, participants traveled much shorter distances when the system was used. Participants with the largest improvements in wayfinding performance (participants 1 and 5) had shorter mean completion times when the system was in use. An increase in mean completion times was seen with other participants, except for participant 2, who had a shorter mean completion time with the system.

5.10.1.1 Collision Avoidance

In the task of collision avoidance, although mean collisions were lowered for all participants, as seen in Table 5.2, we notice large differences between participants in terms of their ability. A similar observation was made in the previous anti-collision study with the IWS [HWM11]. While three participants (3, 4 and 5) had very high baseline performance, other participants benefited more from the collision avoidance module due to lower baseline performance. In addition, a learning trend was seen with one of the participants in the study (participant 2).

Vision impairments contributed largely to collisions for participant 1, since she was unable to see obstacles and free space. This suggests that the collision avoidance module is particularly useful to cognitively-impaired users with vision impairment, and can significantly improve safety for these users. Results also indicate that visually-impaired users could benefit from additional verbal prompts indicating free space.

Table 5.2 Collision avoidance performance (mean number of frontal collisions). Statistically significant changes (per the C-statistic) are marked with an asterisk.

Participant ID   Phase A (baseline, 8 runs)   Phase B (intervention, 8 runs)   Mean change between A and B
1                8.00                         1.38                             -6.62*
2                1.13                         0.00                             -1.13
3                0.13                         0.00                             -0.13
4                0.25                         0.13                             -0.12
5                0.50                         0.13                             -0.37
6                3.13                         0.25                             -2.88

Although frontal collisions were reduced, they were not completely eliminated in cases where users drove into obstacles from the side, too fast to be detected by the camera. Thus, using cameras with a wider viewing angle, or using additional cameras, would further improve safety.

5.10.1.2 Wayfinding

In the wayfinding task, we noticed that three of the six participants did not benefit much from the prompts, since their baseline performance was already quite high. We found that self-ratings of confidence with regard to the ability to navigate to the goal correlated highly with performance.

Table 5.3 Wayfinding performance (mean length of route taken, in meters). Statistically significant changes (per the C-statistic) are marked with an asterisk.
Participant ID    Mean Route Length, Phase A (baseline, 8 runs), in meters    Mean Route Length, Phase B (intervention, 8 runs), in meters    Mean Change between A and B (in meters)
1                 18.21                                                       11.31                                                           -6.90
2                 11.31                                                       11.31                                                           0
3                 13.92                                                       11.31                                                           -2.61
4                 11.68                                                       11.31                                                           -0.37
5                 18.91                                                       11.94                                                           -6.97
6                 11.31                                                       11.31                                                           0

Participants 1, 3 and 5 benefited most from the wayfinding module, as seen in Table 5.3. These participants did not usually remember the task (finding the stop sign), and when they were reminded, did not know the location of the stop sign. The absence of signage and the labyrinth-like structure of the environment might have led to the increased wayfinding challenges experienced by these participants, as suggested in [PPRT00]. Participants 2 and 6 reported high levels of confidence regarding the route and benefited the least from the wayfinding module. Participant 4 was found to be disoriented on one occasion during the baseline, when she expressed that she had temporarily forgotten where she was supposed to go.

In general, adherence to audio prompts was found to be quite high, as highlighted in the results. This finding contradicts the IWS study [HWM11], where prompting adherence was found to be low; the main reason for low adherence was stated to be the high number of prompting errors. In our study, however, we found the prompting accuracy to be high, possibly explaining the high prompting adherence. Overall accuracy of all prompts issued (n=1471) was 87.02%, and the mean prompting accuracy over all trials containing prompts was 88.92% (over 47 trials).

Although compliance with correct prompts was high across all users, we noticed a distinct difference in the rates of compliance with incorrect prompts. In particular, while users who were confident about the route (2 and 6) showed low compliance with incorrect prompts and tended to correctly ignore these prompts, participants who had poor baseline wayfinding performance (1, 3 and 5), as well as the participant who was less confident in her self-reports (4), complied more often with incorrect prompts, specifically at decision points. These results imply that participants with lower self-ratings of confidence do in fact rely more highly on the prompts for assistance, and thus are able to improve their wayfinding performance by following correct prompts. However, these participants are also more likely to comply with incorrect prompts, highlighting the need for a high level of system accuracy, specifically at decision points, to ensure effective navigation and minimize wandering. A large number of incorrect prompts could also lead to confusion and frustration among users, who might choose to ignore all prompts (including correct ones) as a result.

Based on the above analysis, an alternate reason for users in the IWS study to ignore prompts could be that they simply did not feel that they needed assistance with maneuvering around obstacles. Interviews with study participants might provide more insights on reasons for compliance and non-compliance.

5.10.1.3 Completion Time

Table 5.4 Completion times. Statistically significant results are bolded.

Participant ID    Mean Completion Time, Phase A (baseline, 8 runs), in seconds    Mean Completion Time, Phase B (intervention, 8 runs), in seconds    Mean Change between A and B (in seconds)
1                 1125.88                                                         702.38                                                              -423.50
2                 434.75                                                          327.38                                                              -107.37
3                 252.13                                                          381.00                                                              +128.87
4                 155.63                                                          252.25                                                              +96.62
5                 422.75                                                          350.75                                                              -72.00
6                 252.13                                                          513.38                                                              +261.25

Completion time was a secondary outcome measure in this study, so we do not go through an in-depth analysis.
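For reference, the phase summaries in Tables 5.2 through 5.4 reduce to simple per-phase means and their difference; a minimal sketch is shown below, where the per-run values are hypothetical placeholders and not the study data.

```python
# Sketch of how the per-phase means and mean changes in Tables 5.2-5.4 are computed.
# The per-run values below are hypothetical placeholders, not the study data.
from statistics import mean

phase_a_collisions = [9, 8, 10, 7, 8, 9, 6, 7]   # baseline, 8 runs (hypothetical)
phase_b_collisions = [2, 1, 1, 2, 1, 2, 1, 1]    # intervention, 8 runs (hypothetical)

mean_a = mean(phase_a_collisions)                # phase A mean
mean_b = mean(phase_b_collisions)                # phase B mean
mean_change = mean_b - mean_a                    # "Mean Change between A and B"
print(mean_a, mean_b, mean_change)
```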
In addition, interactions with the researcher during the trials might have increased completion times in some cases; thus, the times reported in the results do not necessarily reflect true completion times. However, we observed that, because of the design of the collision avoidance module, some participants stopped more often when the system was activated (if they tended to drive close to obstacles). Although the stopping behavior lowered the number of frontal collisions, it increased mean completion times for participants 3, 4 and 6, as seen in Table 5.4, in spite of the fact that these participants traveled shorter or similar distances with the system. Thus, although the participants took similar or shorter routes, they did not experience time savings. It is expected that a collision avoidance system will increase safety at the cost of speed (by slowing down the wheelchair in the presence of nearby obstacles); however, for participants who have high baseline collision avoidance ability, such as participants 3 and 4, this tradeoff is undesirable. Participants 1 and 5, who wandered in the baseline phase, were able to complete the navigation task faster due to largely decreased route length. In addition, these participants stopped several times during the baseline phase due to confusion and anxiety, whereas constant system prompting during the intervention phase encouraged them to continue moving, thus leading to faster completion rates.

5.10.2 Thematic Analysis of Qualitative Data

Below we discuss various themes discovered in the qualitative data acquired during the efficacy study. This data was collected through video recordings of the trials and the post-trial questionnaires, researcher observations of the users' performance and capabilities throughout the trials, as well as background information in participant files.

5.10.2.1 Prior Driving Experience

In our study, participants 3, 4 and 5 had fairly high and consistent baseline performances. Although none of them had significant experience driving powered wheelchairs, they all had significant experience driving a car at some point in their lives (revealed during trials and interviews). Participant 3 also had significant joystick experience from operating a forklift, which could explain his high baseline performance. Participants 1 and 6, on the other hand, had no previous experience driving an automobile (although they had used bicycles) and were noted to have the highest number of collisions, despite the fact that participant 6 was only mildly cognitively impaired. Participant 2 had been a truck driver in the past, and thus did have experience driving automobiles; he showed improved performance with increased use of the wheelchair. This suggests that there might be some correlation between wheelchair driving ability and experience with driving automobiles. It is difficult, however, to draw any strong conclusions, since the participants also had varying degrees and types of cognitive impairment, which might have also affected their driving ability.

5.10.2.2 Attentiveness and Mood

We noticed that mental state (particularly level of attentiveness) while driving affected the performance of participants 2 and 6. Most of the initial collisions for participant 2 were a result of inattentiveness rather than an inability to see the obstacle or determine free space. He had fewer collisions and drove faster when he was in a positive state of mind and focused (e.g., watching the path ahead rather than talking to the researcher).
He tended to drive more aggressively if he was frustrated (by the wheelchair speed) or sleepy. A similar trend was noticed with participant 6, who also required extra focus in order to successfully avoid collisions, and who sometimes tended to get lazy, saying "there's too much stuff around here" in reference to the obstacles. However, a higher variation in attentiveness was seen in participant 6. Participant 4 usually had high collision avoidance skills without the intervention; however, two collisions were noticed because she looked away to the side rather than forwards while driving. Thus, except for participant 1, who had visual impairments that prevented her from seeing obstacles clearly, collisions mainly occurred due to a lack of attention. By stopping the motion of the wheelchair in the event of an imminent collision, the system was able to draw the users' attention towards the obstacle and thus force them to navigate away from it, which they were able to do successfully. Although the system does not currently monitor the user's level of attention or frustration, future work could involve using methods such as eye-tracking or emotion recognition to acquire more information regarding driver fatigue or inattention, as in [ZJL04].

5.10.2.3 Perceptions of Safety

It is important to consider that the foam obstacles might not have been perceived as dangerous by the participants, possibly making the participants more likely to drive through them. This intuition is supported by the fact that some participants (especially 2 and 6) often tried to physically move the obstacles with their hands, implying that they could see the obstacles but were too lazy to drive around them and knew that they were light enough to remove from the path. A more realistic setting would include real-life obstacles; however, safety is a concern. We also noticed that participants' concern for safety was directly related to their collision avoidance performance. While participants 1, 2 and 6 all expressed some anxiety regarding collisions through their remarks during surveys and conversations, other participants did not feel that they needed a collision avoidance system, possibly due to their high collision avoidance performance.

5.10.2.4 Social Acceptance

Although the presence of the researcher did not seem to affect the performance of other participants, it did seem to affect the performance of participant 2. He considered improving his driving ability (including the time taken to complete the task) as a way to impress the researcher and improve social acceptance. This tendency was also apparent during the survey sessions, where he continually gave himself and the system extremely high ratings, regardless of his actual performance. Similar observations of positive ratings issued during surveys to please the researcher were reported in [WKHF10] and referred to as social desirability response bias [Fur86]. Thus, it is important to consider this bias when analyzing survey ratings. In addition, encouragement from the researcher (e.g., "good job!"), which was offered to participants (particularly 1 and 5) if they were found to be anxious or frustrated, seemed to positively impact them by increasing their motivation to continue driving. It might be beneficial to incorporate such feedback into the prompting system, as in [LHK+06].
It will be important to evaluate user performance with the system in the absence of supervision in the future, to determine the efficacy of the system in the realistic scenario where a caregiver might not be present; however, this poses safety concerns.

5.10.2.5 User Confidence and Intent

Participants 2 and 6 were always confident about the route they were asked to navigate, responding "yes" to the question "Do you remember the route to the stop sign?" and correctly identifying the first turn. These participants traveled the specified route correctly during every trial of the baseline and intervention phases, implying that they did not need any wayfinding assistance. Participant 4 responded with "I think so", which was typical of her tendency to provide modest ratings. Although she was able to navigate the route correctly most of the time, she was found to be disoriented once during the baseline phase and stated, "I realized soon after I turned that I'd made a mistake, but decided to keep going." During the intervention phase, the system corrected a deviation made by the same participant, preventing her from navigating along a longer route. In this case, however, the participant mentioned that she wanted to "try a new route", but was hesitant to disobey the system prompts. This scenario presents a challenge for the system with regard to differentiating between user disorientation and intentional deviation.

5.10.2.6 Memory and Wayfinding Abilities

It is important to note that some differences were seen in the ways in which participants navigated. Although all participants were reminded of the objective at the start of the trial, participants 1 and 5 did not retain this information and displayed wandering behaviors, often revisiting parts of the route (as seen in the large route lengths in the baseline phase). Wandering differs from wayfinding in that the person walks without having a destination in mind and without knowing where she or he is [PPRT00]. This was verified by repeated questions from participant 5 during the baseline phase regarding the purpose of the task and where she was. Participant 1 was also unable to identify the purpose of the task when asked about it during the trial. They would thus constantly stop in confusion during the baseline phase or wander aimlessly. The wayfinding prompts that were constantly issued by the system in the intervention phase encouraged them to keep moving and gave them a sense of direction, thus converting the wandering into a wayfinding task. This argument is supported by the fact that participant 5 described the task as "getting from one room to another...on time" when the intervention was introduced. In contrast, participant 3 was able to remember the objective (he actively scanned the environment looking for the stop sign) and plan more reasonable routes. His wayfinding abilities match those reported in a study with fourteen patients with mild to moderate dementia who were asked to navigate to a destination in an unfamiliar hospital setting [PRMJ95].
Compared to normal elderly subjects, the participants with dementia had poorly structured overall decision plans; however, they were able to solve well-defined problems and develop sub-plans in routine situations when the necessary information was readily available. Participant 3 often guessed the first turn in the route, since he was unable to see the stop sign from the starting location, but subsequently made (correct) turns that had higher probabilities of leading to paths containing the stop sign (e.g., he avoided the turn leading to a conspicuous dead-end without a stop sign). Thus, the increase in route length during the baseline phase was only due to the first incorrect turn. Since he could remember the task, he was motivated to continue navigating until he reached the destination without any assistance (as seen in the baseline phase), thus performing a wayfinding rather than a wandering task. The initial prompts provided by the system in the intervention phase allowed him to navigate the shortest route in every trial by encouraging him to steer in the correct direction at the first intersection. Thus, initial task reminders and subsequent wayfinding prompts at intersections would be sufficient to ensure successful navigation for this participant in this study. However, longer navigation routes might require additional reminders to prevent wandering.

Participants 2 and 6 always remembered the goal of the task and were able to wayfind successfully. Participant 4 usually remembered the route and was able to navigate along it most of the time. However, she was found to wander on one occasion when she forgot the goal of the task (during the baseline). A timely task reminder in this case might have prevented the wandering behaviour and encouraged wayfinding to the goal.

5.10.2.7 Wheelchair Speed

During collision avoidance, the stopping action of the wheelchair led to some frustration among most participants with high baseline collision avoidance ability. While participant 1 tried alternate joystick movements when the wheelchair was stopped by the collision avoidance module, participant 5 was frustrated by the stopping action when she believed that she had enough room to maneuver around the obstacle. We also found that users were often frustrated by the lack of wheelchair motion when the joystick was not pushed to its furthest position, as in [WKHF10]. The slow speed setting of the wheelchair resulted in less power, thus leading to reduced sensitivity to smaller joystick movements.

With regard to overall speed, we found that 2/6 participants wanted to be able to drive faster, while 3/6 were either satisfied or wanted the chair to be slowed down. We found that participants who wanted the wheelchair to be sped up were comparing the powered wheelchair speed to that of the manual wheelchairs they currently used, and wanted to be able to travel at least as fast as they could with their own mobility device. Participants who were satisfied or wanted slower wheelchair speeds either used only walkers (participants 5 and 6), and thus navigated fairly slowly from one location to another, or had a manual wheelchair but could not propel herself (participant 4). This suggests that acceptability of wheelchair speed is directly related to the navigation speed that the users are typically able to achieve with their existing mobility device. Speeds that are significantly higher or lower than their usual navigation speed tend to make users either frustrated or anxious.
5.10.2.8 Decrease of Confusion and Anxiety

The improved performance in wayfinding (shorter distance and faster completion rate) potentially led to the higher level of enthusiasm observed in participant 1, who expressed that she wanted more driving time when the system was activated. In contrast, higher levels of fatigue and boredom were observed during the baseline phase (through visual and verbal feedback from the participant). This suggests that wayfinding assistance might not only allow effective navigation, but also decrease fatigue, which is a key issue in manual wheelchair use, and thus improve overall quality of life. We also found lower reported anxiety when the system was in use; however, this could have been due to the fact that the user was getting more comfortable with the wheelchair and task over time. We noticed that participant 5 often asked "where am I going?" during the baseline phase and tended to wander. However, the system prompts gave her the feeling that she was trying to get to a certain location, thus decreasing her observed confusion, and she was found to question the task objective less in the intervention phase.

5.10.2.9 Need for Powered Mobility and Control

Not all participants felt the need for powered mobility, since they were able to meet their daily mobility needs with their manual wheelchairs. Participant 3 explicitly said that he would only use a powered wheelchair if it allowed him to travel faster than his manual wheelchair. Similar opinions were expressed by participant 2. Participant 5 said "I'm not handicapped!" when she was asked if she would use the powered wheelchair, and was satisfied with her walker. Participant 4, however, felt the greatest need for powered mobility, since she was unable to propel herself in her manual wheelchair and depended on her caregivers to porter her around the facility. When asked what she would do with the wheelchair, she said, "I would go to all the places I can't currently go to."

It is also interesting to note that participants with higher levels of confusion due to memory impairment (3 and 5) expressed a higher need to be in control in the open-ended questionnaires, while participants who were not confused were more willing to give up control and use an autonomous wheelchair. For example, participant 3 once expressed frustration during a collision event by stating that the wheelchair was "not doing what [he was] telling it to do" (it was not responding to forward joystick motion due to a detected obstacle within the safety distance). Participant 5 stated that she wanted to be in control of her driving and did not want an autonomous wheelchair. Participant 4 expressed that she felt restricted by the system's decisions in some cases, and possibly desired more control. Further studies with the target population would help us determine whether an individual's confusion level does, in fact, influence his/her attitude towards autonomy and the need for control.

5.10.2.10 Shared Decision-Making

Participant 1 seemed to regard the prompting system as a collaborative agent, mostly complying with prompts and only disobeying those that she felt were incorrect (e.g., when the system prompted her toward an obstacle that was hidden from the camera's view).
She interacted with the system by responding physically and verbally to the prompts, but it is unclear whether she knew that the prompts were coming from the wheelchair or from the researcher. Participant 3 found the just-in-time prompting approach to be appropriate in the test environment and was found to comply with prompts at decision points when he was unsure of which direction to drive in. However, in the presence of visual cues (e.g., dead-ends, or the absence of the stop sign in the prompted directions), he was able to plan his own route, ignoring any incorrect prompts. He often stopped to ask the researcher where he needed to go at junction points, but willingly complied with the system prompts that followed his question. He thus saw the wayfinding module as an assistant and recognized that the assistance was coming from the wheelchair, as demonstrated by his comments ("it is telling me to go..."). It is unclear whether he thought that the system was actually responding to his queries. He found the collision avoidance module less assistive and more imposing, as observed through his comments ("it's not listening to me"). These observations make sense, since the wayfinding module is adaptive and passive, while the collision avoidance module is non-adaptive and active during imminent collisions. Thus, a more adaptive strategy for collision avoidance, and methods such as automatic steering correction, might result in a system that is more enabling (by allowing motion) rather than disabling (by preventing motion).

Participant 5, similar to participant 3, sometimes questioned which direction she needed to drive towards at decision points, and complied with the prompts that followed. She complied with two incorrect prompts, which led to the two temporary deviations noticed in the intervention phase, and correctly ignored prompts directing her towards obstacles. Participant 4 was also found to comply with prompts (both correct and incorrect), especially at decision points. In fact, the prompts were found to discourage participant 4 from taking intentional detours that resulted in longer route lengths. Although compliance with prompts in these cases resulted in shorter routes, it did reduce opportunities for exploration, which might be an undesirable side-effect.

Since participants 2 and 6 did not show any difference in wayfinding performance between the two phases, it is not possible to determine whether they were actually paying attention to the prompts. Although they complied with correct prompts, they mostly ignored any incorrect prompts, possibly because they were already confident regarding their route. They did not interact with the system in any way, except when reminders that included participant 2's name were issued to him ("[First name], try finding the stop sign"). Participant 2 turned to the researcher and responded "it's over there!", gesturing towards the stop sign. It is unclear whether he thought the researcher, rather than the system, was talking to him (system prompts were recorded in the researcher's voice).

5.10.2.11 Justification of Prompts

Some users expressed that they wanted to know the reason for system actions or prompts, especially if they felt there was a better alternative.
For example, participants 3 and 5 had high baseline collision avoidance abilities, but sometimes tended to drive close to obstacles. This led to stopped wheelchair motion when NOAH was in use and confused these users, because they felt that they had enough room to manoeuvre around the obstacle and, due to their memory impairment, did not remember the reason for the stopped motion. Participants 2, 4 and 6 were also stopped in some cases where they possibly had enough room to get around the obstacle, but since they remembered the reason for the stops, they did not complain and created more room to drive around the obstacles. This shows that participants might in fact be more forgiving of restrictions placed on them by the system when they are aware of the reasons for the restriction. Thus, for users with limited short-term memory, a warning prompt telling them that they are being stopped because they are too close to the obstacle might help decrease frustration, even when the system is being overly restrictive.

The need for justification was also expressed by participant 4 with regard to wayfinding prompts. While most users who remembered the route were not opposed to using the same path every day, participant 4 felt a greater need to explore the environment, stating on one occasion that she would like to take a "more scenic route", and did not understand why the system continually prompted her to navigate along the same path. Since participant 4 has the most limited mobility in her manual wheelchair due to a lack of strength to propel it, it is possible that she wanted to use her new-found mobility to increase opportunities for independent exploration. Thus, in this case, a justification for the choice of route, along with an option to pick alternative routes, might increase satisfaction for some users.

5.10.2.12 Independent Operation of the System

Although most participants were able to navigate independently with the system, participant 1 needed a lot of physical or verbal assistance to operate the joystick on certain days and was only occasionally able to follow system prompts on her own. Thus, although the system was able to prevent her from hitting obstacles and helped her determine which direction to drive towards in order to reach the destination, her ability to drive the wheelchair completely independently is in question.

5.10.3 System Components Analysis and Refinement

Based on the results of the efficacy study, it can be concluded that all the system objectives, i.e., collision avoidance, mapping and localization, task reminding, and adaptive navigation assistance, were met. In addition, the criteria outlined were met in the following ways:

1) The system improved safety: it lowered frontal collisions for all participants.

2) The system maximized effective navigation to the goal: it maintained or improved wayfinding performance for all users through adaptive prompts.

3) The system minimized frustration: post-trial survey results indicate that no significant increase in frustration levels was caused by the system, although we did notice frustration with the stopping mechanism, which will be addressed in future work. In addition, the system performed with high overall accuracy, thus minimizing frustration caused by incorrect prompts. It was not clear whether users found the prompting to be excessive, and they did not appear to be frustrated by correct prompts.
Longer duration studies will provide further insights on acceptable prompting frequency.

We now provide a brief discussion of the performance of the overall system and individual components during the study. The Collision Detection and Prompting modules are analyzed further, since they are directly linked to the outcome measures of the study. Refinements to the modules are suggested based on the quantitative and qualitative analyses presented above.

5.10.3.1 System Set-up

Approximately half an hour of setup time was required every day to mount the laptop and camera on the wheelchair, run the required software and set up the obstacle course. For debugging purposes, a small monitor was also installed at the back of the wheelchair to ensure that the software was running properly. Once the hardware was set up at the beginning of the day, only a few minutes were required to re-start the software for each user at the beginning of the trial. We also required 30 minutes to an hour between trials to charge the laptop. In the future, a power source on the wheelchair should be engineered to facilitate commercial use.

5.10.3.2 Hardware

Although the laptop was inconspicuous, the camera's position was found to be problematic. Participants needed to be reminded to keep their hands away from the camera, and failure to do so resulted in errors in obstacle detection and localization. Alternate mounting positions should be investigated. For example, the camera could be placed over the driver's shoulder (to avoid interference), pointing slightly downwards in order to capture low obstacles; however, this placement could lead to an increased form factor. In addition, higher wheelchair speeds, along with the computational speeds of the software, will need to be investigated to improve satisfaction for users who found the wheelchair to be too slow.

5.10.3.3 Mapping

A 2-D map of the test environment was generated prior to the efficacy study and was found to be accurate. In order to control for lighting effects, the blinds were closed during all trials; however, future work will involve testing the system in varying and realistic lighting conditions. In addition, since the obstacle course was constructed with plain foam boards, artificial texture was created on the boards using colored tape to aid the vision system. The rest of the environment was unmodified. The initial and goal locations were specified on the map manually using the Rviz GUI.

5.10.3.4 Localization

The Localization module outputs position estimates once every 3 seconds, or as soon as the wheelchair status (on-route, off-route, etc.) changes, whichever happens first. Although this rate was sufficient in most cases, it caused delayed prompts in the presence of the quick, consecutive turns that were often required close to the end of the obstacle course due to its layout. Computation speeds of this module could be increased by decreasing image resolution, although this might reduce the number of matched features. Further experiments could be conducted to determine the maximum number of feature matches needed for accurate motion analysis. In addition, the use of GPUs can be investigated to achieve computation speedups.

The layout of the obstacle course proved to be a challenging environment for the Localization module due to the large number and height of obstacles. Visual odometry is calculated by tracking features in the environment.
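A minimal sketch of this frame-to-frame feature tracking is shown below. It uses OpenCV and recovers only a 2-D rigid motion between frames; this is an illustrative simplification and an assumption on our part, since NOAH's actual pipeline uses stereo imagery to estimate full 3-D motion.

```python
# Minimal sketch of frame-to-frame feature tracking for visual odometry.
# OpenCV is assumed here for illustration; the real pipeline is stereo-based
# and recovers full 3-D motion, which this 2-D sketch does not.
import cv2
import numpy as np

def estimate_frame_motion(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Track corners between two grayscale frames and fit a 2-D rigid motion."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return None  # textureless scene: nothing to track
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    good_prev = prev_pts[status.flatten() == 1]
    good_curr = curr_pts[status.flatten() == 1]
    if len(good_prev) < 10:
        return None  # too few matches (e.g. a fast turn or occlusion by a nearby obstacle)
    # Estimate rotation + translation between frames, rejecting outliers with RANSAC.
    transform, _ = cv2.estimateAffinePartial2D(good_prev, good_curr, method=cv2.RANSAC)
    return transform
```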
The most stable features for tracking are those further away from the wheelchair, since they exhibit the least displacement as the wheelchair moves through the environment. However, in several cases, the camera's view of stable features in the distance was blocked by nearby obstacles. Fast turns in front of these obstacles resulted in occlusions and large motion of features that could not be tracked in consecutive image frames. Although these issues were not present in the previous trial experiments due to the more spacious environment, they need to be addressed in order to ensure robustness in smaller and more cluttered spaces.

The issues highlighted above were found to result in large errors, especially when the users were initially oriented such that they had to perform two consecutive turns. This orientation was chosen 80% of the time (in each phase) for five out of six participants (participant 5 always faced forwards due to increased anxiety when she faced in alternate directions). In these cases, if the position estimate produced by the Localization module was found to be inaccurate (i.e., if the position error was greater than 1 m or the orientation error was greater than 40 degrees), the wheelchair position was manually re-initialized when the user arrived at a specified location in the obstacle course. Manual restarts were only required in a total of seven out of forty-eight trials (15%), with all but one participant (2) requiring one manual restart. Participant 1 required an additional restart. In the future, this re-initialization can be automated through the use of pre-registered visual landmarks, or RFID tags in problematic areas of the environment (close to intersections). Another way to improve localization accuracy is to use wheelchair encoders that provide mechanical odometry readings. Position estimates derived from mechanical odometry could then be used in combination with tracked visual features in the environment, so that the absence of sufficient visual features would no longer be problematic.

5.10.3.5 Trajectory Generation and Analysis

The Trajectory Generation and Analysis module was found to be accurate in detecting detours, upcoming turns, as well as stopped motion. The status of the wheelchair could be calculated in less than 0.5 seconds based on the Localization module output. However, the rate of output of this module was limited by the Localization module; computational speedups in the Localization module will enable faster output from subsequent modules. In addition, although the above module only analyzed the trajectory for information regarding the user's driving status, further information could in the future be obtained from the map (such as the location of doorways, walls, etc.) to enable more high-level scene understanding (e.g., an upcoming left turn at the end of a corridor). This might allow for more descriptive prompts using natural language directions, such as in [KTRR10].

5.10.3.6 Collision Detection

The Collision Detector module was able to detect obstacles in most cases. Errors can be attributed to two main reasons:

1) Hard left/right turns into obstacles. This led to occluded views of the obstacles (the obstacles appeared only in one lens and thus were not detected).

2) The camera being tilted upwards or downwards (possibly by the participant during the trial), causing the obstacles to be hidden from the camera's view.
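Before turning to how these detection errors could be corrected, the localization re-initialization criterion described above (position error above 1 m or orientation error above 40 degrees) can be expressed as a simple check. The thresholds come from the study; the function and its interface are illustrative assumptions, with the reference pose standing in for the researcher's judgment of the true wheelchair pose.

```python
# Sketch of the re-initialization criterion described above: flag the estimate as
# unreliable when position error exceeds 1 m or orientation error exceeds 40 degrees.
# Thresholds are taken from the study; the interface is an illustrative assumption.
import math

POSITION_ERROR_THRESHOLD_M = 1.0
ORIENTATION_ERROR_THRESHOLD_DEG = 40.0

def needs_manual_reinit(est_xy, ref_xy, est_heading_deg, ref_heading_deg) -> bool:
    """Compare the estimated pose against a reference pose (here, the researcher's
    judgment of the true pose) and decide whether re-initialization is needed."""
    position_error = math.hypot(est_xy[0] - ref_xy[0], est_xy[1] - ref_xy[1])
    # Wrap the heading difference into [-180, 180] before taking its magnitude.
    heading_error = abs((est_heading_deg - ref_heading_deg + 180) % 360 - 180)
    return (position_error > POSITION_ERROR_THRESHOLD_M or
            heading_error > ORIENTATION_ERROR_THRESHOLD_DEG)
```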
The first of these errors could be corrected by either using a camera with a wider viewing angle, or installing additional cameras to increase sensor coverage. In addition, other types of sensors could be integrated into the present system. The second error can be prevented by mounting the camera in a way that prevents the user from interfering with it, as mentioned previously.

The stopping behavior of the wheelchair resulted in fewer frontal collisions. However, the layout of the obstacle course sometimes required users to maneuver through tight spaces, where the stopping behavior led to an increase in the joystick movements required to navigate through the course while maintaining the pre-specified safety distance. The frustration seen among participants with high baseline collision avoidance ability suggests that the safety distance threshold might need to be customized for each participant. In addition, this threshold might also need to be adjusted based on the type of obstacle encountered. For example, a larger distance threshold might be necessary if a person is detected, while a smaller threshold might be necessary to allow users to drive up closer to dining tables and elevator buttons. The ability to recognize the type of obstacle and adjust distance thresholds has also been suggested in [HWM11]. In addition, adding warning prompts to justify the stopping action of the wheelchair might also lead to improved usability.

It would also be worthwhile to investigate alternate strategies, such as a time-to-collision approach that slows down the wheelchair as it approaches an obstacle rather than stopping it completely. An approach that does not adjust the speed, but adjusts the heading so that the wheelchair steers away from obstacles, would increase safety without compromising speed. However, it is possible that a wheelchair that steers in a direction different from the one specified by the user might lead to frustration and anxiety. Slight heading adjustments in combination with a time-to-collision approach might thus be more acceptable. Audio prompts justifying these system actions could also help to reduce or prevent confusion.

5.10.3.7 Prompting

The Prompting module was found to produce reasonable policies similar to [HVCPA10, HPJ+11]; however, a much larger number of prompting errors were found in the efficacy study than in the trial experiments. Errors in prompting occurred either due to the localization errors discussed above or due to time lags. Since the localization error was much higher than in the test environment for the trial experiments, the sensor model did not accurately represent the noise in the observations. In the future, the sensor model should account for the amount of clutter and occlusion in the specific test environment in order to accurately estimate the status of the wheelchair. Alternatively, the model can be expanded to include an observation of the root-mean-squared error output by the Localization module as a measure of confidence, and to specify an observation function based on this confidence. For example, the probability that the user is actually off-route is higher if the observed wheelchair status is "off-route" and the observed confidence of the localization estimate is high.

In a few cases, time lags caused delayed prompts, which resulted in missed or incorrect turns by the user.
Although these errors were usually corrected by the user when the prompt eventually played, faster localization updates and prompting could improve performance. Scenarios that include multiple consecutive turns might require more complex instructions, such as "turn left, then turn right". These types of prompts would need to be further investigated, since it has been found that older adults with dementia find statements containing multiple instructions difficult to follow.

5.10.3.7.1 User Model

Overall, the system seemed to choose appropriate actions based on observed user behaviors. For example, in run 16 with participant 2, the system estimated the user to be fully independent, and thus did not provide any audio prompts. On the other hand, the system found that participant 1 did not move without assistance, and thus continually prompted her until she reached the destination. The users' level of independence remained constant throughout the trial, as specified in the model. However, it was found that the system often provided wayfinding assistance even when users might not have required it (i.e., when they were possibly independent but the system estimated that they were not). This was due to two main reasons:

1) Users with high baseline wayfinding performance (2 and 6) required longer amounts of time to start the navigation task (as compared to navigating while they were already in motion) and to maneuver the wheelchair after a collision event (once again, moving from a stopped state).

2) Users with high baseline performance might have realized that the system was active (upon hearing the first prompt or due to memory from previous runs in the intervention phase) and were noticed to wait for prompts even when they possibly knew what to do, thus tricking the system into estimating a lack of independence.

In order to address 1), the model can be modified to specify smaller transition probabilities from stopped to moving states for "slow starters", corresponding to an intermediate value between "yes" and "no" for the independent variable. Incorporating this information would prevent excessive prompting to users who are able to navigate independently but require more time to initiate wheelchair motion. The POMDP can be easily extended to include more states and solved offline using the Symbolic Perseus package, which has been used to solve POMDPs with up to 50 million states [Pou05].

The dependence on the system described in 2) was noticed in participant 6, who had a spike in the number of collisions when the system was taken away and who mentioned that she did not like to "think too much" in order to avoid obstacles. In addition, both participants 2 and 6 mentioned that they would like an autonomous chair that drove automatically, suggesting that they preferred systems that provide more active assistance, possibly reducing their own physical and mental workload. Thus, although the system might be satisfying user needs and desires for decreased task demand, it might be taking away opportunities for independent decision-making. The user model could possibly benefit from caregiver knowledge of the users' true capabilities in order to provide an appropriate level of assistance while preventing excessive reliance on the system.
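The two adjustments discussed above, lower stopped-to-moving transition probabilities for slow starters and a caregiver-informed prior over independence, could be written down along the following lines. This is a minimal sketch; the state names, probabilities, and the caregiver rating field are assumptions for illustration and are not part of the current NOAH model.

```python
# Sketch of the two model adjustments discussed above, in POMDP-style terms.
# All numbers and names are illustrative assumptions, not NOAH's actual model.

# Transition probability P(moving at t+1 | stopped at t, prompt issued), by user type.
# A "slow starter" is given a lower probability of initiating motion quickly, so a
# delayed start is no longer strong evidence of dependence on prompts.
P_START_MOVING = {
    "independent":  0.8,
    "slow_starter": 0.4,   # intermediate value between fully independent and dependent
    "dependent":    0.1,
}

def initial_independence_belief(caregiver_rating: str) -> float:
    """Caregiver-informed prior over the hidden 'independent' variable.
    caregiver_rating is a hypothetical field, e.g. from an intake questionnaire."""
    return {"high": 0.9, "medium": 0.5, "low": 0.2}.get(caregiver_rating, 0.5)
```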
For example, the caregiver could initialize the probability of independence of the user to be higher in cases where the user is known to have high wayfinding abilities. The users' self-reports of confidence could also be used in the model, since their confidence levels were found to be correlated with performance.

Results indicate that the functionalities of collision avoidance and wayfinding might be independent. The main predictors of success in these two tasks might be short-term memory and attention, as described in the qualitative analysis. The POMDP could be extended to include these predictors as different variables that lead to distinct user behaviors. For example, the model could specify that users with low attentiveness are more likely to collide with obstacles, thus needing a higher level of assistance in collision avoidance. The need for free-space prompts could also be automatically estimated by the system by incorporating collision avoidance behavior in the user model; for example, users unable to determine free space are more likely to hit the same obstacle multiple times. Users with poor short-term memory are more likely to deviate from the optimal route and need directions, while those who are able to learn the route might simply require task reminders (e.g., "Find the stop sign"). In addition, users with poor memory but good short-term planning abilities might only need prompts at decision points and when they deviate, while wanderers require constant prompting to ensure successful navigation to the destination. Since wayfinding performance was found to correlate closely with the users' self-reports of confidence regarding the route, the independence variable can be initialized based on prior information obtained directly from the user.

One major limitation of the system is that user behaviors are inferred based only on wheelchair motion. In many cases, users pushed the joystick, but did not push it far enough or hold it long enough to initiate wheelchair motion. This led to lower probabilities for estimated independence and responsiveness, causing the system to issue a task reminder and re-initialize user states (responsiveness and independence) in some cases. In addition to wheelchair motion information, it is essential to incorporate information about joystick operation by recording joystick movements (through the wheelchair controller) or by tracking the user's hand using a camera overlooking the joystick. This would also allow the system to provide further assistance with joystick operation by issuing prompts such as "push the joystick further" if the system observes correct joystick motion but no wheelchair motion. In addition, audio-visual prompts that demonstrate proper joystick use could be issued.

5.10.3.7.2 Prompting Response

With regard to the modality of prompting, audio prompts appeared to be an acceptable and effective means of providing assistance. Deatherage (1972), cited in [SE99], recommends using the auditory modality when the message is simple, short, and transient, when it deals with events in time or calls for immediate action, or when the visual system is overburdened. Using audio prompts in our system allowed participants to follow instructions during the driving task, where the visual system may be overburdened. In contrast, a visual interface might distract users and lower efficiency, as observed in [SDTB08].
Difficulties in reading screens due to lighting conditions or visual impairments have also led to an increased preference for audio prompts in wayfinding systems for older adults with cognitive impairment [SFHF07]. In addition, the auditory modality allows for faster driver reaction times compared to visual displays, and does not require the driver to change his/her head or body orientation.

The need for justification of prompts described in the thematic analysis implies that existing prompts could be altered to add more context, such as "move forward to take the shortest route". Providing context might improve responsiveness and/or help the user make more informed decisions. Related work has been done to explain policies generated by Markov Decision Processes in natural language [DMG11]. Intent recognition would also be useful to differentiate between intentional stops (e.g., to converse with a resident) and stops due to confusion [GSSS02]. A speech recognition module could also be added to verify the user's intent, such as in [RPT00].

In spite of the high accuracy of, and compliance with, wayfinding prompts observed in the intervention phase, it is difficult to determine how effective the chosen prompts were based on the intervention phase data alone, since we do not know what the users' actions would have been during those trials in the absence of prompts or with an alternative prompting strategy. For example, most turning prompts were issued as the users were approaching the turn, so we cannot tell which direction the users would have driven in without the system. On the other hand, in a few cases where users approached an intersection and deviated from the route before a turning prompt was issued, the system issued a correction ("off-route") prompt, which the users complied with, implying that the prompts might have led to the improved performances seen. However, it is once again difficult to say whether the users would have corrected their direction on their own (perhaps realizing that they had made a mistake). Nevertheless, the combination of high prompting adherence and improved wayfinding performance in the intervention phase compared to baseline performance for some users does suggest that the adaptive system prompts were effective. Studies that include additional phases implementing alternate prompting strategies would help to provide further insights regarding the effect of various strategies on user performance and satisfaction.

In addition, the benefits provided by the prompting system to participants 3 and 5, who did not want powered wheelchairs, suggest that the wayfinding module could be implemented on non-powered devices such as walkers or manual wheelchairs in the future. Further studies would need to be conducted to determine prompts that would be appropriate for walker or manual wheelchair use, since the prompts in the current system were designed with a joystick interface to a powered device in mind.

5.10.4 Limitations of Efficacy Study

The test environment was static and free of safety hazards (such as sharp and hard objects), thus possibly reducing anxiety and fear of collisions and making participants more likely to drive through the foam obstacles. Future studies should test the system in more realistic environments.
Although we showed that the distances traveled were longer for some participants when the system was not used, it is important to note that the longer distances reported were specific to the maze constructed for this study in a limited amount of space. One can see that in a more realistic environment, even a single deviation from the optimal route can lead to arbitrarily longer routes, depending on the floor layout. Thus, the benefits provided by the wayfinding system (through increased timeliness and, in turn, decreased user fatigue) are likely to be underestimated in this study.

Additional phases could be added to the study to determine with greater confidence whether the intervention causes the changes in outcome measures (e.g., with an A-B-A design, we would expect to see a higher number of collisions and increased route lengths in the last baseline phase).

The NASA-TLX has not been formally validated with cognitively-impaired older adults, thus the data obtained using this methodology might not be reliable. However, in most cases, we found correlations between the users' NASA-TLX ratings and verbal comments made by the users at various times in the study, increasing score reliability. Also, due to limited human resources, the surveys were conducted by the researcher. As seen with one of the participants (participant 2), there might be a tendency to please the researcher and thus provide inflated ratings. More accurate ratings might be acquired if an individual unknown to the participant conducts the surveys.

The QUEST 2.0 surveys indicated that the users did not provide significantly different ratings between the two phases, possibly because the benefits of powered mobility far overshadowed the added benefits of the NOAH system. For users with memory impairments, it is highly unlikely that they could remember details about the previous phase, and thus they were also not able to compare their perceived safety level in the current phase to that in the previous phase. Detailed analysis of real-time feedback obtained from the users through visual observations and verbal feedback might thus be more appropriate for determining differences in user experiences between baseline and intervention phases for the target population.

It is also important to note that, with the C-statistic used for analysis, significant autocorrelation in the baseline creates an intolerable risk of Type I error (inappropriately rejecting the null hypothesis) when intervention data are added. In addition, the C-statistic only identifies whether the magnitude of change when intervention data are added to baseline data is likely to have occurred by chance alone, and does not address whether the change was caused by the intervention. It also does not address whether the change has clinical or practical significance. It is also important to consider that entirely trivial effects can be found to be statistically significant with the C-statistic if enough data points are collected. Note that this predicament is not a unique limitation of the C-statistic and is generally true of all statistical analyses. We thus use visual inspection as the primary form of analysis, and only use statistical analysis as a supplement.

In order to truly assess the impact of the system on the users' independence and mobility, longer duration user studies are needed.
Benefits as well as weaknesses of the system will become more evident as the system is used for longer periods of time in the users' natural environment. Due to scheduling constraints of participants and the researcher, limited laptop battery life (3 hours), and the availability of only one powered wheelchair, a maximum of three runs could take place per day. Testing with more participants and for longer durations would require additional researchers to run the study, and multiple powered wheelchairs set up with the NOAH system. Adding an onboard power source that the laptop can draw from would also be desirable for future deployment and would eliminate the need for recharging between trials.

The small number of participants makes it difficult to generalize the results found in this study to the larger population of older adults with cognitive impairment. This is a common pitfall of SSRDs. In addition, the large amount of variation in functional abilities observed in this population implies that the system needs to be tested with several users to identify areas for further improvement. Several residents in the LTC facility were identified by caregivers as good candidates who would benefit from the system, and these residents also expressed keen interest in participating in the study upon observing intelligent wheelchair use by other residents; however, they could not be recruited due to a lack of consent from SDMs. Since SDMs ultimately decide on powered wheelchair use by the target population, it is essential that study findings on the potential benefits of intelligent wheelchairs are conveyed to them. This dissemination of knowledge can increase the number of test users for future studies, and eventually allow operation of the intelligent wheelchair by a larger number of users. Other constraints of the efficacy study are described in Appendix D.

Chapter 6: Challenges and Future Work

Several challenges lie ahead in designing intelligent wheelchairs for cognitively impaired older adults. Major issues include the high cost of powered mobility and clinicians' acceptance of the technology. Manufacturers' liability, the need for ongoing technical support, and difficulties in obtaining reimbursements (due to the lack of sufficient evidence for improvement in powered mobility outcomes) present further challenges to intelligent wheelchair adoption. In addition, there are several issues related to users' attitudes towards assistive technologies that must be overcome. For example, the stereovision camera and other sensors might lead to stigmatization for users, resulting in abandonment of the technology. Studies such as the one in this dissertation will allow us to identify and address these concerns in an effective manner.

Several technical issues also need to be resolved before NOAH can be deployed. These can be explored in five main areas.

1. Power: An obvious issue is the need to provide additional power to charge the laptop battery. An onboard power supply or laptops with longer battery life will be required to ensure a minimal amount of recharging of laptop batteries.

2. Speed: The wheelchair must be able to travel at faster speeds for increased acceptability, while still being safe. This will require the Collision Detection and Path Planning modules to respond quickly to static and dynamic objects as they appear. Stereo processing and localization software are currently bottlenecks that limit high computational speed.
While stereo processing can be done in hardware, as in [HWM11], in order to improve runtimes, most state-of-the-art vision-based localization methods are still too slow for systems that involve real-time human interaction.

3. Safety: Due to the high vulnerability of the population, it is essential to improve the system so that it has a 100% success rate in detecting imminent collisions and avoiding them. This implies that the sensor must be able to perform effectively in various conditions (including areas with low lighting and no texture). While projected patterns can create artificial texture and improve performance in the presence of textureless regions, dimly lit environments still present a major challenge for stereovision sensors. A system that is truly safe will thus require additional types of sensors as backup mechanisms in case of camera failures.

4. Complexity: As the test environment grows, the map increases in size and complexity. High amounts of clutter and occlusion in the environment also pose challenges for collision detection. The system thus needs to be tested in multiple environments before it can be deployed in a long-term care setting. For example, the system cannot currently handle drop-offs (e.g., stairs), which might be present in certain indoor test environments. These issues must also be resolved if the system is ever taken to outdoor environments, where a much higher degree of sensitivity will be required to detect curbs and fast-moving vehicles. For outdoor environments, the system could also be modified to use GPS and satellite images. GPS is fairly reliable in outdoor settings, and using existing satellite images of the environment would eliminate the need to pre-construct maps if GPS coordinates of desired locations are known. Thus, the localization and path planning modules could simply be replaced with existing techniques used in Google Maps, for example. Alternatively, the vision-based SLAM method used in this dissertation, which has been shown to be robust in outdoor settings, could be used; however, this would require initial construction of the map, which would be time-consuming.

5. Compatibility: The current system uses a custom-made controller to interface with the commercial wheelchair used. In order for the system to be easily ported to other commercial wheelchairs, universal controllers that are able to interface with any powered wheelchair will become necessary.

In order to complete the required revisions to the various components of the wheelchair as outlined in section 5.10.3, and to tackle some of the deployment issues outlined above, such as safety and speed, the following avenues of research are recommended.

6.1 Collision Detection

Future work should involve investigating an alternate control strategy that allows users to keep moving in the event of an imminent collision, while ensuring safety. Possible strategies could include time-to-collision and/or steering correction approaches. Since building complete working prototypes for testing, and running multiple user studies with these systems, might not be feasible due to time and resource constraints, wizard-of-oz type studies could be carried out with tele-operated wheelchairs to quickly test different control strategies. Additionally, virtual reality environments can be used to simulate real-world powered wheelchair use, as in [ATCRB12]. Results from these studies can then be used to inform prototype development.
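As a concrete starting point for such prototypes, the sketch below shows one way a time-to-collision strategy could scale the user's commanded speed rather than stopping the wheelchair outright. The thresholds, the linear ramp, and the interface are assumptions for illustration, not a tested design.

```python
# Sketch of a time-to-collision speed limiter, one of the candidate control
# strategies mentioned above: slow the wheelchair as an obstacle approaches
# instead of stopping it outright. Thresholds are illustrative assumptions.

MIN_TTC_S = 1.0    # below this time-to-collision, stop completely
SAFE_TTC_S = 3.0   # above this, apply no speed reduction

def speed_scale(distance_m: float, closing_speed_mps: float) -> float:
    """Return a factor in [0, 1] to multiply into the user's commanded speed."""
    if closing_speed_mps <= 0:           # moving away from (or parallel to) the obstacle
        return 1.0
    ttc = distance_m / closing_speed_mps
    if ttc <= MIN_TTC_S:
        return 0.0
    if ttc >= SAFE_TTC_S:
        return 1.0
    return (ttc - MIN_TTC_S) / (SAFE_TTC_S - MIN_TTC_S)   # linear ramp in between
```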
Increased overall wheelchair speeds through the use of alternate control strategies must be balanced by earlier detection of collisions to ensure safety. Stereo processing can be carried out in hardware, as is done in stereovision cameras provided by Focus Robotics. Other methods of increasing computational speed should also be explored. In addition, detection of textureless objects should be improved through the use of projected light as in the Kinect camera. Cameras with wider viewing angles should be used to improve sensor coverage, and additional (cheap) sensors such as bump sensors should be investigated for use as failsafe backup mechanisms. The collision avoidance method should also be extended to detect drop-offs by detecting changes in elevation as in [CS07].

6.2 Path Planning

The Localization module needs to run at a faster rate to prevent the prompting delays seen in the study, and to allow faster driving in the future. Most state-of-the-art vision-based localization methods are too slow for real-time use in systems that interact with users. However, recent work with Graphics Processing Units (GPUs), which allow parallel execution of computationally intensive operations, has shown promise in providing large computational speed-ups. Localization accuracy can be improved by using additional information acquired from wheelchair encoders. However, manually installing these encoders could be tedious and expensive. A better alternative would thus be to use cheap inertial measurement units (IMUs) or Wiimotes, which can provide accelerometer and gyroscope data to supplement motion data acquired through the camera. In addition, pre-registered visual landmarks in various parts of the environment can be used to correct location estimates. Beyond simple turning prompts, the system could also perform high-level scene analysis to issue prompts that include information about the environment, e.g., "turn left at the end of the hallway". Several methods have been implemented to annotate maps with semantic information, such as [Kui00].

6.3 Prompting

Results suggest that the system could benefit from a richer user model. A useful research direction would be to use the video data captured during the trials as input to machine learning techniques in order to discover user-specific and general behavior trends. These behaviors could then be encoded in the model in a more data-driven manner, rather than through the current method, which involves manual specification of the model. Natural language can be used to provide directions as well as justifications for prompts, as shown in [DMG11, KTRR10]. This might improve user understanding of system prompts by providing context. Different types of prompts generated through these methods could be tested with the target users in simulated driving environments before they are implemented in the real system. The timing of prompts is also a key issue that needs to be investigated. While users in the efficacy study seemed to find the just-in-time prompts effective, more experiments are required to determine optimal prompting times and frequencies. In addition, issuing earlier prompts might be necessary for users with delayed reaction times.

6.4 User Studies

Although most users did not show learning trends, one of the participants was found to improve his driving performance over time. This suggests that some users might benefit from increased training time.
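As a simple illustration of how such a learning trend might be screened for across runs, the sketch below fits a least-squares line to a per-run outcome such as the number of frontal collisions. The data shown are hypothetical, and this is only an illustrative screen, not the analysis used in the efficacy study.

import numpy as np

def learning_trend(metric_per_run):
    """Fit a least-squares line to a per-run outcome measure and return its slope and intercept.

    A negative slope for collisions or completion time would suggest improvement across
    successive runs; this is an illustrative check only, not a formal SSRD analysis.
    """
    y = np.asarray(metric_per_run, dtype=float)
    x = np.arange(1, len(y) + 1)                 # run index: 1, 2, ..., n
    slope, intercept = np.polyfit(x, y, 1)       # least-squares linear fit
    return slope, intercept

# Hypothetical example: frontal collisions per run for one participant over 8 baseline runs.
slope, _ = learning_trend([4, 3, 4, 2, 2, 1, 1, 0])
print(f"slope = {slope:.2f} collisions per run")  # a negative slope indicates fewer collisions over time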
A future study could allow residents to drive with the NOAH system in a realistic environment during certain times of the day for a period of a month or more. During this time, quantitative observations could be collected by the researcher to identify learning trends, and qualitative feedback could be solicited regarding issues such as acceptability and usefulness, which are difficult to interpret in studies as short as the one in this dissertation. In addition, for users who are capable of learning, NOAH could potentially serve as a training tool for regular powered wheelchair use; however, results also showed potential for increased dependence on the system. By automatically preventing collisions, the system could, in fact, be denying the user the opportunity to learn the skills required to ultimately achieve independent mobility [Dur02]. Thus, further research should be conducted to ensure that the assistive technology achieves its goal of increasing independence rather than creating additional dependence.

Other user studies could evaluate different collision avoidance and prompting strategies through the use of virtual reality driving simulators or Wizard-of-Oz systems. Testing through these methods would allow the researcher to set aside hardware and engineering issues that are often time-consuming to resolve, and instead focus on improving the feedback interface. Performance can be measured quantitatively with regard to completion times and number of collisions. In addition, levels of confusion and frustration can be assessed through standardized outcome questionnaires and thematic analysis of visual and verbal observations throughout the study. We hope that continued development and testing of the system will help refine user needs and allow us to create an intelligent wheelchair that truly improves the quality of life of older adults with cognitive impairment.

Chapter 7: Conclusion

In this dissertation, we designed a new vision-based system for collision avoidance and wayfinding for powered wheelchairs. The system was tested with the target population, older adults with cognitive impairment, since this population is currently not allowed to drive regular powered wheelchairs. We showed through an efficacy study that the system is able to improve safety by reducing the number of frontal collisions. The system is also able to improve wayfinding performance in some cases by prompting the user to navigate along the shortest route. We found that collision avoidance performance was correlated with attentiveness and mood, while wayfinding performance was correlated with memory and self-reported user confidence regarding the route. We also found overall prompting accuracy to be high. While adherence to correct prompts was high for all users, we found that users who were less confident about the route tended to rely on the system more, thus adhering to incorrect prompts more often, while confident users correctly ignored these prompts. A combination of quantitative and qualitative data collection and analysis allowed us to gain a holistic understanding of the system and the target population.

We conclude this dissertation by addressing the research questions posed initially in section 1.3.

1) How does NOAH impact safety during navigation with a powered wheelchair by the user, through vision-based collision detection?

The number of frontal collisions is lower when the system is activated, regardless of the phase ordering.
However, due to the position of the camera, side and rear collisions still need to be addressed in future work to eliminate all collisions.

2) How does NOAH impact the users' ability to navigate to a specified location, with respect to time and distance travelled, through adaptive audio prompts?

The system is able to maintain or improve wayfinding performance for all users tested, and ensured that users always navigated along the shortest route. However, the system did increase completion times for 3/6 users due to its stopping behavior, and led to frustration in users with high baseline collision avoidance abilities, suggesting that alternate collision avoidance strategies should be investigated.

3) How well does NOAH meet users' needs in terms of satisfaction and usability?

All participants in the efficacy study liked the system and felt that it achieved its objectives. The main usability issues concerned wheelchair speed and joystick operation. Participants were also frustrated when the system stopped them in scenarios that they perceived to be safe to navigate through.

4) What types of errors occur while detecting and avoiding collisions?

Detection errors were due to glare from windows, occlusions, and interference with the camera by users. Window detection could be performed to ignore glare. The presence of occlusions suggests the need for cameras with wider viewing angles, or multiple cameras, to increase coverage. In addition, user interference with the camera suggests that a better mounting location is required.

5) What types of errors occur while providing navigation prompts?

Errors in navigation prompts were caused by localization errors resulting from fast turns and camera interference by the user. In areas where turns were required in quick succession, delayed prompts resulted in detours. In addition, prompts were often issued when the user did not need them, because intentional stops by users were interpreted by the system as wayfinding errors.

6) What future improvements need to be made to increase system performance?

Increased sensor coverage is required to prevent collisions in all directions. Wheelchair position estimates can be corrected at regular intervals using pre-registered landmarks. In addition, wheel encoders or inertial measurement units can be used to improve localization accuracy by providing additional odometry measurements. The computational speed of the system must be increased to prevent delayed prompting; alternatively, prompts must be modified to include multiple instructions (e.g., "turn left, then turn right") for quick, consecutive turns. Observations from the efficacy study and additional sensor input (e.g., joystick motion) can be used to refine the user behavior model in order to produce more effective prompting strategies.

7) What future improvements need to be made to increase user satisfaction?

Some participants required justification for the stopping action of the wheelchair; appropriate prompts could thus be added to the system (e.g., "you cannot move forward since there is an obstacle in front of you"). To reduce frustration caused by blocked wheelchair motions, the system could be modified to automatically adjust the wheelchair's heading to steer away from detected obstacles. In addition, adaptive distance thresholds could be implemented to allow users to drive closer to obstacles in certain situations.
Haptic and/or visual feedback could also be added, and other interfaces can be explored. Finally, increasing the computational speed of the system will allow users to drive faster, while still ensuring safety.

The efficacy study discussed in this thesis has provided key insights into the possible benefits of intelligent wheelchairs for older adults with cognitive impairments. It is the first study, to our knowledge, to investigate both the collision avoidance and wayfinding performance of cognitively impaired older adults during powered wheelchair navigation. Our results demonstrate the high diversity of the target population, and highlight the need for customizable assistive technologies that account for the varying capabilities and requirements of the intended users. By improving collision avoidance and wayfinding performance, the system has shown promise in increasing independent mobility for a population that is currently denied powered wheelchairs due to safety concerns. Although further research and development is necessary before NOAH is clinically and commercially available, the results presented in this dissertation could play a key role in informing future intelligent wheelchair design. Improvements in computational speed as well as joystick usability will help improve user performance and satisfaction. Further user studies will help refine user needs and hopefully allow us to increase the mobility and independence of several elderly residents.

The research described in this thesis is of interest to a broad, interdisciplinary audience and has several implications. Our findings suggest that successful design and development of assistive technology, such as intelligent wheelchairs, requires collaboration among computer scientists, engineers, caregivers, clinicians and end users, given the multi-faceted nature of the research problem. A variety of research techniques should also be employed in order to achieve a comprehensive understanding of user needs and the role of technology in fulfilling these needs. Testing with real users in realistic scenarios is imperative to ensure usability and effectiveness. Independent groups of researchers working on similar problems should be encouraged to develop and evaluate different aspects of the technology in parallel, and to share key research findings, in order to facilitate cross-disciplinary learning that overcomes geographic barriers. This interdisciplinary research will help new and existing researchers with different backgrounds to communicate with each other more effectively, resulting in novel solutions that can benefit the intended users.

References

[Alt10] K. Alton, "Dijkstra-like Ordered Upwind Methods for Solving Static Hamilton-Jacobi Equations," Ph.D. thesis, Department of Computer Science, University of British Columbia, Vancouver, 2010.

[AM06] K. Alton and I. M. Mitchell, "Optimal Path Planning under Different Norms in Continuous State Spaces," Proceedings of IEEE International Conference on Robotics and Automation, pp. 866-872, 2006.

[APSL08] J. Aulinas, Y. Petillot, J. Salvi and X. Lladó, "The SLAM problem: a survey," Artificial Intelligence Research and Development: Proceedings of the 11th International Conference of the Catalan Association for Artificial Intelligence, T. Alsinet, J. Puyol-Gruart and C. Torras (Eds.), IOS Press, Amsterdam, The Netherlands, pp. 363-371, 2008.
[ASAM05] B. Andrea. L. Sauro. M. Andrea, V. Massimo, \u00E2\u0080\u009CNavigation system for a smart wheelchair,\u00E2\u0080\u009D Journal of Zhejiang University - Science A, vol. 6, no. 2, pp 110-117, 2005. [ATCRB12] P. S. Archambault, S. Tremblay, S. Cachecho, F. Routhier, and P. Boissy, \u00E2\u0080\u009CDriving performance in a power wheelchair simulator Disability and Rehabilitation,\u00E2\u0080\u009D Assistive Technology, vol. 7, no. 3, pp. 226-223, 2012. [Ayr08] L. Ayres. (2012, 15 March). \u00E2\u0080\u009CThematic coding and analysis\u00E2\u0080\u009D, The Sage Encyclopedia of Qualitative Research Methods. [Online]. http://www.sageereference.com.myaccess.library.utoronto.ca/research/Articl e_n451.html. [BBCK02] Bourret, E. M., Bernick, L. G., Cott, C. A., & Kontos, P. C, \u00E2\u0080\u009CThe meaning of mobility for residents and staff in long-term care facilities,\u00E2\u0080\u009D Journal of Advanced Nursing, vol. 37, no. 4, 338-345, 2002. [BE00] M. G. Bigel and C. G. Ellard, \u00E2\u0080\u009CThe contribution of nonvisual information to simple place navigation and distance estimation: an examination of path integration,\u00E2\u0080\u009D Canadian Journal of Experimental Psychology, vol. 54, pp. 172-184, 2000. [Bel57] R. Bellman, \u00E2\u0080\u009CA Markovian Decision Process,\u00E2\u0080\u009D Indiana Univ. Math, vol. 6, no. 4, pp. 679\u00E2\u0080\u0093684. [BFMGM07] M. C. Bourbonniere, L. M. Fawcett, W. C. Miller, J. Garden, and W. B. Mortenson, \u00E2\u0080\u009CPrevalence and predictors of need for seating intervention and mobility for persons in long-term care,\u00E2\u0080\u009D Canadian Journal on Aging, vol. 26, no. 3, pp. 195-204, 2007. [BH84] D.H. Barlow and M. Hersen, Single care experimental designs: Strategies for studying behavior change, second edition. New York: Allyn & Bacon, 1984. [BL99] D. A. Brechtelsbauer and A. Louie, \u00E2\u0080\u009CWheelchair use among long-term care residents,\u00E2\u0080\u009D Annals of Long-Term Care, vol. 7, no. 6, pp. 213-220, 2003. [Blu84] C. J. Blumberg, \u00E2\u0080\u009CComments on \u00E2\u0080\u0098A simplified timeseries analysis for evaluating treatment interventions,\u00E2\u0080\u0099\u00E2\u0080\u009D Journal of Applied Behavior Analysis, 17, pp. 539-542, 1984. 183 [BR97] S. Borson and M.A. Raskind, \u00E2\u0080\u009CClinical features and pharmacologic treatment of behavioral symptoms of Alzheimer's disease,\u00E2\u0080\u009D Neurology, vol. 48, no. 6, pp. 17-24, 1997. [Bri03] C. Brighton, \u00E2\u0080\u009CRules of the road,\u00E2\u0080\u009D Rehabilitation Management, vol. 16, no. 3, pp. 18-21, 2003. [BWA + 02] J. Baus, R. Wasinger, I. Aslan et al, \u00E2\u0080\u009CAuditory perceptible landmarks in mobile navigation,\u00E2\u0080\u009D Proceedings of the 12th International Conference on Intelligent User Interfaces, Honolulu, Hawaii, USA, pp. 302-304, 2007. [CB88] D. L. Chute and M. E. Bliss, \u00E2\u0080\u009CProsthesis ware: Personal computer support for independent living,\u00E2\u0080\u009D http://www.homemods.org/library/life- spadprosthesis.html, 1988. [CBF11] W. Chiu, U. Blanke and M. Fritz, \u00E2\u0080\u009CImproving the Kinect by Cross-Modal Stereo,\u00E2\u0080\u009D Proceedings of British Machine Vision Conference, pp. 116.1- 116.10, BMVA Press, 2011. [CCD + 01] A. Corfman, R. A. Cooper, M. J. 
Dvorznack et al., \u00E2\u0080\u009CA video-based analysis of \u00E2\u0080\u0098trips and falls\u00E2\u0080\u0099 during electric powered wheelchair driving,\u00E2\u0080\u009D presented at RESNA Annual Conference, Reno, NV, 2001. [CPCJA05] R. Ceres, J.L. Pons, L. Calderon, A.R. Jimenez and L. Azevedo, \u00E2\u0080\u009CA Robotic Vehicle for Disabled Children,\u00E2\u0080\u009D IEEE Engineering in Medicine and Biology, pp. 55-63, 2005. [CPW + 10] Y. Chang, S. Peng, T. Wang et al, \u00E2\u0080\u009CAutonomous indoor wayfinding for individuals with cognitive impairments,\u00E2\u0080\u009D Journal of NeuroEngineering and Rehabilitation, vol. 7, no. 45, 2010. doi:10.1186/1743-0003-7-45. [Cre07] J. W. Creswell, Qualitative Inquiry and Research Design: Choosing Among Five Approaches (2nd ed.), Sage Publications Inc., Thousand Oaks, California, 2007. [CS07] J. Coughlan and H. Shen, \u00E2\u0080\u009CTerrain Analysis for Blind Wheelchair Users: Computer Vision Algorithms for Finding Curbs and Other Negative Obstacles,\u00E2\u0080\u009D Proceedings of Conference and Workshop on Assistive Technology for People with Vision and Hearing Impairments, Granada, Spain, 2007. [CSG + 09] A. A. Cyr, A. Stinchcombe, S. Gagnon et al., \u00E2\u0080\u009CDriving difficulties of brain- injured drivers in reaction to high-crash-risk simulated road events: A question of impaired divided attention?,\u00E2\u0080\u009D Journal of Clinical and Experimental Neuropsychology, vol. 31, no. 4, pp. 472-482, 2009. [DCK94] D. Dawson, R. Chan, and E. Kaiserman, \u00E2\u0080\u009CDevelopment of the power- mobility indoor driving assessment for residents of long-term care facilities: A preliminary report,\u00E2\u0080\u009D Canadian Journal of Occupational Therapy, vol. 61, no. 5, pp. 269-276, 1994. [DF05] T. Dutta, and G. R. Fernie, \u00E2\u0080\u009CUtilization of ultrasound sensors for anti- collision systems of powered wheelchairs,\u00E2\u0080\u009D IEEE Transactions on Neural Systems & Rehabilitation, vol. 3, no. 1, pp. 24-32, 2005. [DKC06] D. Dawson, E. Kaiserman-Goldenstein, R. Chan and J. Gleason, \u00E2\u0080\u009CPower- Mobility Indoor driving assessment manual,\u00E2\u0080\u009D 2006. 184 [DKCG06] D. Dawson, E. Kaiserman-Goldenstein, R. Chan, and J. Gleason. (2011, 11 December). Power-Mobility Indoor Driving Assessment Manual. [Online]. http://fhs.mcmaster.ca/powermobility/pida.htm. [DMG11] T. Dodson, N. Mattei and J. Goldsmith, \u00E2\u0080\u009CA Natural Language Argumentation Interface for Explanation Generation in Markov Decision Processes,\u00E2\u0080\u009D Proceedings of Second International Conference on Algorithmic Decision Theory, pp. 42-55, 2011. [DMLAW02] L. Demers, M.Monette, Y. Lapierre, D.L. Arnold and C. Wolfson, \u00E2\u0080\u009CReliability, validity, and applicability of the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST) for adults with Multiple Sclerosis,\u00E2\u0080\u009D Disability and Rehabilitation, vol. 24, no. 1-3, pp. 21-30, 2002. [Dom05] E. Domholdt, \u00E2\u0080\u009CChapter 10 - Single-System Design,\u00E2\u0080\u009D Rehabilitation Research: Principles and Applications, 3rd ed. St. Louis, Missouri, United States of America: Elsevier Saunders, pp. 135-143, 2005. [Dur02] J. Durkin, J, \u00E2\u0080\u009CThe need for the development of a child led assessment tool for powered mobility users,\u00E2\u0080\u009D Technology and Disability, vol. 14, no. 4, pp. 163-171, 2002. [DWS02] L. Demers, R. Weiss-Lambrou and B. 
Ska, \u00E2\u0080\u009CThe Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST 2.0): An overview and recent progress,\u00E2\u0080\u009D Technology and Disability, vol. 14, pp. 101-105, 2002. [DWWSW99] L. Demers, R. Wessels, R. Weiss-Lambrou, R. Ska and L. Witte, \u00E2\u0080\u009CAn international content validation of the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST),\u00E2\u0080\u009D Occupational Therapy International, vol. 6, no. 3, pp. 159-175, 1999. [EHE+12] F. Endres, J. Hess, N. Engelhard et al., \u00E2\u0080\u009CAn Evaluation of the RGB-D SLAM System,\u00E2\u0080\u009D Proceedings of the International Conference on Robotics and Automation, St. Paul, MA, 2012. [ESL06] P. Elinas, R. Sim, and J. J. Little, \u00E2\u0080\u009C\u00CF\u0083SLAM: Stereo Vision SLAM Using the Rao-Blackwellised Particle Filter and a Novel Mixture Proposal Distribution\u00E2\u0080\u009D, Proceedings of the Internatioanl Conference on Robotics and Automation, Orlando, Florida, pp. 1564-1570, 2006. [FFM75] M.F. Folstein, S.E. Folstein, and P.R. McHugh, \u00E2\u0080\u009CMini-mental State, A practical method for grading the cognitive state of patients for the clinician,\u00E2\u0080\u009D Journal of Psychiatric Research, vol. 12, pp. 189-198, 1975. [FG03] R. H. Fuchs and T. A. Gromak, \u00E2\u0080\u009CWheelchair use by residents of nursing homes: Effectiveness in meeting positioning and mobility needs,\u00E2\u0080\u009D Assistive Technology, vol. 15, no. 2, pp. 151-163, 2003. [FLS00] L. Fehr, W. E. Langbein and S. B. Skaar, \u00E2\u0080\u009CAdequacy of power wheelchair control interfaces for persons with severe disabilities: A clinical survey,\u00E2\u0080\u009D Journal of Rehabilitation Research and Development, vol. 37, no. 3, pp. 353\u00E2\u0080\u009360, 2000. [Fur86] A. Furnham, \u00E2\u0080\u009CResponse bias, social desirability and dissimulation,\u00E2\u0080\u009D Personality and Individual Differences, vol. 7, no. 3, pp. 385-400, 1986. [GBG05] J. Goodman, S. A. Brewster and P. Gray, \u00E2\u0080\u009CHow can we best use landmarks to support older people in navigation?,\u00E2\u0080\u009D Behavior and Information Technology, vol. 24, pp. 3-20, 2005. 185 [GSB06] G. Grisetti, C. Stachniss and W. Burgard, \u00E2\u0080\u009CImproved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters,\u00E2\u0080\u009D IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34-46, 2006. [GSGW99] L. Demers, B. Ska, F. Giroux and R. Weiss-Lambrou, \u00E2\u0080\u009CStability and reproducibility of the Quebec User Evaluation of Satisfaction with assistive Technology (QUEST),\u00E2\u0080\u009D Journal of Rehabilitation Outcomes Measurement, 3(4), pp. 42-52, 1999. [GSSS02] D. Garlan, D. Siewiorek, A. Smailagic and P. Steenkiste, \u00E2\u0080\u009CProject Aura: Toward Distraction-Free Pervasive Computing,\u00E2\u0080\u009D IEEE Pervasive Computing, vol. 21, no. 2, pp. 22-31, 2002. [HAB + 10] W. Honore, A. Atrash, P. Boucher et al., \u00E2\u0080\u009CHuman-Oriented Design and Initial Validation of an Intelligent Powered Wheelchair,\u00E2\u0080\u009D presented at RESNA Annual Conference, Las Vegas, Nevada, 2010. [Har04] P. Hard, \u00E2\u0080\u009CExamining the barriers: Powered wheelchair mobility for people with cognitive and/or sensory impairments,\u00E2\u0080\u009D presented at the ARATA 2004 National Conference, Melbourne, Australia, 2004. [HBJ99] H. Hoyer, U. Borgolte, and A. 
Jochheim, \u00E2\u0080\u009CThe OMNI-Wheelchair - State of the art,\u00E2\u0080\u009D Center on Disabilities, Technology and Persons with Disabilities Conference, Northridge, 1999. [Hea11] Health Canada. (2011, 11 December). Long-Term Facilities-Based Care. http://www.hc-sc.gc.ca/hcs-sss/home-domicile/longdur/index-eng.php. [HF05] H. Huang and G. R. Fernie, \u00E2\u0080\u009CThe laser line object detection method in an anti-collision system for powered wheelchairs,\u00E2\u0080\u009D presented at the IEEE 9th International Conference on Rehabilitation Robotics: Frontiers of the Human-Machine Interface, Chicago, Illinois, 2005. [HME+05] S. Helal, W. Mann, H. El-Zabadani et al., \u00E2\u0080\u009CThe Gator Tech Smart House: A Programmable Pervasive Space,\u00E2\u0080\u009D IEEE Computer, vol. 38, pp. 50-60, 2005. [HMHLS00] D. F. Hultsch, S. W. S. MacDonald, M. A. Hunter, J. Levy-Bencheton and E. Strauss, \u00E2\u0080\u009CIntraindividual variability in cognitive performance in older adults: Comparison of adults with mild dementia, adults with arthritis, and healthy adults,\u00E2\u0080\u009D Neuropsychology, vol. 14, no. 4, pp. 588-598, 2000. [How11] T. How, \u00E2\u0080\u009CDevelopment of an Anti-Collision and Navigation System for Powered Wheelchairs,\u00E2\u0080\u009D Master\u00E2\u0080\u0099s thesis, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, 2011. [HPJ+11] J. Hoey, T. Ploetz, D. Jackson et al., \u00E2\u0080\u009CRapid Specification and Automated Generation of Prompting Systems to Assist People with Dementia,\u00E2\u0080\u009D Pervasive and Mobile Computing, vol. 7, no. 3, 2011. [HS88] S. G. Hart and L. E. Staveland, Development of the NASA-TLX (Task Load Index): Results of empirical and theoretical research. N. Meshkati (Ed), Amsterdam: North Holland Press, pp. 239-250, 1988. [HVCPA10] J. Hoey, A. von Bertoldi, P. Poupart and A. Mihailidis, \u00E2\u0080\u009CAssisting Persons with Dementia during Handwashing Using a Partially Observable Markov Decision Process,\u00E2\u0080\u009D Proceedings of the International Conference on Vision Systems (ICVS), Biefeld, Germany, 2007. [HWM11] T. How, R. Wang and A. Mihailidis, \u00E2\u0080\u009CClinical Evaluation of the Intelligent Wheelchair System,\u00E2\u0080\u009D presented at Festival of International Conferences on 186 Caregiving, Disability, Aging and Technology (FICCDAT), Toronto, Canada, 2011. [IMD + 01] L. I. Iezzoni, E. P. McCarthy, R. B. Davis et al., \u00E2\u0080\u009CMobility difficulties are not only a problem of old age,\u00E2\u0080\u009D Journal of General Internal Medicine, vol. 16, pp. 235\u00E2\u0080\u0093243, 2001. [JHLY07] P. Jia, H. H. Hu, T. Lu and K. Yuan, \u00E2\u0080\u009CHead gesture recognition for hands- free control of an intelligent wheelchair,\u00E2\u0080\u009D Industrial Robot: An International Journal, vol. 34, no. 1, pp. 60 \u00E2\u0080\u0093 68, 2007. [JSS96] S. Jaglal S, P. G. Sherry and J. Schatzker, \u00E2\u0080\u009CThe impact and consequences of hip fracture in Ontario,\u00E2\u0080\u009D Canadian Journal of Surgery, vol. 39, pp. 105\u00E2\u0080\u0093111, 1996. [KBC+09] K. Konolige, J. Bowman, J. D. Chen et al., \u00E2\u0080\u009CView-based maps,\u00E2\u0080\u009D International Journal of Robotics Research (IJRR), vol. 29, no. 10, 2010. [KCHC09] A.M. Karmarkar, D.M. Collins, A. Helleher and R.A. 
Cooper, \u00E2\u0080\u009CSatisfaction related to wheelchair use in older adults in both nursing homes and community dwelling,\u00E2\u0080\u009D Disability and Rehabilitation: Assistive Technology, vol. 4, no. 5, pp. 337-343, 2009. [Kir08] R.L. Kirby. (2008, Oct) Wheelchair Skills Program Manual v4.1. [Online]. http://www.wheelchairskillsprogram.ca/eng/4.1/WST_Manual_Version4.1.5 1.pdf [KTRR10] T. Kollar, S. Tellex, D. Roy, N. Roy, \u00E2\u0080\u009CToward Understanding Natural Language Directions,\u00E2\u0080\u009D Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, Osaka, Japan, pp. 259-266, 2010. [Kui00] B. Kuipers, \u00E2\u0080\u009CThe spatial semantic hierarchy,\u00E2\u0080\u009D Artificial Intelligence, vol. 119, pp. 191-233, 2000. [LBJ+99] S. P. Levine, D. A. Bell, L. A. Jaros, R. C. Simpson and Y. Koren, J. Borenstein, \u00E2\u0080\u009CThe NavChair assistive wheelchair navigation system,\u00E2\u0080\u009D IEEE Transactions on Rehabilitation Engineering, vol. 7, pp. 443-451, 1999. [LFK04] L. Liao, D. Fox, H. Kautz, \u00E2\u0080\u009CLearning and inferring transportation routines,\u00E2\u0080\u009D Proceedings of the 19th National Conference on AI, 2004. [LFOBS03] P. P. Lee, Z. W. Feldman, J. Ostermann, D. S. Brown and F. A. Sloan. Longitudinal Prevalence of Major Eye Diseases. Archives of Ophthalmology, vol. 121, no.9, pp. 1303-1310, 2003. [LHK+06] L. Liu, H. Hile, H. Kautz et al., \u00E2\u0080\u009CIndoor Wayfinding: Developing a Functional Interface for Individuals with Cognitive Impairments,\u00E2\u0080\u009D Proceedings of Computers & Accessibility: ASSETS, pp. 95-102, 2006. [LM06] K. Labelle and A. Mihailidis, \u00E2\u0080\u009CThe Use of Automated Prompting to Facilitate Handwashing in Persons With Dementia,\u00E2\u0080\u009D American Journal of Occupational Therapy, vol. 60, no. 4, pp. 442-450, 2006. [Lov91] W. Lovejoy, \u00E2\u0080\u009CA survey of algorithmic methods for partially observable Markov decision processes,\u00E2\u0080\u009D Annals of Operations Research, vol. 28, pp. 47\u00E2\u0080\u009366, 1991. [LSMN02] E. F. Lopresti, R. C. Simpson, D. Miller, and I. Nourbakhsh, \u00E2\u0080\u009CEvaluation of sensors for a smart wheelchair,\u00E2\u0080\u009D presented at the RESNA 2002 Annual Conference, Minneapolis, Minnesota. 2005. 187 [Mar00] E. R. Marcantonio, \u00E2\u0080\u009CDementia,\u00E2\u0080\u009D Merck Manual of Geriatrics, 3rd ed., Beers, M. H., Jones , T. V., Berkwits, M., Kaplan, J. L., Porter, R., eds. Whitehouse Station, NJ: Merck & Co., Inc., pp. 357-371, 2000. [MBB+10] A. Mihailidis, S. Blunsden, J. N. Boger et al., \u00E2\u0080\u009CTowards the Development of a Technology for Art Therapy and Dementia: Definition of Needs and Design Constraints,\u00E2\u0080\u009D The Arts in Psychotherapy, vol. 37, no. 4, 2010. [MBCH08] A. Mihailidis, J. Boger, T. Craig, and J. Hoey, \u00E2\u0080\u009CThe COACH prompting system to assist older adults with dementia through handwashing: An efficacy study,\u00E2\u0080\u009D BMC Geriatrics, vol. 8, no. 28, 2008, doi:10.1186/1471- 2318-8-28. [MDBM10] L. Montesano, M. Diaz, S. Bhaskar, and J. Minguez, \u00E2\u0080\u009CTowards an Intelligent Wheelchair System for Users With Cerebral Palsy,\u00E2\u0080\u009D IEEE Transactions on Neural Systems and Rehab. Eng., vol. 18, no. 2, pp. 193- 202, 2010. [MDK+03] A. Morris, R. Donamukkala, A. 
Kapurai, \u00E2\u0080\u009CA robotic walker that provides guidance,\u00E2\u0080\u009D Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 2003. [MEBH07] A. Mihailidis, P. Elinas, J. Boger and J. Hoey, \u00E2\u0080\u009CAn Intelligent Powered Wheelchair to Enable Mobility of Cognitively Impaired Older Adults: An Anti-Collision System,\u00E2\u0080\u009D IEEE Transactions on Neural Systems & Rehabilitation Engineering, vol. 15, no. 1, pp. 136-143, 2007. [Med11] Medicare. (2011, 11 December). Long-Term Care.[Online] http://www.medicare.gov/LongTermCare/static/Home.asp. [Mey01] C. B. Meyer, \u00E2\u0080\u009CA case in case study methodology,\u00E2\u0080\u009D Field Methods, vol. 13, no. 4, pp. 329-352, 2001. [MFM + 94] J. N. Morris, B. E. Fries, D. R. Mehr et al., \u00E2\u0080\u009CMDS Cognitive Performance Scale,\u00E2\u0080\u009D Journal of Gerontology: Medical Sciences, vol. 49, no. 4, pp. M174- M182, 1994. [Mil11] B. Miller. (2011, Sep). Single Subject Research Design (SSRD). [Online]. http://vchri.ca/i/pdf/SingleSubjectResearch.pdf. [ML00] D. Murray and J. Little, \u00E2\u0080\u009CUsing Real-Time Stereo Vision for Mobile Robot Navigation,\u00E2\u0080\u009D Autonomous Robots, vol. 8, no. 2, pp. 161-171, 2000. [MM01] M. L. Mittelstaedt, H. Mittelstaedt, \u00E2\u0080\u009CIdiothetic navigation in humans: Estimation of path length,\u00E2\u0080\u009D Experimental Brain Research, vol. 13, pp. 318- 332, 2001. [MMAM06] L. Montesano, J. Minguez, J.M. Alcubierre, and L. Montano, \u00E2\u0080\u009CTowards the adaptation of a robotic wheelchair for cognitive disabled children,\u00E2\u0080\u009D Proceedings of International Conference on Intelligent Robots and Systems, Beijing, CHN, pp. 710-716, 2006. [MMB + 05] W. B. Mortenson, W. C. Miller, J. Boily, B. Steele, L. Odell, E. Crawford et al., \u00E2\u0080\u009CPerceptions of power mobility use and safety within residential facilities,\u00E2\u0080\u009D Canadian Journal of Occupational Therapy, vol. 72, no. 3, pp. 142-152, 2005. [MMB+06] W. B. Mortenson, W. C. Miller, J. Boily et al., \u00E2\u0080\u009COverarching principles and salient findings for inclusion in guidelines for power mobility use within 188 residential care facilities,\u00E2\u0080\u009D Journal of Rehabilitation Research & Development, vol. 43, no. 2, pp. 199-208, 2006. [MMGT09] S. McGarry, L. Moir, S. Girdler, and L. Taylor, \u00E2\u0080\u009CThe smart wheelchair: is it an effective mobility training tool for children with cerebral palsy?,\u00E2\u0080\u009D The Centre for Cerebral Palsy, Coolbinia, WA, UK, 2009. [MMPK05] K. Mofatt, J. McGrenere, B. Purves, and M. Klawe, \u00E2\u0080\u009CThe partricipatory design of a sound and image enhanced daily planner for people with aphasia,\u00E2\u0080\u009D Proceedings of ACM CHI, pp. 501-510, 2005. [MMS96] F. Masson, P. Maurette, L. R. Salmi et al., \u00E2\u0080\u009CPrevalence of impairments 5 years after a head injury, and their relationship with disabilities and outcome,\u00E2\u0080\u009D Brain Injury, vol. 10, no. 7, pp. 487-497, 1996. [MMW+04] U. P. Mosimann, G. Mather, K. A. Wesnes, et al., \u00E2\u0080\u009CVisual perception in Parkinson disease dementia and dementia with Lewy bodies,\u00E2\u0080\u009D Neurology, vol. 63, pp. 2091-2096, 2004. [MP02] C. McCarthy and M. Pollack, \u00E2\u0080\u009CA plan-based personalized cognitive orthotic,\u00E2\u0080\u009D Proceedings of the 6th International Conference on AI Planning and Scheduling, pp. 243-252, 2002. [MPSW03] R. J. Mendoza, D. J. 
Pittenger, F. S. Savage, and C. S. Weinstein, \u00E2\u0080\u009CA protocol for assessment of risk in wheelchair driving within a healthcare facility,\u00E2\u0080\u009D Disability and Rehabilitation, vol. 25, no. 10, pp. 520-526, 2003. [MSF+00] E. Mori, T. Shimomura, M. Fujimori et al., \u00E2\u0080\u009CVisuoperceptual Impairment in Dementia With Lewy Bodies,\u00E2\u0080\u009D Archives of Neurology, vol. 57, pp. 489-493, 2000. [NCK+89] M. C. Nevitt, S. R. Cummings, S. Kidd et al., \u00E2\u0080\u009CRisk factors for recurrent nonsyncopal falls,\u00E2\u0080\u009D Journal of American Medical Association, vol. 261, pp. 2663\u00E2\u0080\u00932668, 1989. [Nyg91] T.E. Nygren, \u00E2\u0080\u009CPsychometric Properties of subjective workload measurement techniques: Implications for their use in the assessment of preceived mental workload,\u00E2\u0080\u009D Human Factors, vol. 33, no. 1, pp. 17-33, 1991. [Ont11] Ontario Ministry of Health and Long-Term Care. (2011, 11 December). Understanding Health Care in Ontario. [Online]. http://www.health.gov.on.ca/en/ministry/hc_system/default.aspx#4. [Ott86] K.J. Ottenbacher, Evaluating Clinical Change, Strategies for occupational and physical therapists. Balitmore, MD, United States of America: Williams & Wilkins, 1986. [OWNC00] J.P. Odor, M. Watson, P. Nisbet, and I. Craig, \u00E2\u0080\u009CThe CALL Centre smart wheelchair handbook 1.5,\u00E2\u0080\u009D CALL Centre, 2000. [Pat02] M. J. Patton, Qualitative Research and Evaluation Methods (3rd ed.), Sage Publications Inc., London. [PCJC06] A. Pronobis, B. Caputo, P. Jensfelt, and H. I. Christensen, \u00E2\u0080\u009CA discriminative approach to robust visual place recognition,\u00E2\u0080\u009D Proceedings of International Conference on Intelligent Robots and Systems, 2006. [PGK86] L. G. Pawlson, M. Goodwin, K. Keith, \u00E2\u0080\u009CWheelchair use by ambulatory nursing home residents,\u00E2\u0080\u009D Journal of American Geriatrics Society, vol. 34, pp. 860-864, 1986. 189 [PLBGL08] H. Pigot, D. Lussier-Desrochers, J. Bauchet, S. Giroux S, and Y. Lachapelle (Eds), A Smart Home to Assist in Recipe Completion, IOS Press, Amsterdam, The Netherlands, 2008. [PLG+04] J. Patterson, L. Liao, K. Gajos et al., \u00E2\u0080\u009COpportunity Knocks: a System to Provide Cognitive Assistance with Transportation Services, Proceedings of The Sixth International Conference on Ubiquitous Computing (UBICOMP), 2004. [PMPRT03] J. Pineau, M. Montemerlo, M. Pollack, N. Roy, and S. Thrun, \u00E2\u0080\u009CTowards robotic assistants in nursing homes: Challenges and results,\u00E2\u0080\u009D Robotics and Autonomous Systems, vol. 42, pp. 3\u00E2\u0080\u00934, 2003. [Pol06] M. E. Pollack, \u00E2\u0080\u009CAutominder: A Case Study of Assistive Technology for Elders with Cognitive Impairment,\u00E2\u0080\u009D Generations, vol. 30, pp. 67-79, 2006. [Pou05] P. Poupart, \u00E2\u0080\u009CExploiting Structure to Efficiently Solve Large Scale Partially Observable Markov Decision Processes,\u00E2\u0080\u009D Ph.D. thesis, Department of Computer Science, University of Toronto, Toronto, 2005 [PPRT00] R. Passini, H. Pigot, C. Rainville and M. Tetreault, \u00E2\u0080\u009CWayfinding in a nursing home for advanced dementia of the Alzheimer's type,\u00E2\u0080\u009D Environment & Behavior, vol. 32, no. 5, 684-710, 2000. [Pri09] Pride Mobility. (2009, Mar). Quantum Rehab - Electronics - Q-Logic - Enhanced Display. [Online]. http://www.pridemobility.com/quantum/electronics/q-logic- enhanceddisplay.asp [PRMJ95] R. Passini, C. Rainville, N. 
Marchand, and Y. Joanette, \u00E2\u0080\u009CWayfinding in dementia of the Alzheimer's type: Planning abilities,\u00E2\u0080\u009D Journal of Clinical & Experimental Neuropsychology, vol. 17, no. 6, pp. 820\u00E2\u0080\u0093832, 1995. [PSF01] E. Prassler, J. Scholz, P. Fiorini, \u00E2\u0080\u009CA Robotics Wheelchair for Crowded Public Environments,\u00E2\u0080\u009D IEEE Robotics & Automation Magazine, vol. 8, pp. 38\u00E2\u0080\u009345, 2001. [PSS + 02] J. L. Payne, J. E. Sheppard, M. Steinberg, A. Warren, A. Baker, C. Steele, J. Brandt, C. G. Lyketsos, \u00E2\u0080\u009CIncidence, prevalence, and outcomes of depression in residents of a long-term care facility with dementia,\u00E2\u0080\u009D International Journal of Geriatric Psychiatry, vol. 17, pp. 247-253, 2002. [RD07] A. Ranganathan and F. Dellaert, \u00E2\u0080\u009CSemantic Modeling of Places Using Objects,\u00E2\u0080\u009D Proceedings of Robotics: Science and Systems (RSS), Atlanta, Georgia, 2007. [RKJ94] J. H. Ricker, P.A. Keenan, and M. W. Jacobson, \u00E2\u0080\u009CVisuoperceptual-spatial ability and visual memory in vascular dementia and dementia of the Alzheimer type,\u00E2\u0080\u009D Neuropsychologia, vol. 32, no. 10, pp. 1287-1296, 1994. [RMJL05] D. Rodriguez-Losada, F. Matia, A. Jimenez and G. Lacey, \u00E2\u0080\u009CGuido, the robotic smartwalker for the frail visually impaired,\u00E2\u0080\u009D First International Conference on Domotics, Robotics and Remote Assistance for All - DRT4all, 2005. [RN98] M. Rizzo and M. Nawrot, \u00E2\u0080\u009CPerception of movement and shape in Alzheimer's disease,\u00E2\u0080\u009D Brain, vol. 121, no. 12, pp. 2259-2270, 2004. 190 [RPT00] N. Roy, J. Pineau and S. Thrun, \u00E2\u0080\u009CSpoken dialogue management using probabilistic reasoning,\u00E2\u0080\u009D Proceedings of the 38th Annual Meeting on Association For Computational Linguistics, Hong Kong, 2000. [SC08] D. Strubel and M. Corti, \u00E2\u0080\u009CWandering in Dementia,\u00E2\u0080\u009D Psychologie & Neuropsychiatrie du Vieillissement, vol. 6, no. 4, pp. 259 \u00E2\u0080\u0093 264, 2008. [SCG92] H.-J. Sun, D. P. Carey and M. A. Goodale, \u00E2\u0080\u009CA mammalian model of optic- flow utilization in the control of locomotion,\u00E2\u0080\u009D Experimental Brain Research, vol. 91, pp. 171\u00E2\u0080\u0093175, 1992. [SDTB08] J. Sodnik, C. Dicke, S. Tomazic and M. Billinghurst, \u00E2\u0080\u009CA user study of auditory versus visual interfaces for use while driving,\u00E2\u0080\u009D International Journal of Human-Computer Studies, vol. 66, no. 5, 318-332, 2008. [SE99] N. Stanton and J. Edworthy, Human Factors in Auditory Warnings. Ashgate, Aldershot, 1999. [Sen12] Senior Homes. (2012, 22 June). Convalescent Homes. [Online]. http://www.seniorhomes.com/p/convalescent-homes/. [SF98] H. -J. Sun, and B. J. Frost, \u00E2\u0080\u009CComputation of different optical variables of looming objects in pigeon nucleus rotundus neurons,\u00E2\u0080\u009D Nature Neuroscience, vol. 1, pp. 296\u00E2\u0080\u0093303, 1998. [SFHF07] M. M. Sohlberg, S. Fickas, P. Hung and A. Fortier, \u00E2\u0080\u009CA comparison of four prompt modes for route finding for community travellers with severe cognitive impairments,\u00E2\u0080\u009D Brain Injury, vol. 21, no. 5, pp. 531-538, 2007. [Sim05] R. C. Simpson, \u00E2\u0080\u009CSmart Wheelchairs: A Literature Review,\u00E2\u0080\u009D Journal of Rehabilitation Research & Development, vol. 42, no. 4, pp. 423-438, 2005. [SLC08] R. C. Simpson, E. F. LoPresti and R. A. 
Cooper, \u00E2\u0080\u009CHow many people would benefit from a smart wheelchair?,\u00E2\u0080\u009D Journal of Rehabilitation Research and Development, vol. 45, no. 1, pp. 53-72, 2008. [SLH + 05] R. Simpson, E. LoPresti, S. Hayashi et al., \u00E2\u0080\u009CA prototype power assist wheelchair that provides for obstacle detection and avoidance for those with visual impairments,\u00E2\u0080\u009D Journal of Neuroengineering Rehabilitation, vol. 2, no. 30, 2005. [SPB02] R. C. Simpson, D. Poirot, and F. Baxter, \u00E2\u0080\u009CThe Hephaestus Smart Wheelchair System,\u00E2\u0080\u009D IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol 10, no. 2, pp. 118-122, 2002. [SPB99] R. Simpson, D. Poirot, and M.F. Baxter, \u00E2\u0080\u009CEvaluation of the Hephaestus Smart Wheelchair system,\u00E2\u0080\u009D Proceedings of International Conference on Rehabilitation Robotics, Stanford, CA, pp. 99-105, 1999. [SS03] D. Scharstein and R. Szeliski, \u00E2\u0080\u009CHigh-Accuracy Stereo Depth Maps Using Structured Light,\u00E2\u0080\u009D Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 195-202, Madison, WI, 2003. [TBF00] S. Thrun, W. Burgard and D. Fox, \u00E2\u0080\u009CA real-time algorithm for mobile robot mapping with applications to multi-robot and 3d mapping,\u00E2\u0080\u009D Proceedings of IEEE International Conference on Robotics and Automation (ICRA 2000), San Francisco, CA, 2000. 191 [THMOP01] S. S. Travis, M. Hendricks, L. McClanahan, A. Osmond, C. Pruett, \u00E2\u0080\u009CMotorized cart driver safety in assisted living,\u00E2\u0080\u009D Geriatric Nursing, vol. 22, no. 4, pp. 213-215, 2001. [TM95] P. Tully and C. Mohl. (2011, 11 December). Older residents of health care institutions. [Online]. http://www.statcan.gc.ca/studies-etudes/82- 003/archive/1995/5018984-eng.pdf. [Try82] W. W. Tryon, \u00E2\u0080\u009CA simplified time-series analysis for evaluating treatment interventions,\u00E2\u0080\u009D Journal of Applied Behavior Analysis, vol. 15, pp. 423-429, 1982. [TT03] A. Tashakkori and C. Teddlie (Eds), Handbook of mixed methods in social and behavioral research, Sage Publications Inc., Thousand Oaks, California, 2003. [VBHM08] P. Viswanathan, J. Boger, J. Hoey, and A. Mihailidis, \u00E2\u0080\u009CA comparison of stereovision and infrared as sensors for an anti-collision powered wheelchair for older adults with cognitive impairments, Technology and Aging - Selected Papers from the 2007 International Conference on Technology and Aging, A. Mihailidis, J. Boger, H. Kautz & L. Normie (Eds), IOS Press, vol. 21, pp. 165-172. [Vis11] P. Viswanathan. (2012, Jan). Intelligent Wheelchair Software. [Online]. www.cs.ubc.ca/~poojav/software/ [VMSLM09] P. Viswanathan, D. Meger, T. Southey, J. J. Little, and A. Mackworth, \u00E2\u0080\u009CAutomated spatial-semantic modeling with applications to place labeling and informed search,\u00E2\u0080\u009D Proceedings of Canadian Robot Vision, Kelowna, Canada, 2009. [VS08] S. Vasudevan and R. Siegwart, \u00E2\u0080\u009CBayesian space conceptualization and place classification for semantic maps in mobile robotics,\u00E2\u0080\u009D Robotics and Autonomous Systems, vol. 56, no. 6, pp. 522 \u00E2\u0080\u0093 537, 2008. [VSLM10] P. Viswanathan, T. Southey, J. J. Little, and A. Mackworth, \u00E2\u0080\u009CAutomated place classification using object detection,\u00E2\u0080\u009D Proceedings of Canadian Conference in Computer and Robot Vision, pp. 324-330, Ottawa, Canada, 2010. 
[VSLM11] P. Viswanathan, T. Southey, J. J. Little, and A. Mackworth, \u00E2\u0080\u009CPlace Classification Using Visual Object Categorization and Global Information,\u00E2\u0080\u009D Proceedings of Canadian Conference in Computer and Robot Vision, Halifax, Canada, 2011. [Wan11] R. H. Wang, \u00E2\u0080\u009CEnabling Power Wheelchair Mobility with Long-Term Care Home Residents with Cognitive Impairments,\u00E2\u0080\u009D Ph. D. thesis, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, 2011. [WGHF11] R. H. Wang, S. M. Gorski, P. J. Holliday, G. R. Fernie, \u00E2\u0080\u009CEvaluation of a contact sensor skirt for an anti-collision power wheelchair for older adult nursing home residents with dementia: Safety and mobility,\u00E2\u0080\u009D Assistive Technology, 2011. [WKHF10] R. H. Wang, P. C. Kontos, P. J. Holliday, G. R. Fernie, \u00E2\u0080\u009CThe experiences of using an anti-collision power wheelchair for three long-term institutional care residents with mild cognitive impairment,\u00E2\u0080\u009D Disability and 192 Rehabilitation: Assistive Technology, 2010, http://dx.doi.org/10.3109/17483107.2010.519096. [WMDF11] R. H. Wang, A. Mihailidis, T. Dutta, and G. R. Fernie, G. R, \u00E2\u0080\u009CUsability testing of multimodal feedback interface and simulated collision-avoidance power wheelchair for long-term-care home residents with cognitive impairments,\u00E2\u0080\u009D Journal of Rehabilitation Research and Development, vol. 48, no. 6, pp. 801-822, 2011. [WSA02] J. L. Wolff, B. Starfield B and G. Anderson, \u00E2\u0080\u009CPrevalence, expenditures, and complications of multiple chronic conditions in the elderly,\u00E2\u0080\u009D Archives of Internal Medicine, 162, pp. 2269-2276, 2002. [Yan98] H. A. Yanco, \u00E2\u0080\u009CWheelesley, A Robotic Wheelchair System: Indoor Navigation and User Interface,\u00E2\u0080\u009D Lecture Notes in Artificial Intelligence: Assistive Technology and Artificial Intelligence, V.O. Mittal, H.A. Yanco, J. Aronis and R. Simspon (Eds), Springer-Verlag, pp. 256-268, 1998. [Yin03] R. K. Yin, Case study research: Design and methods (3rd ed.), Sage Publications Inc., Thousand Oaks, California, 2003. [YM07] S. Yang and A. K. Mackworth, \u00E2\u0080\u009CHierarchical Shortest Pathfinding Applied to Route-Planning for Wheelchair Users,\u00E2\u0080\u009D Proceedings of the 20th Canadian Conference on Artificial Intelligence (CAAI07), Montreal, Canada, 2007. [ZBT09] Q. Zeng, E. Burdet and C.L. Teo, \u00E2\u0080\u009C Evaluation of a Collaborative Wheelchair System in Cerebral Palsy and Traumatic Brain Injury Users,\u00E2\u0080\u009D Neurorehabilitation and Neural Repair, vol. 23, no. 5, pp. 494-504, 2009. [ZJL04] Z. Zhu, Q. Ji and P. Lan, \u00E2\u0080\u009CReal Time Non-intrusive Monitoring and Prediction of Driver Fatigue,\u00E2\u0080\u009D IEEE Transactions on Vehicular Technology, 53, pp. 1052-1068, 2004. [ZTRB08] Q. Zeng, C.L. Teo, B. Rebsamen and E. Burdet, \u00E2\u0080\u009CA Collaborative Wheelchair System,\u00E2\u0080\u009D IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 16, no. 2, pp. 161-170, 2008. 193 Appendices Appendix A Questionnaires A.1 NASA-TLX 194 A.2 QUEST 2.0 The following are sample QUEST 2.0 questions. Questions related to services were omitted during the study. 195 A.3 Custom Questionnaire 1) If this device was available for you to use, would you use it? Why/Why not? 2) If yes above, where would you like to go using this device? (e.g. kitchen, bathroom, etc.) 
3) Do you like the device? Why/Why not? 4) What did you like the most about the device? 5) What did you like the least about the device? 6) What changes would you like to see in the device? 7) Would you like a wheelchair that drove on its own? Why/Why not? 196 Appendix B Data Collection Form 197 Appendix C NASA-TLX Raw Data Participant Phase- Run Mental Physical Temporal Performance Effort Frustration Participant 1 (A-B ordering) A-1 low low low med low high A-2 low low low high med high A-3 low low med med med med A-4 med med med med med med A-5 med med med med high med A-6 med med med med med low A-7 high med med high med low A-8 med low med med med med B-1 med high med high med high B-2 high med med med med med B-3 med med med med low low B-4 med med med high low low B-5 med med med high low low B-6 med med high high med low B-7 med med med high med low B-8 low low med high med low Participant 2 (A-B ordering) A-1 0 0 0 0 0 0 A-2 0 0 0 0 0 0 A-3 0 0 0 0 0 0 A-4 0 0 0 0 0 0 A-5 0 0 0 0 0 0 A-6 0 0 0 0 0 0 A-7 0 0 0 0 0 0 A-8 0 0 0 0 0 0 A average 0 0 0 0 0 0 B-1 0 0 0 0 0 0 B-2 0 0 0 0 0 0 B-3 0 0 0 0 0 0 B-4 0 0 0 0 0 0 B-5 0 0 0 0 0 0 B-6 0 0 0 0 0 0 B-7 0 0 0 0 0 0 B-8 0 0 0 0 0 0 B average 0 0 0 0 0 0 198 Participant Phase- Run Mental Physical Temporal Performance Effort Frustration Participant 3 (B-A ordering) B-1 2 3 3 8 3 3 B-2 4 4 3 7 4 4 B-3 4 4 4 6 6 5 B-4 3 3 2 5 3 3 B-5 8 3 3 5 4 11 B-6 4 3 3 4 4 4 B-7 4 3 3 6 3 3 B-8 3 2 3 4 4 3 B average 4 3.125 3 5.625 3.875 4.5 A-1 4 2 3 3 3 3 A-2 3 3 3 4 3 3 A-3 3 3 4 5 3 3 A-4 3 2 3 5 3 3 A-5 3 3 3 5 4 3 A-6 3 3 3 3 3 3 A-7 3 3 3 5 3 3 A-8 4 3 3 5 4 3 A average 3.25 2.75 3.125 4.375 3.25 3 Participant 4 (B-A ordering) B-1 6 5 6 7 7 6 B-2 4 4 4 7 6 5 B-3 5 5 5 4 3 3 B-4 5 4 3 3 3 3 B-5 4 4 4 4 4 3 B-6 3 4 5 5 4 4 B-7 4 4 3 4 3 3 B-8 4 3 3 3 3 6 B average 4.375 4.125 4.125 4.625 4.125 4.125 A-1 3 3 3 4 3 2 A-2 3 3 3 4 3 2 A-3 3 3 3 3 2 2 A-4 3 3 2 3 2 2 A-5 2 2 1 2 2 2 A-6 3 2 2 4 2 2 A-7 3 2 2 2 2 1 A-8 2 2 2 3 2 2 A average 2.75 2.5 2.25 3.125 2.25 1.875 199 Participant Phase- Run Mental Physical Temporal Performance Effort Frustration Participant 5 (A-B ordering) A-1 10 4 10 3 3 3 A-2 4 3 3 3 3 3 A-3 10 7 4 4 4 4 A-4 15 8 10 2 10 3 A-5 3 2 2 2 2 1 A-6 13 1 1 1 1 2 A-7 10 3 4 10 5 3 A-8 12 2 1 1 1 2 A average 9.625 3.75 4.375 3.25 3.625 2.625 B-1 10 4 5 3 3 2 B-2 18 1 1 5 1 1 B-3 15 2 10 2 10 2 B-4 10 1 1 10 10 2 B-5 7 2 10 10 10 2 B-6 10 2 10 1 10 2 B-7 10 1 10 10 10 2 B-8 2 2 10 10 2 2 B average 10.25 1.875 7.125 6.375 7 1.875 Participant 6 (B-A) B-1 2 17 1 4 2 17 B-2 1 1 1 1 1 1 B-3 10 1 19 1 1 18 B-4 1 10 1 6 7 10 B-5 18 10 1 1 9 8 B-6 11 13 18 1 1 14 B-7 10 9 1 1 2 1 B-8 7 5 8 1 1 2 B average 7.5 8.25 6.25 2 3 8.875 A-1 11 10 10 2 10 10 A-2 10 1 10 1 2 10 A-3 18 1 2 1 2 1 A-4 10 10 10 1 10 1 A-5 12 2 20 1 10 10 A-6 8 10 1 10 20 2 A-7 12 2 2 2 19 2 A-8 18 2 20 1 10 10 A average 12.375 4.75 9.375 2.375 10.375 5.75 200 Appendix D Research Process Research context The research was mainly funded by an NSERC CGS D award to P. Viswanathan. The project was also partly funded by a CIHR mobility in aging award, which limited the research specifically to powered mobility devices for the elderly. Although the choice of sensors was not specified by grant proposals, the choice of the stereovision sensor was made in the early stages of research based on previous work by one of the supervisory committee members (A. Mihailidis) and other collaborators at the University of Toronto. The target population was defined by P. 
Viswanathan in the research proposal based on previous work with A. Mihailidis at the University of Toronto, which involved anti-collision wheelchairs for older adults with cognitive impairment. This target population was chosen since it is known to be a largely neglected group in powered mobility research. Moreover, since A. Mihailidis and his students have already been conducting trials with this target population for related projects, the population was considered to be easily accessible for the efficacy study, and research experts familiar with the needs of the target users could be used as resources. Roles The choice of stereovision sensor was made jointly by P. Viswanathan and committee members (A. Mackworth, J. Little and A. Mihailidis) as described above. An intelligent wheelchair capable of avoiding collisions and providing wayfinding assistance through vision-based methods was proposed by A. Mihailidis earlier in [MEBH07]. Software design of the specific modules of the system and methods for integration were proposed by P. Viswanathan with approval and feedback from the entire supervisory committee. 201 Implementation of all modules of the system was carried out by P. Viswanathan. P. Alimi assisted with software installation on the laptop as well as with data collection for the testing of the Path Planner module. The efficacy study design was written by P. Viswanathan as a major amendment to a protocol written by T. How for the Intelligent Wheelchair System [How11]. The design of the previous study was amended to increase the number of users and trials. The description of the task and system was modified to include the wayfinding component (the previous study only involved collision avoidance). The obstacle course was re-designed to include multiple routes and different types of obstacles. Outcome measures and data collection procedures were modified to obtain information about distances traveled and user confidence regarding routes, and this information was used in additional analysis. A custom survey was added to solicit qualitative feedback from users. Feedback on the protocol was obtained from A. Mihailidis, R. Wang, J. Boger and T. How from the University of Toronto. T. Craig, R. Wang, T. How and A. Calvin assisted with set-up and video-taping during the clinical trials. Equipment for the trials was provided by UBC and the Intelligent Assistive Technology and Systems Lab in Toronto. All manuscripts were reviewed by J. Little, A. Mackworth and A. Mihailidis, who provided high-level feedback on the write-up and the system. This dissertation was reviewed by the entire supervisory committee (including C. Conati and I. Mitchell). Constraints The objectives defined by P. Viswanathan for this research were to complete a working prototype as well as test the system with the target user population. A working prototype was important to implement in order to determine the potential of the technology to solve the 202 real-world problem defined in this dissertation. Testing with the target user population is crucial to ensure that the technology will actually be used by the population for which it was designed. Fulfilment of both of the above objectives, however, placed many constraints on the research. Several challenges were faced during development to ensure real-time performance of the system. For example, various vision-based SLAM systems were found to be too slow for practical use. 
In addition, all other components of the system were implemented to be as computationally efficient as possible (the collision avoidance module was re-implemented after the first set of experiments to increase image processing rates). Thus, a majority of the research effort (2.5 - 3 years) was spent on design, development, controlled testing and refinement of the system, which limited the time available to design and run the efficacy study. Since transfer of participants in wheelchairs to locations other than their long-term care facility was determined to be infeasible (cost of transportation, consent to re-locate participants, etc.), the study location was restricted to the long-term care facility that the target users were residing in. In addition, the space necessary to lay out the maze further restricted the choice of long-term care facility. The above considerations in addition to challenges in recruitment led to fewer participants than expected. In addition, the availability of only one intelligent wheelchair, limited battery life and scheduling constraints limited the total number of trials to three per day. 203 Finally, time constraints also did not allow for further analysis of the data or additional studies. Studies in more realistic environments, such as the users\u00E2\u0080\u0099 day-to-day environments would have helped to determine the usefulness and acceptance of the system for longer-term use. Interviews with caregivers would have supplied additional information about their perceptions, which is also an important aspect to consider for deployment. 204 Appendix E Information and Consent Form Investigating the Efficacy of Using an Anti-collision and Navigation System on a Powered Wheelchair to Improve the Safety and Mobility of Older Adults with Dementia Investigators: Pooja Viswanathan (PhD student) under the supervision of Dr. Alex Mihailidis Background Older adults living in long term care (LTC) facilities often experience a variety of conditions that affect their physical and cognitive abilities. These conditions can make mobility quite difficult for older adults who require a wheelchair. Many older adults have difficulty propelling themselves in a manual wheelchair and those with cognitive impairments are not allowed to use a powered one because of concerns for the wellbeing of themselves and those around them. The resulting restriction/loss of mobility can significantly impact the quality of life of these individuals. To address this problem, researchers from the University of Toronto have developed an anti-collision and navigation system in an effort to enable older adults with mild-to-moderate cognitive impairments safely and independently operate a powered wheelchair in a LTC setting. We are looking for participants to test-drive a powered wheelchair with the new anti- collision and navigation system. The findings from this study will be valuable in the process of improving opportunities for LTC residents who are dependent on others for mobility. This study is part of doctoral research being conducted by Pooja Viswanathan under the supervision of Dr. Alex Mihailidis. You may contact Pooja Viswanathan at poojav@cs.ubc.ca or (778) 829-7665 or contact Alex Mihailidis at alex.mihailidis@utoronto.ca or (416) 946-8565 to answer any questions you may have. Purpose You are being asked to participate in a research study to determine the effectiveness of an anti-collision and navigation system for a powered wheelchair. 
Six older adults with mild-to-moderate dementia will participate in this study. Each participant will be asked to take part in two training sessions and to navigate a short obstacle course once a day for up to 16 days.

Procedure

If you agree to participate, a research assistant will perform a quick interview with you to determine whether you are an appropriate candidate for this study. If all inclusion criteria are met and you would like to participate, a research assistant will escort you to an obstacle course constructed in the basement of the Harold and Grace Baker Centre (located beside the hair salon). Here you will be seated in a powered wheelchair and asked to navigate to a specific goal, while performing several movement tasks along the route: turning 90° left, turning 90° right, stopping, entering narrow and wide corridors, maneuvering through obstacles, and rotating 180° in place.

There will be two groups of participants. The first group will complete the tasks without aid from the anti-collision and navigation system, while the second group will complete the same tasks with the assistance of the anti-collision and navigation system. Midway through the trials (after 8 driving days), the groups will switch.

To ensure your safety, the obstacle course will be built from foam and lightweight objects with plenty of padding. The speed of the wheelchair will be very slow to give you ample time to react and to ensure any collisions with obstacles are very gentle. The test area will be kept clear of all personnel except the two research assistants. There is an emergency shutoff switch on the chair, which will be monitored at all times by the researchers. The well-being of the participants is our primary concern; should you become nervous or upset at any time during the trials, you will be escorted to your room immediately and removed from the study. You will be discreetly videotaped by a research assistant while you complete the obstacle course. All videos will be kept strictly confidential and will only be viewed by the research team.

Risks / Benefits

There is a possibility of minor collisions when operating the wheelchair through the obstacle course. The obstacles will be composed of Styrofoam to ensure that collisions will not harm participants, and objects in the course will be covered in thick blue foam to ensure any impacts are harmless. Two research assistants will be present at all times to provide assistance if needed. While participants will not benefit directly from participating, the findings from this study may result in a safe powered wheelchair for older adults with dementia. It is hoped that this will significantly improve the safe mobility and quality of life of many older adults with dementia who are otherwise dependent on others for mobility.

Confidentiality

The information collected during this study will remain strictly confidential and will not affect the individual's care or treatment in any way. Upon your / your substitute decision maker's consent to participate in this study, you will be assigned a coded number. The only connection between you and your data will be this signed consent form. All other data, including any video and/or research notes, will be marked using only your coded number, not your name. Your name will not be used in any report or publication.
All data and videos collected in this study will be safely stored in a secure, locked location with access limited to researchers involved with this particular study. All video tapes and research notes will be physically destroyed within five years after completion of the study. Confidentiality can only be guaranteed to the extent permitted by law.

Costs and Compensation

There are no costs associated with participation in this study. Trials will be scheduled around your availability. There is no compensation for participation in this study.

Legal Rights as a Substitute Decision Maker of Participant

You are encouraged to ask any questions about the study at any time. Your participation is completely voluntary and you are free to withdraw from the study at any time. Choosing not to participate or choosing to withdraw from the study will not affect your care or status at the Harold and Grace Baker Centre in any way. If you choose to withdraw, all data that can be identifiably attributed to the participant will be withdrawn by the investigator. You waive no legal rights by participating in this study. Please contact the University of Toronto Ethics Review Office at ethics.review@utoronto.ca or 416-946-3273 if you have any questions about your rights as a participant and/or your rights as a substitute decision maker of a participant.

Consent to Participate

I have read the entire consent form and my questions about the study have been answered by the researchers. I understand I will receive a signed copy of this consent form. I understand that I am free to ask questions about the study at any time. My participation in this study is voluntary and I am free to withdraw or discontinue participation at any time. Withdrawal from the study will not affect my status or quality of care at the Baker Centre. I consent to participate in this study.

__________________________________________________
Print name of Participant

_________________________        _________________________        _______________
Printed name of Substitute        Signature of Substitute          Date
Decision Maker of Participant     Decision Maker of Participant

Consent to Videotaping

I consent to have my trials videotaped. I understand captured video data will be treated as confidential, will only be viewed by the research team and will only be used for this study. Any videos in which the individual's face or name is recognizable will not be shown without my expressed permission.

_________________________        _________________________        _______________
Printed name of Substitute        Signature of Substitute          Date
Decision Maker of Participant     Decision Maker of Participant

I certify that I obtained the consent of the substitute decision maker of the participant above. I understand that I must give a signed copy of the informed consent form to the substitute decision maker of the participant, and keep the original copy on file in the repository location designated in my REB application files for 3 years after the completion of the research project.

_________________________        _________________________        _______________
Print name of Research            Signature of Research            Date
Assistant                         Assistant

Participant ID: [        ]