UBC Theses and Dissertations


A single subject participatory action design method for powered wheelchairs providing automated back-in… Adhikari, Bikram 2014

Full Text

A Single Subject Participatory Action Design Method for Powered Wheelchairs Providing Automated Back-in Parking Assistance to Cognitively Impaired Older Adults: A Pilot Study

by

Bikram Adhikari

BE, Electronics and Communication Engineering, Tribhuvan University, 2010
MS, Biological and Agricultural Engineering, Washington State University, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Computer Science)

The University of British Columbia (Vancouver)

December 2014

© Bikram Adhikari, 2014

Abstract

Mobility is one of the most significant factors that determines older adults' perceived level of health and well-being. Cognitively impaired older adults are deprived of powered wheelchairs because of the operational safety risks. These users can benefit from intelligent assistance during cognitively or visually challenging tasks such as back-in parking. An intelligent powered wheelchair that assists a cognitively impaired elderly user in performing a back-in parking task is proposed. A single subject participatory action design method is used with a cognitively impaired older adult to identify design guidelines for the proposed system. Based on analysis of transcripts from semi-structured interviews with the participant, a semi-autonomous back-in parking system is designed to drive the powered wheelchair into a pre-specified back-in parking space when the user commands it to. A prototype of a non-intrusive steering guidance feature for a joystick handle is also designed to render shear force in a way that can be associated with the steering behavior of a car. The performance of the proposed system is evaluated in a pilot study. Experiments with the autonomous trigger and autonomous assisted modes are conducted during a back-in parking task with real-life obstacles such as tables and chairs in a long-term care facility.
A single-subject research design is used to acquire and analyze quantitative data as a pilot study. Results demonstrate an increase in the user's perception of ease of use, effectiveness and feeling of safety with the proposed system. While the user experienced at least one minor contact in 37.5% of the trials when driving unaided, the proposed system eliminated all minor contacts. No statistically significant difference in completion time and route length is observed with the proposed system. In the future, improved back-in parking systems can use this work as a benchmark for single subject participatory action design. Future iterations could also replicate the usability study on a larger population.

Preface

Dr. Alan Mackworth and Dr. Ian Mitchell, my academic supervisors, introduced me to the ongoing research in intelligent Powered Wheelchairs (PWCs) at the University of British Columbia (UBC). I worked as a teleoperator responsible for mimicking different levels of autonomy of an intelligent PWC in a Wizard-of-Oz (WOO) study [1, 2]. The WOO study was conducted with cognitively impaired older adults in a Long Term Care (LTC) facility under the direct supervision of Dr. Pooja Viswanathan. Based on the pilot participant's performance during the WOO study, Dr. Viswanathan and I came up with the proposed project as an iterative development on the WOO study. All of the study was conducted at the Vancouver General Hospital (VGH) Banfield Pavilion residential care facility. Ms. Guylaine Desharnais, Occupational Therapist at VGH Banfield Pavilion, arranged a suitable experiment site and coordinated the schedule with the participant during the research. Ms. Selene Baez and I conducted and transcribed the focused interview sessions. I developed the back-in parking system and an intuitive joystick handle for the PWC. Dr. William C. Miller supervised the design of the user study. Mr. Pouria Talebifard and I conducted the user study along with Dr. William C. Miller as the principal investigator.
The UBC Behavioral Research Ethics Board (BREB) certificate of approval number is H13-00765.

This research was supported by CANWHEEL (the Canadian Institutes of Health Research (CIHR) Emerging Team in Wheeled Mobility for Older Adults, Grant #AMG-100925), Natural Sciences and Engineering Research Council (NSERC) grants, the Canada Foundation for Innovation (CFI) Leaders Opportunity Fund / British Columbia Knowledge Development Fund Grant #13113, the Institute for Computing, Information and Cognitive Systems (ICICS) at UBC, and TELUS.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Glossary
Acknowledgments
1 Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Research Questions
  1.4 Hypotheses
  1.5 Contributions
  1.6 Thesis Overview
2 Literature Review
  2.1 Participatory Action Design
  2.2 Intelligent Wheelchair Systems and Older Adults
  2.3 Shared Control
  2.4 Shared Control in PWC
  2.5 A PWC WOO Study
3 Participatory Action Design
  3.1 Conceptualization
  3.2 Initial Guidelines from the WOO Study
    3.2.1 User Background
    3.2.2 WOO Pilot Study
    3.2.3 Features of Intelligent Powered Wheelchair
    3.2.4 Sing Along Scenario
    3.2.5 Summary of Initial Guidelines
  3.3 Focused Interview: Session I
    3.3.1 Conceptual Baggage
    3.3.2 Back-in Parking Scenario
    3.3.3 User Interface
    3.3.4 Joystick Interface
    3.3.5 Design Guidelines I
  3.4 Focused Interview: Session II
    3.4.1 Conceptual Baggage
    3.4.2 Back-in Maneuver
    3.4.3 Shared Control
    3.4.4 Design Guidelines II
  3.5 Overall Design Guidelines and Discussion
4 System Development
  4.1 System Objectives
  4.2 System Functionalities
  4.3 Hardware and Software Platform
  4.4 System Design
    4.4.1 Robot Setup
    4.4.2 Back-in Parking System
    4.4.3 User Interface
  4.5 System Test and Discussion
5 Pilot Study
  5.1 Experiment Scenario
  5.2 Test Hypotheses
  5.3 System Test
  5.4 Usability Test
6 Results, Analysis and Discussion
  6.1 System Test
  6.2 Usability Test
    6.2.1 Completion Time
    6.2.2 Route Length
    6.2.3 Minor Contacts
    6.2.4 Average Usability
7 Conclusions and Future Work
  7.1 Conclusions
  7.2 Future Work
Bibliography
A Semi-structured Interview Questionnaire
  A.1 Background Interview
  A.2 Early Prototyping Interview
  A.3 In-Chair Session
  A.4 Post In-chair Session Interview

List of Figures

Figure 3.1 A simulated sing-along environment in a LTC facility.
Figure 3.2 Simulating a sing-along scenario on paper.
Figure 3.3 Physical interface design brainstorming.
Figure 3.4 A medium fidelity prototype of an intelligent PWC.
Figure 3.5 Diagram of key positions during a back-in parking task.
Figure 4.1 PWC platform diagram.
Figure 4.2 System block diagram.
Figure 4.3 ROS Navigation Stack setup [3].
Figure 4.4 PWC user interface on the joystick shaft.
Figure 4.5 Joe holding the joystick handle.
Figure 4.6 A rotating knob and Pacinian corpuscles in right hand.
Figure 4.7 Operation of the user interface.
Figure 5.1 Experimental scenario (overhead view).
Figure 5.2 Joe on PWC at initial positions.
Figure 6.1 System Test: Completion time.
Figure 6.2 System Test: Route length.
Figure 6.3 Usability Test: Completion time.
Figure 6.4 Usability Test: Route length.
Figure 6.5 Usability Test: Number of minor contacts.
Figure 6.6 Usability Test: Total number of minor contacts across modes.
Figure 6.7 Average usability score.

Glossary

2D      two-dimensional
ANOVA   Analysis of variance
BREB    Behavioral Research Ethics Board
CANBUS  Controller Area Network Bus
CFI     Canada Foundation for Innovation
CIHR    Canadian Institutes of Health Research
CWA     Collaborative Wheelchair Assistant
DBW     Drive By Wire
DWA     Dynamic Window Approach
GB      Gigabyte
GPU     Graphical Processing Unit
HF      Haptic Feedback
HRI     Human Robot Interaction
ICICS   Institute for Computing, Information and Cognitive Systems
IQR     interquartile range
LED     Light Emitting Diode
LTC     Long Term Care
LTS     Long Term Support
NSERC   Natural Sciences and Engineering Research Council
OS      Operating System
PAD     Participatory Action Design
PANDA   Parking and Driving Assessment
PIDA    Power-Mobility Indoor Driving Assessment
PS3     PlayStation 3
PTU     Pan-Tilt Unit
PWC     Powered Wheelchair
RDP     Road Departure Prevention
RGB     Red Green Blue
RGBD    Red Green Blue Depth
ROS     Robot Operating System
SLAM    Simultaneous Localization and Mapping
SSPAD   Single Subject Participatory Action Design
UBC     University of British Columbia
USB     Universal Serial Bus
VGH     Vancouver General Hospital
WOO     Wizard-of-Oz

Acknowledgments

First of all, I extend my deepest thanks to Mr. Joe (a pseudonym for the participant) for welcoming me to work with him and providing valuable feedback for the research. I thank my supervisors, Dr. Alan Mackworth and Dr. Ian Mitchell, for their support and guidance during my academic journey at UBC. I thank Dr. Jim Little for being the second examiner of this thesis. This research would not have been possible without Dr. Pooja Viswanathan's thesis and the WOO study. It was a great learning experience working with her in this research. I express my sincere gratitude to her for her thorough guidance throughout this research. I thank Mr. Neil Traft for his valuable feedback and suggestions during system development and thesis copyediting. I thank Mr. Pouria Talebifard and Ms. Selene Baez for their support during the focused interview sessions and the user study at the LTC facility. I thank Dr. William C.
Miller for his supervision of the design of the user study. I thank Dr. Karon MacLean for her valuable suggestions during the ethics submission and initial interview design. The joystick interface was initially designed under her supervision as a course project. I thank Ms. Guylaine Desharnais for her support at the VGH Banfield Pavilion. I thank my loving parents and dear family members from the bottom of my heart for all their encouragement, love and support in every single step of my life.

Chapter 1

Introduction

If I have seen farther it is by standing on the shoulders of giants.
— Sir Isaac Newton (1855)

1.1 Motivation

Mobility is one of the most significant factors that determines older adults' perceived level of health and well-being [4–6]. As older adults lack the strength to propel themselves in manual wheelchairs, Powered Wheelchairs (PWCs) can aid their mobility and independence. Cognitively impaired older adults are deprived of these PWCs because of the operational safety risks [6]. Fully autonomous PWCs are not recommended for these users because it can be frustrating if the PWC behaves differently than they would expect [7]. However, under cognitively or visually challenging circumstances the user could ask the intelligent PWC to lead or even completely take over the driving task. Driving in close proximity to obstacles during tasks such as navigating through corridors, docking under a table and parking could be some autonomous behaviors that might benefit these elderly users. Driving a PWC backward is challenging due to poor visibility and the less intuitive joystick motion during a reverse maneuver. For example, while driving a differential drive PWC backwards, the joystick needs to be pushed in the backward-left direction to turn towards the backward-right direction, leading to confusion at times.

We propose an intelligent PWC that assists cognitively impaired elderly users to perform a back-in parking task.
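The backward-driving reversal described above — pushing the joystick backward-left to travel backward-right — can be illustrated numerically. The sketch below is not from the thesis; it assumes a common joystick-to-velocity mapping (forward deflection sets linear speed, left deflection commands a counterclockwise turn) and a unicycle model of the differential-drive PWC.

```python
import math

def simulate(joy_x, joy_y, t_end=2.0, dt=0.01, v_max=0.5, w_max=0.5):
    """Integrate a unicycle model of a differential-drive PWC.

    Assumed (illustrative) joystick mapping:
      joy_y: +1 = full forward speed, -1 = full reverse
      joy_x: -1 = full left deflection -> counterclockwise turn
    Returns the final (x, y) in the chair's initial frame, where the
    chair starts at the origin facing +y, so +x is the chair's right.
    """
    v = joy_y * v_max            # linear velocity (m/s)
    w = -joy_x * w_max           # angular velocity (rad/s); left -> CCW
    x = y = 0.0
    theta = math.pi / 2          # initially facing +y
    t = 0.0
    while t < t_end:
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        t += dt
    return x, y

# Push the joystick backward-left: the chair ends up behind its start
# (y < 0) and toward its *right* (x > 0) -- the reversal users find confusing.
x, y = simulate(joy_x=-1.0, joy_y=-1.0)
print(round(x, 2), round(y, 2))  # x > 0 (right), y < 0 (behind)
```

Under the same mapping, a forward-left command curves the chair to its left as drivers expect; the sign flip appears only when reversing, which is consistent with the trial-and-error behavior reported later in Chapter 3.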
We choose this problem based on observation of the driving behaviour and the desire of a participant to address his backward driving issues, as observed during a Wizard-of-Oz (WOO) study [1, 2]. We propose to solve this problem by working closely with the participant to identify his needs and to come up, together with him, with a suitable technology to perform the task. We use the Single Subject Participatory Action Design (SSPAD) methodology to identify system design criteria with him. Based on the analysis of transcripts from semi-structured interviews with the participant, we design a semi-autonomous back-in parking system which drives the PWC into a pre-specified back-in parking spot when the user commands it to. We also design a prototype of a non-intrusive steering guidance feature for a joystick handle to render shear force in a way that can be associated with the steering behavior of a car. We evaluate the performance of the proposed system in a pilot study. We experiment with autonomous trigger and autonomous assisted modes during a back-in parking task with real-life obstacles such as tables and chairs in a LTC facility. We use a single-subject research design to acquire and analyze quantitative data as a pilot study.

1.2 Objectives

The objectives of our research are to:

1. Develop an intelligent PWC system for a cognitively impaired older adult in a LTC facility to
   • assist in navigating into a pre-identified back-in parking space;
   • ensure safety by designing a suitable user interface.
2. Evaluate the performance of the proposed system within a controlled experiment.
3. Test the system with a cognitively impaired LTC resident in a pilot study.

1.3 Research Questions

Our primary research questions are:

1. How does the proposed back-in parking system affect the performance of the user in terms of completion time, route length and number of minor contacts?
2. How does the proposed system affect the user's perception of ease of use, effectiveness and safety?

1.4 Hypotheses

Our hypotheses for the research questions are:

H1. The proposed system will reduce the completion time and route length to get to a suitable parking spot. It will also reduce the number of minor contacts while back-in parking.
H2. The proposed system will increase the user's perception of ease of use, effectiveness and safety.

1.5 Contributions

Our contribution is the iterative technology development stage beyond the previously conducted WOO study. We have developed a non-intrusive steering guidance system and a semi-autonomous back-in parking assistance system. We use the SSPAD methodology to assess the usability of a back-in parking system in a smart PWC for a cognitively impaired older adult. We describe the expected behavior of the system from the user's perspective. We describe our endeavors to meet the user's requirements and design a suitable pilot study for preliminary evaluation of the proposed system.

1.6 Thesis Overview

The remaining sections of this thesis are organized as follows. Chapter 2 summarizes related work in the field. Chapter 3 provides details of the participatory action design, and Chapter 4 provides details on the implementation of the system. Chapter 5 describes the design of the experiment for the pilot study. Chapter 6 discusses and analyzes findings from the pilot study. Finally, we highlight the main conclusions of the research and discuss possible future work in Chapter 7.

Chapter 2

Literature Review

Rather than work forward from a technology or a complex strategy, work backward from the needs of the customers and build the simplest product possible.
— Eric Ries (The Lean Startup)

2.1 Participatory Action Design

Participatory Action Design (PAD) is a user-centered approach to the design, development and assessment of technology, with emphasis on the active involvement of stakeholders in the design and decision making process [8–12].
PAD involves a broad range of collaborative activities between designers and end users. Designers act as facilitators or visual translators to help participants express their ideas. The most common activities in PAD use visual and semantic tools – such as a bag of words, stickers or objects of different shapes – to offer ways of expression to nondesigners. Designers prompt participants to use these tools to express their ideas for the system to be designed. Participants may use these tools to create their desired systems, services or interfaces. Designers probe into the details of what they have made to understand their needs and identify their creative content. These activities and interactions between designers and participants help designers form guidelines for the design, development and assessment of an actual system.

Participatory design was initially used in the 1970s in Norway by computer professionals working with members of an iron and metal workers' union to introduce computer systems into the workplace. Though the PAD approach has been adopted in diverse fields such as urban design, landscape planning and human computer interaction, it has not been frequently used with stakeholders with reduced capabilities due to disability or ageing [12]. It has been recognized that older adults are capable of being critical as potential active consumers of assistive technologies [13]. Involving these users in the design process might help to avoid the application of technology that causes more problems than it solves. The "Design for All (Universal Design)" philosophy [14] also suggests that if the technology works for older adults it will work better for everyone.

Seale et al. [13] described a focus group methodology to help older adults describe their mobility requirements. In recent years, assistive HRI system designs have successfully used the PAD methodology [11, 15] with focus groups.
Since our target population is cognitively impaired older adults residing in Long Term Care (LTC) facilities, it is extremely difficult to identify and gather focus groups to engage in the participatory action design methodology. As Muller [9] points out, the visual and hands-on nature of participatory design practices directly conflicts with the universal usability needs of individuals with visual and motor disabilities. Additionally, the types of impairments and the attitudes towards modern technology vary widely among individuals with disabilities and older adults in general. Hence, PAD implementation within a group of participants can be difficult to manage and operate [12]. Thus, in this research we take a "one wheelchair at a time" approach towards designing a system with a Single Subject PAD (SSPAD) methodology.

2.2 Intelligent Wheelchair Systems and Older Adults

A smart wheelchair should match a person's needs and abilities [4]. Perceptual, physical and cognitive declines due to ageing tend to make some tasks difficult to achieve. Task-specific autonomous behaviors can help improve the mobility of elderly users while still exercising their cognitive abilities on other tasks [4]. Kairy et al. [16] found users facing difficulties in restricted spaces and highly dynamic environments. We encourage readers to refer to [17–19] for a broader overview of intelligent PWCs. Here, we refer to the literature on task-specific behaviors of intelligent PWCs. Autonomous behaviours for PWCs primarily include navigating in crowded environments [20, 21] and obstacle avoidance [19]. Parking and docking assistance have been studied for docking to a power-lift on the back of a van [22] and onto a custom-designed wheelchair-dockable bed [23]. Implementation details of these methods are not yet available for user testing.
As our work focuses on the usability of existing robot navigation methods from a user's perspective, we build on the established 2D navigation pipeline implemented as the "navigation stack" [3] in the open-source robotic platform known as the Robot Operating System (ROS).

2.3 Shared Control

Shared control involves a control signal generated by combining real-time signals from multiple agents in a system. In the context of our PWC, the two agents are the user (PWC driver, remote teleoperator or trainer) and an embedded controller (wizard or computer). For the WOO study described in Section 2.5, the PWC driver was the user, but the embedded controller was replaced by a remote teleoperator acting as the "wizard".

Shared control has been used in applications involving remote operation, such as surgery and pilot training systems. These applications assume users are trained professionals. The literature on shared control for lane assistance in passenger cars appears more closely related to our application, as these systems also assume users are novice drivers, and because the degrees of freedom of motion are restricted to a plane. Katzourakis et al. [24] describe three approaches to shared Road Departure Prevention (RDP) for a simulated emergency maneuver. They employed Haptic Feedback (HF), Drive By Wire (DBW) and the combination of HF and DBW with normal driving. In HF, given a likelihood of road departure, the RDP applied an advisory steering torque such that the two agents would carry out the emergency maneuver cooperatively. In DBW, given a likelihood of road departure, the RDP adjusted the front-wheel angle to keep the vehicle on the road, without feedback. In this mode, the user's steering signal was completely overridden by the RDP. Their experiments with 30 participants in a vehicle simulator suggested that HF had no significant effect on the vehicle's path or the likelihood of road departure.
The authors describe that users perceived haptic feedback as authoritarian if it was strong, and generated more torque on the steering wheel against HF to override the RDP system if the HF was not strong. DBW and DBW+HF, on the other hand, successfully reduced the likelihood of departure. However, the authors report degraded stimulus-response compatibility with DBW systems: when the DBW system took over control, the users would not feel the steering wheel turning, which confused their internal perception of the vehicle. Taking inspiration from their work, our system uses an approach similar to DBW+HF.

Force-feedback haptic joysticks have been used in smart PWCs [25, 26] but they are bulky, expensive and/or lack sufficient torque. Vibration feedback on the seat [27] and steering wheel [28] has led to a reduction in reaction time and frontal collisions in a lane-keeping task.

2.4 Shared Control in PWC

The smart PWC systems literature has been extensively reviewed in [18, 29]. We explore the systems that have been tested with cognitively and/or mobility impaired users. Viswanathan et al. [29] and Wang et al. [30] did not use the concept of shared control. Their systems would either provide higher level supervisory guidance with visual and audio cues, or use switch control policies such that either the system or the user would have complete control over the PWC. The Collaborative Wheelchair Assistant (CWA) [31, 32] modified users' input by computing motion perpendicular to the desired path. Perpendicular motion away from the desired path increased the elastic path controller's output and forced the PWC to return to the path. The amount of guidance was determined by varying a parameter that controlled the elastic path controller's gain. Urdiales et al. [25] used smoothness, directness and safety measures to compute local efficiencies of the human and robot control signals. The system blended the control signals based on current and past average relative efficiencies. Li et al. [33] used safety, comfort and obedience measures (similar to [25]) to blend the user's and the autonomous controller's signals, along with an online optimization procedure to maximize the minimum of these measures. Their experiment with able-bodied users and cognitively intact, mobility impaired older adults showed that the system improved the smoothness of the wheelchair trajectory and reduced the likelihood of collision. Carlson and Demiris [34] implemented a shared control scheme by first identifying the user's intended destination based on the joystick heading and the autonomous system's trajectory. Secondly, they implemented an obstacle avoidance algorithm to find a traversable direction close to the user's input. Their experiment with able-bodied users and an experienced end-user with mobility impairment showed improved user safety at a small additional time cost. It also allowed users to perform a secondary task while driving the wheelchair, by decreasing cognitive workload, visual attention and manual dexterity demands.

2.5 A PWC WOO Study

The WOO experimental technique [35] was originally used in the design of user interfaces involving natural language processing. In the context of Human Robot Interaction (HRI) [36], WOO is a powerful tool for iterative design as it allows various options to be tested before significant effort is invested in developing a fully functional system. A WOO study on the preference for shared control in intelligent PWCs was conducted with cognitively impaired older adults in LTC facilities in the Vancouver area [1, 2]. Three levels of autonomy – speed control, direction control and autonomous driving (details in [2]) – were tested under different scenarios in LTC facility settings.

1. Speed Control: In this policy, the wizard restricted the maximum magnitude of the control signal based on the proximity of nearby obstacles.
2. Direction Control: In this policy, the wizard intrusively took control of the steering if the PWC crossed a threshold distance from an obstacle, steered the PWC to the nearest free space and released control back to the user.
3. Autonomous Control: In this policy, the wizard took full control of the PWC to perform a task. The user could stop the PWC, but otherwise could not change its motion.

Speed control mode did not provide direction guidance around obstacles. Direction and autonomous control were intrusive because the system took away control from the user. Intrusive guidance methods are not suitable for our target population. Older adults with more significant cognitive impairment are more likely to be disoriented. The previously conducted WOO study suggested that older adults with more significant cognitive impairment did not like the system to take away control at any time. One participant stated, "The chair should do what I want it to do rather than what it wants to do". Older adults with mild cognitive impairment are comfortable with autonomous systems; however, older adults at an early stage of a degenerative cognitive impairment find more health benefits in exercising their cognitive skills on involved tasks. We believe that suitable non-intrusive guidance could benefit these novice elderly PWC users by not only giving them control at all times but also helping them exercise their cognitive skills, whereby they follow an intuitive navigation guidance interface and yet remain safe.

Additional findings from the WOO study suggest that user preferences for the level of autonomy varied among participants and driving scenarios. This variation in preference for level of autonomy is possibly an indicator of the diverse levels of cognitive impairment in the target population. However, user preferences to be in control of some aspect of the wheelchair operation and decision making were evident in the study.
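The non-intrusive Speed Control policy — capping the magnitude of the user's command as obstacles get closer — can be sketched as a simple proximity-based scaling. This is an illustrative reconstruction, not the wizard's actual procedure; the function name and distance thresholds are assumptions.

```python
def limit_speed(v_user, w_user, clearance, stop_dist=0.3, slow_dist=1.5):
    """Scale the user's (v, w) command by distance to the nearest obstacle.

    clearance: distance (m) to the nearest obstacle, e.g. from a laser scan.
    Below stop_dist the chair stops; between stop_dist and slow_dist the
    command ramps linearly; beyond slow_dist it passes through unchanged.
    Thresholds are illustrative, not from the thesis.
    """
    if clearance <= stop_dist:
        scale = 0.0
    elif clearance >= slow_dist:
        scale = 1.0
    else:
        scale = (clearance - stop_dist) / (slow_dist - stop_dist)
    return v_user * scale, w_user * scale

print(limit_speed(1.0, 0.5, clearance=2.0))  # open space: command unchanged
print(limit_speed(1.0, 0.5, clearance=0.9))  # mid range: roughly half scale
print(limit_speed(1.0, 0.5, clearance=0.2))  # too close: safety stop (0, 0)
```

Note that, exactly as observed in the study, such a policy limits speed but offers no directional guidance around the obstacle.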
Some participants preferred autonomous driving in situations they found difficult to maneuver. The findings from the study also suggested that many of these users were able to articulate their needs and preferences for an intelligent PWC. These findings inspired an extension of the WOO study with a focus on the needs of a single subject whom we found to be particularly articulate. To achieve this, we adopted a participatory design approach for a specific autonomous behavior, as desired by this key informant. Details of this approach follow in the next chapter.

Chapter 3

Participatory Action Design

3.1 Conceptualization

We identified the need for back-in parking assistance during a WOO study [1, 2] conducted with a simulated intelligent wheelchair and cognitively impaired older adults. The wheelchair was designed with varying levels of autonomy and was tested under five different driving scenarios as described in the Power-Mobility Indoor Driving Assessment (PIDA) [37]. These scenarios included driving through a hallway, driving in and out of an elevator, docking at a table, maneuvering through obstacles and back-in parking. From the user study conducted with ten LTC residents, we identified several key conceptual and design requirements [7]. Users often desire to be in control; sometimes they want control over higher level route planning, and sometimes over lower level driving behavior. There was also much variability in preference for specific levels of autonomy across participants, and even across scenarios for the same participant. With such ambiguity and contradiction seen in the user responses, we believe that these ambiguities need to be addressed within the design process on an individual basis [7].

We identified one of the participants from the user study who had a higher cognitive score compared to the other participants and was able to articulate his needs during the study. He had a specific problem with operating the wheelchair while driving backwards.
Based on his preference and performance in the WOO study, two specific goals emerged: 1) a suitable driving interface that would give the user a sense of maximum control, and 2) a (semi-)autonomous back-in parking assistance system that would take control over driving backward if needed.

In collaboration with this user, we identified a specific scenario in the LTC facility where he could potentially benefit from a back-in parking assistance system.

3.2 Initial Guidelines from the WOO Study

On initial evaluation of interview transcripts from the WOO pilot study using open coding techniques, two themes emerged: 1) level of autonomy and 2) preference of interface. Axial non-hierarchical coding was done using these two themes to summarize the transcripts. In the following subsections, we summarize the findings from our thematic analysis to inform our conceptual baggage for the subsequent focused interviews.

3.2.1 User Background

Our user in the study is referred to by his anonymized name, "Joe". He has experience communicating with both highly technical and less technical people from his professional career, which is useful as he can articulate his needs in both technical and non-technical language. Joe has had Parkinson's disease since the early 2000s. The degenerative nature of Parkinson's disease concerns him, as he understands that his mobility will become more limited as time goes by. At the beginning of our WOO study (July 2013), his primary medium of mobility was a manual wheelchair, which he had been using for six months. He used a walker only during his physiotherapy sessions, when he was accompanied by a certified physiotherapist. He used to propel his manual wheelchair with his hands and feet. During his freezing periods due to Parkinson's disease, his feet could not move easily and his mobility was severely impaired.
He mentioned that he did not feel good when people lined up behind him in the corridor while he froze and had difficulty propelling himself even to the side of the corridor to give way.

3.2.2 WOO Pilot Study

Joe was our pilot participant during the WOO study. During the study, he found driving backwards mentally taxing because he was not used to the standard joystick interface for operating the PWC. Moreover, he found it difficult to judge when to stop while back-in parking, as he had no idea how far obstacles were behind him. He said, "The main concern you know as you pull back, is that you will hit something or someone in the back. But, if it stops automatically, that works." Since driving backwards involved identifying the suitable joystick movement for the desired maneuver, he had to use trial and error before proceeding with a heading. He said, "So right now, I still need to trial and error. Try this way, do that way. Sometimes it does not work. So, the difficult part is the way to do. I think that once I got the right direction and decision, I was able to move back quite easily." In comparison to forward driving he said, "In that [forward driving] case, I know if I go that way, I turn that way. And if I go that way, I turn that way. But here [backward driving], I was doing the opposite and then it's taking me away from the object in the positive form of direction and then I am trying to back up and in a negative backward direction, so all confused." During the WOO study, he made several comments on the level of autonomy and user interface of the simulated intelligent PWC. The following section highlights some of his comments.

3.2.3 Features of Intelligent Powered Wheelchair

Level of Autonomy

Across the overall driving scenarios, Joe preferred to be in either steering correction or basic safety mode, as compared to autonomous mode. These two modes gave him more independence over control of the PWC.
He was more inclined towards steering correction mode than basic safety, since basic safety impeded his momentum by stopping near obstacles, while steering correction helped him keep his momentum by steering away as he approached an obstacle. He highlighted the importance of the user being in control as much as possible when he said, "To encourage people use the manual mode as oppose to the automatic mode because it encourages them to use their mind and their memory, exercise their memory." He suggested reducing the number of driving modes to a maximum of two. Basic safety should be a default feature, and the system should provide some form of guidance. If the user fails after multiple trials, the system should take over control to perform the task. An example of such a scenario was back-in parking, where he preferred autonomous or steering correction mode over basic safety. He noted that these two modes helped him towards reaching the goal, while basic safety was useful only in avoiding obstacles but not necessarily in reaching his desired back-in parking destination. He also made a strong remark that, despite full autonomy of the PWC, the user must have the opportunity at all times to change his mind, go to a different destination and hence override the autonomous behavior. He said, "Number one priority, override everything."

User Interface

In the WOO study, audio prompts and vibration feedback were used to signal wheelchair behavior. Joe did not recommend using audio feedback, as he said, "It could be not welcomed by nurses if everybody has that kind of chair. Then, it will be ringing everywhere!" On using personalized wireless audio prompts, he responded that users would not be sophisticated enough to use them and that they would require more training with additional interfaces.
He preferred vibration feedback over audio, as he said, "Well, without making any sound alarm, maybe just use the vibration, so uh, perhaps the vibration is the main communication between the wheelchair and the user. I do not think it is too hard to put on something to alert the user, right? Because as long as it does not make any noise, something like vibrate, or something on the side, right? But I will not suggest having a sound."

On visual displays as an alternative, he commented that visual forms should be simple. He recommended that any screen be situated at a location that does not obstruct the user's view. The screen would primarily be for diagnostic details for a technical person supervising the system, but Joe did not suggest using a screen in front, as he commented, "unless they [the users] have technical background, otherwise they do not know what to do, right? And plus, and then uh — there is no harm to put something on the front here [around the joystick] if needed, then they can mount the panel back to here [behind seat of PWC]." He also pointed out that a screen situated at the front of the wheelchair would be hazardous, as target users, being older adults, have a tendency to slump forward, which he himself often experienced. He was against having anything mounted at the front, as he pointed at the intelligent PWC and said, "I saw the wheelchair that you have got, and it has something mounted on the front, so that is a bit of a concern for me. Actually, a big concern for me, too. The front needs to be (I: Clear? Okay.) nothing in the front." With reference to simple light displays he stated, "Well, you usually move quite slowly, you know? And, I also think because it is so close, you should not have any problems looking at the lights, you know? Because the lights should be flashing, eh? (I: Flashing light.) Flashing yellow, right?
Flashing red, that is a good idea, too."

As a method of input, Joe mentioned that voice input was not a good idea given his stutter, and that he would prefer to use his hand. He said, "Uh, I think that is it. voice, hand, that is about it, is not it? Use hand and the voice, not too well, so be the best, right? So, not to rely too much on the voice. Joystick should be more reliable." In another instance he said, "That is actually one of the most important features I like because it is hard to sometimes communicate, especially when I cannot move, my talking is also kind of stutter. In that case, I do not have to talk to anyone else and just do it myself and push the joystick." Despite disapproving of a visual display, Joe agreed to using a touchscreen as an input device, as he said, "Touchscreen is usually okay. Touchscreen should be among the best." But he still preferred to have clear space in front, saying, "if you don't want to use it, you can put it up."

3.2.4 Sing-Along Scenario

In the LTC facility where Joe resides, a sing-along program is organized every week, in which a singer sings residents' favorite songs along with them. Joe enjoys participating in the event, which he finds helpful for working on his stuttering while singing. As all the residents participating in the event park their wheelchairs around the singer, he finds that an intelligent wheelchair able to assist him with back-in parking at the event would save caregivers' time.

Joe described his enthusiasm for the sing-along event as, "Uh, yes! I like to go to the music program, the "sing-along" program. Because I found out that okay, although I sometimes stutter when I talk, when I sing, I can sing, right?".
He demonstrated his enthusiasm about the event by singing a couple of stanzas from one of his favorite songs without any stuttering.

3.2.5 Summary of Initial Guidelines

From the analysis of the transcripts from the original WOO sessions with Joe, we observed that he had the following preferences:

• Fewer modes, to reduce confusion over the choice of level of autonomy.
  1. One driving mode would provide basic safety along with suitable guidance towards a goal, while providing maximum user control.
  2. Another driving mode would be semi-autonomous in situations he found difficult to operate, such as back-in parking.

• The user interface could mostly be around the joystick, with lights flashing red, green and blue.

The following two focused interview sessions explore these key aspects of the design requirements in detail. These subsequent interview sessions were part of the extension after we had identified Joe as the key informant and back-in parking as the desirable task to automate.

3.3 Focused Interview: Session I

3.3.1 Conceptual Baggage

In this basic system design identification session, we attempted to build clear groundwork for the study by understanding the nature of the back-in parking task. Our objective was to identify the design requirements of the back-in parking system and a medium of interaction between the user and the intelligent PWC to communicate the information required to execute the task. We developed a sample visualization of a sing-along scenario based on our observation of the sing-along event at Joe's LTC residence (Figure 3.1). We used props with different basic physical and geometric properties to inspire design ideas and potential interface prototypes during the interview session.

Figure 3.1: A simulated sing-along environment in a LTC facility.

3.3.2 Back-in Parking Scenario

First, we identified the problem in a sing-along scenario in a LTC facility. On a piece of paper, small cubes, each symbolic of a wheelchair, were arranged (Figure 3.2).
Joe was given a cube with markers to represent the forward heading of the PWC. With the cube at one end of the paper, he described his hypothetical parking scenario in a sing-along-like environment. Joe suggested that parking spaces should be pre-specified, as in a vehicle parking lot, so that in crowded situations the available space for parking could be optimized. Each parking location should be assigned a number. He described himself driving up to position A (Figure 3.2b). At position A, the PWC would identify available parking spaces and show the number of an available parking spot. The user would confirm the position to the PWC. The intelligent system would then take over control to drive the PWC to the desired parking space.

Joe characterized parking space allocation in an open space as shown in Figure 3.2c. He described how the arrangement of seating would depend on one's interest in active participation in the sing-along event. Residents who preferred to sing (such as himself) were aligned in the front row while the rest were aligned at the back. The first person would begin at the end of an arc (Figure 3.2c), the following participant would align adjacent to the first, and so on. However, people would have a preference to be parked next to their friends; hence, they would leave space for their friends. Thus, the user should have the option to ignore a parking space suggested by the system and go to an alternative spot instead.

Figure 3.2: Simulating a sing-along scenario on paper. a) Initial setting; b) Possible trajectory; c) Parking space characterization.

3.3.3 User Interface

For an interface, Joe suggested a visual display that would show the currently available parking spots. The user could choose a parking spot with input on the display via touch or through the joystick. The user would have the option to override the parking decision at any time and move to a different location by taking over joystick control.
Joe initially pointed out that it was unintuitive for him to think of the joystick as an interface, because he was not familiar with joystick-like interfaces. Joe said, "I think the main driver is joystick. If you put more things on the joystick, that would be good because you do not have to move to different angles, different places to do that." Older adults have less dexterity in their hands to move around and interact with multiple interfaces. As the main drive signal would be communicated through the joystick in PWCs, having other controls also centrally located on or around the joystick would be beneficial.

Figure 3.3: a) Props used for physical interface design brainstorming; b) Comparing small ball, thimble and small plastic ball; c) Holding a styrofoam ball; d) Holding a styrofoam cone and e) Squeezing a spongy ball.

3.3.4 Joystick Interface

As the primary focus of our research was on assistive user interfaces for navigation, we chose to focus particularly on the joystick. First, we presented two joystick heads, a thimble and a small ball (Figure 3.3). He found the small ball more comfortable than the thimble surface, which he found hard. When presented with a bigger plastic ball which was softer, Joe found that this bigger plastic ball made him feel more comfortable and safer, because it felt softer and was easier to hold with his hand. When presented with a styrofoam ball slightly larger than the plastic ball, he noted that the larger styrofoam ball was still more comfortable and could also provide a surface on which to place additional interface buttons and displays. When presented with styrofoam cones of two different sizes, he found them less convincing than the styrofoam ball, though they too could provide a surface for an additional interface. We also tested whether Joe could use pressure as a form of interaction, with a spongy ball to squeeze.
We found his grip was not strong enough to squeeze the spongy ball.

3.3.5 Design Guidelines I

In this interview session, the following guidelines were drawn up:

• Parking spots for intelligent PWCs should be pre-specified, as in vehicle parking lots.
• The user should have control to override the autonomous behavior at any instant during the parking task.
• Increasing the surface area of the joystick handle could open opportunities to integrate more functionality into the joystick interface.

3.4 Focused Interview: Session II

3.4.1 Conceptual Baggage

In this session, we attempted to identify the division of control between manual driving and autonomous back-in parking. Based on the user's wish that parking spaces be predefined and that the user know which parking spot he wants to park his intelligent PWC in, we focused on the possible maneuvers required for shared control during autonomous back-in parking. We used a medium-fidelity prototype of an intelligent PWC with an RGBD sensor mounted on a PTU. The RGBD sensor would be used to sense objects in the wheelchair's surrounding environment, and the PTU would be used to turn the RGBD sensor towards the desired direction. Specific technical details of these units are described in Chapter 4. The task was performed in a 1 m × 1 m parking space made of styrofoam obstacles (Figure 3.4). The medium-fidelity prototype was teleoperated to mimic an intelligent PWC.

Three key locations relative to the parking spot were used to understand the desired system behavior at each location, as shown in Figure 3.5. At position A, the camera would rotate to build a local map of its environment. The user would then drive the PWC from position A to E. At position E, the user would trigger the autonomous mode. For this prototype, the autonomous mode was mimicked with a teleoperated PWC. The teleoperated PWC would turn around with its back facing towards the parking spot. The PTU would also turn so that the RGBD camera would face directly towards the parking spot.
The teleoperated PWC would then drive backward into the parking spot until it reached position P, coming to a final stop. Sample videos of this medium-fidelity prototype were created showing the behaviors at each position.

Figure 3.4: A medium fidelity prototype of an intelligent PWC with an RGBD sensor in a simplified back-in parking scenario.

3.4.2 Back-in Maneuver

During the interview session, with reference to the three specific locations pertaining to particular moments in the back-in parking task, Joe reiterated that the PWC would initially be driven manually up to a certain position.

Figure 3.5: Diagram of key positions during a back-in parking task.

For a given set of available parking spaces, he said, "I would choose myself because I am able to think but for others I think the chair could choose." He would like to be able to drive the PWC until he reaches his friend. Following that, the intelligent PWC should show some "green signal" before executing the parking maneuver. The green signal could either be a flashing light or a visual display of the parking space on a screen. As he described it, "Before it parked here it has to show a picture of the space. I would like to see that [not understood] in the back, the car in the back like that. I would like to see that there is good distance so that I don't hit the car in the back and also I would like to see to park parallel with the line properly." When asked if he would like to select a parking space using an interface with an interactive visual display to turn the camera around to search for and select a parking space, he suggested keeping the interface simple. He said, "It is just a parking space anyways, right? You are not there for life. If you do not like it then you move to another one." As a bottom line, "Somehow the flexibility to actually pick the spot to park.
I do not see the need to be any more sophisticated than that."

3.4.3 Shared Control

In order to agree or disagree with the selected parking spot, the user could use a "yes/no" type of interface. Joe suggested pushing the joystick forward to trigger the autonomous mode and pulling it back to stop the autonomous behavior. When presented with the idea of pulling back to trigger back-in parking and pushing forward to take over control and drive forward, he commented, "That is actually good. It is just a convention." As a bottom line he said, "The user should always have an option to stop the back-in parking behavior and continue driving ahead."

3.4.4 Design Guidelines II

In this interview session, the following guidelines were drawn up:

• We identified the key locations involved in an autonomous back-in parking maneuver. These locations are:
  1) The vantage point (A), from where the intelligent PWC identifies available back-in parking spaces. The vantage point could be anywhere between the entrance of the sing-along event and a space adjacent to a parking space. At the vantage point, the PWC would scan around to identify suitable parking spaces.
  2) The autonomous driving mode beginning point, which could be as early as the vantage point (A) or as late as the entry point (E) of a back-in parking space. The PWC user may select and trigger the autonomous driving behavior at the vantage point, or manually drive towards a parking space up to its entry point. As the PWC arrives adjacent to a parking space, it would notify the user of the available parking space and prompt the user to engage the autonomous back-in parking maneuver. The user could either agree to park in the spot with some input to the system or continue driving forward to the next location.
When the user agrees to park in a given parking space, the autonomous system would take over control and back into the parking space.
  3) The final stopping point (P), at which the intelligent PWC completes the back-in parking maneuver.

• A visual display showing the parking space is desirable, so that the user can see that enough obstacle-free space is available to continue driving into the parking space.

3.5 Overall Design Guidelines and Discussion

Over multiple iterations of SSPAD with Joe, we identified several objectives that could be useful in helping him achieve successful and safe driving of a PWC with embedded intelligence.

• The intelligent PWC should provide assistance while navigating in the backward direction, where user visibility is limited.
• The interface of the intelligent PWC should be informative about obstacles and suggest possible safe directions for navigation into a parking space.
• Despite Joe's willingness to accept an intelligent PWC platform to support his activities of daily living, his preference to exercise his cognitive abilities and be as independent as possible is important.
Hence, the user's freedom to take over control of the system is crucial for a successful system design.

Design guidelines from these two focused interview sessions drive the objectives of our proposed back-in parking assistive system described in Chapter 4. Identifying suitable parking spaces and accessing a feasible parking space form the essential components of a fully functional back-in parking assistance system. Within the limited scope of the first iteration of our high-fidelity prototype, we focus primarily on navigation assistance and shared control during a back-in parking task, assuming that a suitable space has been identified and that the PWC is located very near it.

Chapter 4

System Development

4.1 System Objectives

Our system objective is to design an autonomous back-in parking system with shared control, such that the user can either perform the back-in maneuver himself, with guidance from the PWC, or let the system autonomously navigate into a parking spot. As Joe's preferred input/output modalities are the visual and haptic channels, our additional objective is to design a visual/haptic interface through which he may control the intelligent PWC.

4.2 System Functionalities

The key functions of our system are to:

1. build a map of the environment around the PWC,
2. navigate the PWC backward in a pregenerated map, and
3. render backward driving guidance with a physical user interface.

These functions will be assessed in terms of their ability to:

1. reduce the number of minor contacts with the surrounding environment while navigating into a back-in parking space, and
2. reduce the time and distance travelled to get to a suitable parking spot.

4.3 Hardware and Software Platform

Figure 4.1: PWC platform diagram

Our PWC platform is a Quickie® Rhythm PWC (Figure 4.1) modified to communicate through a Controller Area Network Bus (CANBUS) to Universal Serial Bus (USB) interface. This interface allows access to odometry data and joystick command signals from the PWC.
A UTM-30LX laser rangefinder from HOKUYO® is mounted at the back of the PWC for localization and mapping. An Xtion PRO LIVE camera from ASUS® is mounted on top of a PTU-D47 pan-tilt unit from FLIR®. This sensor provides point cloud data of obstacles around the PWC in closer proximity and richer detail than a laser scanner would allow. These sensor data, along with the laser rangefinder data, are used for localization and obstacle avoidance during navigation. A NeoPixel Ring (24 × WS2812 5050 RGB LED) from AdaFruit® forms an egocentric display around the PWC joystick. It shows the possible directions in which the user could point the joystick for safe navigation. A free-running servo motor is integrated into the PWC joystick lever to render shear force for steering guidance. The servo motor and NeoPixel ring communicate with the system through an Arduino Uno microcontroller board.

Our PWC platform runs on a Lenovo W530 laptop with an Intel® Core™ i7-3720QM processor, 8 GB of main memory and a 120 GB solid state disk. Its NVIDIA Quadro K1000M Graphical Processing Unit (GPU) with 2 GB of memory provides an additional computational resource for processing the RGBD sensor data. It has two dedicated USB controllers, for USB 3.0 and USB 2.0. All systems are implemented in ROS [38] (Hydro release) on top of an Ubuntu 12.04 LTS host OS. Specific details of the functioning of the overall software and hardware systems are described in Section 4.4.

4.4 System Design

Figure 4.2 shows the functional block diagram of our first prototype of a back-in parking assistance system. The system comprises three components, which are highlighted in Figure 4.2 with white, light gray and dark gray shading. White components form the physical interface of the system. Light gray components form the core of the PWC robotic platform; these components interface sensors and actuators to the computer. Dark gray components form the autonomous navigation portion of the system.
Each of these components of the system is described in detail in the following sections.

Figure 4.2: System Block Diagram

4.4.1 Robot Setup

To perform autonomous navigation, we need to configure the sensors and actuators to be compatible with the ROS framework. The PWC interface communicates with the computer through the CANBUS-to-USB interface. The user's joystick signal and encoder data from the wheels of the PWC are transmitted through this interface. The laser rangefinder, RGBD sensor and PTU are connected to the computer through separate USB interfaces. We ensure that the RGBD sensor is connected to a dedicated USB bus, as it requires the highest bandwidth among all the devices connected to the computer. A PlayStation 3 (PS3) joystick communicates with the computer through a Bluetooth interface.

Joystick

There are two joysticks for operating the PWC: a PS3 joystick and the PWC joystick. The joystick signals are vectors comprising two values corresponding to the two axes of the joysticks. Ps3Joy is a standard ROS node used to interface with a PS3 joystick. The rest of the system expects values in the range [−1, +1], with 0 corresponding to the center position of the joystick. However, joystick signals coming from the PWC are neither zero-centered nor within the range [−1, +1]. To make the two joystick signals comparable, the extreme and center position values are empirically identified and the signal is normalized with the ProcJoy node before being published to the rest of the system.

Encoders

The PWC interface provides the angular velocity of the wheels in terms of the number of encoder pulses occurring within every 100 ms sampling period. The existing ROS navigation stack does not support robot navigation in the backward direction, which limits its use in a back-in parking application.
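Returning briefly to the joystick pipeline: the per-axis normalization performed by the ProcJoy node can be sketched as follows. The calibration constants shown are illustrative placeholders, not the empirically measured values of our PWC.

```python
def normalize_axis(raw, raw_min, raw_center, raw_max):
    """Map a raw joystick reading onto [-1, +1] with 0 at the calibrated center.

    The two halves of the axis are scaled independently, because the raw
    signal is neither zero-centered nor symmetric about its center value.
    """
    if raw >= raw_center:
        value = (raw - raw_center) / float(raw_max - raw_center)
    else:
        value = (raw - raw_center) / float(raw_center - raw_min)
    # Clamp, in case a reading falls slightly outside the calibrated extremes.
    return max(-1.0, min(1.0, value))

# Illustrative calibration values (NOT the real PWC calibration):
X_MIN, X_CENTER, X_MAX = 52, 128, 222
```

Scaling the two halves independently handles a center value that does not sit midway between the extremes, so the normalized PWC signal becomes directly comparable to the PS3 joystick signal.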
To be able to use the same navigation stack for both forward and backward driving, we use the OdomFilter node to remap forward encoder pulses to reverse, and vice versa. Similarly, this node also swaps the left and right encoder pulses before feeding them into the ProcOdom node. The OdomFilter node is not required while the PWC navigates in forward drive mode.

The ProcOdom node listens to the encoder pulses filtered by the OdomFilter node. Using the wheel diameter, the axle track between the center drive wheels and the encoder pulses, the ProcOdom node computes the velocities of each wheel and of the robot. The velocity of the robot is integrated using a fourth-order Runge-Kutta integrator to obtain the robot's odometry, which is published as the odom topic.

Laser Rangefinder

The Hokuyo node is a standard ROS node used to interface with SCIP 2.0-compliant Hokuyo laser rangefinders. Raw laser rangefinder readings contain points that are within the robot's footprint (in other words, returns due to self-occlusion). In addition, when points on the edge of an object are scanned with a laser rangefinder, they show a veiling effect in which neighboring objects appear connected to each other. Another effect that is prominent with laser rangefinders is multiple reflection of the laserscan data when highly reflective obstacles are in close proximity.

Laserscan Filter is an alias for a standard ROS package named laser_filters, which provides a pipeline of filters for laser rangefinder data. The ScanShadowsFilter in laser_filters removes laser readings that are most likely caused by the veiling effect occurring at the edge of an object. The LaserScanRangeFilter in laser_filters removes laser readings that lie within a certain range specified in its parameters; it is used to remove points that lie within the robot's footprint. The LaserScanIntensityFilter in laser_filters removes laser readings whose intensity is greater than the maximum intensity specified in its parameters.
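The encoder remapping and odometry computation described above can be sketched as follows. The wheel radius, axle track and pulses-per-revolution constants are illustrative placeholders, not the measured parameters of our platform.

```python
import math

# Illustrative geometry (NOT the measured values of our PWC):
WHEEL_RADIUS = 0.17      # m
AXLE_TRACK = 0.55        # m, between the center drive wheels
PULSES_PER_REV = 1024    # encoder pulses per wheel revolution
DT = 0.1                 # s, the 100 ms sampling period

def remap_for_reverse(left_pulses, right_pulses):
    """OdomFilter-style remap: swap the wheels and negate the pulse counts
    so that the backward-facing 'robot' appears to drive forward."""
    return -right_pulses, -left_pulses

def wheel_to_body_velocity(left_pulses, right_pulses):
    """Convert per-period encoder pulses to body velocities (v, w)."""
    circumference = 2.0 * math.pi * WHEEL_RADIUS
    v_l = left_pulses / float(PULSES_PER_REV) * circumference / DT
    v_r = right_pulses / float(PULSES_PER_REV) * circumference / DT
    v = (v_r + v_l) / 2.0          # linear velocity
    w = (v_r - v_l) / AXLE_TRACK   # angular velocity
    return v, w

def integrate_rk4(x, y, theta, v, w, dt):
    """One fourth-order Runge-Kutta step of the unicycle model
    x' = v cos(theta), y' = v sin(theta), theta' = w."""
    def f(th):
        return v * math.cos(th), v * math.sin(th)
    k1x, k1y = f(theta)
    k2x, k2y = f(theta + 0.5 * dt * w)
    k3x, k3y = f(theta + 0.5 * dt * w)
    k4x, k4y = f(theta + dt * w)
    x += dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    y += dt / 6.0 * (k1y + 2 * k2y + 2 * k3y + k4y)
    theta += dt * w
    return x, y, theta
```

Because heading changes at a constant rate within a sampling period, the two middle Runge-Kutta slopes coincide; the fourth-order scheme still reduces the arc-approximation error of plain Euler integration over curved segments.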
The intensity filter is used to reduce effects due to multiple reflections coming from highly reflective obstacles in close proximity. The final output from the Laserscan Filter package is a filtered scan topic.

RGBD Sensor

The RGBD sensor is mounted at approximately normal human height (about 2 meters) on a PTU attached behind the seat of the PWC (Figure 4.1). Having the RGBD sensor situated at that elevation enables it to scan around the robot's footprint, which would otherwise be impossible due to its large minimum range of 0.8 m. It is to be noted that the RGBD sensor can easily saturate the USB bus and thus requires a dedicated USB bus for real-time operation. We exploit the second USB controller available on our computer for this purpose. As we are primarily interested in navigating in a two-dimensional (2D) planar indoor environment, we only use projections of the point clouds from the RGBD sensor onto the ground plane.

The Openni node is a standard ROS node used to interface with an RGBD sensor. Point cloud data from the sensor are filtered in the PointCloud_to_Laserscan node and published to the rgbdscan topic. To perform the filtering, PointCloud_to_Laserscan first transforms points from the sensor frame to the robot's footprint frame. These frames are defined in the Robot_Transform node, which is described in the following section. Then, points that lie within the robot's footprint are removed. To map this filtered point cloud to a laserscan, each point in the cloud is indexed based on its angle in polar coordinates. For each discrete angle, the closest point distance is recorded. The resulting array is published as the rgbdscan topic.

Pan-Tilt Unit

PTU Controller is an alias for the ROS package dp_ptu47_pan_tilt_stage, which provides a node that controls the actual PTU-D47 hardware. The desired pan and tilt angles can be passed through ROS service calls.
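The point-cloud-to-laserscan reduction described above can be sketched as follows. The angular resolution shown is an illustrative value rather than the node's actual parameter, and the points are assumed to be already projected onto the ground plane and expressed in the footprint frame.

```python
import math

def cloud_to_scan(points, angle_min=-math.pi, angle_max=math.pi, num_bins=360):
    """Reduce 2D-projected points (x, y) to a laserscan-like array:
    for each discrete bearing bin, keep only the closest range.

    Bins with no return are left at infinity, mirroring an out-of-range
    laser reading.
    """
    increment = (angle_max - angle_min) / num_bins
    ranges = [float('inf')] * num_bins
    for x, y in points:
        angle = math.atan2(y, x)
        r = math.hypot(x, y)
        index = int((angle - angle_min) / increment)
        index = min(index, num_bins - 1)  # guard the angle_max edge case
        if r < ranges[index]:
            ranges[index] = r
    return ranges
```

Keeping only the nearest return per bearing is conservative for obstacle avoidance: a closer obstacle always masks a farther one along the same ray, just as it would for a real laser scanner.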
The current pan and tilt angle values from the PTU are received through the ROS topic pan_tilt_status_stamped. The Robot_Transform node subscribes to this topic to dynamically reconfigure the robot's transform tree at run-time. We explain this transform tree next.

Robot Transform

ROS packages usually require some description of the coordinate frames of sensor data and robot linkages. A transform tree defines the offsets in translation and rotation between coordinate frames. The Robot_Transform node provides a robot transform tree to establish a coordinate frame relationship between the laser scanner, RGBD sensor, robot base and global coordinate frames. The point on the base of the wheels at the center of the axle track between the wheels of the PWC is defined as the base_footprint frame. The robot's odometry values provide the translation and rotation between the base_footprint frame and the global frame (dubbed odom). The initial translations and rotations of the laser scanner, PTU and RGBD sensor are empirically obtained (using a measuring tape and a protractor) with reference to the base_footprint frame. Since the RGBD sensor is mounted on the PTU, its coordinate frame changes as the PTU rotates. The current coordinate frame of the PTU is updated with the pan_tilt_status_stamped topic from the PTU Controller.

Separate transform trees must be used for forward and reverse motion of the robot. These two sets of transforms represent coordinate frames that treat the front of the PWC as the robot's face (in the case of forward motion) or the back of the PWC as the robot's face (in the case of reverse motion). Only one of the two transform trees is in use at any one time. For the experiments conducted in this study, we only need the robot transform for reverse motion; forward motion of the PWC is always under the user's control and need not be automated.

4.4.2 Back-in Parking System

In this section, we describe the technical components involved in realizing a back-in parking maneuver (Figure 3.5).
Based on the design guidelines from the focused interview sessions, the primary function of the robot is to drive the PWC backward autonomously when the user triggers the autonomous driving behavior. The autonomous driving behavior can be engaged between the vantage point and the entry point shown in Figure 3.5.

The back-in parking system involves mapping the environment and navigating to a given goal location. Based on the user's desired level of autonomy, a suitable shared control policy is chosen to achieve the back-in parking task. Components of the system related to identifying and locating a suitable parking space are not fully implemented in this prototype; they are mimicked through the WoO control interface on the PS3 joystick.

Mapping

We use a map-based approach to autonomous navigation. In this approach, the robot must construct a map of its environment and be able to localize itself within the map. Simultaneous Localization and Mapping (SLAM) [39] techniques have been well studied and applied for these applications. Open source implementations of several SLAM techniques are available at www.openslam.org.

Gmapping [40] is a highly efficient Rao-Blackwellized particle filter designed to learn grid maps from laser range data. Gmapping is a well-tested ROS package that can create a 2D occupancy grid map resembling a building floorplan. The package listens to laserscan and pose data from the robot on the scan and odom topics and publishes the occupancy map as the map topic. The Gmapping node builds the map of the surroundings as a background task while the user navigates from the vantage point to the autonomous driving mode beginning point.
At the autonomous driving mode beginning point, the parking space is required to be within the map.

Figure 4.3: ROS Navigation Stack Setup [3]

Navigation

The ROS 2D Navigation (move_base) [3] stack uses information from odometry, sensor streams, and a goal pose to produce safe velocity commands that the robot can execute to move its mobile base. Our system uses the navigation stack setup shown in Figure 4.3. In Figure 4.3, the white components are required components that are already implemented, the gray components are optional components that are already implemented, and the blue components must be created for each robot platform. Instead of the gray components map_server and amcl, which provide the map and robot localization, our system receives this information from the Gmapping package. Laserscan data on the scan topic from the laser scanner and the rgbdscan topic processed from the RGBD sensor provide the required sensor sources. Robot Transform publishes the required sensor transforms. ProcOdom publishes the required odom topic from the odometry source. The command velocity for the system is published as the cmd_vel topic, which comprises linear and angular velocity (v, w). For normal PWCs these velocities (v, w) are roughly proportional to the deviation (x, y) of the joystick from its neutral position [41].

The move_base package comprises three components: costmaps, planners and recovery behaviors. Details of each of these components are described in [3]; here, we briefly summarize them in the context of our system.

As 2D navigation assumes a robot navigating on a planar surface, navigation "costmaps" are generated by projecting obstacle data onto the planar surface on which the robot is navigating. In the costmaps, occupied cells are assigned a lethal cost such that no part of the robot's footprint is allowed to be inside the corresponding cell. Cells within a user-specified inflation radius around the occupied cells are assigned a uniformly high cost.
The cost decays exponentially from the inflation radius towards free space. The costmaps for the global planner and the local planner can be configured separately as required.

The global planner creates a high-level plan towards a goal location using an A* search over the cells of the costmap. The global planner assumes a circular robot and does not take into account the dynamics and kinematics of the robot. These assumptions ensure that the global planner returns quickly, even though the path computed may not be optimal or feasible. Due to these limitations, the global planner is only used as high-level guidance for navigation in an environment.

The local planner generates velocity commands (as the ROS topic cmd_vel) proportional to the joystick commands of the PWC while attempting to follow the plan produced by the global planner. The local planner uses the Dynamic Window Approach (DWA) [42] to forward-simulate and select among potential commands using a cost function that combines distance to obstacles, distance to the path generated by the global planner, and the speed of the robot. The weight of each component of the cost function plays a significant role in determining the behavior of the robot. For example, robots designed for navigating mostly in obstacle-free space might have a high weighting factor for distance to obstacles. In [3], as the robot was required to pass through narrow spaces such as doorways, the authors were able to set the weighting factors in a way that guaranteed only 3 cm of clearance between the base of the robot and the obstacles.

When the robot gets stuck, surrounded by obstacles and unable to find a valid plan to its goal, it executes recovery behaviors: a number of increasingly aggressive behaviors performed in an attempt to clear space around the robot.
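The inflation costing described at the start of this section (lethal at the obstacle, uniformly high within the inscribed radius, then exponential decay out to the inflation radius) can be sketched as follows, in the spirit of the ROS costmap_2d inflation layer. The radii, decay rate and cost constants here are illustrative assumptions, not our configuration.

```python
import math

LETHAL = 254       # occupied cell: footprint may not overlap it
INSCRIBED = 253    # cell closer to an obstacle than the inscribed radius
FREE = 0

def inflation_cost(dist, inscribed_radius=0.35, inflation_radius=0.8,
                   decay=10.0):
    """Cost of a cell at distance `dist` (m) from the nearest obstacle.
    A sketch of exponential inflation decay; all parameters illustrative."""
    if dist <= 0.0:
        return LETHAL
    if dist <= inscribed_radius:
        return INSCRIBED                  # uniformly high near the obstacle
    if dist > inflation_radius:
        return FREE                       # beyond the inflation radius
    # Exponential decay from just under INSCRIBED down towards free space.
    return int(round((INSCRIBED - 1) *
                     math.exp(-decay * (dist - inscribed_radius))))
```

With these parameters, cost falls off smoothly between 0.35 m and 0.8 m, so the planner prefers paths that keep the wheelchair away from obstacles without forbidding narrow passages outright.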
In [3], the authors describe increasingly aggressive clearing of the local costmap and rotating in place. The clearing approach first sets the costmap outside of a pre-specified region to zero. Next, the robot performs an in-place rotation to clear out space. If this fails to clear out the space, the robot more aggressively sets the costmap to zero, removing all obstacles outside of the rectangular region in which it can rotate in place. In our application, we do not perform in-place rotation, as it might be uncomfortable for our user operating a PWC. Our recovery behavior clears the costmap around the PWC when the robot finds itself stuck. If clearing the costmap still does not yield a valid plan, the goal is aborted and the PWC stops. The recovery behavior was never required during the experiments.

Shared Control

The Shared Control node listens to the PWC joystick and PS3 joystick signals from the ProcJoy and Ps3Joy nodes respectively. This node also listens to the cmd_vel ROS topic from the move_base package. The Shared Control node publishes the angular joystick positions from these three command signals to the Arduino controller through the rosserial interface. The Arduino controller uses these angular components to produce a suitable user interface signal. Details of the user interface are described in Section 4.4.3.

A joystick control signal is specified in polar coordinates (ρ, θ), where ρ ∈ [0, 1] is the (normalized) magnitude of the joystick deflection and θ ∈ [−π, +π] is the angle (with θ = 0 corresponding to a forward deflection and θ increasing counter-clockwise). Vector u is the PWC joystick shaft position (ρ_u, θ_u), and vector d is the desired joystick shaft position (ρ_d, θ_d), derived from the cmd_vel ROS topic (ρ_a, θ_a) from the move_base package during the experiment and from the wizard's PS3 joystick signal (ρ_w, θ_w) during the training session.
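For illustration, converting a raw joystick deflection into this polar form might look as follows. The axis convention (x rightward, y forward, both normalized to [−1, 1]) is an assumption for the sketch, not taken from the thesis.

```python
import math

def joystick_to_polar(x, y):
    """Convert a joystick deflection to the polar form (rho, theta) used in
    Section 4.4.2: rho in [0, 1], theta = 0 for a forward push, increasing
    counter-clockwise.  Axis convention is assumed: x right, y forward."""
    rho = min(1.0, math.hypot(x, y))   # clamp magnitude to the unit disc
    theta = math.atan2(-x, y)          # forward -> 0, left -> +pi/2, back -> pi
    return rho, theta
```

A full forward push maps to (1, 0), a full left push to (1, +π/2), and any backward push to an angle with |θ| > π/2, which is the condition Mode 3 uses to trigger autonomous driving.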
As a safety feature, the PS3 joystick can be used to override both the user's and the move_base signals during the experiment, although this emergency intervention was never actually needed.

    d(ρ_d, θ_d) = (ρ_w, θ_w) in the training session,
                  (ρ_a, θ_a) in the experiment session.    (4.1)

The Shared Control node uses these three command signals to publish the final output vector s, the shared joystick shaft position (ρ_s, θ_s), to the motor controllers based on the mode of operation of the PWC. This output vector is published as the joy_output ROS topic. The modes of operation of the PWC are:

1. Mode 1, Zero Guidance: Given that the PWC user thinks s/he is capable of navigating into the parking spot on his/her own, we allow the user to navigate into a back-in parking spot without any system intervention. Minor contacts are allowed on the sides, but frontal contacts are unacceptable as they might expose the participant's feet to risk of impact. The impact of contact is minimized by conducting the session in a very controlled environment with padded obstacles or obstacles that would move upon contact. In this mode of operation, joy_output (s) is the same as the PWC's joystick signal (u).

    s = u    (4.2)

2. Mode 2, Autonomous Assisted: For novice users who struggle to navigate backward into a parking spot and might benefit from some training or assistance in the task, this autonomous assisted mode could be useful. In this mode, the user joystick is customized to produce both haptic feedback and a visual LED display to indicate the driving direction suggested by the desired joystick shaft position d. The wheelchair only operates when the user's angular command (θ_u) and the desired angular command (θ_d) agree within a small threshold (α_thresh). The required level of agreement could vary based on the user's cognitive level as well as the stage of their PWC training. We empirically set α_thresh = π/12.

    s = d  if |θ_d − θ_u| ≤ α_thresh,
        0  otherwise.    (4.3)

3.
Mode 3, Autonomous Trigger: For users who would prefer to let the autonomous system take care of the back-in parking task, this autonomous trigger mode could be useful. The user can drive forward by him/herself, but the system triggers the back-in parking behavior when the user pushes the joystick backward. The PWC drives along the path suggested by the desired joystick shaft position d. The PWC stops as soon as the user arrives at the parking destination, or as soon as the user releases the joystick or pushes it in the forward direction.

    s = d  if |θ_u| > π/2,
        u  otherwise.    (4.4)

Experimenting with these three modes of operation could be useful for understanding user preference in level of autonomy as well as the user's ability to complete the back-in parking task with different levels of assistance. Mode 1 provides a baseline of the user's ability to back-in park on their own. Mode 2 informs on the user's ability to follow steering guidance to navigate backwards. Mode 3 informs on the user's level of comfort with a self-driving system.

WoO Control

Components of the back-in parking system that are not within the scope of this thesis involve identifying suitable parking spaces and assessing a feasible parking space. We assume a predefined parking spot location and use WoO Control to publish a navigation goal topic for the move_base package. WoO Control also provides an interface for a trainer or teleoperator to switch between modes during training or experiments using the PS3 joystick.

4.4.3 User Interface

Figure 4.4: PWC user interface with LED ring around a rotating knob mounted on the joystick shaft.

Figure 4.4 shows the components of the user interface. The user interface consists of a NeoPixel LED ring display and a rotating knob mounted on the PWC joystick shaft. The rotating knob is powered by a free-running servo motor. In Figure 4.4, vector u is the direction pointed to by the PWC joystick shaft and vector d is the desired direction published by move_base during the experiment.
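The arbitration between these two vectors under the three modes of Section 4.4.2 (Eqs. 4.2–4.4) can be sketched as follows. This is an illustration of the published policy, not the actual Shared Control node, and for brevity it ignores angle wrap-around at ±π when comparing headings.

```python
import math

ALPHA_THRESH = math.pi / 12   # empirical agreement threshold (Section 4.4.2)

def shared_control(mode, user, desired):
    """Select the output joystick vector (rho, theta) from the user's vector
    `user` and the desired vector `desired`, per Eqs. 4.2-4.4.  A sketch of
    the described policy; wrap-around of theta at +/-pi is not handled."""
    rho_u, theta_u = user
    rho_d, theta_d = desired
    if mode == 1:                                  # zero guidance (Eq. 4.2)
        return user
    if mode == 2:                                  # autonomous assisted (Eq. 4.3)
        if abs(theta_d - theta_u) <= ALPHA_THRESH:
            return desired
        return (0.0, 0.0)                          # stop unless headings agree
    if mode == 3:                                  # autonomous trigger (Eq. 4.4)
        if abs(theta_u) > math.pi / 2:             # joystick pushed backward
            return desired
        return user
    raise ValueError("unknown mode")
```

For example, in Mode 2 a user heading within π/12 of the desired heading passes the desired vector through, while a larger disagreement stops the chair; in Mode 3 any backward push hands control to the desired vector.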
During the training session, d is the desired direction derived from the trainer's PS3 joystick signal. The interface compares the user's current heading u with the desired heading d to produce the visual and haptic displays that render guidance during navigation.

Visual Display

The visual display shows d and u as blue and red lights respectively (Figure 4.4). Whenever the PWC is operated by the PS3 joystick, this additional signal is shown as green lights.

Haptic Display

Figure 4.5: Joe holding the joystick handle.

Pacinian corpuscles (Figure 4.6) are mechanoreceptors sensitive to transient vibration and pressure applied to the skin. These corpuscles, due to their relatively large receptive fields and fast adaptation rate, play a significant role in haptic perception [44, 45]. The finger webbings contain 44 to 60% of the total Pacinian corpuscles (Figure 4.6). The haptic display renders the angular difference between d and u (α_ud) in the form of a proportional shear force on a rotating knob. The rotating knob contains bumps which exert shear force on the webbing between the thumb and index finger (Figures 4.5 and 4.6). The direction of rotation is designed to mimic the steering wheel behavior of vehicles. We believe that this mimicry is easier for older adults to associate with their past driving experience, and is thus a suitable, intuitive transition to using a joystick interface.

Figure 4.6: A rotating knob and Pacinian corpuscles in the right hand of an 89-year-old female cadaver specimen [43]. The surface around the knob shows the location where shear force is exerted.

Figure 4.7 shows the operation of the user interface. In Mode 1, the visual display only shows red lights corresponding to the direction of u. The haptic display is not functional in this mode. In Mode 2, the visual display flashes blue lights corresponding to the direction of d.
As shown in Figure 4.7a, the haptic display is functional only when the angle between u and d (α_ud) is less than α_max (empirically set to π/3). As shown in Figures 4.7b and 4.7c, the PWC drives in the direction of d only when |α_ud| is less than α_thresh (empirically set to π/12). The haptic display stops only when |α_ud| is less than α_db, which corresponds to the 0.052-radian deadband of the servo motor (Figure 4.7d). In Mode 3, the visual and haptic displays render similarly to Mode 2. However, the driving behavior is slightly different: in Mode 3 the PWC drives following d whenever the user's joystick points in any backward direction.

Figure 4.7: Operation of the user interface. a) |α_ud| < α_max (haptic on, no PWC motion); b) and c) |α_ud| < α_thresh (haptic on, PWC in motion); d) |α_ud| < α_db (haptic off, PWC in motion)

4.5 System Test and Discussion

The system was tested in a laboratory setting with requirements as specified in the PIDA [37]. The PIDA describes the back-in parking task as the ability to back in and park a PWC between two chairs spaced 1 m (∼3 ft) apart and placed against a wall. In the example scenario set up in the lab, we used a group of chairs with small, thin legs to simulate a challenging scenario for 2D navigation. Our pilot study was conducted in an LTC scenario with wheelchairs spaced with a 10% increase in the size of the parking space for safety purposes.

Chapter 5

Pilot Study

In this chapter, we describe the experimental setup of a pilot study conducted with Joe at his LTC facility. A back-in parking scenario was created using spare wheelchairs available at his LTC facility. The location for the experiment was an activity room where sing-along events are organized. Three specific starting positions were selected (Figure 5.1).
System tests and usability tests were conducted in this scenario.

5.1 Experiment Scenario

The experiment scenario comprises a parking spot, P, which is surrounded by manual wheelchairs (typical of the type used by residents of the LTC facility) on three sides. Three specific initial orientations (O1-O3) are marked on the floor (Figure 5.2a). The task is to back-in park the PWC into the parking spot P, starting from the three different initial orientations, under the three different levels of autonomy described in Section 4.4.2. Task completion time, route length and number of minor contacts are recorded. Usability scores are recorded in terms of ease of use, effectiveness and safety.

Figure 5.1: Experimental scenario (overhead view).

5.2 Test Hypotheses

The following hypotheses were tested in the usability experiment:

1. H1: Mode 2 and Mode 3 will reduce the number of minor contacts, completion time and route length as compared to Mode 1.

2. H2: Mode 2 and Mode 3 will increase the user's perception of ease of use, effectiveness and safety as compared to Mode 1.

5.3 System Test

The system test evaluates average completion time, number of minor contacts and route lengths for Mode 3. A total of 126 back-in parking tasks were completed, with at least 40 trials from each initial orientation taken in random order. The system test was performed at the LTC facility without any participant sitting on the PWC.

(a) Initial Position O1 (O3 is similar to O1 but on the opposite side.)
(b) Initial Position O2

Figure 5.2: Joe on the PWC at initial positions.

5.4 Usability Test

The usability test compares the effects of treatments (Mode 2 and Mode 3) with baseline observations (Mode 1). The effects are measured in terms of number of minor contacts, completion time and route length.
The usability test was conducted in 90-minute sessions, each between 10:00 am and 12:00 noon, on three separate days. In a 30-minute training session conducted a day prior to the experiment, the user was trained to operate the PWC in the different modes. On each of the first two days, we performed 10 trials in Mode 1 as baseline observations. The following 30 trials formed the intervention phase, with rapidly alternating treatments of the three different modes. On the first day, we randomly selected O2 as the initial orientation. For the second day, the user chose his preferred initial orientation between O1 and O3. On the third day, we continued the intervention and concluded with the final baseline observations. All told, the user performed 8 pairs of alternating treatments of Mode 1 and Mode 2 from O1 and O2 respectively. The final 8 baseline observations were performed from the user's preferred starting orientation.

Usability scores in terms of ease of use, effectiveness and feeling of safety were recorded at the end of each day's session in a 30-minute interview. The user rated the three modes on a scale from 1 to 10. Finally, a semi-structured interview probed for more information about usability.

Chapter 6

Results, Analysis and Discussion

In this chapter, we describe our analysis of the experimental data from the pilot study conducted with Joe at his LTC facility. For statistical analysis of the effects of treatments in terms of completion time and route length, we use the Kruskal-Wallis test, a non-parametric alternative to analysis of variance (ANOVA) suitable for small samples with a non-normal distribution. The effects of treatments in terms of number of minor contacts, ease of use, effectiveness [46] and safety are presented graphically. The system test results analysis highlights the robustness of the system used for the study.
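Pairwise Kruskal-Wallis comparisons of the kind reported in this chapter can be computed with SciPy. The completion times below are synthetic illustrative values, not the study's measurements.

```python
from scipy.stats import kruskal

# Illustrative completion times (s) for two initial orientations;
# synthetic values chosen only to demonstrate the test, not study data.
o1 = [17, 16, 18, 17, 19, 16, 18]
o2 = [22, 23, 21, 22, 24, 22, 23]

# Pairwise comparison, analogous to the O1-vs-O2 comparison in Section 6.1.
stat, p = kruskal(o1, o2)
print(f"H = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print("completion times differ significantly between the two orientations")
```

The Kruskal-Wallis test ranks all observations jointly and compares mean ranks across groups, so it needs no normality assumption, which is why it suits the small, non-normal samples collected here.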
The usability test results analysis examines the effects of the system on the performance of the user.

6.1 System Test

Figures 6.1 and 6.2 show box plots of completion times and route lengths for 126 repetitions of the system parking by itself at the LTC facility from the different initial orientations. The bottom of each box is the 25th percentile (lower quartile), the top is the 75th percentile (upper quartile), and the line in the middle is the 50th percentile (median). The outer whiskers are 1.5 × IQR and dots are outliers.

The median completion times were 17, 22 and 18 seconds for initial orientations O1, O2 and O3 respectively (Figure 6.1). There was a statistically significant difference between O1 and O2 (p-value = 5.33e-07) as well as between O2 and O3 (p-value = 1.07e-08), but no statistically significant difference between O1 and O3 (p-value = 0.10). The median route lengths were 1.95, 2.24 and 2.10 meters for O1, O2 and O3 respectively. There were statistically significant differences between O1 and O2 (p-value = 7.50e-09), O1 and O3 (p-value = 6.36e-06), and O2 and O3 (p-value = 1.36e-07). No minor contacts were observed during any of the trials in Mode 3. Outliers during the experiment were situations when the completion time fell outside 1.5 × IQR. One possible cause of failure was the local planner in the autonomous navigation system failing to steer towards the goal while navigating around obstacles. 13.5% of points are outliers in the completion time box plot (Figure 6.1); hence, the performance of the system used in the usability study can be approximated as 86.5%.

Figure 6.1: System Test: Completion time. (The bottom of each box is the 25th percentile, the top is the 75th percentile, and the line in the middle is the 50th percentile. The outer whiskers are 1.5 × IQR and dots are outliers.)

Figure 6.2: System Test: Route Length.
(The bottom of each box is the 25th percentile, the top is the 75th percentile, and the line in the middle is the 50th percentile. The outer whiskers are 1.5 × IQR and dots are outliers.)

6.2 Usability Test

Figures 6.3, 6.4 and 6.5 show Mode 1, Mode 2 and Mode 3 in red, orange and blue respectively over the three-day usability study with Joe. In these figures, trials with initial orientation O1 are labeled as triangles, whereas those with initial orientation O2 are labeled as dots. In the following subsections, we compare the three modes in terms of completion time, route length and number of minor contacts. We also compare the average usability of the three modes in terms of ease of use, effectiveness and safety.

6.2.1 Completion Time

For O2 (Figure 6.3), the completion times in Mode 1 and Mode 3 were not significantly different (p-value = 0.05). The completion times in Mode 1 and Mode 2 were not significantly different on day 1 (p-value = 0.13), while Mode 2 was statistically significantly less than Mode 1 on day 3 (p-value = 0.001).

For O1 (Figure 6.3), the completion times in Mode 1 and Mode 2 were not significantly different (p-value = 0.65). The completion times in Mode 1 and Mode 2 were not significantly different on day 1 (p-value = 0.08), while Mode 1 was statistically significantly less than Mode 2 on day 3 (p-value = 0.01).

Figure 6.3: Usability Test: Completion time.
Figure 6.4: Usability Test: Route length.
Figure 6.5: Usability Test: Number of minor contacts.

6.2.2 Route Length

For O2 (Figure 6.4), the route lengths in Mode 1 and Mode 3 were not significantly different (p-value = 0.17). The route lengths in Mode 1 and Mode 2 were not significantly different (p-value = 0.10) on day 1, while Mode 2 was statistically significantly less than Mode 1 (p-value = 0.01) on day 3.

For O1 (Figure 6.4), the route length in Mode 3 was statistically significantly less than Mode 1 (p-value = 0.01).
The route length in Mode 2 was statistically significantly less than Mode 1 (p-value = 0.03 on day 1 and 0.03 on day 3).

6.2.3 Minor Contacts

Figure 6.6: Usability Test: Total number of minor contacts across modes.

A total of 39 minor contacts occurred during Mode 1 operation (Figures 6.5 and 6.6). While the user experienced at least one minor contact in 37.5% of the trials during Mode 1, there were no minor contacts during Mode 2 and Mode 3 operation.

6.2.4 Average Usability

Figure 6.7: Average usability score.

Average usability scores over the three days show user preference in the descending order of Mode 3, Mode 2 and Mode 1 for ease of use, effectiveness and safety during all three days. This preference was also consistent with the order of preference established in the qualitative semi-structured interview, where Joe described his order of preference in a separate question. Joe described Mode 1 as requiring a higher mental load than Mode 2 and Mode 3. In Mode 2, Joe found that there was a shared cognitive load. He mentioned that following the guidance on the joystick was initially mentally challenging; but, after some practice, following the guidance was the only task he had to perform during the back-in parking maneuver, which made him experience a shared cognitive load. At the end of the intervention phase, he reported a sense of anticipation as to where the PWC was going to guide him, indicating some learning trend on his part. This made him feel Mode 2 was useful as a learning interface for the back-in parking task, as it helped him understand joystick movements while still feeling safe and secure with the system being responsible for path planning and obstacle avoidance. In Mode 3, he found the least cognitive load, leading to a better perception of ease of use, effectiveness and safety.
In this mode, he described an increased level of confidence and an ability to focus on overall scenario understanding and vigilance, which made him feel more comfortable with the system.

Chapter 7

Conclusions and Future Work

7.1 Conclusions

In this thesis we discussed an SSPAD method to design and evaluate a back-in parking system with a cognitively impaired older adult living in an LTC facility. We evaluated the usability of standard robotics software along with a simple, non-intrusive joystick interface with the user. We draw the following conclusions, addressing the research questions posed in Section 1.3.

1. How does the proposed back-in parking system affect the performance of the user in terms of completion time, route length and number of minor contacts? While the user experienced at least one minor contact in 37.5% of the trials when driving unaided (Mode 1), the proposed system eliminated all minor contacts. There was no statistically significant difference in completion time or route length between the modes. The lack of significant difference could be due to the short length of the task in terms of time and distance to travel.

2. How does the proposed system affect the user's perception of ease of use, effectiveness and safety? There was an increase in the user's perception of ease of use, effectiveness and feeling of safety using the proposed back-in parking system as compared to driving unaided (Mode 1). It could be argued that the generalizability of these scores is debatable, because the user himself was involved in the design process, indicating a possible bias in favor of the proposed system.

7.2 Future Work

As a pilot study, our objective was to set the foundation for a single subject, user-centric robotics system design targeted at an ageing population. This thesis focuses on the intersection of clinical research and computational robotics.
The usability study could be replicated with a larger population in a longer-term clinical study for further evaluation of the usability of the proposed system. This work could also serve as a benchmark for future algorithms and systems for back-in parking and similar technologies. In the future, we would like to develop this system into an assessment method, the Parking and Driving Assessment (PANDA) system. The PANDA system could be used to evaluate the level of autonomy needed to successfully cater to the mobility needs of users of smart, powered, mobility-based collaborative systems such as intelligent PWCs.

Bibliography

[1] P. Viswanathan, R. Wang, and A. Mihailidis, "Wizard-of-Oz and mixed-methods studies to inform intelligent wheelchair design for older adults with dementia," in Association for the Advancement of Assistive Technology in Europe, 2013.

[2] I. Mitchell, P. Viswanathan, B. Adhikari, E. Rothfels, and A. Mackworth, "Shared control policies for safe wheelchair navigation of elderly adults with cognitive and mobility impairments: Designing a wizard of oz study," in American Control Conference (ACC), June 2014, pp. 4087–4094.

[3] E. Marder-Eppstein, E. Berger, T. Foote, B. Gerkey, and K. Konolige, "The office marathon: Robust navigation in an indoor office environment," in International Conference on Robotics and Automation, 2010.

[4] C.-A. Smarr, T. Mitzner, J. Beer, A. Prakash, T. Chen, C. Kemp, and W. Rogers, "Domestic robots for older adults: Attitudes, preferences, and potential," International Journal of Social Robotics, vol. 6, no. 2, pp. 229–247, 2014. [Online]. Available: http://dx.doi.org/10.1007/s12369-013-0220-0

[5] E. Bourret, L. Bernick, C. Cott, and P. Kontos, "The meaning of mobility for residents and staff in long-term care facilities," Journal of Advanced Nursing, vol. 37, no. 4, pp. 338–345, 2002. [Online].
Available: http://dx.doi.org/10.1046/j.1365-2648.2002.02104.x

[6] L. Fehr, W. E. Langbein, and S. B. Skaar, "Adequacy of power wheelchair control interfaces for persons with severe disabilities: A clinical survey," Journal of Rehabilitation Research and Development, vol. 37, no. 3, pp. 353–360, 2000.

[7] P. Viswanathan, J. L. Bell, R. H. Wang, B. Adhikari, A. K. Mackworth, A. Mihailidis, W. C. Miller, and I. M. Mitchell, "A Wizard-of-Oz Intelligent Wheelchair Study with Cognitively-Impaired Older Adults: Attitudes toward User Control," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) Workshop on Assistive Robotics for Individuals with Disabilities: HRI Issues and Beyond, Sept 14–18, 2014.

[8] M. J. Muller and S. Kuhn, "Participatory design," Communications of the ACM, vol. 36, no. 6, pp. 24–28, 1993.

[9] M. J. Muller, "Participatory design: the third space in HCI," Human-computer interaction: Development process, pp. 165–185, 2003.

[10] K. Moffatt, J. McGrenere, B. Purves, and M. Klawe, "The participatory design of a sound and image enhanced daily planner for people with aphasia," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2004, pp. 407–414.

[11] K. M. Tsui, E. McCann, A. McHugh, M. Medvedev, H. A. Yanco, D. Kontak, and J. L. Drury, "Towards designing telepresence robot navigation for people with disabilities," International Journal of Intelligent Computing and Cybernetics, vol. 7, no. 3, pp. 307–344, 2014.

[12] D. Ding, R. A. Cooper, and J. Pearlman, "Incorporating participatory action design into research and education," in International Conference on Electrical Engineering, 2007.

[13] J. Seale, C. McCreadie, A. Turner-Smith, and A. Tinker, "Older people as partners in assistive technology research: the use of focus groups in the design process," Technology and Disability, vol. 14, no. 1, pp. 21–29, 2002.

[14] M. Story, J. Mueller, and R.
Mace, "The universal design file: Designing for people of all ages and abilities," Design Research and Methods Journal, vol. 1, no. 1, 1998.

[15] R. Cooper, G. G. Grindle, J. J. Vazquez, J. Xu, H. Wang, J. Candiotti, C. Chung, B. Salatin, E. Houston, A. Kelleher, E. Teodorski, and S. Beach, "Personal mobility and manipulation appliance design, development, and initial testing," Proceedings of the IEEE, vol. 100, no. 8, pp. 2505–2511, Aug 2012.

[16] D. Kairy, P. Archambault, P. W. Rushton, E. Pituch, A. El Fathi, C. Torkia, P. Stone, F. Routhier, R. Forget, L. Demers et al., "Users' perspectives of intelligent power wheelchair use for social participation," in Proceedings of the Rehabilitation Engineering and Assistive Technology Society of North America Annual Conference, RESNA, 2013.

[17] H. A. Yanco, "Shared user-computer control of a robotic wheelchair system," Ph.D. dissertation, Massachusetts Institute of Technology, 2000.

[18] R. C. Simpson, "Smart wheelchairs: A literature review," Journal of Rehabilitation Research and Development, vol. 42, no. 4, pp. 423–438, 2005.

[19] P. Viswanathan, "Navigation and obstacle avoidance help (NOAH) for elderly wheelchair users with cognitive impairment in long-term care," Ph.D. dissertation, University of British Columbia, 2012.

[20] E. Prassler, J. Scholz, and P. Fiorini, "A robotics wheelchair for crowded public environment," Robotics & Automation Magazine, IEEE, vol. 8, no. 1, pp. 38–45, 2001.

[21] A. Sutcliffe, J. Pineau, and D. Grollman, "Rephrasing the problem of robotic social navigation," in Workshop on Assistive Robotics for Individuals with Disabilities: HRI Issues and Beyond, 2014.

[22] H. Sermeno-Villalta and J. Spletzer, "Vision-based control of a smart wheelchair for the automated transport and retrieval system (ATRS)," in Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, May 2006, pp. 3423–3428.

[23] Y.
Ren, W. Zou, H. Fan, A. Ye, K. Yuan, and Y. Ma, “A docking controlmethod in narrow space for intelligent wheelchair,” in Mechatronics andAutomation (ICMA), 2012 International Conference on, Aug 2012, pp.1615–1620. → pages 6[24] D. I. Katzourakis, J. C. de Winter, M. Alirezaei, M. Corno, and R. Happee,“Road-departure prevention in an emergency obstacle avoidance situation,”IEEE Trans. Systems, Man, and Cybernetics: Systems, 2013. → pages 6[25] C. Urdiales, J. Peula, M. Fernandez-Carmona, C. Barrue´, E. Pe´rez,I. Sa´nchez-Tato, J. del Toro, F. Galluppi, U. Corte´s, R. Annichiaricco,C. Caltagirone, and F. Sandoval, “A new multi-criteria optimization strategyfor shared control in wheelchair assisted navigation,” Autonomous Robots,vol. 30, no. 2, pp. 179–197, 2011. → pages 759[26] E. Vander Poorten, E. Demeester, E. Reekmans, J. Philips, A. Huntemann,and J. De Schutter, “Powered wheelchair navigation assistance throughkinematically correct environmental haptic feedback,” in Robotics andAutomation (ICRA), IEEE International Conference on, May 2012, pp.3706–3712. → pages 7[27] S. de Groot, J. C. F. de Winter, J. M. Lo´pez Garcı´a, M. Mulder, and P. A.Wieringa, “The effect of concurrent bandwidth feedback on learning thelane-keeping task in a driving simulator.” Human Factors, vol. 53, no. 1, pp.50 – 62, 2011. → pages 7[28] J. Chun, S. H. Han, G. Park, J. Seo, I. Lee, and S. Choi, “Evaluation ofvibrotactile feedback for forward collision warning on the steering wheeland seatbelt,” Int. Journal of Industrial Ergonomics, vol. 42, no. 5, pp. 443 –448, 2012. → pages 7[29] P. Viswanathan, J. J. Little, A. K. Mackworth, and A. Mihailidis,“Navigation and obstacle avoidance help (NOAH) for older adults withcognitive impairment: a pilot study,” in Proc. Int. ACM SIGACCESS Conf.on Computers and Accessibility (ASSETS), 2011, pp. 43–50. → pages 7[30] R. H. Wang, A. Mihailidis, T. Dutta, and G. R. 
Fernie, “Usability testing ofmultimodal feedback interface and simulated collision-avoidance powerwheelchair for long-term-care home residents with cognitive impairments.”Journal of Rehabilitation Research & Development, vol. 48, no. 7, 2011. →pages 7[31] Q. Zeng, C. L. Teo, B. Rebsamen, and E. Burdet, “A collaborativewheelchair system,” IEEE Trans. Neural Systems and RehabilitationEngineering, vol. 16, no. 2, pp. 161–170, 2008. → pages 7[32] Q. Zeng, E. Burdet, and C. L. Teo, “Evaluation of a collaborative wheelchairsystem in cerebral palsy and traumatic brain injury users,”Neurorehabilitation and Neural Repair, vol. 23, no. 5, pp. 494–504, 2009.→ pages 7[33] Q. Li, W. Chen, and J. Wang, “Dynamic shared control forhuman-wheelchair cooperation,” in ICRA, 2011, pp. 4278–4283. → pages 7[34] T. Carlson and Y. Demiris, “Collaborative control for a robotic wheelchair:evaluation of performance, attention, and workload,” IEEE Trans. Systems,Man, and Cybernetics, Part B: Cybernetics,, vol. 42, no. 3, pp. 876–888,2012. → pages 760[35] J. F. Kelley, “An empirical methodology for writing user-friendly naturallanguage computer applications,” in Proceedings of the SIGCHI conferenceon Human Factors in Computing Systems. ACM, 1983, pp. 193–196. →pages 8[36] L. D. Riek, “Wizard of oz studies in HRI: a systematic review and newreporting guidelines,” Journal of Human-Robot Interaction, vol. 1, no. 1,2012. → pages 8[37] D. Dawson, R. Chan, and E. Kaiserman, “Development of thepower-mobility indoor driving assessment for residents of long-term carefacilities: A preliminary report,” Canadian Journal of OccupationalTherapy, vol. 61, no. 5, pp. 269–276, 1994. → pages 10, 40[38] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler,and A. Y. Ng, “ROS: an open-source robot operating system,” in ICRAWorkshop on Open Source Software, 2009. → pages 26[39] S. Thrun, W. Burgard, and D. 
Appendix A

Semi-structured Interview Questionnaire

A.1 Background Interview

Thank you for participating in this sub-study. This sub-study will focus on the development of a prototype of an intelligent wheelchair capable of semi-autonomously identifying and navigating into a back-in parking space.
Your contribution will be in the conception, design, development, and evaluation of a suitable user interface for you to interact with the wheelchair to perform the back-in parking task. Before we begin with the design of our new user interface, I would like to know your opinion of your current experience with your mobility device.

• How do you get around (name of the LTC facility)?

• How easy or difficult is it to get around (name of the LTC facility) using your manual wheelchair/power wheelchair/walker?
Probe: What specifically makes it easier or harder to get around?
Probe: Other environments, if applicable.

• What concerns do you have with your ability to use your mobility device (manual wheelchair/power wheelchair/walker)?
Probe: Currently?
Probe: In the future (related to cognitive decline)?

• How well informed do you feel about your environment while using your mobility device (manual wheelchair/power wheelchair/walker)? Do you think you have a good sense of what is around you and your mobility device so that you can safely move around?
Probe: Within the facility?
Probe: Outside of the facility?

• What challenges do you face while driving backward?

• What challenges do you face while back-in parking?

• What do you like about your current mobility device? Explain to me: at what times do you find your mobility device useful to you?

• What kind of problems do you have using your mobility device (manual wheelchair/power wheelchair/walker/cane)? (if they use a device)
Probe: What kind of help do you get with moving around on your mobility device?
Probe: What do you think about the user interface in (your interaction with) your mobility device?

• Have you ever had an accident with your powered wheelchair? What happened?
(if a powered wheelchair user)

• If you could wave a magic wand and create a wheelchair that would help you move around with minimum physical and mental effort and maximum awareness of your surroundings, what would it look like and how would it work?

A.2 Early Prototyping Interview

Now that we've talked about your mobility experiences and some of your ideas, I'd like to show some short videos that demonstrate what a new "intelligent" powered wheelchair that is being developed is able to do. After that, I'll ask you some more questions in order to get your feedback on what you saw.

Please keep in mind that the wheelchair you will be seeing is a prototype and not a final product. That is why we really appreciate being able to hear your suggestions at this point in time.

For each video (total: up to 3 videos showing up to 3 different modes in which the wheelchair would identify and back-in park into a parking space):

• What do you think of that?
Probe: What did you think of [specific feature of driving mode, e.g., how the wheelchair looked around to find a parking space, how the wheelchair confirmed a suitable parking space with the user, how the chair drove to the desired parking space, how the chair negotiated with the user input while driving in a confined space, etc.]?
Probe: For you?
Probe: For others?

For the sub-study shown in the video, the researchers are developing a suitable user interface to guide the wheelchair in parking space identification and confirmation, and in navigation negotiation during parking behaviour in a confined space.

• What do you think about using the joystick to drive the wheelchair?
Probe: What do you think about the position and size of the joystick?
Probe: What do you think about the haptic (vibration) feedback?

• Do you think you could use the joystick to communicate with the wheelchair other than for driving?

• In what other ways would you find it intuitive to command the wheelchair?
Probe: What do you think about using voice commands, hand/head gestures, etc.?

• Would you like the wheelchair to give you visual cues, audio cues, or both when it's interacting with you?
Probe: What information would you like the wheelchair to give you as visual cues?
Probe: What information would you like the wheelchair to give you as audio cues?

• What would you like the visual interface on the wheelchair to be like?
Probe: What information would you like to see in the visual display?
Probe: When would you like to see what? Before, during, and after identifying a parking spot? Before, during, and after the parking task is executed?
Probe: Where, and how big, would you like the visual interface to be?
Probe: How big would you like the text/icons in the visual display to be?
Probe: Would you like a touch screen interface in the visual display?

• How would you like to tell the wheelchair that you want it to help you start looking for a parking space?

• How would you like it if you could touch the joystick or some part of the wheelchair so as to communicate with it?

• How would you like to tell the wheelchair which direction to search for a parking space?

• How would you like the wheelchair to inform you about the parking space it detected?
Probe: Would you like it to flash the identified parking space on the screen or some lights?

• How would you like to confirm to the wheelchair that the parking space it identified is acceptable to you?
Probe: How would you like to tell the wheelchair to find another parking space if you disagree with the one it's recommending to you?

Now that we have talked about your opinion on how you would like to interact with the wheelchair, the researchers will rapidly prototype the user interface based on your design recommendations. We will iterate the design over another series of interviews.

A.3 In-Chair Session

Now a working prototype of the back-in parking system is ready. We would like you to experience the working system. Please keep in mind that the wheelchair you will be seeing is a prototype and not a final product.
That is why we really appreciate being able to hear your suggestions at this point in time.

Immediately after each in-chair session:

• How easy was it for you to use the wheelchair to back-in park?
Probe: What made it easy/difficult?

• How effective was the wheelchair in helping you back-in park?

• How safe did you feel while using this wheelchair in this task?

• How did the wheelchair impact your ability (positively and/or negatively) to avoid obstacles?

• How did the wheelchair impact your ability (positively and/or negatively) to complete the back-in parking task?

• Would you use this wheelchair feature that is designed to help you with [main feature of driving mode] while you're doing the back-in parking? Why/why not?

• If you could wave a magic wand and change this wheelchair feature, how would you change it to help you with the back-in parking?

A.4 Post In-Chair Session Interview

We would highly appreciate your suggestions for future iterations of this system.

• What do you think about the interface in this prototype?
Probe: How do you like the size of the LED light display contents?
Probe: How do you like the amount and organization of information on the screen? (too much detail, too little detail, etc.)
Probe: How do you like the steering guidance rendering on the joystick?

• Which features did you like the most? Which features did you like the least? Why?

• What do you think about the overall driving behaviour of the wheelchair?

• Where would you like to go/what would you like to do using this device?

• Where do you think you would find this autonomous back-in parking feature useful?

• Would you like to try it out in an additional in-chair session within your facility?

• Would you feel confident being seen using this wheelchair?
Probe: Some people care about how they look when using a wheelchair in public. How would you feel about that with this wheelchair?

• What suggestions do you have to improve this interface in the next iteration?

• What do you like the most/least about the wheelchair?

• If you could wave a magic wand and create the perfect powered wheelchair, what would it look like and how would it work?

Thank you for your feedback; the researchers will take your feedback into account when designing a newer version.

