Designing Technology For and With Special Populations: An Exploration of Participatory Design with People with Aphasia

by

Karyn Moffatt

B.A.Sc., University of British Columbia, 2001

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in THE FACULTY OF GRADUATE STUDIES (Department of Computer Science)

We accept this thesis as conforming to the required standard

The University of British Columbia
April 2004

© Karyn Moffatt, 2004


Library Authorization

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Name of Author: Karyn Moffatt
Degree: M.Sc., Department of Computer Science
The University of British Columbia, Vancouver, BC, Canada
Year: 2004


Abstract

Computer technology has become ubiquitous in today's society and many daily activities depend on the ability to use and interact with computer systems. Most computer technology, however, is currently designed for the "average" user, and thus ignores substantial segments of the population, excluding them from many common activities. The goal of our research is to address, in part, the problem of designing inclusive technology, focusing on the design of technology for users with aphasia. Aphasia is a cognitive disorder that impairs language abilities, including some or all of speaking, listening, reading, and writing. It results from damage to the brain and most commonly occurs after a stroke, brain tumor, or head trauma.

From interviews with aphasic individuals, their caregivers, and speech-language pathologists, several needs were identified that could be met with new application software. Among those needs was a daily planner application that would allow aphasic users to independently manage their appointments using a Personal Digital Assistant (PDA). This research was conducted in two phases: (1) a participatory design phase in which ESI Planner (the Enhanced with Sound and Images Planner) was iteratively developed with input from aphasic participants, and (2) an evaluation phase where a lab study was performed to assess the effectiveness of the resulting tri-modal design, which incorporates triplets of images, sound, and text to represent appointment data. This methodology was used to achieve both usable and adoptable technology. An additional goal in performing this research was to identify where traditional user-centered design methodology and experimental evaluation are inadequate for our target population. Several guidelines have emerged from our work, which are likely to be relevant to others engaging in research with special populations.


Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
1.1 Research Motivation
1.2 Research Objectives and Overview
1.3 A Note On Participant Disclosure
2 Related Work
2.1 Participatory Design
2.2 Technology and the User with Aphasia
2.2.1 Commercially Available AAC Devices
2.2.2 Beyond Iconic Word Dictionaries
2.2.3 Technology and the User with Developmental Disorders
2.2.4 Technology and the Older User
2.2.5 Summary
2.3 Evaluating Assistive Technology

3 Phase One: Participatory Design of ESI Planner
3.1 Participants
3.2 Methodology
3.2.1 Brainstorming
3.2.2 Low-Fidelity Paper Prototyping
3.2.3 Medium-Fidelity Software Prototyping
3.2.4 High-Fidelity Software Prototyping
3.2.5 Implementation

4 Phase Two: Experimental Evaluation of ESI Planner
4.1 Two Planner Conditions
4.2 Participants
4.3 Methodology
4.4 Dependent Measures
4.5 Individual Differences
4.6 Results
4.6.1 Quantitative Results
4.6.2 Qualitative Results
4.6.3 Implications for the Design of ESI Planner
4.6.4 Ongoing Work

5 Implications
5.1 Guidelines for working with Special Populations
5.2 Guidelines for Accessible Handheld Technology
5.2.1 Accessibility issues with the tap interaction
5.2.2 Accessibility issues with the physical form factor

6 Conclusions and Future Work
6.1 Satisfaction of Thesis Goals
6.1.1 Identification of Specific Needs
6.1.2 An Application to Support Daily Living Activities
6.1.3 Methodological Adaptations
6.2 Future Work

Bibliography
Appendix A Contributions and Credits
Appendix B Triplet Databases
Appendix C Semi-Structured Interview


List of Tables

4.1 Language scores on the Western Aphasia Battery
4.2 Speech and language classifications of participants
4.3 Univariate repeated-measures analysis for task time
4.4 Univariate repeated-measures analysis for tasks correct
4.5 Univariate repeated-measures analysis for tasks complete
4.6 Self-reported planner preferences


List of Figures

1.1 Overview of the design process used in the research
3.1 Timeline for the participatory design phase in months
3.2 Screen-captures of the tri-modal dictionary
3.3 Example paper prototype from low-fidelity prototyping with Anita
3.4 Paper prototypes of the three appointment creation designs
3.5 Initial medium-fidelity prototype of ESI Planner
3.6 Paper prototypes of the three layouts tested for ESI Planner
3.7 Medium-fidelity prototype of the detail-in-context layout
3.8 Screen-captures of the ESI Planner interface
4.1 Screen-captures comparing ESI Planner and NESI Planner
4.2 Screen-capture of the application used to eliminate triplets
4.3 Example of a written task used in the evaluation of ESI Planner
4.4 Splash screen used to hide the planner interface between tasks
4.5 Tasks completed with each interface
4.6 Interaction between tasks correct and interface ordering
4.7 Interaction between tasks correct and interface ordering
4.8 Speech and language classifications and planner preferences
A.1 Timeline for the research crediting collaborators
B.1 Famous People used in the evaluation of ESI Planner
B.2 Famous Places used in the evaluation of ESI Planner


Acknowledgements

First, I would like to thank my supervisors, Dr. Joanna McGrenere and Dr. Maria Klawe, for their support and guidance.
I am particularly grateful to Joanna for agreeing to supervise me in her first few months as a faculty member, for always providing detailed and thorough feedback, and for always being available when I needed guidance. I feel fortunate to have had Maria as an early supervisor and mentor; without her relentless encouragement I would never have pursued this degree. Dr. Giuseppe Carenini deserves a huge thanks for his contributions as my second reader; he provided a number of very helpful comments. I would also like to thank Dr. Brian Fisher for serving on my official thesis committee. I would additionally like to thank the amazing group of people who have acted as a surrogate committee. In addition to Joanna and Maria, Dr. Peter Graf and Barbara Purves were actively involved in directing this research. Peter provided invaluable input into the design of the study, which led to a much better design. Without the support, assistance, and expertise provided by Barbara, this research would not have been possible. I am immensely grateful for the time and energy Barbara generously donated to this work. These individuals have also directly contributed to the research: Rhian Davies, who never failed to be there when I needed an extra hand or brain; Leah Findlater, who continued to provide support and input though her own research went in a different direction; and Shirley Gaw, who helped ensure the participatory design sessions ran smoothly. I am especially grateful to the many participants (and their caregivers) who donated their time and energy to participating in the project, and to the organizers of the BC Aphasia Centre, the Victoria Leap Program, and the Shaughnessy Stroke Club for their assistance in this research. Finally, I would like to thank my friends and family for being especially understanding during the past few months. In particular, I thank my parents for their encouragement and support; they have always believed in me, and that belief has been contagious.

KARYN MOFFATT
The University of British Columbia
April 2004


In memory of Anita Borg, 1949-2003


Chapter 1

Introduction

... I have ideas ... a lot. But, but I get very frustrated ... I mean of course it drives me crazy, because I come in and say, "... I should be somebody who can doith [sic] stuff!" And then I realize, well ... [Anita Borg, 1949-2003, on living with aphasia]

The research presented in this thesis documents initial exploratory work of the Aphasia Project, a multi-disciplinary research project investigating how technology can be designed to support individuals with aphasia in their daily lives. The motivation for this project came from Anita Borg, a computer scientist and aphasic individual. Anita acquired aphasia as a result of brain cancer, and though the debilitating effects of her tumor forced her to leave her professional career, her desire to contribute to society remained. As such, she became interested in using her experience and unique insight to develop technology for people with aphasia. Although Anita knew she most likely would never see the benefits of this work, she was inspired to use her condition to help others. Through Anita's personal friendship with Maria Klawe, one of the researchers on our team, the idea for the Aphasia Project was born. This project began with Anita working with us to identify specific needs of aphasic individuals that could be met with technological innovation.
In this thesis we report on the early work performed with Anita to envision useful technologies, and on the subsequent work in which we built and tested one of the envisioned applications, namely a tri-modal daily planner. In addition to investigating the design of that specific computer application, we also examine the methodology used in its development. One high-level goal in performing this research was to explore the process of effectively designing adoptable technology for people with aphasia. Specifically, we wanted to identify where traditional user-centered design methodology and experimental evaluation are inadequate for our target population. While the HCI community has long recognized that it should play a role in the design, implementation, and evaluation of technology for users with disabilities [7, 15], there has been relatively little work done with disabled users; even less work has been done with users with cognitive disabilities.

1.1 Research Motivation

Aphasia is a cognitive disorder that affects about 100,000 individuals in Canada [2] and 1 million people in the United States [43]. Aphasia is usually acquired as a result of stroke, brain tumor, or other brain injury, and results in an impairment of language, that is, an impairment to the production and/or comprehension of speech and/or written language. Rehabilitation can reduce the level of impairment, but a significant number of individuals are left with a lifelong chronic disability that influences a wide range of activities and prevents full re-engagement in life. There is great variability of language abilities and impairments across individuals with aphasia, resulting both from differences in severity and from differences in relative impairment of language modalities [20]. For example, some aphasic individuals have relatively good auditory and reading comprehension but very limited output in either speech or written language. Others may have fairly fluent speech, albeit with numerous semantic errors, accompanied by relatively poor comprehension of both spoken and written language. In addition, there can be accompanying deficits depending on the site of lesion in the brain, including right visual field deficits and right hemiparesis or hemiplegia, which affect limb function [20]. (Most aphasic individuals have damage to the left side of the brain, which is where the language centers are located; thus, deficits commonly occur on the right side of the body due to the contra-lateral relationship between the brain and the body. Hemiplegia refers to a total paralysis of the arm, leg, and trunk on one side of the body, whereas hemiparesis refers to a weakness of one side of the body.)

By harnessing advances in computer technology and handheld devices, and building on the ever-increasing computer literacy of aphasic individuals, it seems feasible today to create assistive technologies that permit individuals with aphasia to re-engage in life, and to augment their autonomy and quality of life. A wide variety of assistive technologies are already available, mainly to facilitate therapeutic efforts as well as the recovery and maintenance of basic language functions. These include Lingraphica by Lingraphicare [37], Dynamyte by Dynavox [14], Vantage by PRC [49], the Gus Pocket Communicator [22], Enkidu's Impact Series [17] and the Saltillo ChatPC [51]. However, the number of reports of successful applications for people with aphasia remains quite limited [20]. This observation contrasts with the successful harnessing of computer technology in the service of communication for non-aphasic, speech-impaired individuals, such as Stephen Hawking.
One reason for the lack of previous success may be that efforts have tended to focus on individuals with severe or profound aphasia, for whom efforts to develop effective alternative communication strategies, such as gesturing or drawing, have failed [28]. Design efforts have not attempted to leverage the retained communicative abilities possessed by many aphasic individuals. Another reason that may have thwarted previous efforts is that they have focused on technologies that support basic language functions rather than higher-level goals, that is, the practical real-life needs of aphasic individuals that occur after hospital and therapy discharge. In a 1988 survey of individuals with aphasia, 72% of respondents reported that they could not return to work, despite 50% of them having received over a year of speech-language therapy [43]. The long-term goal of the Aphasia Project is to fill this niche by creating and evolving high-level applications that meet the real-life needs of aphasic individuals.

1.2 Research Objectives and Overview

Our three primary goals for this thesis research were (1) to identify specific needs that could be met by new application software, (2) to create software to meet one such need, and (3) to identify where traditional user-centered design methodology and experimental evaluation are inadequate for effectively designing adoptable technology for a user population with communication impairments and a high degree of individual variability.

Through interviews with Anita we identified several possible applications to develop: a daily planner, a recipe book, a word dictionary, a personal history recorder, and a conversation primer. It's interesting to note that the applications identified represented not only functional needs but also pastimes and hobbies. Although each of the proposed applications was interesting, the scope of this work required us to select just one for development. We chose to develop a tri-modal daily planner application for use on a handheld device. Anita considered the daily planner application to be one of the most important, and moreover, it was also important to her husband, who had taken on the responsibility of managing her schedule. The Enhanced with Sound and Images Planner, or ESI Planner, uses triplets of images, sound, and text to redundantly encode appointment data, thus enabling individuals with aphasia to independently manage their schedules. As most people with aphasia have difficulty with reading and writing, we hypothesized that these triplets would make it easier for people with aphasia to comprehend the information presented within a daily planner.
This hypothesis is based on (1) knowledge that people with aphasia generally retain their ability to recognize images [58], and (2) anecdotal evidence from our participants suggesting that reading may be easier when the text is concurrently read aloud to them.

Our research was conducted in two phases: a participatory design phase and an experimental evaluation phase. In Phase One, the participatory design phase (Chapter 3), ESI Planner was iteratively developed with input from aphasic participants. We followed a four-step process consisting of brainstorming, low-fidelity paper prototyping, medium-fidelity software prototyping, and high-fidelity software prototyping. However, this phase did not proceed in a strictly sequential manner, but rather, many iterations were required. Our goal in this phase was to produce an adoptable and usable design, and to gain insight into the process of working with people with aphasia on the design of technology. Our research questions included:

1. What are the specific daily-living needs of people with aphasia that could potentially be met with technology?

2. How does the participatory design process need to be adapted in order to facilitate participation by people with aphasia?

3. How should ESI Planner be designed in order to achieve both an adoptable and usable application?

In Phase Two, the experimental evaluation phase (Chapter 4), a lab-style study was performed to assess the effectiveness of the resulting design of ESI Planner. In this phase, we wanted to determine if the work performed in Phase One did result in a more usable design when evaluated with a group of users who did not have influence on its design. Specifically, we wanted to test our hypothesis that the tri-modal design of ESI Planner supported aphasic individuals in appointment management tasks. The following research questions were relevant to this phase of the work:

1. How does ESI Planner compare to an equivalent text-only planner?
(a) Does it allow users to accomplish tasks more quickly?
(b) Is it easier to use?
(c) Does it require less time to learn?
(d) Is it preferred by some or all aphasic individuals?

2. What modifications need to be made to the evaluation process, in order to accommodate the special needs of aphasic individuals?
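Before turning to an overview of the process, it may help to make the triplet encoding concrete. The thesis describes the encoding at the level of interface design rather than code, so the short Python sketch below is purely illustrative: the class and field names are assumptions, not the actual ESI Planner implementation, which was built for a handheld device.

    # Minimal sketch of the tri-modal "triplet" idea behind ESI Planner.
    # All names here are hypothetical; this is not the thesis's implementation.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Triplet:
        """One concept encoded redundantly in three modalities."""
        text: str        # written label, e.g. "yoga"
        image_path: str  # picture representing the concept
        sound_path: str  # recording of the word spoken aloud

    @dataclass
    class Appointment:
        """A planner entry whose content is carried by triplets, not text alone."""
        start: datetime
        who: Triplet     # person involved
        what: Triplet    # activity
        where: Triplet   # location

    yoga = Triplet("yoga", "images/yoga.png", "sounds/yoga.wav")
    # A user who cannot read the word "yoga" can still recognize the picture,
    # or tap it to hear the word read aloud.

Encoding each field redundantly in this way is what allows a reader to fall back on whichever modality is strongest for them.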
In summary, this thesis is composed of six chapters including this introduction. Chapter 2 reviews literature relevant to the development of technology for people with aphasia and provides background information on participatory design methodology. Chapter 3 describes the participatory design process we used in the design of ESI Planner, and Chapter 4 covers the experimental evaluation used to evaluate the usability of ESI Planner with respect to an equivalent text-only planner 6  interface. Chapter 5 discusses the implications of this research and provide guidelines relevant to others engaging in research with special populations. Finally, Chapter 6 presents the conclusions of the research, and examines directions for future work. The majority of the work presented here was performed by the author; however, it was a multidisciplinary collaborative effort. Throughout this thesis, contributions for which the author was not the lead researcher are noted. In addition, Appendix A provides an overview of the major milestones and indicates the lead contributors for each. Substantial portions of this thesis have already been published in the 2004 proceedings of the SIGCHI conference on Human factors in computing systems [39].  1.3  A Note On Participant Disclosure  Participatory design methodology is unique in that it aims to include participants as equal members of the design team. This blurs the line between researcher/designer and participant/user. In a successful participatory design project, it is hoped that participants will feel ownership over the work accomplished. In this case, participant anonymity may no longer be desirable as it denies participants credit for their work. In our research, Anita Borg and Skip Marcella, two of the four participants in the participatory design phase, expressed a desire to be credited directly. Out of respect for their wishes, this thesis addresses those participants by name; all other participants are referred to only by their initials.  7  Chapter 2  Related W o r k In this chapter we review literature relevant to our research. We begin with an overview of participatory design, relating it to the methodology used in Phase One (Chapter 3) of this work. We then turn our discussion to the development of assistive and accessible technology.  In that section, we review both technology developed  specifically for persons with aphasia, and technology which has been developed for similar populations.  This chapter concludes with a discussion of challenges  to working with special populations, specifically highlighting issues pertaining to evaluating assistive technology.  2.1  Participatory Design  Participatory design is an approach to the design, development, and assessment of technology that places an emphasis on the active involvement of the intended users in the design and decision-making process. Rooted in the Scandinavian workplace democracy movement, participatory design emerged in the late 1970's as an offshoot from action-oriented research, which, at the time, was being conducted with trade unions in Scandinavia to ensure that the introduction of technology did not lead to a deskilling of the workforce [16, 40, 31]. Early projects, including NJMF, DEMOS, and UTOPIA, were aimed at empowering unionized workers by giving them active control over their work environment and the processes by which they accomplished their work [10, 16, 53]. 
Participants in these projects were elected representatives of their union, and were considered equal members of the design team. They participated continuously from the start of a project through to its completion. In contrast, current North American interpretations of participatory design tend to be customer-oriented and productivity motivated, having emerged from corporate rather than political interests [53]. In these practices, the purpose is functional: user involvement is used to produce better products with increased market share [10]. In practices such as Contextual Design [27], designers meet with users in their workplace to gather field data; however, users are not considered part of the design team. Rather, they are seen as a separate entity that can be observed, interviewed, and examined in order to build a picture of the users' existing work practices. That understanding is then used to guide development, thus ensuring a better fit between the new technology and the existing work practices.

In light of the many diverse interpretations of participatory design, there have been attempts to classify and evaluate methods relative to one another. As cited in [34], Tom Erickson of Apple Computer outlined four dimensions for measuring the level of user participation in participatory design projects: the level of directness of the interaction between the users and the designers, the length of the involvement of the users in the design process, the scope of the users' participation in the overall system being designed, and the degree of control given to users over design decisions. While the early Scandinavian endeavors ranked high on all four dimensions, current practices vary in their fulfilment of these objectives [34]. In comparison, Trigg and Clement outlined tenets aimed at identifying, not the differences, but rather, the similarities between various practices [61]. Among other things, these tenets identify the following similarities: a respect for the users of the technology; a recognition of the value of collaboration in bringing about innovation; a view that a "system" is a complex combination of networks of people, practices, and technology; an understanding of the importance of the context in which the technology will be used; and a desire to improve the lives of the users.

Given our focus on improving the quality of life of aphasic individuals, the research reported in this thesis is more similar to the early Scandinavian efforts than to the current North American variations. However, moving from the domain of empowering unionized workers to improving the quality of life of a population with special needs was not a simple shift. Muller [40] noted that the visual, hands-on nature of most participatory design practices is in direct conflict with the universal usability needs of individuals with visual and motor disabilities. That observation extends easily to include individuals with aphasia: the verbal communication practices of participatory design, including the Think Aloud protocol, present many difficulties for people with speech and language impairments. So while our work is similar in many ways to the early Scandinavian projects, we have needed to modify and adapt the particularities of many of the practices to fit the specific needs of our domain.
2.2 Technology and the User with Aphasia

To date, research in the development of technology for people with aphasia has focused predominantly on the development of devices to assist in communicative exchanges. These devices are generally referred to as Augmentative and Alternative Communication (AAC) devices. Broadly speaking, AAC refers to any method or technique that augments or replaces, either temporarily or permanently, any primary method of expressive communication [62]. An AAC device is simply technology that provides AAC functionality. While AAC devices fill an obvious need for some people with aphasia, these devices are not the only way technology can support their daily activities. Nonetheless, much can be learned from the work done on the development of communication devices. In this section, we review both technology developed specifically for persons with aphasia, and that which has been developed for other similar populations. With respect to technology developed for aphasia, we present both commercially available devices, which most often take the form of symbol-based dictionaries, and more recent research innovations, which attempt to move beyond support for the expression of wants and needs and towards supporting deeper social interactions.

2.2.1 Commercially Available AAC Devices

Essentially all commercial assistive technology available for persons with aphasia today is in the form of AAC devices that provide symbol-based access to a searchable collection of words and short phrases. These systems build on the retained ability of many aphasic individuals to recognize image-based representations of objects [58]. In these systems, each word/concept has a tri-modal representation, consisting of an image form, a sound form, and a visual-letter form. The user is able to search through the image library to retrieve a desired item, and once selected, its letter and sound forms are made available for use in communicating with others. For example, if a user wanted to say, "I want macaroni and cheese for dinner," the following scenario might apply: the user first selects an icon chosen to represent I want, then selects macaroni and cheese from a food category, and finally selects an icon for dinner from a category representing daily activities. By default most systems have upwards of 10,000 icons and symbols; they thus rely on the availability of a caregiver or therapist to help the user select and organize a subset for use in communication. However, even with a well-customized interface, these systems can still be slow and hard to navigate due to the volume of images and symbols required for daily communication [6].

Some systems, such as Lingraphica by Lingraphicare [37], Dynamo and Dynamyte by Dynavox [14], and Vantage by PRC [49], are packaged in custom-built, dedicated handheld technology. Other devices, including the Gus Pocket Communicator [22], Enkidu's Impact Series [17] and the Saltillo ChatPC [51], use off-the-shelf non-dedicated handheld technology. While the advantages of using custom technology include an improved form factor and better durability, a major disadvantage is the high cost of producing custom hardware, which makes these systems prohibitively expensive for many individuals. Moreover, these devices can only be used for the single functionality for which they were designed; thus, they are unable to address the full range of needs of aphasic individuals.
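As a concrete illustration of the selection model just described, the sketch below shows one way such a symbol library and phrase construction could be represented. It is a simplified, hypothetical Python example; the data, names, and flat category structure are assumptions for illustration and do not describe any particular commercial product.

    # Illustrative sketch of symbol-based AAC lookup: each entry pairs text with
    # an image and a sound form; a message is assembled from selected entries.
    # Hypothetical data and names, not any vendor's actual format.

    symbol_library = {
        "phrases": [
            {"text": "I want", "image": "i_want.png", "sound": "i_want.wav"},
        ],
        "food": [
            {"text": "macaroni and cheese", "image": "mac.png", "sound": "mac.wav"},
        ],
        "daily activities": [
            {"text": "dinner", "image": "dinner.png", "sound": "dinner.wav"},
        ],
    }

    def find_entry(category, text):
        # In a real device the user browses images; here we look up by text.
        return next(e for e in symbol_library[category] if e["text"] == text)

    selections = [
        find_entry("phrases", "I want"),
        find_entry("food", "macaroni and cheese"),
        find_entry("daily activities", "dinner"),
    ]
    message = " ".join(e["text"] for e in selections)
    # -> the telegraphic "I want macaroni and cheese dinner", which would then
    #    be spoken aloud by the device's speech synthesizer.

Even in this toy form, the scaling problem noted above is visible: with upwards of 10,000 entries, finding the right item by browsing images alone becomes the bottleneck.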
Many systems using off-the-shelf non-dedicated technology address form-factor and durability concerns by providing optional protective sleeves or carrying cases. (Currently the Medicare system in the US only funds communication devices that cannot function as portable computers; as such, each of Gus, Enkidu and Saltillo also offer dedicated-use versions of their products which have all other functionality disabled.) While such devices may assist aphasic individuals who have few, if any, other communication options to express their basic needs, they do little to leverage the skills of individuals who have retained some communicative ability [6]. Many aphasic individuals have a partial ability to talk, write, draw, and gesture. For those individuals, the communication rates provided by these devices—typically less than 8 words per minute—are far too slow to be useful in daily conversation, which generally proceeds at rates ranging from 150-250 words per minute [6].

2.2.2 Beyond Iconic Word Dictionaries

In recent years, several investigators have applied computer technology to meet specific needs of individuals with some retained communicative ability. While in some instances such work has been undertaken in the context of individualized rehabilitation programs targeted exclusively to one particular individual, others have developed and tested systems across multiple users. Moreover, while some systems have been developed specifically for people with aphasia, others have been targeted to a broader category of cognitively impaired individuals. We now give some examples of each of these categories of systems.

Cognitive prosthetics, as introduced by Elliot Cole, are a form of rehabilitative treatment that uses computer technology to support individuals in functional activities [11]. In each case, the prosthesis is custom-built specific to the individual needs of the user. One example of a cognitive prosthesis is a customized check-writing application designed to support one particular individual in paying her bills [12, 11, 29].

In contrast, Sutcliffe, Fickas, Solhberg and Elhardt designed a prototype email system aimed at a more general audience of cognitively disabled users [57]. It consisted of four different interfaces (free format, idea prompt, form fill, and menu driven), which differed in their complexity and the level of support they provided. These interfaces were evaluated relative to each other with several users and a range of cognitive impairments, including aphasia. The major finding in this evaluation was that, contrary to traditional evaluation studies, no common pattern of usability errors emerged, pointing to the need for customization mechanisms.

Hine and Arnott describe a PDA-based multi-media communication system for story-telling [24]. It was designed to help users with speech and language impairments participate in conversation, by providing an easy-to-use interface for selecting multi-media based stories consisting of video-clips, audio-clips, and still-images. An evaluation was performed with one user with cerebral palsy (with associated poor speech and reduced manual dexterity), which compared the system to an earlier desktop version [25, 23] on which the PDA version was based.
The user was able to retrieve stories from the PDA significantly faster than from the desktop and was in general enthusiastic about the device; these timing findings were primarily attributed to the direct interaction afforded by the PDA's touch screen compared to the indirect selection interaction afforded by the mouse. The PDA version was specifically designed to allow the user to select items with a finger instead of the pen. For individuals with motor impairments, supporting direct finger interaction may provide significant advantages over the mouse.

In comparison to the two systems just described [57, 24], which were targeted to a broad audience of users with cognitive impairments, the TalksBac system [63] and a related system, PROSE [64], were developed specifically for persons with aphasia. They were designed to leverage the ability of some aphasic individuals to recognize familiar words and phrases, and in doing so, help them to participate in conversation. TalksBac guides users through a selection of short sentences and phrases that can be read aloud via a speech synthesizer during conversation. These conversational items are stored in a hierarchy that is continuously updated based on the individual's usage over time. PROSE, designed to be used in conjunction with TalksBac, allows the aphasic user to introduce pre-recorded stories into conversations. Both systems rely heavily on the availability and willingness of familiar partners, or caregivers, to manage and update entries in the system on an ongoing basis.

TalksBac was evaluated by four nonfluent adults with aphasia for nine months and was found to improve the conversational abilities of two of the four participants; i.e., those two participants were able to initiate more new conversational topics, were better able to elaborate on topics, had greater control over the conversation (as measured by the number of times responses were given to questions posed by the non-aphasic partner), and needed to confirm fewer clarifications for the non-aphasic partner. The success of the system for those two individuals was attributed to its ability to fill a need that had not already been filled by the development of effective alternative communication strategies such as gesture, drawing, or note making [63]. PROSE was compared for one individual against two other strategies and found to be effective in augmenting her conversational participation [64].

While most research efforts have tended to focus on individuals with severe or profound aphasia, SOCRATES (Simulation of Oral Communication Research, Analysis, and Transcription Engineering System) [33] was developed specifically to support more mildly impaired aphasic individuals for whom reading and writing are relatively manageable tasks (presentation by Ralf Klamma given at Dagstuhl Seminar No 03481: e-Accessibility: new Devices, new Technologies and new Challenges in the Information Society, November 24, 2003). SOCRATES is a chat applet designed to enable individuals to participate in online conversations. It provides a collection of support mechanisms, including a means of forming a secondary "help" conversation. This "help" conversation provides a supportive environment in which aphasic users can get assistance from trusted partners for use in the primary conversation, where they might be less comfortable asking for help. SOCRATES is currently being piloted in the community and is receiving positive feedback; however, no formal evaluations have been conducted.
2.2.3 Technology and the User with Developmental Disorders

So far this section has focused on research and products for people with acquired cognitive impairments; however, research has also been undertaken to understand how assistive technology can be designed to support individuals with developmental cognitive disorders, such as Down syndrome. For example, the Cognitive Lever Project (CLever) at the University of Colorado at Boulder is working with users with Down syndrome on the design of a prompting system that can be tailored to guide a particular individual in his or her everyday activities. One possible scenario is assisting an individual to navigate to and from her local community center. In that scenario, the system incorporates many details such as reminders to bring her house keys and backpack, as well as information about which bus to take and where to get on and off the bus [8, 18].

Lancioni, O'Reilly, Seedhouse, Furniss, and Cunha have also explored the design and use of prompting systems by persons with developmental disabilities [36] (also [35]). In their work, a prompting system was developed using custom-built dedicated hardware. The only input to the system was a single physical button which advanced the system from one prompt to the next. Output from the system consisted of a visual display, an auditory output, and a vibration box. While the vibration box was worn by the user and was used to signal the user after a period of inactivity, the visual display, auditory output, and input button were all housed in a small 19 x 18 x 5 cm palm-top device which communicated with the vibration box via a radio link. An experiment was performed comparing the palm-top system to a traditional card system, which used booklets of paper cards as prompts. All six participants in this study performed more steps correctly with the computer system, and all preferred it to the traditional card system.

2.2.4 Technology and the Older User

Designing for older user populations requires that special attention be paid to the unique characteristics of this population. Older users represent a more diverse demographic than their younger counterparts: they have a wider range of physical, cognitive, and sensory functioning; their abilities are changing quickly and constantly; and they are more likely to suffer from multiple impairments [21]. As stroke is the most common etiology for aphasia and the prevalence of stroke increases drastically with age, it is reasonable to assume that the design challenges applicable to designing for the older user will apply equally to this work.

Recently, many researchers have begun developing technology to meet the specific needs of older users. Most of these initiatives have focused on enhancing the functional abilities of elderly users and increasing their access to information [26]. For example, Ogozalek compared printed leaflet presentation, computer-based text presentation, and computer-based multimedia presentation, and found that elderly users both retained more information about prescription options when the information was presented on a computer in multimedia format and preferred the computer-based multimedia format [47]. This success was attributed to the ability of the multimedia format to reduce reading demands on the user.
Other projects, including the ELDer project [26] and the Aging in Place project [41], have looked instead at increasing the social opportunities available to older adults. For example, the Digital Family Portrait [42] is an augmented photo frame that uses data, such as measurements of activity, to give distributed family members a qualitative sense of each other's daily activities and well-being [42].

2.2.5 Summary

In this section, we have reviewed technology relevant to persons with aphasia. Although there is much to be learned from the research endeavors described, our research can be distinguished from that work in four important ways. First, our work differs from that of most previous efforts to develop technology specifically for persons with aphasia in that: (1) our work does not focus specifically on supporting communication, but rather on supporting higher-level activities; and (2) we are neither focusing on severely impaired individuals, who have not developed alternative communicative strategies, nor on mildly impaired individuals who need only occasional support. Rather, we are focusing on individuals who are able to communicate somewhat verbally and non-verbally, but experience significant difficulties managing daily tasks that involve reading and writing. Second, our work is distinct from cognitive prosthetics and similar endeavors in that we are not attempting to rehabilitate aphasic individuals. Instead, we are trying to leverage and enhance their existing abilities. Third, we are focusing on an acquired cognitive deficit, and thus, our work differs from that aimed at developmentally disabled individuals: we can take advantage of the pre-existing life skills of our participants. Fourth, while our work does focus predominantly on older users, it is more specifically targeted. However, there is much we can learn from general efforts to support the needs of older users.

2.3 Evaluating Assistive Technology

Evaluation is an integral component of HCI research, and although many authors have acknowledged the challenges involved in working with special populations [1, 30, 46, 44, 45, 54, 55], to date, very little work has addressed those difficulties. Stevens and Edwards discuss their experience working with special populations in their evaluation of Mathtalk [56], a system developed to support blind users in the manipulation of mathematical expressions. They highlight several challenges in evaluating assistive technology, including the inappropriateness of controlled laboratory experiments for highly heterogeneous populations, the difficulty of acquiring a sufficient sample of the population, and the unavailability of appropriate control conditions.

Much more work is needed to identify and address barriers to developing technology for special populations. The term special populations refers to a large and diverse set of populations. Many challenges encountered when working with one special population will be common to other populations, but each individual population will also have its own unique challenges. Identifying both the broad challenges applying to many special populations, and the individual challenges unique to a specific population, is an area that needs to be addressed. Once these issues have been identified, we can then begin to develop a framework that outlines the challenges and methods for overcoming them. In Chapter 5, we discuss the challenges we faced in this work and the methods used to address them.
Chapter 3

Phase One: Participatory Design of ESI Planner

Phase One of the research used participatory design methodology to identify and develop an application for aphasic individuals that would be both useful and usable. Participatory design is a process that incorporates early and continual participation of the intended users to produce technology that will realize better acceptance and will better suit the needs of its users. It is based on the premise that the users of a system will understand their needs differently than the designers will, and that both understandings are needed to ensure a successful product is developed. We felt that participatory design methodology was likely to be essential for our research, given the sizable differences in the technological needs and skills of the researchers and the intended users.

This chapter describes our experience using participatory design methodology with people with aphasia. We begin with a discussion of our participants, focusing particularly on those characteristics of their aphasia affecting our work. We then describe in detail the methodology used in designing ESI Planner, reflecting throughout on lessons learned in this phase of the research.

3.1 Participants

As mentioned previously in the introductory chapter, Anita Borg provided the inspiration for this project, and was also its first participant. Anita was a middle-aged professional woman who was highly computer literate. She had progressive aphasia resulting from a brain tumor in her left temporal lobe. Individuals with progressive aphasia experience a gradual loss of language functionality over time. In contrast, individuals with chronic aphasia, which occurs most often after a stroke, experience a sudden onset but afterwards do not typically experience a further decline. Anita first began showing symptoms of aphasia in the spring of 1998 when she noticed she was having difficulty recalling people's names. At the time of our research (early 2003), her language skills had degraded significantly and were limiting her autonomy in daily life.

Anita had relatively good auditory comprehension compared to her speech production, reading comprehension and writing ability. She understood most of what was said to her, although she often needed further explanation. Anita's speech was relatively fluent as she did not have too much trouble with the syntax and flow of language, but she often had difficulty finding content words. As a result, her speech was full of circumlocutions as she would try to work around missing words. Nonetheless, she was a very good communicator and was generally able to get her message across using both verbal and non-verbal communication strategies. Reading was further complicated by a right visual field deficit, and writing was hampered by a tremor, which was a side effect of one of the medications she was taking for the cancer.

Ideally, continuity would have been maintained throughout the research with uninterrupted participation by Anita. However, Anita regrettably had to resign from the project before completion of the preliminary design; thus, in order for the research to make progress, surrogate design members were needed to fill her role. (Anita withdrew from the project in February 2003, just four months after the project's initiation and, sadly, passed away as a result of her brain tumor the following April.) Although we began this project knowing that her time with us would be limited, her resignation was a tremendous loss, as her enthusiasm for the project was extraordinarily motivating and her insight unmatched.
In light of our experiences with Anita, we decided that at that initial stage, we needed to work with individuals whose condition was stable to ensure continued progress with a consistent set of participants. We decided, therefore, to focus on individuals with chronic aphasia, and within chronic aphasia, to focus on individuals who were at least one year post onset. While rehabilitation can help improve residual communicative ability in individuals with chronic aphasia, improvements generally plateau after one year [9], leaving individuals in a relatively stable state.

Three participants, Skip, SS, and MP, were recruited for the design team. Each of these participants acquired aphasia as the result of a stroke, and all were at least one year post onset. At the time of Anita's resignation, Skip was already working with us to corroborate and supplement the input we were getting from Anita. Skip is a middle-aged man with very good computer skills and a strong interest in technology. In addition to aphasia, Skip has apraxia, which is a deficit in the motor programming stage of speech [13]. As a result, his speech is more impaired relative to other output modalities, such as gesture and writing, than it might be were he to have aphasia alone. Skip also has right hemiparesis, a weakness on the right side of the body, resulting from the stroke; however, Skip, who is right-handed, still uses his right hand, albeit with increased difficulty, for writing and tapping on the iPAQ. While Skip has very few words available to him orally, there are many more that he can write out as single words or short phrases. He uses a variety of nonverbal communication strategies in addition to gesture, facial expression and the use of props. Skip carries with him a small pad of paper at all times so that he can draw pictures and write out words. He also carries with him cards preprinted with important information such as his address. Skip has very good auditory comprehension, and relies heavily on this ability when communicating with others. He depends on his communication partners to guess out loud what he is trying to communicate so that he can either confirm that his partner has it right or try again to communicate the message.

SS is a middle-aged woman with some familiarity with computers; she is comfortable with general computing activities including email and word processing. SS has a noticeable speech impairment, and finds reading difficult; however, she has relatively good auditory comprehension and writing ability. MP is also a middle-aged woman with some computer experience, although she is less comfortable with computers than SS. MP has very good speech production and auditory comprehension, but often has difficulty writing. While both SS and MP successfully manage their schedules with a paper planner, both expressed interest in using a computer planner with image and sound functionality. Due to the large variability in impairments across people with aphasia, none of these individuals had exactly the same difficulties as Anita, although all felt that improvements could be made to text-only planners.
In fact, Skip, who had initially felt he would not benefit from a tri-modal design, became enthusiastic as the research progressed and he discovered the potential for an enhanced planner to aid him, not in managing his schedule, but in communicating his daily activities to others.

3.2 Methodology

Over a period of seven months, we worked on the iterative design of ESI Planner, a tri-modal daily planner that would help individuals like Anita, for whom reading and writing were obstacles, independently manage their schedules. In this section, we describe the methodology used during this phase of the research. Figure 3.1 shows a timeline for Phase One and highlights the iterative nature of participatory design methodology. Although we did not follow a strictly sequential process, for the purposes of clarity we describe this phase in four parts: brainstorming, low-fidelity paper prototyping, medium-fidelity software prototyping, and high-fidelity software prototyping.

[Figure 3.1: Timeline for the participatory design phase in months (Mx). The timeline spans months M1 through M7 and covers: brainstorming and low-fidelity prototyping with Anita; informal brainstorming with Anita; brainstorming and informal evaluation of a commercial day planner with Skip; low-fidelity prototyping of appointment browsing with four non-aphasic participants; medium-fidelity prototyping of appointment browsing with Skip; low-fidelity prototyping of appointment creation with Skip, SS, and MP; medium-fidelity prototyping of appointment browsing with one non-aphasic participant; and medium-fidelity prototyping of appointment browsing with Skip, SS, and MP. Note: Non-aphasic participants were used sparingly throughout our design process to supplement information provided by our aphasic participants; our motivations for using this approach are discussed in Section 3.2.2.]

3.2.1 Brainstorming

During the brainstorming stage, our primary goal was to identify specific needs of aphasic individuals and to informally evaluate existing software. Brainstorming began with Anita, starting with a series of informal conversations, and culminating in a semi-formal session. Later we had an additional brainstorming session with Skip. Each of these three sessions will now be discussed.

Informal Brainstorming with Anita

This research began with informal conversations between Anita and Maria Klawe, to identify areas where technology could be used to support her daily activities. Anita and Maria were longtime friends and colleagues; at the time this research began, they had already been meeting regularly to discuss, among other things, Anita's condition and how it was affecting her quality of life. While Anita's ability to speak, read and write had diminished significantly, her ability to recognize images remained fully intact. So although she often could not recognize or produce the name of the person or thing she wished to talk about, she could use pictures to confirm and communicate what she was thinking. For example, Anita often had difficulty with the word yoga, and could neither produce it nor recognize it in spoken or written form. As a result, she used a picture of a person doing yoga to help her communicate to others when she was going to yoga. Building on this strategy, the idea of using an iPAQ—a small handheld computer with graphics and sound capabilities—emerged.
To investigate this idea further, Anita and Maria both acquired iPAQs and began exploring ways the iPAQ could be used to help Anita, and others with aphasia, communicate and engage in daily activities more independently. As a first step, Anita's husband (Winfried Wilcke) created a tri-modal dictionary for the iPAQ to help Anita with word-finding problems. This tri-modal dictionary consisted of a series of HTML pages organized in a two-level tree structure as shown in Figure 3.2. The top level consisted of a text listing of all categories, as in Figure 3.2(a), and at the item level, each category was associated with a single HTML page, as in Figure 3.2(b). Note that at that point, Anita was still mostly able to read familiar words and short sentences, albeit with difficulty. Thus, she was able to manage the text used to describe categories in the top level of the dictionary using the sound clips for help; sound clips could be played for each of the categories by tapping the question mark to the right of the text. Navigation was accomplished by means of hypertext links. For example, tapping the Food link in Figure 3.2(a) would bring up a page showing the list of foods shown in Figure 3.2(b). Each item in the dictionary was represented by an image, a sound clip, and a text descriptor. Sound clips were played by tapping the associated image. Due to display size and resolution constraints, only about one item could be viewed on the screen at a time, and thus vertical scrolling was used to browse items in a category. The tri-modal dictionary was developed to run on Microsoft Pocket Explorer, a web browser included with the iPAQ.

While the tri-modal dictionary was developed to help Anita with word-finding problems, it was not the only application identified in her initial meetings with Maria. It was, however, the most straightforward, and the easiest to quickly prototype using existing applications (i.e., Microsoft Pocket Explorer). Along with the dictionary, two additional applications emerged from the preliminary meetings. The following summarizes the three applications that emerged from the informal brainstorming sessions:

1. A tri-modal dictionary that would help with word-finding problems by using triplets of images, text, and sound to represent words/concepts. A key requirement would be for triplets to be meaningfully organized without using schemes such as alphabetical, which depend on language.
Semi-Formal Brainstorming with Anita

Anita lived in California and her health prevented her from travelling to Vancouver; therefore, three members of our group visited her at her home. We (the author, Joanna McGrenere, and Barbara Purves) met with her for several hours over a two-day period, and in these meetings we focused on the following three activities: (1) informally evaluating the tri-modal dictionary and a commercial AAC device (Gus Pocket Communicator); (2) further exploring specific needs of Anita that could be met by technology; and (3) evaluating initial paper prototypes of a tri-modal recipe book and a tri-modal daily planner. We begin with a discussion of the first two of these activities; the paper prototyping element of these meetings will be described in Section 3.2.2.

At the time of our meeting, the tri-modal dictionary had been available to Anita for several weeks. It contained entries representing 114 words/concepts distributed across seven categories. Although Anita confirmed that the entries represented words/concepts she often needed, for the most part, she had not been using the dictionary. We discovered that while Anita embraced the concept of the dictionary, there were several limitations that, in practice, made the particular implementation difficult and frustrating for Anita to use:

1. Display Size and Image Resolution: Anita found the extensive scrolling needed to browse each category to be tedious and tiring.

2. Organization: While Anita could manage a two-level tree hierarchy, it was clear from her experiences with other software that she would not be able to manage deeper hierarchies, limiting the number of items that could be placed in the dictionary.

3. Targeting: Anita had difficulty with the size of the buttons in Microsoft Pocket Internet Explorer (5mm x 5mm). Moreover, the proximity of the buttons often resulted in adjacent actions being executed by accident, the result of which was very disorienting and confusing for Anita.

4. Language Dependency: While Anita was somewhat capable of managing the text in the dictionary, it was becoming increasingly difficult, and as such, she was concerned that in the near future she would no longer be able to do so. Anita was less willing to invest time and effort into learning the interface as a result of this concern.

5. Navigating: It was clear that Anita had difficulty navigating the application, although it was not entirely clear whether this difficulty was due to a software bug in Microsoft Pocket Internet Explorer, or due to a usability problem with the design of the web browser. She identified this difficulty as the single greatest source of frustration.

So although Anita was enthusiastic about the dictionary and could, for the most part, use it, she was reluctant to do so in the absence of assistance. Often when Anita did try to use the dictionary, she would give up quickly, as she was unable to recover from errors. Anita found this particularly frustrating, as she had been highly computer literate prior to acquiring aphasia, and the shortcomings of this system only emphasized how much the aphasia was limiting her.

Concurrent to the development of the picture dictionary, our review of existing technology determined that many commercially available devices were similar to the picture dictionary in that they provided individuals with a pictorial means of finding and communicating missing words.
Accordingly, we decided to evaluate one such product, Gus Pocket Communicator [22], with Anita to see how it would compare to the picture dictionary. We used Gus Pocket Communicator for this evaluation because it runs on standard commercial PDAs instead of specialized hardware, making it possible for us to install a trial version on one of our iPAQs for the evaluation.

Gus Pocket Communicator is a layered picture archive that helps adults with communication or speech disorders compose short phrases, which can then be played via speech synthesis to a communication partner. Typically, someone working with a person with aphasia, such as a caregiver, speech therapist, or family member, would initially help that person draw, from the library of over 2,500 communication symbols, a customized set of images to use in communication. A standard setup would allow approximately nine symbols to be viewed at a time. These symbols would either represent a group of items, a single item, or a navigational mechanism (e.g., show me more items in this category). Selecting a group would navigate deeper into the hierarchy; selecting an item would add the text associated with that item to the message being constructed. Thus, users can build up messages for use in communication by searching through the hierarchy and selecting the appropriate items. For example, a particular setup might include entries for I in the group people and happy in the group emotions. The message, "I am happy," could then be constructed by first selecting the entry for I and then the entry for happy. (Unfortunately, due to copyright constraints, no images of the Gus Pocket Communicator are included here. The interested reader can refer to the vendor's web site, http://ww.gusinc.com/, where many images could be found at the time of writing, April 2004.)

In Anita's opinion, Gus Pocket Communicator was basically usable, and addressed many of the design problems identified for the picture dictionary. For example, Gus Pocket Communicator used fewer buttons to enable a reduced set of key functions. This allowed for larger buttons which Anita could select using her finger, which she found more comfortable than the stylus. Also, the pictures used in Gus Pocket Communicator were specifically developed for use on a PDA, whereas the ones used in the picture dictionary came from a variety of sources including the Internet and scanned images from Anita's photo album. As such, the Gus images were more appropriate for creating thumbnail-sized images, allowing more items to fit on the screen and facilitating navigation.

Nonetheless, Anita was still concerned about the manageability of the system for individuals who wanted a large set of words available to them. Like the picture dictionary, Gus Pocket Communicator used a hierarchical organization, which we suspect Anita would have had difficulty navigating. Determining an appropriate navigational scheme for people with aphasia is an open question requiring further examination. We suspect that information visualization techniques, including perhaps fish-eye displays [19, 60, 3, 5], might be useful for helping aphasic individuals organize large data sets in a language-independent manner.

At that point, we turned our focus from evaluating the word dictionary and Gus Pocket Communicator to further brainstorming of Anita's needs that could potentially be met with technology.
Although prior to the visit we had identified two applications in addition to the dictionary, we revisited this topic to see what other ideas would emerge. Two additional applications were identified:

• A conversation primer that would help Anita preplan conversations. Anita found it particularly difficult to communicate specific information to individuals with whom she was less or not at all familiar (e.g., in a doctor's visit, or to call a cab). Anita's fear of being unable to communicate in these situations had created a cycle in which, with every occurrence, she became increasingly stressed and decreasingly able to communicate.

• A personal history recorder that would enable Anita to record and share her life story: people she had known, places she had worked, and things she had done.

It is worthwhile noting that the applications identified were aimed not only at addressing functional needs, but also at addressing pastimes and hobbies. For example, while the daily planner targeted Anita's functional need of managing her schedule, the recipe book was identified to meet her desire to continue her lifetime passion for cooking. Anita had the physical and cognitive ability to cook, and did not have a problem preparing daily meals. However, she enjoyed preparing her favorite recipes and experimenting with new and challenging ones; it was this form of cooking she wanted addressed with a recipe book.

While each of the ideas identified was interesting, it was clear that Anita considered the recipe book and daily planner to be the most important, and thus, we decided to pursue the development of these two applications. This thesis documents the development of the daily planner application. It is interesting to note that the planner was not only important to Anita, but also to her husband, Winfried, who had taken on the responsibility of managing her daily schedule. This demonstrates how the use of technology in this domain has the potential to help, not only the aphasic individuals using the technology, but also their caregivers and family members.

Semi-Formal Brainstorming with Skip

The purpose of the brainstorming session with Skip was to examine a commercial daily planner application to determine which aspects of the product were inaccessible to a person with aphasia, and to identify ways to overcome those limitations. The session with Skip lasted half an hour and was comprised of three parts. First, Skip was given an introduction to the iPAQ and a demonstration of how to use the daily planner included with it, Microsoft Pocket Outlook. Next, Skip was asked to perform two common appointment management tasks: to locate a scheduled appointment, and to schedule a new meeting. The session ended with a discussion of the usability of Pocket Outlook for people with aphasia, using our initial paper prototypes of the ESI Planner interface (discussed in detail in Section 3.2.2) to stimulate discussion on possible improvements.

Although Skip made some occasional input errors while using the iPAQ, he was able to successfully complete both tasks without too much difficulty. However, when scheduling an appointment, it was clear that many of the options available in Pocket Outlook, such as the ability to schedule recurring appointments or to assign appointments to different categories, were not useful. Skip noted that he would prefer if the space allocated to those options was instead used for a larger input mechanism.
In fact, he strongly felt that the small size of the soft input panel keyboard was the largest limitation of the system. Interestingly, though Skip was able to complete the tasks without too much trouble, whenever he did get slightly lost, his default recovery strategy was to try one of the physical buttons located on the bottom-front of the device. In addition to a four-way navigational button, there are four programmable quick-launch buttons on the iPAQ. These buttons are by default programmed to launch the contact manager, calendar, email, and task manager programs. While Skip was never explicitly told that the buttons would not be necessary or useful for the tasks at hand, they were not once used during the demonstration. From this we can observe that the physical buttons are potentially powerful, and correspondingly, users might be drawn to them when they are confused. As such, to minimize the potential for confusion, care should be used when assigning functionality to them. Specifically, it is likely that the default behavior of these buttons will be inappropriate for aphasic individuals, and possibly other special populations.

When shown the paper prototypes, Skip was not immediately drawn to the idea of using images or sound clips for a daily planner. In general, he was able to manage adequately with his paper planner, as he was able to jot down a few familiar words with relative ease. The main problem he had with his current method of scheduling was ensuring consistency with the person with whom he was scheduling, an issue that would not be addressed by the tri-modal design. As such, he did not feel he would benefit from a tri-modal planner. He did, however, feel that for individuals who were more severely impaired in reading and writing, as he had been when he first acquired aphasia, the images and sound clips were likely to be helpful. Moreover, he was very enthusiastic about the development of aphasia-friendly technology, and was interested in participating in the project. While it might seem inappropriate for Skip to take the role of a "user" in participatory design, when he himself was not a target user, in this case it seemed appropriate for two reasons:

1. Prior to rehabilitation, Skip would have been part of the target population.

2. His relatively mild reading and writing impairments allowed him to effectively communicate design ideas.

So, although Skip's relatively mild impairments made him an unlikely candidate for using ESI Planner, his ability to communicate, albeit non-verbally, combined with his previous life experience made him an excellent choice for our participatory design team.

3.2.2  Low-Fidelity Paper Prototyping

Paper prototypes are often used in iterative design for their ability to quickly bring together many ideas and uncover design flaws [4, 50]. Through their appearance of being rough and malleable, they encourage users to suggest large structural or conceptual changes that would seem impossible to make to a more finalized system. They are often praised for enabling designers to quickly move through several iterations, and to garner feedback on fundamentals (such as flow of control), which become difficult to change as development progresses, as opposed to superficial details (such as the size and color of fonts), which can easily be changed at any point in the development.

Low-fidelity paper prototypes were first introduced during the semi-formal brainstorming session with Anita, described in the previous section.
Anita's problem with traditional paper and electronic daily planners was twofold. First, the input of appointment data via writing, typing, or tapping was slow and difficult. Attempts often resulted in frustration and resignation. Second, the representation of appointment data as text made it often impossible for her to recognize and interpret the stored information. This was true even for appointments she had entered herself, as her language skills were inconsistent and unreliable.

For our low-fidelity prototyping session with Anita, we created a series of initial paper prototypes. One example is shown in Figure 3.3. Unfortunately, standard low-fidelity prototype evaluation, which includes the Think Aloud protocol, proved very difficult for Anita. While her speech remained relatively fluent, it often lacked sufficient detail for her to give a specific account of how she would use the proposed interfaces. Furthermore, she was very concerned with her physical ability to interact with the intended device, an aspect of the design for which paper prototyping provides little insight.

Figure 3.3: One of the initial paper prototypes used in the low-fidelity prototyping session with Anita.

Paper prototypes were, however, useful for discussing specific aspects of the design relative to Anita's abilities, and for stimulating general design discussion. For instance, with respect to the prototype shown in Figure 3.3, Anita commented that while she did have difficulty reading in general, she would not have had trouble with the numbers used to display time. She noted that she had relatively little difficulty with numbers, and moreover, by providing the numbers in sequence she would be able to use their order to help her figure them out. She also noted that while the small suns adjacent to the numbers might help reinforce the time of day, they would become a greater hindrance if they interfered with the numeric representation of time, suggesting a need for clear separation between images and text.

Despite its challenges, paper prototyping was used multiple times in our design process. As is explained in the subsequent section, we found many design flaws in our first medium-fidelity prototype that did not appear to be specific to aphasia. After confirming our suspicions by testing our prototype with one non-aphasic individual, we decided to step back to paper prototyping, this time with non-aphasic participants. We tested the three different designs shown in Figure 3.6 (page 36) using four non-aphasic participants. Our rationale for taking this course of action was that we hoped that by removing general design flaws we could better use our time with aphasic participants to focus on aphasia-specific aspects of the design.

Paper prototyping was used one last time with aphasic participants, near the end of the iterative design process, to get input for the design of appointment creation and modification functionality. In our initial session with Anita, we discovered that looking at the whole interface was too much to cover for the limited time we had; thus, we chose to focus on just the navigation and layout components. As such, we subsequently had to return to the design of the appointment creation functionality. Based on our experiences with paper prototyping with Anita, we decided this time to work with printed copies of computer-generated paper prototypes, instead of hand-drawn ones.
A few examples of these prototypes are shown in Figure 3.4. By using computer-generated illustrations, we were able to more accurately capture details such as widget size. Furthermore, we were able to efficiently generate a larger set of illustrations than we could have by hand. By using a larger set of illustrations, we were able to thoroughly explore the interaction sequence while reducing the overall level of Think Aloud required.

Figure 3.4: Paper prototypes of the three appointment creation design options tested for ESI Planner using three aphasic participants: (a) Wizard-style, (b) Tabbed Panes, and (c) In Context Creation.

While these paper prototyping sessions were generally successful, we nevertheless found there was still more to be learned about paper prototyping with this population. Perhaps the biggest lesson learned in these sessions was with regard to how easy it is for miscommunications to occur while working with aphasic individuals. Unfortunately, one of our participants mistakenly thought we were doing the paper prototyping as an exercise to help him understand what we were building; he thought that because of his aphasia we were underestimating his intelligence. Fortunately, he felt sufficiently comfortable with one of our researchers to share these concerns, allowing us to address them by explaining our intent and the true purpose of paper prototyping.

3.2.3  Medium-Fidelity Software Prototyping

Our first prototype for ESI Planner, shown in Figure 3.5, had a similar design to Microsoft Pocket Outlook in that it assigned equal screen real estate to each hour of the day, which required vertical scrolling, even to view appointments falling within the 6:00 AM to 8:00 PM time frame. However, in our first session it became clear that for browsing tasks, this combination of searching through the days and scrolling within a day to find appointments was potentially problematic in that the user repeatedly missed appointments for which scrolling was required.

Suspecting that this difficulty had nothing to do with aphasia, we ran the same tasks with one non-aphasic individual and found the same problems. At this point, as mentioned previously, we went back to paper prototyping, this time using four non-aphasic individuals to test the three interfaces shown in Figure 3.6: an emphasized scroll version that clearly indicated when appointments were "hidden" (a), a dynamic timeline version similar to that used in Palm Pilot planners that displayed only hours for which appointments existed and thereby minimized scrolling (b), and a detail-in-context design (c). The detail-in-context design proved to be the clearest design overall: no major usability problems were found with the detail-in-context design, whereas participants frequently missed off-screen appointments when using the emphasized scroll (e.g., the 3:00 PM appointment in Figure 3.6(a)), and had difficulty recognizing undisplayed free time in the dynamic timeline version (e.g., 10:00 AM to 2:00 PM in Figure 3.6(b)).
It is therefore the one we used. As shown in Figure 3.7, the left-hand side of the screen sets the context, highlighting which parts of the day are booked, while the right-hand side gives the appointment details.

Figure 3.6: Paper prototypes of the three layouts tested for ESI Planner using four non-aphasic participants: (a) Emphasized Scroll, (b) Dynamic Timeline, and (c) Detail in Context.

Figure 3.7: Medium-fidelity prototype of the detail-in-context layout.

3.2.4  High-Fidelity Software Prototyping

The participatory design phase of this research resulted in a high-fidelity prototype of ESI Planner, which was subsequently evaluated in a laboratory study. As shown in Figure 3.8, each screen displays a single day from 6:00 AM to 8:00 PM for which a maximum of five appointments can be scheduled. Although this aspect of the design could be seen as limiting, from our discussions with participants as well as speech-language pathologists, these choices seemed to meet the needs of our user population. None of the aphasic participants expressed a need for scheduling more than five appointments per day. Moreover, in their current scheduling the information used to describe an appointment was generally sparse. It often consisted of only the starting time of the appointment and one additional descriptor, either the person they were meeting or the place at which they were meeting. Of the four participants, three were managing their own schedules (Anita was relying on assistance from her husband). Each of these participants carried with them a small paper planner; two participants used a weekly planner and one used a monthly planner.

Navigating ESI Planner is accomplished in two ways: the date can be changed by tapping the arrowed buttons shown in the top left and right of Figure 3.8(a), and by tapping the current date, which brings up the calendar shown in the top center of Figure 3.8(a). Appointment creation is initiated by selecting one or more hours from the left-hand side of the display, which brings up the time selection window shown in Figure 3.8(b) initialized to the hours selected. Within the time selection window, the user can specify the start and end time of the meeting in fifteen-minute increments by adjusting the minute and hour fields of the start and end times. Once the time has been entered, a default appointment is created, as shown in Figure 3.8(c). Tapping on the default person or place brings up the associated selection tool, as shown in Figure 3.8(d), which can be used to specify the relevant fields in the appointment. Within the triplet selection tool, items are organized in an unordered sequential list. This is most likely suboptimal; further work is required to determine appropriate organizational schemes for people with aphasia. Once both fields have been specified, the appointment can be finalized by tapping the "OK" button denoted by a check mark and located on the right-hand side of the appointment as shown in Figure 3.8(e). This switches the appointment from Edit mode to Display mode. In Edit mode, tapping on any of the triplet components brings up the selection tool, whereas in Display mode tapping on the image enlarges it, as shown in Figure 3.8(f), and tapping on the sound button plays the sound clip.
Returning to Edit mode from Display mode is done by tapping on the Edit button located on the right-hand side of the appointment, as shown in Figure 3.8(f). It is intended that appointments be left in Display mode, except when they are being created or modified.

Figure 3.8: Screen-captures of the ESI Planner interface. (a) The date can be changed by tapping the arrowed buttons (top left and right), or by tapping the date, which brings up a calendar for date selection. (b) Selecting one or more hours from the left-hand side of the display brings up the time selection window. (c) Once the start and end times have been specified, a default appointment is created. (d) Tapping on the person/place brings up the associated triplet selection window. (e) Tapping on the check mark in the upper right corner of the appointment switches the appointment from Edit to Display mode. (f) In Display mode, tapping on an image enlarges it, and tapping on a sound icon plays the associated sound clip.

3.2.5  Implementation

ESI Planner was designed for Windows Pocket PC 2002 devices. All evaluations of ESI Planner were done using an HP iPAQ 5400. ESI Planner was implemented in embedded Visual Basic 3.0 using the Pocket PC 2002 SDK. While embedded Visual Basic's shallow learning curve provided early advantages, its limited power and flexibility became increasingly problematic as the development progressed. Ultimately, we would have preferred to use a more powerful language such as embedded Visual C++, which would have provided us with greater control over the software.

ESI Planner uses the Pocket Outlook Object Model (POOM) as the back-end for appointment data storage. This was advantageous as it allowed us to focus our efforts on developing the interface rather than on building a back-end data storage facility. The constraints of using an existing system caused few difficulties as the amount of data we wanted to store per appointment was fairly limited (date, time, place, person). Our triplet database was fixed and small, so we were able to store triplets in the POOM as keys, with ESI Planner using those keys to look up the corresponding text string, image file, and sound file from a look-up table. However, this method of handling the data is likely to be slow and impractical for triplet databases that are large or dynamic. The development of a custom back-end should be considered before the system is used or evaluated in a field context.

Chapter 4

Phase Two: Experimental Evaluation of ESI Planner

In Phase One, ESI Planner was iteratively developed with input from aphasic participants. In this chapter, we describe Phase Two, in which a laboratory experiment was conducted to assess the effectiveness of Phase One relative to our goal of developing a usable high-level application to better support the needs of aphasic users. To meet the challenges inherent in working with this population, this was not a traditional laboratory study. Some of the constraints of a traditional study, such as maintaining a consistent experimental environment, needed to be relaxed in order to accommodate the special needs of this population.
4.1  Two Planner Conditions

ESI Planner was compared with an equivalent text-only electronic planner, NESI Planner (the Not Enhanced with Sound and Images Planner). In this study, we wanted to specifically test our hypothesis that an interface using images and sound would better support aphasic individuals in appointment management tasks. Thus, we chose not to compare ESI Planner to an existing commercial product such as Microsoft Pocket Outlook. While the results of such a study would be interesting, it would not have allowed us to test our hypothesis, as other design factors would have confounded the results. For example, ESI Planner has no text input; instead, triplets are selected from a list. Microsoft Pocket Outlook, on the other hand, inputs appointment data via the soft input panel. In a comparison of ESI Planner and Microsoft Pocket Outlook, it would be difficult to tell if differences in preference and performance were due to the images and sounds, or the result of the different input mechanisms.

The NESI Planner interface retains as much of the ESI Planner interface as possible while removing sound and image functionality. All widgets and interaction sequences for navigating the planner, and for adding, modifying, and deleting appointment data are shared between ESI Planner and NESI Planner. Thus, completing any given task requires the same number and sequence of commands. Figure 4.1 shows equivalent screen-captures of the ESI Planner and NESI Planner interfaces.

Figure 4.1: Screen-captures comparing (a) the ESI Planner interface and (b) the NESI Planner interface.

4.2  Participants

Our goal was to have eight participants complete the study. While eight participants might be considered small in a traditional user study, it is a sizeable number when working with special populations due to the difficulty of recruiting participants from a limited pool. As one of our first eight participants, ET, did not complete the planner evaluation portion of the study (ET withdrew halfway through the planner evaluation due to fatigue and frustration; our suspicions for why this occurred are discussed in Section 4.6.2), we had to include one additional participant, bringing the number of participants up to nine. Nonetheless, we believe ET's data provides valuable insights into the evaluation process, and thus we include it in this discussion.

As our participants were drawn from a small, close-knit pool, and there was much enthusiasm for participating in the study, two additional participants were included in the process, although their data could not be used (one participant did not meet the minimum criteria we had for participation in terms of a capacity for interacting with technology; the other had to postpone his session, due to family commitments, until after our deadline for data analysis). We included those participants to ensure participant satisfaction and to maintain a good relationship with the groups through which we were recruiting participants. However, their data has not been analyzed, and thus they are not included in this discussion. Further explanation of why we included those participants in the experimental process, but did not use their data, is provided in Chapter 5.

In total, eleven aphasic individuals participated in the study: nine of whom are considered in this discussion, with data from eight included in the analyses. Participants ranged in age from 47 to 86. They had a range of educational backgrounds from high school completion up to post-graduate education. There were 1 female and 8 male participants.
None of the participants in the experimental evaluation were part of the participatory design phase of ESI Planner. Participants were selected to be at least one year post onset to ensure a minimum level of stability had been reached in health and rehabilitation. Most had some experience with computers; only one had not used a computer previously. All possessed an interest in the use of computer technology and a willingness to learn.

Participants were recruited through local stroke and aphasia clubs. As communication deficits complicate tasks such as navigating large, unfamiliar places (like the UBC campus), and many of our participants had associated mobility limitations, it was unreasonable for us to expect our participants to come to our lab for the study. Thus, the study was conducted in a location that was convenient to each participant, which most often was at a stroke or aphasia club to which they belonged. Two individuals, however, preferred to come to the university. While working with the planners, only the researchers and the participant were present. ESI Planner was designed to be used independently, and therefore, caregivers were not involved in the evaluation.

4.3  Methodology

Given the extensive individual differences inherent in our population, a within-subjects design was chosen. Based on recommendations from a speech-language pathologist, we included two sessions, neither of which lasted more than ninety minutes. The first session, conducted by a computer science researcher, was the planner evaluation session. Participants performed a set of tasks with one planner, took a break, and then completed an isomorphic set of tasks with the second planner. To control for learning effects, we counterbalanced both the presentation order of the interfaces (ESI-NESI vs. NESI-ESI), and the presentation order of the task sets (task set a then b vs. task set b then a). Thus, we had a total of four interface/task set conditions: ESIa-NESIb, ESIb-NESIa, NESIa-ESIb, and NESIb-ESIa; and for each condition we had two participants. The second session comprised a speech and language assessment conducted by a certified speech-language pathologist (see Section 4.5 for details).
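To make the counterbalancing explicit, the short Python sketch below simply enumerates the four interface/task set conditions just described. It is only an illustration of the design; the condition labels match those used in the text.

    from itertools import product

    interface_orders = [("ESI", "NESI"), ("NESI", "ESI")]
    task_set_orders = [("a", "b"), ("b", "a")]

    # Crossing the two counterbalancing factors yields the four conditions,
    # each assigned to two participants (eight participants in total).
    conditions = []
    for (i1, i2), (t1, t2) in product(interface_orders, task_set_orders):
        conditions.append(f"{i1}{t1}-{i2}{t2}")

    print(conditions)
    # -> ['ESIa-NESIb', 'ESIb-NESIa', 'NESIa-ESIb', 'NESIb-ESIa']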
The famous people used were selected from Time magazine's "TIME 100 - People of the Century," [59] and People weekly's "25 Amazing Years" [48]. The famous places were determined by an informal survey of eight people asking them to list famous or well known places. The image and text fields of all elements in these databases are shown in Appendix B. The sound clips were recordings of the text read aloud. At the start of the session, participants were given the opportunity to go through the databases and eliminate up to five unfamiliar faces and five unfamiliar places. No participant selected the maximum five unfamiliar entries for exclusion in either category, and in most cases, no more than one entry was eliminated. An application, shown in Figure 4.2, was developed to allow participants to cycle through the triplets and mark each as either familiar  or unfamiliar.  For most of our partic-  ipants, this was their first experience with the iPAQ. As such, this step also served as an introduction to the iPAQ, and to using the pen for selection and navigation. Participants were given as much assistance as needed by the researcher. Each triplet was shown on a page by itself, with the image, sound, and text forms available to the user. By default the triplets were unmarked; the user marked them using the familiar  and unfamiliar  became bold as shown for familiar  buttons.  Once marked the button text  in Figure 4.2. The arrowed buttons were used  to advance to the next and return to the previous item in the database. Once all items in both databases were marked, the participant's planner was populated with appointments using randomly selected triplets from the participant's familiar  set.  That is, the time and date of initial appointments were predetermined and fixed  44  for each task set; however, the people and places used for these appointments were randomly selected from those familiar to the participant. The people and places in the task sets were also selected in this manner.  14 triplets left to classify 1 Classified Familar of 10 required  Figure 4.2: Screen-capture of the application used to eliminate triplets from the face and place databases.  The ten tasks were broken into three primary categories: retrieval, creation, and modification. In addition there was one compound task, given last, where the participant was asked to count the number of appointments matching a specific criterion over a period of time (e.g., the number of appointments with Marilyn Monroe in the month of August). For each of the three task categories, the participant was first given a demonstration of the task by the researcher and then given three similar tasks to perform. For the compound task, no demonstration was given, as this task built on previously demonstrated skills.  45  The first two tasks in each category were presented in written form, but read aloud if necessary. The second task was considered to be a more reliable indication of the participant's ability to complete the task, as any misunderstandings could be clarified during the first task. The third task was given verbally, with written cues if necessary, and was designed to evaluate the participant's ability to manage the planner with auditory instructions only.  Given the individual differences in  participants' language abilities, these different task presentations were used in order to evaluate the effect of task presentation on participant response. Figure 4-3 shows an example appointment creation task. 
The first two tasks in each category were presented in written form, but read aloud if necessary. The second task was considered to be a more reliable indication of the participant's ability to complete the task, as any misunderstandings could be clarified during the first task. The third task was given verbally, with written cues if necessary, and was designed to evaluate the participant's ability to manage the planner with auditory instructions only. Given the individual differences in participants' language abilities, these different task presentations were used in order to evaluate the effect of task presentation on participant response. Figure 4.3 shows an example appointment creation task. Pictures were used where possible to highlight information, and text was structured in short segments so as to facilitate reading comprehension.

Figure 4.3: Example of a written task used in the evaluation of ESI Planner. [The task card reads: Create an appointment — Person: Marilyn Monroe; Place: Eiffel Tower; Date: September 14, 2003; Start Time: 4:00pm; End Time: 5:15pm.]

Designing tasks such that the time for the participant to communicate the result would not dominate the task time was particularly challenging. For example, to test appointment retrieval the participant might be asked to find out with whom a particular appointment is scheduled. The desired measure in such a task is the time it takes the participant to determine with whom the appointment is scheduled, and thus should not include the time taken to communicate the result. In typical testing situations, this communication time is negligible and can often be ignored; however, when working with aphasic participants, this time can not only be significant, but can vary significantly among participants and even among tasks for a single participant.

To assess task success we first ensured that the actions made by the participant were reasonable for completing the given task. Continuing with our previous example of locating an appointment and determining with whom it is scheduled, if the participant never successfully navigated to the appointment in question, it was clear that the task was not completed successfully. However, the opposite is not true. That is, even if the participant did correctly locate the desired appointment, it was not necessarily clear that the task was completed successfully, because if the participant could not clearly communicate with whom the appointment was scheduled, then it was not clear that they understood the appointment data contained within it. In those situations where task success was ambiguous, we relied on self-assessment. Prior to starting the tasks, participants were instructed that they did not have to communicate answers, but only had to tell us whether or not they were sufficiently confident in their understanding of the appointment data to hypothetically act on it. So for the previous example, they would not have been asked to read out the person's name, but only to say whether or not they were confident they could pick the right person out of a room full of people.

For testing purposes, a splash screen that hid the planner interface between tasks was added to each of the planners (see Figure 4.4). This screen showed two buttons, an Exit button that exited the planner program, and a start button, labelled either Next Task or Demo (as appropriate), which revealed the planner and started the task timer. Participants were instructed to take time before beginning each task to ensure that they understood the task. When they were ready, they were to begin by tapping the Next Task button. A Done button was added to the planner interfaces for our study, as shown in the upper right-hand corner of both of the screen-captures in Figure 4.1. When participants felt they had completed the task, they were to tap the Done button. This stopped the timer, and hid the planner by returning to the splash screen.

Figure 4.4: Splash screen used to hide the planner interface between tasks.

The ten tasks were given to the participants one at a time. When thirty minutes had expired, the researcher indicated to the participant that it was time to stop and move on to the next part of the study.
At that time, participants were offered a short break including light refreshments of juice and cookies. Video was used to capture verbal interactions between the participant and the researcher, and physical interactions between the participant and the system, including unsuccessful screen taps that could not be captured by the event logger (as the event logger only captured taps which triggered a command). The event logger recorded a listing of all commands issued and task times in a time-stamped file. We secured the iPAQ to the table with velcro for the planner evaluation session as it has a tendency to slide unless held steady. Our primary goal was not to explore or evaluate the general accessibility of the iPAQ, and this allowed our participants to focus on the appointment management tasks. In addition, fixing the position of the iPAQ facilitated video capture of the display. The planner evaluation session concluded with the researcher conducting a semi-structured interview to capture information which included participants' computer experience, daily planner usage (both paper and electronic), and interface preferences (see Appendix C for interview questions).

4.4  Dependent Measures

The following quantitative measures of performance were used for each of the two planners:

• Task time: the sum of all task times
• Tasks correct: the number of tasks completed correctly
• Tasks complete: the total number of tasks completed

The qualitative self-reported measures captured in the interview and used for ranking the two planners were as follows:

• Fastest: which planner the participant felt was fastest to use
• Easiest: which planner the participant felt was easiest to use
• Preferred: which planner the participant preferred overall
• Long term: which planner the participant would prefer to use, if the participant had a longer time to spend learning to use it

4.5  Individual Differences

Participants' language abilities were assessed using the Western Aphasia Battery, a standardized battery that is widely used to assess language impairments in aphasia [32]. Table 4.1 gives the language scores of participants on the seven sub-tests of the battery used in our assessment: Spontaneous Speech - Information Content, Spontaneous Speech - Fluency, Naming, Repetition, Auditory Comprehension, Reading Comprehension, and Written Expression. On the basis of assessment results, participants' abilities in the areas of speech, audition, reading, and writing are described in terms of severity using the following three classifications: mild, moderate, and severe (see Table 4.2). These classifications reflect means of standardized scores in each section (Speech Production, Auditory Comprehension, Reading Comprehension, and Written Expression) of 8.0-10.0, 4.0-7.9, and 0.0-3.9, respectively.
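As a concrete illustration of this classification rule, the short Python sketch below maps a section mean to a severity label using the cut-offs just listed. It is only an illustration of those thresholds, not part of the assessment procedure itself; the function name and example value are hypothetical.

    def classify_severity(section_mean: float) -> str:
        """Map the mean of a participant's standardized WAB section scores
        (each subtest scored out of 10, reported to one decimal place)
        to the severity labels used in Table 4.2."""
        if 8.0 <= section_mean <= 10.0:
            return "mild"
        if 4.0 <= section_mean <= 7.9:
            return "moderate"
        if 0.0 <= section_mean <= 3.9:
            return "severe"
        raise ValueError("section mean must fall between 0.0 and 10.0")

    # Example: a Speech Production mean of 2.5 (as for participant SR in
    # Table 4.1) falls in the severe range.
    print(classify_severity(2.5))  # -> severe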
Table 4.1: Language scores of participants on the Western Aphasia Battery (N = 9)

Subtest (a)                SR    CW (b)  CB    WV    JP    GP    NF    MM    ET
Speech Production
  1a. Info Content (c)     3.0   -       5.0   9.0   8.0   10.0  5.0   9.0   2.0
  1b. Fluency (d)          2.0   -       4.0   5.0   6.0   9.0   2.0   9.0   2.0
  2. Naming                3.9   -       6.5   6.4   8.8   8.1   3.0   8.2   2.0
  3. Repetition            1.2   -       7.3   7.0   8.8   6.8   3.3   9.0   2.3
  Mean                     2.5   -       5.7   6.9   7.9   8.5   3.3   8.8   2.1
Auditory Comprehension     7.2   -       9.8   8.0   9.6   9.6   7.1   9.4   3.6
Reading Comprehension      4.9   -       8.1   9.4   9.4   10.0  5.1   8.2   4.9
Written Expression         1.2   -       3.6   6.0   8.0   9.2   3.5   8.0   4.0

(a) All subtests are scored out of 10
(b) Scores were not available for this participant due to an intervening medical incident
(c) Spontaneous Speech - Information Content
(d) Spontaneous Speech - Fluency

Note that language scores were not available for participant CW as an intervening medical incident occurred after the evaluation session, changing his language abilities such that a valid assessment was no longer possible. For CW, the classifications shown in Table 4.2 were made by a speech-language pathologist familiar with his language skills prior to the intervening incident.

Table 4.2: Speech and language classifications of participants reflecting scores on the Western Aphasia Battery (N = 9)

Measure    SR        CW (a)    CB        WV        JP        GP    NF        MM    ET
Speech     severe    severe    moderate  moderate  moderate  mild  severe    mild  severe
Audition   moderate  moderate  mild      mild      mild      mild  moderate  mild  severe
Reading    moderate  moderate  mild      mild      mild      mild  moderate  mild  moderate
Writing    severe    severe    severe    moderate  mild      mild  severe    mild  moderate

(a) As language scores were not available for CW, classifications were made by a speech-language pathologist familiar with his language skills prior to the intervening medical incident.

4.6  Results

In this section we report on the results of the quantitative and qualitative analyses, and discuss limitations of the ESI Planner design that were uncovered during the evaluation phase.

4.6.1  Quantitative Results

Univariate repeated-measures analyses were performed for the quantitative within-subjects measures: task time, tasks correct, and tasks complete. Each of these measures was considered in combination with the between-subjects factors resulting from the counterbalancing of the interfaces and the isomorphic task sets. Summaries of the results of these analyses are shown in Tables 4.3 to 4.5. In the interest of brevity, main effects from the between-subjects comparisons have not been included here; none of the analyses had significant results for those comparisons.
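For readers who wish to reproduce this style of analysis on similar data, the sketch below shows a simplified version in Python using statsmodels. It is illustrative only: it tests just the within-subjects factor (planner interface), whereas the analyses reported in Tables 4.3 to 4.5 also crossed the two between-subjects counterbalancing factors (interface order and task set order); the column names and values are hypothetical.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # One row per participant per planner condition (hypothetical data layout).
    data = pd.DataFrame({
        "participant":   ["P1", "P1", "P2", "P2", "P3", "P3", "P4", "P4"],
        "interface":     ["ESI", "NESI"] * 4,
        "tasks_correct": [9, 7, 8, 8, 7, 6, 10, 9],
    })

    # Repeated-measures ANOVA on the within-subjects factor (interface).
    # The full analysis in the thesis additionally modelled the between-subjects
    # counterbalancing factors, which AnovaRM does not handle.
    result = AnovaRM(data, depvar="tasks_correct",
                     subject="participant", within=["interface"]).fit()
    print(result)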
Table 4.3: Summary of results of the univariate repeated-measures analysis for task time

Source            SS          df   MS          F      Sig.
Task Time         28985.06    1    28985.06    1.03   0.37
Task Time*I (a)   9850.56     1    9850.56     0.35   0.59
Task Time*T (b)   189007.56   1    189007.56   6.71   0.06
Task Time*I*T     5365.56     1    5365.56     0.19   0.69
Error             112697.75   4    28174.44

(a) I: Counterbalancing Variable—Interface Order (ESI-NESI vs NESI-ESI)
(b) T: Counterbalancing Variable—Task Set Order (ab vs ba)

Table 4.4: Summary of results of the univariate repeated-measures analysis for tasks correct

Source                SS     df   MS     F       Sig.
Tasks Correct         5.06   1    5.06   27.00   0.01
Tasks Correct*I (a)   1.56   1    1.56   8.33    0.04
Tasks Correct*T (b)   0.56   1    0.56   3.00    0.16
Tasks Correct*I*T     0.56   1    0.56   3.00    0.16
Error                 0.75   4    0.19

(a) I: Counterbalancing Variable—Interface Order (ESI-NESI vs NESI-ESI)
(b) T: Counterbalancing Variable—Task Set Order (ab vs ba)

Table 4.5: Summary of results of the univariate repeated-measures analysis for tasks complete

Source                 SS     df   MS     F      Sig.
Tasks Complete         3.06   1    3.06   9.80   0.04
Tasks Complete*I (a)   3.06   1    3.06   9.80   0.04
Tasks Complete*T (b)   3.06   1    3.06   9.80   0.04
Tasks Complete*I*T     3.06   1    3.06   9.80   0.04
Error                  1.25   4    0.31

(a) I: Counterbalancing Variable—Interface Order (ESI-NESI vs NESI-ESI)
(b) T: Counterbalancing Variable—Task Set Order (ab vs ba)

On average, participants spent 17 minutes and 12 seconds doing tasks with ESI Planner, compared to 15 minutes and 47 seconds with NESI Planner. These results include participants who reached the thirty-minute time limit; for these participants, their total task time is based on the tasks they completed within the thirty-minute limit. Although this method of handling the data could potentially skew the completion time averages, an equal number of participants reached the limit in each condition, and thus, this is not a concern here. (In total, five participants reached the limit in at least one condition: four reached the limit with their first planner—two with ESI Planner, and two with NESI Planner—and one participant reached the limit in both conditions.) While the comparison of the task times did not show a statistically significant difference (Table 4.3), it may suggest that ESI Planner takes longer to learn.

Participants did, however, complete significantly more tasks correctly with ESI Planner (Table 4.4), completing on average 7.9 tasks correctly with ESI Planner, and only 6.8 tasks correctly with NESI Planner. Figure 4.5 shows a chart of the individual scores for tasks correct and complete, and reveals that in fact no participant completed more tasks correctly with NESI Planner (two participants correctly completed the same number of tasks with each planner). While this main effect was significant (F(1,4) = 27.00, p < .01), a closer analysis of the data reveals that the difference between planners largely came from participants in the NESI Planner-first ordering (NESI-ESI); i.e., there was a significant interaction effect between the number of tasks completed correctly and the order in which the interfaces were presented to the participant (F(1,4) = 8.33, p < .05). As shown in Figure 4.6, participants who saw NESI Planner first (NESI-ESI) performed better when they subsequently saw ESI Planner, completing on average 7.25 tasks correctly with NESI Planner and 9.00 with ESI Planner. In contrast, participants in the ESI Planner-first ordering (ESI-NESI) did not show this improvement in their second planner condition, completing on average 6.75 tasks correctly with ESI Planner and only 6.25 tasks correctly with NESI Planner. This interaction effect, combined with the feedback given by some of our participants that the NESI Planner interface was less cluttered, might indicate that using NESI Planner first acted as a scaffold for the ESI Planner. Participants may have first learned the flow of control with NESI Planner, and then built on that knowledge to master the extra image and sound functionality of ESI Planner.

For tasks complete (Table 4.5), there was a significant three-way interaction effect between tasks complete, interface order, and task set order (F(1,4) = 9.8, p < .05). A graph of this interaction is shown in Figure 4.7, revealing that participants in
three of the four interface/task set conditions completed on average the same number of tasks with each planner, while participants in the fourth condition (NESIb-ESIa) improved substantially, completing 3 to 4 more tasks with ESI Planner than with NESI Planner. While there was also a main effect of interface on tasks complete, this effect was due to the large variation in the one condition. Given the small number of participants in each condition (N = 2), it is not clear how to interpret this result, although it is likely this finding is predominantly due to individual differences.

Figure 4.5: Tasks completed with each interface, showing both the tasks completed correctly and incorrectly (N = 8).

Figure 4.6: Interaction between tasks correct and interface ordering (N = 8).

Figure 4.7: Interaction between tasks complete and the interface and task set orderings (N = 8). [Average tasks complete per planner by condition: ESIa-NESIb, 8.00 (NESI) / 8.00 (ESI); ESIb-NESIa, 7.00 / 7.00; NESIa-ESIb, 10.00 / 10.00; NESIb-ESIa, 6.50 / 10.00.]

We had intended to further analyze the results to see if the format in which a task was presented (written vs. auditory) had an effect on task success. However, as each of our participants had reading and auditory comprehension impairments of equal severity, no conclusions could be drawn from this data. Our belief is that presentation format needs to be customized to ensure that each participant's strengths are leveraged, but further studies with participants specifically chosen to differ in reading and auditory comprehension would be needed in order to verify this.

One observation that did emerge from the use of written and verbal presentation formats was the latter's effect on feature use. In general, the sound functionality of ESI Planner was seldom used; in fact, only two participants used it (SR—once, CW—three times). However, the one occasion SR used it was during a verbal task, and although CW did not use it specifically for tasks presented orally, it was clear he used it to match the sound played to the sound read to him by the researcher (recall that written tasks were read aloud to the participant, if necessary; see Section 4.3). These two uses of the sound functionality suggest that it can be used to match an auditory input with the correct triplet. This finding, albeit weak, is noteworthy considering that in real situations, written cues are not always readily available (as
In general, participants were evenly divided in their preferences with five participants preferring ESI Planner overall, and three preferring NESI Planner. However, when language assessments are taken into account, trends emerge that suggest As ET did not complete the evaluation sessions, preference data is not available for this participant. 4  56  Table 4.6: Self-reported planner preferences (N = 8)  Measure Fastest Easiest Preferred Long Term  SR  CW  CB  WV  JP  GP  NF  MM  neither neither ESI ESI  ESI ESI ESI ESI  ESI ESI ESI NESI  NESI NESI NESI NESI  NESI neither NESI neither  NESI NESI NESI ESI  ESI NESI ESI ESI  ESI ESI ESI ESI  a higher preference for ESI Planner for certain types of users. The three participants who consistently ranked NESI Planner higher had mostly mild to moderate deficits (WV, JP, GP). Of the five participants who preferred ESI Planner, four had at least one severe classification (SR, CW, CB, NF), and three (SR, CB, NF) were moderate or severe in all classifications. Figure 4.8 highlights this trend showing each participant's language classifications and overall planner preference with participants ordered by severity. The four participants to the left of the break (the more severely impaired individuals) consistently preferred ESI Planner, whereas to the right of the break, the preferences were mixed. Although we cannot conclusively say anything about the influence of any one of the language ratings, we believe reading is most likely the chief factor influencing this split. For participants with only mild reading deficits, navigating NESI Planner was relatively easy and personal preferences dominated; however, when reading is at least moderately impaired , the image and 5  sound support ESI Planner provides became important for task success. A final observation refers to participant E T , who was unable to complete the study. E T had a severe auditory comprehension deficit, whereas all other participants had only a mild or moderate deficit in that category. This, combined with his severe or moderate deficits in all other areas, may have made it difficult for E T to communicate with the researcher and to understand the tasks presented. Based on a conversation with ET's caregiver after the session, we strongly suspect that his difficulties with the experimental evaluation do not reflect his actual ability to learn or use the planner interfaces, as he has had success with other computer-related There were no participants in this study with severe reading impairments; we predict that for such individuals, preferences would be the same or stronger as for those with moderate reading impairments. 5  57  Speech and Language Classifications and Planner Preferences by Participant Oral Modalities  Planner Preference  Written Modalities  | Speech I Auditory  ESI  ESI  ESI  ESI  SR  CW  NF CB  | Reading R l Writing  NESI  NESI  WV  JP  NESI  ESI  Level of J moderate impairment mild  More severely impaired  CP MM  More mildly impaired  Participants ordered by severity  Figure 4.8: Speech and language classifications in each modality and overall planner preferences by participant (TV = 8).  activities at home. Rather, we believe E T required more time to acclimatize himself to the device and the tasks, and more support from the researchers. This finding highlights the limitations of experimental evaluation and reinforces the need for alternative evaluation techniques.  
4.6.3 Implications for the Design of ESI Planner

Design flaws were uncovered during the formal evaluation, although further studies are needed to determine whether they are aphasia-specific. Many study participants experienced problems with spin-button controls, such as the one used for increasing and decreasing the minutes during time selection in Figure 3.8(b). Seemingly confused by the two options, these users would alternate between the up and down arrows, never advancing to the target. Also, multiple participants demonstrated hesitance in exploring the interface. This obstacle could perhaps be overcome by clearly indicating the effect of an action. For example, adding the next day's date to the date-forward button could indicate more clearly the effect of tapping that button.

4.6.4 Ongoing Work

The results presented in this section reflect a preliminary analysis of the data. Ongoing analyses of the video data and log files are under way. In particular, we hope to gain further insight into the interaction difficulties some of our participants experienced while working with the iPAQ. In some cases, it appeared that participants knew what they wanted to do, but had difficulty interacting with the iPAQ, possibly because they were not pressing hard enough on the display or because they were inadvertently touching part of the screen with their finger or wrist.

In addition, we are looking at the interaction sequences used by the participants. Sutcliffe, Fickas, Sohlberg, and Ehlhardt [57], in their comparison of email interfaces, categorized user behavior and interactions, and used the results to construct behavior networks for each task. Using a similar approach, we will be constructing networks for our tasks. We will use these networks to look at both successful and unsuccessful strategies in the hope of gaining insight into why task failures occurred. The results of such an analysis would then be used to improve future versions of the interface. Finally, we will review the video data to examine participants' help-seeking strategies and the effect of their strategy on task success. With the results of this analysis, we will explore how our methodology can be improved to encourage participants to use the more successful strategies.

Chapter 5

Implications

Throughout this research, we encountered several challenges, many of which are likely to be relevant to others engaging in research with special populations. From these challenges, various guidelines have emerged, which we have divided into the following two categories: guidelines for working with special populations, and guidelines for designing accessible handheld technology. In this chapter, we discuss these guidelines, relating them to their founding challenges.

5.1 Guidelines for Working with Special Populations

Working with special populations presents many obstacles to standard scientific methodology. In our work with aphasic persons, the most notable challenges included interpreting data from a population with large individual differences, recruiting sufficient participants, addressing the mobility and transportation issues of physically and cognitively impaired individuals, and communicating effectively with participants with speech and language impairments. From these challenges, methodological insights emerged, which are presented below.
Guideline One: Assess abilities through standardized tests

Speech and language assessments were used as part of the experimental evaluation methodology in our research. These assessments proved invaluable, as they provided insights into the results that would not otherwise have been apparent. For example, we would not have been able to infer reasons for the diverse planner preferences expressed by our participants had data relevant to their language and communication skills not been available. While the use of standardized tests is a generally accepted and encouraged practice in HCI, it was particularly important in this research, where large individual differences complicated analysis of the results. It is important to note that these assessments provided more than mere confirmation of the researchers' informal intuition. In many situations, the researchers were surprised by the results of the assessment; many aphasic individuals have developed sufficient compensatory skills to mask the extent of their deficits. As such, when working with diverse user populations, formal assessments should be used, whenever possible, to give an unbiased assessment of the abilities of each participant.

Guideline Two: Connect with existing groups and organizations

The cooperation and assistance of aphasia and stroke clubs aided immeasurably in the execution of this research. These clubs facilitated recruitment by helping us contact participants, and mitigated transportation needs by allowing us to use their facilities as a common place where we could meet with several participants in one visit. Nevertheless, this was not a perfect solution. Performing the research off-site meant giving up many of the benefits of a controlled laboratory. Aphasia centers and stroke clubs generally operate with modest resources, and thus do not have spare space available to lend out. The space offered to us was typically the personal office of one of the club's facilitators. While the clubs' facilitators were sensitive to the needs of the researchers, they also had their own jobs to perform, which ultimately led to several unavoidable disturbances throughout the course of the study.

Furthermore, although the organizers were supportive of the research, their understanding of it was influenced by their own perceptions and agendas. Specifically, many facilitators, with a genuine intent to help, repeatedly misrepresented our research as being rehabilitative, despite clear explanations to the contrary. While this may have helped entice some individuals to participate, it ultimately led to awkwardness and misgivings when these same individuals came to realize the research did not match their expectations. It is therefore necessary for researchers to exercise caution when using intermediaries to help contact and recruit participants. It is important to confirm, not only with each participant, but also with any caregivers or family members involved (who may also be donating time and energy), that the intent and purpose of the research is understood, and for everyone to share their expectations for the research and their motives for participating.

Guideline Three: Gain experience with the target population

By far, the most difficult challenge in this work was communicating effectively with participants.
Most of the researchers on our team had little or no experience with aphasia prior to this project, and although the guidance and expertise of the team's speech-language pathologist were extremely helpful, gaining the necessary practical experience was, nonetheless, difficult. Many communication strategies are available to facilitate communication with aphasic individuals [52], and it was important for the researchers to learn and practice them prior to performing the research. Moreover, extra time had to be allotted to ensure participants were given sufficient opportunity to fully understand the tasks and ask questions, and instructions needed to be carefully phrased to facilitate understanding. While this was a challenge throughout all phases of our research, it was particularly significant during the experimental evaluation phase, where timing was important. In that stage, the sensitivity developed by the researchers during the participatory design phase was critical to minimizing the effect of communication barriers on the research outcome.

Guideline Four: Use a mix of advocate and target users

Having Skip on the participatory design team allowed us to gain a deeper understanding of our population's needs. Skip was able to articulate those needs more clearly than most actual target users. Although Skip's relatively mild impairments made him an unlikely candidate for using ESI Planner, his ability to communicate, albeit non-verbally, combined with his previous life experience made him an excellent advocate user. Using a mix of both advocate users and actual target users to represent the target population can help build a better picture of the requirements when working with aphasic individuals and possibly other special populations.

5.2 Guidelines for Accessible Handheld Technology

Modifications to increase the accessibility of the keyboard date back to the 1980s; for example, Buxton, Foulds, Rosen, Scadden, and Shein, in their 1986 review of interfacing devices for handicapped users [7], gave the following extensive list:

    Keyboards may be modified to compensate for poor finger control through: attachment of keyboard guards, replacement of keys such as SHIFT and CONTROL with latching type keys; disengagement of the auto repeat function of keys and the inclusion of a key delay such that the key must be held for some time before being accepted to reduce accidental selections. Furthermore, keyboards may be redefined and multiple keystrokes reduced to a single macro through background software to facilitate access with a single finger and head-mounted or mouth-held pointers. Expanded and miniature keyboards and touch panels are now available for persons with poor targeting ability or restricted ranges of movements. One-handed chordic keyboards may be used effectively by persons having one functional hand or by blind persons since the fingers never have to leave the keys. [pg. 2]

Similar accessibility options are not yet available for handheld devices.
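To make the first of these keyboard accommodations concrete, the sketch below shows one way a latching ("sticky") modifier key could work in software, so that the modifier and the letter need not be held down together. This is a minimal Python illustration only; it is not drawn from the thesis, from ESI Planner, or from any particular operating system, and the event names and single-key latch behavior are our own assumptions.

```python
# Hypothetical sketch, not from any real keyboard driver: a latching SHIFT key.
class StickyKeyFilter:
    def __init__(self):
        self.shift_latched = False          # True after SHIFT is tapped once

    def key_pressed(self, key):
        """Return the character to emit, or None if the key only changed state."""
        if key == "SHIFT":
            self.shift_latched = not self.shift_latched   # latch / unlatch
            return None
        char = key.upper() if self.shift_latched else key.lower()
        self.shift_latched = False          # the latch applies to one key only
        return char

# Example: tapping SHIFT, then "a", then "b" yields "A" followed by "b".
f = StickyKeyFilter()
print(f.key_pressed("SHIFT"), f.key_pressed("a"), f.key_pressed("b"))  # None A b
```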
In particular, we found two general areas where users experienced problems, which we have categorized as follows: accessibility issues with the tap interaction, and accessibility issues with the physical form factor.

5.2.1 Accessibility issues with the tap interaction

The tap interaction of the handheld device was problematic for many users with motor control impairments. While not strictly associated with aphasia, motor impairments are common in individuals who have had a stroke. Many participants clearly preferred to use their fingers to interact with the device, despite the loss in precision they incurred by giving up the stylus. Hine and Arnott [24], in discussing their experience developing a PDA-based multi-media communication service, made a similar observation. They noted that users tended to prefer to use their finger and concentrate on a pointing task rather than to use a stylus and concentrate on a combined pointing and gripping task. However, allowing finger-based interaction places substantial constraints on the minimum target size, which, given the limited display size of PDA devices, will require significant design tradeoffs to be made. As such, designing accessible applications to support finger-based interaction may not be the best solution, and further investigation into the development of alternative interaction techniques should be considered. One example of a novel interaction technique developed to meet the needs of users with motor impairments is EdgeWrite, a unistroke text entry method for handheld devices, which guides users through character entry by using physical edges for support [65].

Many of our participants found the lack of physical support provided by the pen to be troublesome. That is, they had difficulty with the combination of moving the pen in two dimensions to align the stylus with the target, while controlling movement in the third dimension to avoid tapping prematurely. In contrast, the point-and-click interaction of a mouse avoids this by allowing the user to rest his/her arm against the table during targeting. With a mouse, arm movement is used for targeting (pointing) and finger movement is used for selection (clicking), whereas with a pen, arm movement is used both for targeting (pointing) and selection (tapping). Some participants worked around this difficulty by using the inactive space around targets as a landing zone for the stylus, which, in effect, decoupled pointing from tapping. These participants would first touch the stylus to an inactive region of the screen, and then, using the resistance of the screen for support, drag the stylus to the desired target. Of course, this strategy is only possible if sufficient inactive space around targets is available, and other options may exist. For example, it may be possible to introduce a tap delay such that the tap must be held for some minimum period of time before being accepted. This could potentially help with targeting difficulties by reducing accidental selections, and would have the added benefit that screen real estate would not have to be allocated to the provision of inactive space.
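As a concrete illustration of the tap-delay idea, the following minimal sketch accepts a tap only when the stylus (or finger) stays down on the same target for a minimum dwell time. It is a hypothetical Python sketch of our own, not part of ESI Planner or of the Pocket PC APIs; the 0.3-second threshold and the pen_down/pen_up hooks are assumptions chosen for the example.

```python
import time

# Hypothetical dwell-based tap filter: brief or slipped taps are rejected,
# which reduces accidental selections without reserving inactive screen space.
class DwellTapFilter:
    def __init__(self, dwell_time=0.3):
        self.dwell_time = dwell_time    # seconds the tap must be held
        self.down_target = None
        self.down_time = None

    def pen_down(self, target):
        self.down_target = target
        self.down_time = time.monotonic()

    def pen_up(self, target):
        """Return the tapped target if the tap is accepted, otherwise None."""
        if self.down_time is None:
            return None
        held = time.monotonic() - self.down_time
        self.down_time = None
        if target == self.down_target and held >= self.dwell_time:
            return target               # accepted: deliver the tap to the UI
        return None                     # rejected: too brief, or slid off target
```

A filter of this kind could sit between the touch screen driver and the application, and the dwell threshold itself would be a natural candidate for the kind of sensitivity customization discussed next.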
A final problem encountered with the tap interaction was sensitivity. Motor control limitations caused some individuals to tap repeatedly, causing unexpected behavior in the system. Unlike on the desktop, where accessibility options allow users to customize the sensitivity of key input, there is no such functionality in the Pocket PC operating system. Such functionality is required if these systems are to be used for assistive technologies.

5.2.2 Accessibility issues with the physical form factor

The physical form factor of the iPAQ also caused problems for some of our participants, who had difficulty holding on to the device due to limited use of one hand. Others found the location of the physical buttons on the device to be problematic, as they would accidentally press them during operation. In addition to causing physical problems, the buttons were also cognitively confusing to some of our participants, as it was not obvious to them what functionality was assigned to the buttons, nor why the functionality chosen was picked over other possibilities. Both the motor-control-related limitations and the cognitive limitations of the physical form of the device are now described.

Many individuals who have had a stroke suffer from weakness or total paralysis on one side of the body, making it difficult or impossible to operate the iPAQ while holding it in one hand. As such, many of our participants needed to be able to place it on a table to use it; however, the iPAQ was not designed to be used in that way and has a tendency to slide across the table unless held steady. For the purposes of our experimental evaluation, our primary goal was not to explore the general accessibility of the iPAQ, and so we mitigated this issue by using Velcro to fix the iPAQ firmly to the table surface. This helped users to focus on the appointment management tasks rather than on holding the iPAQ, and also facilitated video capture of the screen. However, for handheld devices to be accessible, they will need to be designed with non-skid surfaces that accommodate one-handed operation.

Moreover, on a few occasions, users accidentally pressed the buttons on the bottom of the device while trying to grasp it. Although this seldom happened during the evaluation, we suspect that it would have happened more frequently had we not fixed the device to the table for that phase. The default for the buttons is to launch other applications, and so accidental button presses can be particularly confusing. The buttons should be relocated to a position on the device where they are less likely to be pressed accidentally. We suspect that relocating them from the bottom-front of the device to the top-front will most likely be the best solution, but further investigation is required.

Finally, the cognitive implications of having a few dedicated buttons need to be considered. Giving a few functions dedicated access endows those functions with added importance. The buttons have an inherent power that needs to be assigned carefully. Recall that Skip tended to use these buttons whenever he got stuck in an application (Section 3.2.1). It is thus likely that the best use for buttons will be to provide error recovery functionality to help users return to a safe and familiar state when confused. Although the participants in the evaluation did not tend to use the buttons, we suspect that this was because the structure of the evaluation implicitly discouraged participants from exploring the device, and moreover, that those participants were, in general, less willing to explore than Skip.
We predict that in a less structured setting, we would see more participants experimenting with the buttons.

Chapter 6

Conclusions and Future Work

The research presented in this thesis reports on our experience designing and developing a tri-modal daily planner for people with aphasia. Our high-level goal for the Aphasia Project is to gain insight into the process of effectively designing accessible and adoptable technology for people with aphasia; the preliminary work reported here provides a first step in that direction. Most research to date has focused on either rehabilitative applications or technology to support basic language functions. Our research addresses a substantial limitation of previous work in that we target the high-level goals and practical real-life needs of aphasic individuals that occur after hospital and therapy discharge.

6.1 Satisfaction of Thesis Goals

Three main objectives were presented for the documented research: (1) to identify specific needs of aphasic individuals that could be met with new technology, (2) to develop an application to meet one of those identified needs, and (3) to identify where traditional user-centered design methodology and experimental evaluation are inadequate for effectively designing adoptable technology for people with aphasia. In the following sections we address our fulfillment of each of these goals.

6.1.1 Identification of Specific Needs

Our first objective for this work was to identify areas where new technology could be developed to assist aphasic individuals in daily living activities. Through brainstorming sessions with Anita, we identified the following preliminary set of needs:

1. A word dictionary that would help with word-finding problems.
2. An electronic recipe book that would facilitate the comprehension of instruction and ingredient information.
3. A daily planner that would facilitate appointment management tasks.
4. A conversation primer that would support individuals in preplanning conversations.
5. A personal history recorder that would enable aphasic individuals to record and share their life story.

The use of triplets of images, text, and sound was seen as an integral factor in each of the above proposed technologies. We believe that the seamless integration of tri-modal functionality will be a key characteristic of any application designed to support aphasic individuals. It is important to note that the above list represents only the needs of one person, and thus should not be interpreted as representative of the needs of all aphasic individuals; however, it was not our intent to create an exhaustive listing of needs, but rather to generate a small number of ideas to drive further investigation. In future work, it would be interesting to expand upon this list by brainstorming with a larger group of aphasic individuals.

6.1.2 An Application to Support Daily Living Activities

ESI Planner was developed with input from aphasic participants to fulfill our second goal: to develop an adoptable and usable application to support an identified need of aphasic individuals. While more evaluation is required, our two-phase approach gives us confidence that we are on the right path. ESI Planner was designed to support individuals in managing their daily schedules. It uses a simplified design that incorporates triplets of images, sound, and text to redundantly encode appointment data for the user.
Our laboratory evaluation revealed that ESI Planner significantly improved the ability to correctly manage appointment data, at least for participants in the NESI Planner-first ordering. We attributed this improvement to a scaffolding effect: participants who saw NESI Planner first benefitted from learning a simpler interface, then building on that base knowledge when learning the more complex ESI Planner interface. In contrast, participants who saw ESI Planner first were disadvantaged: they had to master both the navigational functionality common to ESI and NESI Planner and the image and sound functionality unique to ESI Planner, within the time frame of a single thirty-minute block. Thus, they might not have brought as solid a mastery of the navigational functionality into their second block, when sound and image support were removed.

Our qualitative self-reported measures revealed that for participants with moderate to severe impairments, there was a strong preference for the tri-modal design, and even a few of the more mildly impaired participants preferred it to the text-only design. Those who did prefer the text-only planner attributed their preference to the simplicity of having fewer items on the screen. That finding clearly points to the need for customization. Determining how customization mechanisms should be incorporated into assistive technology is a key area for further investigation. In particular, determining how customization can be added without complicating the system design will be especially challenging. Other questions include determining which aspects should be customizable, and how much control the user should be given or expected to manage. For example, should it be assumed that the customization will always be done by a caregiver or therapist? Or will it be necessary that individuals themselves be able to manage the customization? If so, should adaptive support be included to help guide the user to the appropriate customization options? An example of adaptive support for customization for accessibility is in the Windows XP operating system: if the user holds down the right shift key for eight consecutive seconds, the operating system infers that the user might be having difficulty with the keyboard, and asks the user if he/she would like to activate the relevant accessibility options.

Future work will also need to address the question of how best to input and organize the triplets within the triplet selection tool. For the purposes of the research presented here, we used a static unordered list. Obviously, a key requirement of an actual system will be functionality to support individuals in customizing and updating their triplet databases. This will require the development of an easy-to-use interface for capturing and organizing images and sound clips. While using an unordered list presented few, if any, usability problems during our evaluation, our lists contained only ten items, which is likely unrealistically small. One possible improvement would be to continuously reorder the list based on the individual's usage (i.e., if the user typically meets with Marilyn Monroe at 3:00pm, the system might place Marilyn Monroe at the top of the list for all 3:00pm appointments). However, reordering might instead confuse users if the order of the elements is useful for identifying triplets. As such, further investigation looking specifically at this question is required.
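As a sketch of the usage-based reordering idea described above (this was not implemented in ESI Planner, whose lists were static and unordered), the following Python fragment keeps a per-hour count of triplet use and floats frequently used triplets to the top for that hour, while ties retain the familiar base ordering. All names and data structures here are illustrative assumptions.

```python
from collections import Counter, defaultdict

class TripletList:
    def __init__(self, labels):
        self.labels = list(labels)            # original, familiar ordering
        self.usage = defaultdict(Counter)     # hour of day -> Counter of labels

    def record_use(self, hour, label):
        """Call when an appointment is created, e.g. record_use(15, "Marilyn Monroe")."""
        self.usage[hour][label] += 1

    def ordered_for(self, hour):
        """Triplets for this hour, most frequently used first; ties keep base order."""
        counts = self.usage[hour]
        return sorted(self.labels,
                      key=lambda t: (-counts[t], self.labels.index(t)))

# Example: after two 3:00pm appointments with Marilyn Monroe, she tops the
# 3:00pm list, while the 9:00am list keeps its original order.
tl = TripletList(["George Bush", "Marilyn Monroe", "Albert Einstein"])
tl.record_use(15, "Marilyn Monroe")
tl.record_use(15, "Marilyn Monroe")
print(tl.ordered_for(15))
print(tl.ordered_for(9))
```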
6.1.3 Methodological Adaptations

To satisfy our final goal, we examined the process used to design ESI Planner, looking specifically for implications for the development of accessible and adoptable technology for people with aphasia. In our process, participatory design followed by a formal evaluation was central to ensuring that the resulting technology was accessible and adoptable by the target population. We encountered many challenges throughout this research, including both challenges to working with special populations and challenges to using a standard handheld device as a platform for developing assistive technology.

From those challenges we developed a set of guidelines, which are relevant to others engaging in research with special populations. These guidelines include suggestions for overcoming communication barriers when working with participants with speech and language impairments; ways in which participatory design methodology can be modified to rely less on protocols such as Think Aloud; and strategies for overcoming hurdles inherent to working with special populations (e.g., locating sufficient participants and managing large individual differences). Also included are directions for future work on improving the accessibility of handheld technology. These refinements include improvements to the physical form factor to accommodate individuals with motor control limitations, and improvements to the usability of the tap interaction.

There are still many unanswered questions with regard to the development of technology for people with physical and cognitive impairments. While some work has been done for people with severe physical impairments, very little has been done for people with a combination of physical and cognitive impairments. This is most likely because stroke survivors with only motor deficits can manage somewhat satisfactorily with standard design options. However, when impairments are combined, the tolerance for frustration is greatly decreased, prohibiting the use of standard designs. Determining design limits for people with these types of deficits is an area that remains to be addressed. Constraints such as the minimum target size for widgets, the maximum manageable "clutter" on the screen, the easiest navigation patterns, and so on, are all unknowns.

6.2 Future Work

It is not uncommon for assistive technologies to fail to be adopted even after demonstrating success in clinical or laboratory settings [6]. The participatory process used to design ESI Planner, together with our lab-style evaluation, gives us confidence that we are on the right path to achieving an adoptable technology; however, more evaluation is required. The next logical step for this research would be to work towards a field evaluation, incorporating the findings from our laboratory study to first improve the design of ESI Planner. Ideally, a longitudinal field study would involve a set of participants, both aphasic and non-aphasic, using ESI Planner over a period of several months. Beyond adoptability, the goal would be to determine the extent to which the design generalizes to other populations, and to ascertain which aspects of the design are specific to aphasic users. Some interesting non-aphasic populations to include would be the elderly and the young, along with so-called "average" users.
We hypothesize that all populations would be able to effectively use the planner, but for so called "average" users, participants would experience greater frustration by the limitations imposed by the feature-reduced design and would be less satisfied overall with the application. If our hypothesis proves true, it would raise the question of how to effectively support users with different preferences and needs within a single application. Within the ESI Planner, this could be explored with further studies aimed at addressing how customization can be incorporated into universal design. In general, the problem of accommodating individual differences is not well understood in the field of Human-Computer Interaction. Here, it is particularly challenging given the large variance in our target populations. One approach that has been proposed is to support individual differences via a layering of interfaces [38]. These interfaces would allow users to choose between functionality and ease of use to suit their individual needs. This may prove difficult, however, as adding the functionality to move between interfaces, in itself adds complexity. Addressing this complexity would be a intrinsic challenge of the proposed work.  72  Bibliography [1] Abascal, J., Arrue, M., Garay, N., & Tomas, J. (2003). USERfit Tool. A tool to facilitate design for all. In N. Carbonell & C. Stephanidis (Eds.), Lecture Notes in Computer Science: Vol. 2615. User Interfaces for All, (pp. 141-151). Springer-Verlag Heidelberg. [2] The Aphasia Institute. (2003). The Aphasia Institute: What is Aphasia? U R L http://www.aphasia.ca/about/whatis.shtml. Available Online February 2004. [3] Bartram, L . , Ho, A., Dill, J., & Henigman, F . (1995). The continuous zoom: a constrained fisheye technique for viewing and navigating large information spaces. In Proceedings of the ACM symposium on User interface and software technology, (pp. 207-215). A C M Press. [4] Beaudouin, M . , & Mackay, W. (2003). Prototyping tools and techniques. In J. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook, (pp. 1006-1031). Mahwah, NJ: Lawrence Eribaum Associates. [5] Bederson, B. B., Clamage, A., Czerwinski, M . P., & Robertson, G. G. (2004). Datelens: A fisheye calendar interface for pdas. ACM Transactions on Computer-Human Interaction, 11 {I), 90-119. [6] Beukelman, D. R., & Mirenda, P. (Eds.) (1998). Augmentative and alternative communication: Management of severe communication disorders in children and adults. (2nd ed.) Baltimore, Maryland: Paul H Brooks Publishing Co. [7] Buxton, W., Foulds, R., Rosen, M., Scadden, L . , & Shein, F. (1986). Human interface design and the handicapped user. In Proceedings of the SIGCHI conference on Human factors in computing systems, (pp. 291-297). A C M Press. [8] Carmien, S. (2002). MAPS: PDA scaffolding for independence for persons with cognitive impairments. Paper presented ath the Human-computer interaction consortium (HCIC '02), available online at http://www.es.Colorado. edu/~13d/clever/projects/maps/carmien_HCIC2002.pdf. [9] Cherney, L., & Robey, R. (2002). Aphasia treatment: recovery, prognosis, and clinical effectiveness. In R. Chapey (Ed.), Language intervention strategies in adult aphasia, (4th ed., pp. 148-172). Baltimore, MD: Lippincott Williams, and Wilkins. 73  [10] Clement, A., & Van den Besselaar, P. (1993). A retrospective look at PD projects. Communications of the ACM, 36(6), 29-37. [11] Cole, E . (1999). Cognitive prosthetics: An overview to a method of treatment. 
NeuroRehabilitation, 12(1), 39-51. [12] Cole, E . , & Matthews, M . (1999). Cognitive prosthetics and telerehabilitation: Approaches for the rehabilitation of mild brain injuries. In Proceedings of Basil Therapy Congress, (pp. 111-120). Basel, Switzerland. [13] Duffy, J. R. (1995). Motor speech disorders: Substrates, differential diagnosis, and management. In Defining, Understanding, and Categorizing Motor Speech Disorders, (1st ed., pp. 3-13). St Louis, MO: Mosby. [14] Dynavox Systems. (2004). Dynavox. Available Online March 2004.  U R L http://www.dynavoxsys.com.  [15] Edwards, A., Edwards, A., & Mynatt, B. (1993). Designing for users with special needs. In Proceedings of the SIGCHI conference on Human factors in computing systems, (tutorial). [16] Ehn, P. (1992). Scandinavian Design: On participation and skill. In P. Adler & T. Winograd (Eds.), Usability: Turning technologies into tools, (pp. 96-132). New York: Oxford University. [17] Enkidu Research. (2004). Enkidu Research. U R L http://www.enkidu.net/ enkidu.html. Available Online March 2004. [18] Fischer, G., & Sullivan Jr., J. F. (2002). Human centered public transportation systems for persons with cognitve disabilities: Challenges and insights for participatory design. In Proceedings of the Participatory design conference, (pp. 194-198). CPSR. [19] Furnas, G. W. (1986). Generalized fisheye views. In Proceedings of the SIGCHI conference on Human factors in computing systems, (pp. 16-23). A C M Press. [20] Goodglass, H., Kaplan, E . , & Barresi, B. (2001). The assessment of aphasia and related disorders. (3rd ed.) Philadelphia, PA: Lippincott Williams, and Wilkins. [21] Gregor, P., Newell, A. F., & Zajicek, M . (2002). Designing for dynamic diversity: interfaces for older people. In Proceedings of the ACM SIGCAPH conference on Assistive technologies, (pp. 151-156). A C M Press. [22] Gus Communications Inc. (2004). Gus Communications. U R L http://www. gusinc.com. Available Online February 2004, Copyright 2001. [23] Hine, N., & Arnott, J. (2002). A multimedia social interaction service for inclusive community living: Initial user trials. Universal Access in the Information Society, 2(1), 8-17. 74  [24] Hine, N., Arnott, J., & Smith, D. (2003). Design issues encountered in the development of a mobile multimedia augmentative communication service. Universal Access in the Information Society, 2(3), 255-264. [25] Hine, N., & Arnott, J. L. (2002). Assistive social interaction for non-speaking people living in the community. In Proceedings of the ACM SIGCAPH conference on Assistive technologies, (pp. 162-169). A C M Press. [26] Hirsch, T., Forlizzi, J., Hyder, E., Goetz, J., Kurtz, C , & Stroback, J. (2000). The ELDer project: social, emotional, and environmental factors in the design of eldercare technologies. In Proceedings of the ACM conference on Universal usability, (pp. 72-79). A C M Press. [27] Holtzblatt, K. (2003). Contextual design. In J. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook, (pp. 941-963). Mahwah, NJ: Lawrence Erlbaum Associates. [28] Hux, K., Manasse, N., Weiss, A., & Beukelman, D. (2001). Augmentative and alternative communication for persons with aphasia. In R. Chapey (Ed.), Langauge Intervention Strategies in Adult Aphasia, (4th ed., pp. 675-689). Williams and Wilkins. [29] Katz, R. (2001). Computer applications in aphasia treatment. In R. Chapey (Ed.), Langauge Intervention Strategies in Adult Aphasia, (4th ed., pp. 718741). Williams and Wilkins. [30] Keates, S., Clarkson, P. 
J., Harrison, L.-A., & Robinson, P. (2000). Towards a practical inclusive design approach. In Proceedings of the ACM conference on Universal usability, (pp. 45-52). A C M Press. * [31] Kensing, F., & Munk-Madsen, A. (1993). Structure in the tookbox. Communications of the ACM, 36(6), 78-85. [32] Kertesz, A. (1982). Western Aphasia Battery. New York: Grune and Stratton. [33] Klamma, R., Spaniol, M., & Jarke, M . (2003). Virtual communities: Analysis and design support. In CAiSE '03 Forum Information Systems for a Connected Society, (pp. 113-116). [34] Kuhn, S., & Winograd, T. (1996). Profile 14: Participatory design. In Bringing Design to Software. Addison-Wesley. URL http://hci.stanford.edu/bds/ 14-p-partic.html. [35] Lancioni, G., Van den Hof, E . , Furniss, F., O'Reilly, M., & Cunha, B. (1999). Evaluation of a computer-aided system providing pictorial task instructions and prompts to people with severe intellectual disability. Journal of Intellectual Disability Research, 43(1), 61-66.  75  [36] Lancioni, G. E . , O'Reilly, M. F., Seedhouse, P., Fumiss, F., & Cunha, B. (2000). Promoting independent task performance by persons with severe developmental disabilities through a new computer-aided system. Behavior Modification, 24(5), 700-718.  [37] Lingraphicare. (2004). Lingraphicare. URL http://www.lingraphicare.com. Available Online April 2004, Copyright 2004. [38] McGrenere, J . , & Moore, G. (2000). Are we all in the same "bloat"? Proceedings  of Graphics  interface,  In  (pp. 187-196).  [39] Moffatt, K., McGrenere, J., Purves, B., & Klawe, M. (2004). The participatory design of a sound and image enhanced daily planner for people with aphasia. In Proceedings  of the SIGCHI  conference  on Human factors  in computing  systems,  (To appear). Vienna, Austria: A C M Press. [40] Muller, M. J. (2003). Participatory design: The third space in HCI. In J. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook, (pp. 464-481). Mahwah, NJ: Lawrence Erlbaum Associates. [41] Mynatt, E . D., Essa, I., & Rogers, W. (2000). Increasing the opportunities for aging in place. In Proceedings  of the ACM conference  on Universal  usability,  (pp. 65-71). A C M Press. [42] Mynatt, E . D., Rowan, J., Craighill, S., & Jacobs, A. (2001). Digital family portraits: supporting peace of mind for extended family members. In Proceedings of the SIGCHI  conference  on Human  factors  in computing  systems.  Seattle, WA: A C M Press. [43] National Aphasia Institute. (2004). Impact of Aphasia Results  of a Needs Survey.  on Patients  and  Family:  U R L http://www.aphasia.org/NAAimpact.html.  Available Online February 2004, Original publication date 1988. [44] Newell, A., & Gregor, P. (2002). Design for older and disabled people - where do we go from here? Universal  Access in the Information  Society, 2(1), 3-7.  [45] Newell, A . F . , Carmichael, A., Gregor, P., & Aim, N. (2003). Information technology for cognitive support. In J. Jacko & A. Sears (Eds.), The HumanComputer Interaction Handbook, (pp. 464-481). Mahwah, NJ: Lawrence Erlbaum Associates. [46] Newell, A. F., & Gregor, P. (2000). "user sensitive inclusive design" in search of a new paradigm. In Proceedings  of the ACM conference  on Universal  usability,  (pp. 39-44). A C M Press. [47] Ogozalek, V. (1994). A comparison of the use of text and multimedia interfaces to provide information to the elderly. In Proceedings of the SIGCHI conference on Human factors  in computing  systems, (pp. 65-71). A C M Press.  76  [48] People weekly Books (Eds.). (1999). 
People weekly: 25 Amazing Years. New York, NY: People weekly Books. [49] Prentke Romich Company. (2004). Prentke Romich Company. URL http: //www.prentrom. com. Available Online March 2004, Copyright 2004. [50] Rettig, M . (1994). Prototyping for tiny fingers. Communications of the ACM, 37(4), 21-27. [51] Saltillo Corporation. (2004). Saltillo Corporation. U R L h t t p : / / s a l t i l l o . com/. Available Online March 2004, Copyright 2003. [52] Simmons-Mackie, N., & Kagan, A. (1999). Communication strategies used by 'good' and 'poor' speaking partners of individuals with aphasia. Aphasiology, 13(9-11), 807-820. [53] Spinuzzi, C. (2002). A Scandinavian challenge, a US response: Methodological assumptions in Scandinavian and US prototyping approaches. In Proceedings of the International conference on computer documentation, (pp. 208-215). [54] Stephanidis, C , Salvendy, G., Akoumianakis, D., Arnold, A., Bevan, N., Dardailler, D., Emiliani, P., Iakovidis, I., Jenkins, P., Karshmer, A., Korn, P., Marcus, A., Murphy, H . , Oppermann, C., Stary, C , Tamura, H . , Tscheligi, M . , Ueda, H., Weber, G., & Ziegler, J. (1999). Toward an information society for all: HCI challenges and R&D recommendations. International Journal of Human Computer Interaction, 11(1), 1-28. [55] Stephanidis, C , & Savidis, A. (2001). Universal access in the information society: Methods, tools, and interaction technologies. Universal Access in the Information Society, 1(1), 40-55. [56] Stevens, R. D., & Edwards, A. D. N. (1996). An approach to the evaluation of assistive technology. In Proceedings of the ACM SIGCAPH conference on Assistive technologies, (pp. 64-71). A C M Press. [57] Sutcliffe, A., Fickas, S., Sohlberg, M . M., & Ehlhardt, L. A. (2003). Investigating the usability of assistive user interfaces. Interacting with Computers, 15, 577-602. [58] Thorburn, L., Newhoff, M., & Rubin, S. (1995). Ability of subjects with aphasia to visually analyze written language, pantomime, and iconographic symbols. American Journal of Speech-Language Pathology, 4, 174-179. [59] Time Inc. (2004). TIME 100 - People of the Century. URL http://www.time. com/time/timelOO/. Available Online February 2004, Copyright 2003. [60] Tochtermann, K., & Dittrich, G. (1992). Fishing for clarity in hyperdocuments with enhanced fisheye-views. In Proceedings of the A CM conference on Hypertext, (pp. 212-221). A C M Press.  77  [61] Trigg, R., & Clement, A. (2004). Participatory Design. Computer Professionals for Social Responsibility, Palo Alto, CA. URL http://www.cpsr.org/ program/workplace/PD.html. Available Online March 2004, Last modified February 2000. [62] Venkatagiri, H. (2002). Clinical implications of an augmentative and alternative communication taxonomy. AAC Augmentative and Alternative Communication, 18 (1), 45-57. [63] Waller, A., Denis, F., Brodie, J., & Cairns, A. Y. (1998). Evaluating the use of TalksBac, a predictive communciation device for nonfluent adults with aphasia. International Journal of Language and Communication Disorders, 33(1), 4570. [64] Waller, A., & Newell, A. F. (1997). Towards a narrative-based augmentative communication system. European Journal of Disorders of Communication, 32, 289-306. [65] Wobbrock, J. O., Myers, B. A., & Kembel, J. A. (2003). Edgewrite: a stylusbased text entry method designed for high accuracy and stability of motion. In Proceedings of the ACM symposium on User interface and software technology, (pp. 61-70). A C M Press.  
Appendix A

Contributions and Credits

While the majority of this research was carried out by the author, a multidisciplinary effort was needed, including expertise from computer science, audiology and speech sciences, and psychology. Figure A.1 shows the key milestones in the research and indicates the lead contributor for each. Other members of the Aphasia Project, though not included in the figure, played a supporting role in this work. The following individuals contributed as a lead for one or more of the milestones: Maria Klawe (computer science), Winfried Wilcke (Anita's husband and computer science), Joanna McGrenere (computer science), and Barbara Purves (audiology and speech science).

Figure A.1: Timeline for the research, identifying milestone leads (M1-M9): informal brainstorming with Anita; design and implementation of the Picture Dictionary (Winfried); brainstorming and informal evaluation of a commercial day planner with Skip; low- and medium-fidelity prototyping of appointment creation and browsing with Skip, SS, MP, Anita, and non-aphasic participants (Karyn); implementation of ESI Planner and supporting software (Karyn); recruitment of participants and Western Aphasia Battery assessments (Barbara); and the planner evaluation sessions (Karyn).

Appendix B

Triplet Databases

To ensure participants were familiar with the people and places depicted in the planners during the evaluation phase, databases of 15 famous people and 15 famous places were used to populate the planner with fictitious appointments. The image and text fields of all elements in those databases are shown below in Figures B.1 and B.2. The sound clips were simply the text read aloud.

(a) George Bush, (b) Fidel Castro, (d) Cindy Crawford, (e) Princess Diana, (f) Albert Einstein, (g) Michael Jackson, (h) Michael Jordan, (i) Lucille Ball, (j) Nelson Mandela, (k) Muhammad Ali, (l) Marilyn Monroe, (m) Oprah Winfrey, (n) Queen Elizabeth II, (o) Ronald Reagan

Figure B.1: Famous People used in the evaluation of ESI Planner

(g) Great Pyramid, (h) Great Wall of China, (i) Statue of Liberty, (j) Mount Fuji, (k) Leaning Tower of Pisa, (l) Mount Rushmore, (m) Sphinx, (n) Stonehenge, (o) Taj Mahal

Figure B.2: Famous Places used in the evaluation of ESI Planner

Appendix C

Semi-Structured Interview

The following questions were used as a basis for the semi-structured interview used to capture information including participants' computer experience, daily planner usage (both paper and electronic), and interface preferences.

1. What is your experience with computers?
   (a) Are you comfortable using a computer?
   (b) Would you consider yourself knowledgeable about computers?
   (c) What kinds of things can you do?
   (d) Do you currently use a computer? What for?
2. Do you currently use a calendar or day planner to schedule appointments?
   (a) What kind?
       i. Paper / electronic
       ii. Daily / Weekly / Monthly
   (b) Do you find your current calendar easy to use?
3. Today you used two different planner applications: one with images, sound, and text, and one with only text.
   (a) Which do you feel you were fastest with?
   (b) Which did you feel more comfortable with?
   (c) Which did you like better?
4. If you were to use one of these planners for a longer period of time, which would you prefer to use?
