BIMODAL CUEING IN APHASIA: THE INFLUENCE OF LIPREADING ON SPEECH DISCRIMINATION AND LANGUAGE COMPREHENSION

by

Karine Dupuis
B.A. (H), The University of Calgary, 2008

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE
in
The Faculty of Graduate Studies
(Audiology and Speech Sciences)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

April 2011

© Karine Dupuis, 2011

ABSTRACT

Previous research on the influence of lipreading on speech perception has failed to consistently show that individuals with aphasia benefit from the visual cues provided by lipreading. The present study was designed to replicate these findings and to investigate the role of lipreading at the discourse level. Six participants with aphasia took part in this study. A syllable discrimination task using the syllables /pem, tem, kem, bem, dem, gem/, and a discourse task consisting of forty short fictional passages, were administered to the participants. The stimuli were presented in two modality conditions, audio-only and audio-visual. The discourse task employed two grammatical complexity levels to examine the influence of lipreading on the comprehension of simple and moderately complex passages. Response accuracy was used as the dependent measure on the discrimination task. Two measures were used in the discourse task: on-line reaction time from an auditory moving-window procedure, and off-line comprehension question accuracy. A test of working memory was also administered. Both inferential statistics and descriptive analyses were conducted to evaluate the data. The results of the discrimination task failed to consistently show that the participants benefited from the visual cues. On the discourse task, faster reaction times were observed in the audio-visual condition, particularly for the complex passages.
The comprehension question accuracy data revealed that the two participants with the most severe language comprehension deficits appeared to benefit from lipreading. These findings suggest that the benefits of lipreading primarily relate to processing time, and that these benefits are greater with increased stimulus complexity and context. In addition, a strong positive correlation between working memory and comprehension question accuracy was found, supporting the claim that working memory may be a constraint in language comprehension. No correlation was found between participants’ accuracy scores on the discourse and discrimination tasks, replicating previous research findings. The results from this study provide preliminary support for the clinical use of lipreading and working memory capacity in the treatment of language comprehension difficulties in individuals with aphasia.

PREFACE

This study was approved by the UBC Behavioural Research Ethics Board, certificate number H10-00500.

TABLE OF CONTENTS

ABSTRACT
PREFACE
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
DEDICATION

CHAPTER 1: LITERATURE REVIEW
  1.1 Introduction
    1.1.1 Outline of the Thesis
  1.2 The Nature of Visual Speech Perception
    1.2.1 The McGurk Effect
    1.2.2 The Lateralization and Integration of Audio-Visual Speech Perception
    1.2.3 The Role of Memory in Lipreading Abilities
    1.2.4 The Benefits of Visual Speech
    1.2.5 Summary
  1.3 Language Comprehension in Aphasia
    1.3.1 Comprehension Deficits in Wernicke’s Aphasia
    1.3.2 Comprehension Deficits in Broca’s Aphasia
    1.3.3 Working Memory and Comprehension Deficits in Aphasia
    1.3.4 Speech Discrimination Deficits in Aphasia
    1.3.5 Summary
  1.4 Lipreading in Aphasia
  1.5 The Present Study
    1.5.1 Research Questions and Hypotheses

CHAPTER 2: METHOD
  2.1 Introduction
  2.2 Participants
    2.2.1 Vision Screening
    2.2.2 Hearing Screening
    2.2.3 Working Memory Testing
    2.2.4 Language Testing
    2.2.5 Apraxia Screening
  2.3 Discrimination Task
    2.3.1 Overview
    2.3.2 Stimuli Description
    2.3.3 Stimuli Preparation
    2.3.4 Procedure
  2.4 Discourse Task
    2.4.1 Overview
    2.4.2 Stimuli Description
    2.4.3 Stimuli Preparation
    2.4.4 Procedure
  2.5 General Procedure
  2.6 Analysis

CHAPTER 3: RESULTS
  3.1 Overview
  3.2 Group Results
    3.2.1 Discrimination Task
    3.2.2 Discourse Task
    3.2.3 Discrimination versus Discourse
    3.2.4 Relationship to Working Memory
    3.2.5 Effect of Hearing Loss
  3.3 Case Analyses
    3.3.1 Participant 1
    3.3.2 Participant 2
    3.3.3 Participant 3
    3.3.4 Participant 4
    3.3.5 Participant 5
    3.3.6 Participant 6

CHAPTER 4: DISCUSSION
  4.1 Introduction
  4.2 Research Question
  4.3 Hypothesis 1
    4.3.1 Discrimination Task
    4.3.2 Discourse Task
    4.3.3 Summary
  4.4 Hypothesis 2
  4.5 Hypothesis 3
  4.6 Summary of Key Findings
  4.7 Clinical Implications
  4.8 Limitations of the Study and Future Directions
  4.9 Conclusion

REFERENCES
APPENDIX A: DISCOURSE TASK STIMULI

LIST OF TABLES

Table 1 Summary of participants’ profiles
Table 2 Experimental conditions
Table 3 Average stimuli characteristics of each list
Table 4 Possible combinations of lists across conditions
Table 5 Percent correct (out of 24 trials) on the discrimination task in each condition
Table 6 Mean RTs in milliseconds for each subject in the four conditions on the discourse task
Table 7 Percent correct (out of 20 questions) on the discourse task in the four conditions
Table 8 Effects of hearing loss
Table 9 P1’s mean RTs across conditions
Table 10 P2’s mean RTs across conditions
Table 11 P2’s mean comprehension question accuracy across conditions in the discourse task
Table 12 P3’s mean RTs across conditions
Table 13 P3’s mean comprehension question accuracy across conditions in the discourse task
Table 14 P4’s mean RTs across conditions
Table 15 P4’s mean comprehension question accuracy across conditions in the discourse task
Table 16 P5’s mean RTs across conditions
Table 17 P5’s mean comprehension question accuracy across conditions in the discourse task
Table 18 P6’s mean RTs across conditions
Table 19 P6’s mean comprehension question accuracy across conditions in the discourse task

LIST OF FIGURES

Figure 1 Group mean (SE) RTs across conditions on the discourse task
Figure 2 Group mean (SE) comprehension question accuracy across conditions on the discourse task

ACKNOWLEDGEMENTS

I wish to thank everyone who made this research possible. In particular, sincere thanks are extended to:

My research supervisor, Dr. Jeff Small, for his guidance, mentoring and unwavering support. The completion of this thesis would have been impossible without his dedication to this project. I am very grateful for all the research skills I learned from Jeff.

Dr. Barbara Purves and Dr. Valter Ciocca for serving on my committee. Their suggestions and support were immensely appreciated.
The Social Sciences and Humanities Research Council of Canada (SSHRC) and the University of British Columbia for their financial support.

Arash Malekzadeh, for guiding me through everything technological. Audio and video editing and E-Prime programming would have been close to impossible without his help. His patience with both E-Prime and myself has been extraordinary.

Kevin Frew, Heather Wood, Jennifer Borland, Emily Chan, Wendy Johnstone, Liina MacPherson, Allison Haas, Jill Petersen, Sharon Adelman and Stéfane Kenny for their insights and help with various aspects of my thesis.

Dan Carlson and Colleen Bergen for inspiring me to concentrate on aphasia for this research project.

Susan Yang for sharing this experience with me, and for her moral support.

Everyone in the Adult Language Processing and Disorders lab for letting me hog the lab on numerous occasions, and for keeping my brain awake and alert with chocolate, cookies, and sweet treats which have not gone unnoticed.

All my research participants. Their contribution to this thesis and to the field of speech-language pathology is invaluable. I keep nothing but good memories of the time I spent with each of them.

My parents, for their emotional and financial support throughout my university career. I would not be where I am today without their unconditional love and encouragement. For everything you have done for me since I was a little girl, thank you!

DEDICATION

To my mother, for instilling in me a love of learning, and for discovering the perfect career for me.

CHAPTER 1: LITERATURE REVIEW

1.1 Introduction

A large body of research has demonstrated that the nature of speech perception is inherently multimodal (e.g., Massaro, 1987; Rosenblum, 2005). Although verbal language comprehension is often considered an auditory task, speech information is also acquired through the visual modality.
In face-to-face communication, listeners not only hear what their interlocutor is saying, they also have access to the movements of the speaker’s lips, jaw, and tongue (Badin, Tarabalka, Elisei, & Bailly, 2010; Schmid, Thielmann, & Ziegler, 2009). Visual speech refers to movements of the face, jaw, lips and tongue as a result of speaking (Kim, Davis, & Krins, 2004). The terms “lipreading” and “speechreading” have also been used by different authors to refer to the same concept. Campbell, Dodd, and Burnham (1998) note that the term “speechreading” has largely replaced the word “lipreading” to explicitly recognize that the process relies on more than the lips. For the purpose of this study, all three terms will be used interchangeably to refer to the process of extracting linguistic information through the visual modality by watching a speaker’s face.

The visual cues provided by lipreading have been shown to be highly beneficial in a variety of circumstances, including in noise (Schwartz, Berthommier, & Savariaux, 2004) and in cases of hearing loss (MacSweeney et al., 2002). This is thought to be due to the redundant and complementary sensory input provided by the two modalities. However, very little consideration has been given to the role of visual speech perception in communication-disordered populations (Woodhouse, Hickson, & Dodd, 2009). Individuals with aphasia, a language disorder caused by brain damage, constitute one population that has received little attention in the area of visual speech perception. Since comprehension difficulties are almost always present in aphasia (Brookshire, 1987; Le Dorze, Brassard, Larfeuil, & Allaire, 1996), visual cues derived from lipreading have the potential to greatly benefit this population.
However, although many speech-language pathologists use lip shapes and visible lip movements in aphasia therapy, research in this area has failed to consistently show that individuals with aphasia are able to derive phonological information through speechreading to improve speech perception (Hessler, Jonkers, & Bastiaanse, 2010; Schmid & Ziegler, 2006; Youse, Cienkowski, & Coelho, 2004). On the other hand, lipreading has been shown to benefit those with pure word deafness (Morris, Franklin, Ellis, Turner, & Bailey, 1996; Shindo, Kaga, & Tanaka, 1991), and treating visual speech perception can improve speech production in individuals with non-fluent aphasia (Fridriksson et al., 2009).

Research to date on visual speech perception in aphasia has focused mostly on low-level phonetic discrimination. Phonetic discrimination is qualitatively very different from language comprehension. While the comprehension of language involves processing in a number of language domains, including phonology, morphology, syntax, semantics and pragmatics, phonetic discrimination requires only acoustic-phonetic encoding, perhaps in conjunction with some phonological processing. In addition, discrimination studies are context-free, while sentences are generally much richer in contextual support. Context is considered to provide redundancy of information (Brookshire, 1987; Wright & Newhoff, 2004), and therefore may help with language comprehension because it provides more opportunity for top-down processing. Individuals with aphasia are sensitive to contextual influences during language processing (Pierce & DeStefano, 1987). As Caplan and Hildebrandt (1988) note, many patients with aphasia have well-preserved extralinguistic knowledge, which is why semantic and pragmatic contextual information is often crucial to their understanding. Hence, one cannot conclude from a phonetic discrimination study that language comprehension is affected similarly by visual cueing.
In an attempt to shed light on this issue, this study aims to partially replicate previous studies that have investigated whether persons with aphasia benefit from the added visual cues provided by speechreading in phonetic discrimination tasks. The main purpose of this study, however, is to examine whether individuals with aphasia benefit from lipreading in language-rich contexts (i.e., in a discourse context).

1.1.1 Outline of the Thesis

In the remainder of Chapter 1, I first describe the nature of visual speech perception, specifically the McGurk effect, the laterality and integration of the auditory and visual modalities, the role of memory in lipreading abilities, and the benefits of speechreading in different populations and in varying conditions. I subsequently describe language comprehension and speech perception deficits in fluent and non-fluent aphasia, and examine the role of working memory as a proposed explanation for these deficits. I then review previous studies investigating lipreading in aphasia. Lastly, I present the purpose of this research and a summary of the research hypotheses and questions.

In Chapter 2, I present the methods used for the discrimination and discourse experiments. The participants, stimuli, and procedures are discussed in detail. The results of the experiments are laid out in Chapter 3. Finally, Chapter 4 consists of a discussion of the results of the experiments in light of current research in the field and our understanding of the multimodal nature of language comprehension. Implications for clinical practice in speech-language pathology and directions for future research are suggested.

1.2 The Nature of Visual Speech Perception

Visual speech perception involves the mapping of articulatory-related movements to an abstract phonological code of language (Schmid et al., 2009).
These associations between movements and the sounds that are perceived are generally considered to develop through normal exposure to language, and the ability to lipread has been shown to begin in early infancy (Dodd, 1987). Hence, through exposure and experience with language, listeners become able to use both the auditory and the visual modalities to decode the speech stream. Not only are listeners able to use the visual cues; there is strong evidence that these cues are so robust that when they are available, their processing is mandatory (Woodhouse et al., 2009). The McGurk effect (McGurk & MacDonald, 1976) is probably the best-known phenomenon demonstrating the automatic processing of visual speech. The McGurk effect refers to an illusion that results from the combination of cross-modally discordant stimuli. It is briefly described in the following section.

1.2.1 The McGurk Effect

The crucial role and irrepressible nature of visual speech are well illustrated by the McGurk effect (McGurk & MacDonald, 1976), an illusion produced by the simultaneous presentation of incongruent visual and auditory stimuli. The auditory presentation of a sequence of sounds at the same time as a visual presentation of the articulatory movements of a different sequence of sounds often leads to the subjective experience of hearing a third, different signal that does not correspond to either the auditory or the visual message (Colin & Radeau, 2003). In 1976, McGurk and MacDonald dubbed syllables (e.g., /baba/) in the auditory modality with different syllables (e.g., /gaga/) in the visual modality, and obtained two different types of illusions: fusions and combinations. Fusions involve the blending of the original CVCV sequences, yielding a third sequence such as /dada/. Combinations, on the other hand, involve the perception of at least one of the presented phonemes from each syllable (e.g., /baga/ or /gabga/).
The McGurk effect has since been replicated and expanded numerous times (e.g., Green & Gerdeman, 1995; Jones & Munhall, 1997; Massaro & Cohen, 1993; Werker, Frost, & McGurk, 1992), and tested with infants (Burnham & Dodd, 1996) and in a variety of languages (e.g., Bertelson, Vroomen, Wiegeraad, & de Gelder, 1994; Colin, Radeau, Soquet, Colin & Deltenre, 2002; Sams, Surakka, Helin, & Kättö, 1997; Sekiyama & Burnham, 2008) (see Colin & Radeau, 2003 for a more extensive list and details). According to Massaro and Stork (1998), the McGurk effect is present not only in syllables, but also in sentences. For instance, they showed that the nonsense sentence “My bab pop me poo brive” presented auditorily at the same time as the nonsense sentence “My gag kok me koo grive” presented visually leads to the perception of the sentence “My dad taught me to drive” (p. 237). The McGurk effect shows that the processing of both the auditory and visual information in the speech stream is mandatory when both types of cues are available; this fact becomes even more evident in studies where participants are explicitly informed that there is a mismatch between the audio and visual signals. For example, subjects in Summerfield and McGrath’s (1984) study of vowels were told that on some of the trials, the audio and visual stimuli would be incompatible (i.e., had been dubbed), but they were asked to report only what they had heard and ignore the movement of the speaker’s lips. The results suggested that processing of the visual information was unavoidable, such that the dubbed vowels heard auditorily were perceived as being closer to the vowels presented visually. However, as Colin and Radeau (2003) note, although processing is irrepressible, cognitive factors such as attention can nonetheless influence audio-visual integration such that the impact of one modality can be reduced. 
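The fusion and combination categories described above can be made concrete with a toy classifier. This sketch is mine, not part of the 1976 procedure or of this thesis's analysis: syllables are represented as plain strings, and the initial consonant of each CV sequence is treated as its identifying cue.

```python
def classify_mcgurk_response(auditory: str, visual: str, perceived: str) -> str:
    """Classify a reported percept from a cross-dubbed CV(CV) trial.

    Illustrative sketch only: the string representation and the use of
    the initial consonant as the cue are simplifying assumptions.
    """
    if perceived == auditory:
        return "auditory"        # report matches the audio track
    if perceived == visual:
        return "visual"          # report matches the lip movements
    a_cons, v_cons = auditory[0], visual[0]   # initial consonants as cues
    heard_a = a_cons in perceived
    heard_v = v_cons in perceived
    if heard_a and heard_v:
        return "combination"     # phonemes from both inputs, e.g. /baga/, /gabga/
    if not heard_a and not heard_v:
        return "fusion"          # a third, blended percept, e.g. /dada/
    return "other"
```

With audio /baba/ dubbed onto visual /gaga/, a report of /dada/ is classified as a fusion, while /baga/ or /gabga/ are classified as combinations, matching the examples in the text.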
These findings have led to a more recent body of research investigating the psychophysical and neurological aspects of visual speech. Specifically, much of this research has attempted to locate where visual speech processing takes place in the brain, and whether and how the auditory and visual modalities integrate to produce a uniform percept of speech. This becomes highly relevant when considering visual speech in neurologically impaired individuals, such as those with aphasia. This area of research is very rich and will not be examined in detail in this thesis; however, the following section highlights some important points regarding these issues.

1.2.2 The Lateralization and Integration of Audio-Visual Speech Perception

There is now convincing evidence that linguistic processing takes place predominantly in the left hemisphere in most individuals; visual-spatial processing, on the other hand, appears to recruit the right hemisphere to a greater extent (e.g., Milner, 1971). The question of where visual speech information is processed in the brain remains open to debate: does it develop along with visual skills in the right hemisphere, or with language skills in the left hemisphere (Campbell, 1992)? The evidence thus far has been mixed. For example, Baynes, Funnell, and Fowler (1994) and Diesch (1995) both concluded that the two hemispheres contribute differently but significantly to the perception of visual speech. In particular, Diesch (1995) found a right-hemifield (and therefore left-hemisphere) advantage for combination responses to McGurk-type stimuli, and a left-hemifield (right-hemisphere) advantage for fusion responses.

Campbell’s (1986) initial study on laterality revealed a right hemisphere advantage for visual speech. Subjects in her study were asked to match unilaterally presented still photographs depicting a person saying individual phonemes to phonemes previously presented auditorily.
The results showed that they performed better when the photographs were presented in the left visual field. A major criticism of this approach, however, is that the stimuli were not very realistic or representative of everyday lipreading experiences. Further studies using more realistic stimuli (i.e., moving lips) were conducted, which revealed a left hemisphere advantage (e.g., Jordan & Thomas, 2007; Smeele, Sittig, Massaro, & Cohen, 1998). In addition, some imaging studies have also found a left hemisphere advantage. In their fMRI and PET study using syllable identification, Sekiyama, Kanno, Miura and Sugita (2003) found that Broca’s area was activated in the visual-only condition, suggesting that Broca’s area is not auditory-specific, or that the auditory-motor pathway is automatically activated by visual speech. In addition, their results showed increased activation in the left superior temporal sulcus in the audio-visual condition, indicating that cross-modal binding appears to take place in the left hemisphere.

Studies with clinical populations have further added to the debate. Two stroke patients were examined by Landis and Regard (in Campbell, 1998), one who had sustained a right hemisphere stroke, the other a left hemisphere stroke. Interestingly, the patient with the left-sided stroke had unimpaired face processing but poor lipreading skills; in contrast, the patient with the right-sided stroke had difficulty with facial processing, but was unimpaired in her speechreading abilities. Nicholson, Baum, Cuddy and Munhall (2002) studied one individual with right hemisphere damage following a stroke, and found that while his speechreading skills were intact in the segmental domain, he showed deficits in prosodic visual speech perception (i.e., discriminating between statements and questions, and identifying emphatic stress).
These results suggest that both hemispheres contribute to visual speech perception, but each hemisphere processes different types of linguistic information. This may have important implications for individuals with aphasia. If brain regions responsible for visual speech are damaged, they may be less able to extract meaningful linguistic information through lipreading. However, the relationship between the localization of brain damage and loss of function is not straightforward. Aphasia typically results from lesions involving different areas of the left perisylvian region, which may selectively impair different areas or levels of speech and language processing. In addition, the different modalities of language (i.e., verbal expression, verbal comprehension, reading, and writing) can be differentially affected even within individuals. Visual speech processing, then, is similarly unlikely to be uniformly impaired in all aphasic individuals, even if the cortical areas involved in visual speech processing have been damaged.

The mechanisms involved in audio-visual speech processing have also been extensively investigated. A central debate in this area focuses on whether processing is “amodal” (or “supramodal”), meaning that a non-modality-specific code combines the information received from each modality, or whether one modality is dominant and all incoming perceptual information gets converted to that specific modality (Woodhouse et al., 2009). For example, Diehl and Kluender (1989) argue that humans are specifically sensitive to the auditory modality; in contrast, Fowler and Rosenblum (1991) support the amodal, gestural theory of speech perception (see Woodhouse et al., 2009 and Rosenblum, 2005 for reviews). Although the issue of modality-specific versus modality-neutral processing is still debated, most of the evidence points towards early integration.
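To make this integration debate concrete, the decision rule associated with Massaro’s (1987) fuzzy logical model of perception, in which each modality is evaluated independently and the resulting supports are then combined multiplicatively, can be written as follows. The notation is the standard textbook form of the FLMP, included here purely as an illustration; it is not drawn from this thesis.

```latex
% Relative-goodness (multiplicative) integration rule of the FLMP:
% a_k and v_k are the degrees of auditory and visual support
% for response alternative k.
P(k \mid A, V) = \frac{a_k \, v_k}{\sum_{j} a_j \, v_j}
```

Because the supports are multiplied, an alternative weakly favoured by either modality is heavily penalized, which is one way to account for a third percept (e.g., /da/, moderately supported by both channels) winning over both the auditory /ba/ and the visual /ga/.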
Because the information comes from two modalities, processing remains separate for some time during the early stages (Green, 1998; for a discussion of a different view, see Rosenblum, 2005). The question arises as to when the two streams become integrated. Summerfield (1987), Green (1998) and Rosenblum (2005) all argue that integration takes place early, before phonetic categorization, perhaps even before phonetic feature extraction (see Massaro, 1987 for a discussion of integration after feature extraction). As Rosenblum (2005) notes, “the research shows evidence that audiovisual speech is integrated at one of the earliest possible stages that can be observed using behaviorally-based perceptual methodologies” (p. 54). To summarize, although these issues continue to be debated, the evidence appears to suggest that the auditory and visual modalities are integrated early in speech processing, and that visible speech movements are processed by cortical sites specialized for language (Schmid & Ziegler, 2006). The following section discusses memory as a factor involved in the variation of speechreading abilities across individuals.

1.2.3  The Role of Memory in Lipreading Abilities

Not everyone is equally proficient in lipreading (Calvert & Campbell, 2003; Woodhouse et al., 2009). Age (Evans, 1965; Feld & Sommers, 2009; Woodhouse, 2007), gender (Bornstein, Hahn, & Hayes, 2004), and intelligence (e.g., Elphick, 1996) have all been speculated to play a role in speechreading abilities. Of interest to this study, however, is the influence of memory on speechreading skills, as poor working memory has been postulated to play a significant role in the comprehension deficits of individuals with aphasia (see discussion in section 1.3.3 below). Working memory refers to the storage of information and symbolic manipulation of that information (Just & Carpenter, 1992).
Lidestam, Lyxell and Andersson (1999) found that working memory, as measured by a reading span test, predicted speechreading performance in 48 normal-hearing individuals. They suggest that working memory is probably a necessary prerequisite for lipreading, and that those with inherently poorer working memory may be less apt at speechreading. Lyxell and Holmberg (2000) further found that a capacious working memory is related to good speechreading skills in children; they explain that speechreaders need a large working memory in order to store decoded auditory and visual fragments of a message, and to use that information later on to make inferences about the content of the message. Feld and Sommers (2009) examined specific subtypes of working memory, verbal and spatial. They concluded that individual variability in lipreading can be at least partially explained by differences in spatial working memory, perhaps even more so than verbal working memory. This may be because speechreading requires one to store a “sequence of visually observed movements” and combine them into a “unified percept” (p. 1563), skills that may be more closely related to spatial processing. There seems, therefore, to be relatively consistent evidence that some type of working memory is related to speechreading abilities and may even be a good predictor of lipreading performance. However, because lipreading tasks are a component of language processing, and since working memory has been shown to be related to language comprehension, it follows that working memory should correlate with lipreading abilities. In other words, factors in addition to the processing of visual speech may at least partially account for the apparent relationship between working memory and speechreading abilities; namely, people who are good with language in general may be better lipreaders.
In Lidestam et al.’s (1999) study, the results showed a “tendency in the data that working memory capacity [was] more critical when the messages to be speech-read [were] increasing in length” (p. 216). This finding supports the notion that the relationship observed between working memory and speechreading skills may have been mediated by more general language processing demands. In addition, the storage and computational demands associated with working memory are unlikely candidates for the presumably automatized processing of low-level visual speech cues. Nonetheless, although variability in lipreading abilities exists across individuals, visual speech has been shown to be beneficial in a variety of circumstances and with different populations. The next section reviews research that has examined the benefits of speechreading in both normal and clinical populations.

1.2.4  The Benefits of Visual Speech

The benefits of speechreading have been extensively documented in the literature. Research has shown that visual speech can be especially helpful in enhancing degraded auditory signals, such as speech in background noise (e.g., Neely, 1956; Schwartz et al., 2004; Sumby & Pollack, 1954), by providing redundant perceptual information. It has been reported that lipreading can improve intelligibility by up to the equivalent of 15 dB added to the auditory signal (Sumby & Pollack, 1954). In their study of visual enhancement of speech comprehension in noise, Ross, Saint-Amour, Leavitt, Javitt and Foxe (2007) found that the benefits of lipreading were greatest around a -12 dB signal-to-noise ratio, between the extremes where listeners rely almost exclusively on speechreading (-24 dB) and where the information provided by visual speech is almost entirely redundant to the auditory signal (0 dB). Speechreading has also been shown to be beneficial in multitalker babble.
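To make the signal-to-noise ratios cited above concrete: SNR in decibels is ten times the base-10 logarithm of the signal-to-noise power ratio. The short sketch below (purely illustrative arithmetic, not code from any study discussed here) shows that at the -12 dB condition where Ross et al. (2007) observed the largest lipreading benefit, the noise carries roughly sixteen times the power of the speech signal.

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels, from a power ratio."""
    return 10 * math.log10(signal_power / noise_power)

# Power ratio corresponding to -12 dB SNR.
ratio = 10 ** (-12 / 10)

print(round(snr_db(1.0, 1.0 / ratio), 1))  # recovers -12.0 dB
print(round(1 / ratio, 1))                 # noise is ~15.8x the signal power
```

At 0 dB the two powers are equal, and at -24 dB the noise is roughly 250 times stronger, which is why listeners at that extreme rely almost exclusively on speechreading.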
In Rudmann, McCarley and Kramer’s (2003) study, participants were asked to watch and listen to a target speaker, whose voice was masked by two, three or four additional voices, and to press a button when they heard the target speaker say the words “the” and “and”. The subjects were more successful at this word monitoring task when a video of the target speaker was available, which provided important visual cues that they could use to single out the target words. It is also well known that the visual cues provided by speechreading can enhance comprehension in individuals with hearing loss (e.g., MacSweeney et al., 2002; Mitchell & Maslin, 2007). As Campbell (1998) notes, English exhibits a complementarity of audition and vision, such that phonemic contrasts that are difficult to hear (e.g., /m/ and /n/) are often the most distinctive visually, a fact that greatly benefits hearing-impaired individuals. In addition, cochlear implant users can benefit from speechreading, as the speech signal is often degraded by cochlear implant devices (Kaiser, Kirk, Lachs, & Pisoni, 2003; Most, Rothem, & Luntz, 2009). Arnold and Hill (2001) have also demonstrated that visual speech can be beneficial when the listener is an L2 learner of the language, when the speaker has a foreign accent, or when the message is semantically and syntactically complex. The latter finding shows that lipreading can enhance comprehension even when the auditory signal is clearly audible and intact.

1.2.5  Summary

In summary, a large body of research has convincingly demonstrated the automatic and irrepressible nature of visual speech processing. The McGurk effect is by far one of the most studied phenomena associated with lipreading, and these findings have led to a large area of research dedicated to investigating where and how visual speech and language processing take place in the brain.
Most of these studies point towards early integration of the two modalities, in areas of the brain responsible for language processing. Visual speech has repeatedly been shown to be beneficial in a variety of circumstances, such as in noise, in multitalker babble, when messages are semantically or syntactically complex, and for individuals with hearing loss, cochlear implants, or a second language background. Not everyone has the same lipreading skills, however; even within nonclinical populations, speechreading abilities differ across individuals, and working memory has been proposed as a contributing factor. Taken together, these findings provide suggestive evidence that individuals with aphasia should benefit from lipreading. The extra cues provided by the visual channel may help counter their difficulties with language comprehension, since visual speech has been shown to help in cases where sentences are more structurally complex, and therefore harder to process.

1.3  Language Comprehension in Aphasia

Aphasia is characterized by an impairment of language across the different modalities, including speaking, listening, reading, and writing (Hallowell & Chapey, 2008), which can be affected to varying degrees. Many different aphasia classifications have been proposed over the past decades; one very influential classification system that has guided much of the clinical practice and research to date is the Boston Group Classification (by Geschwind, Benson, Alexander, Goodglass, Kaplan, and others) (Ardila, 2010). This system makes an important distinction between fluent and non-fluent aphasia (e.g., Alexander & Benson, 1991; Edwards, 2005), with the most common subtypes being Wernicke’s and Broca’s aphasia, respectively. Wernicke’s aphasia is characterized by fluent but informationally empty speech, paragrammatism, and poor language comprehension.
Broca’s aphasia is considered to be the polar opposite (Edwards, 2005): in clinical testing, individuals with Broca’s aphasia generally show non-fluent agrammatic speech but relatively good language comprehension (Mitchum & Berndt, 2008). Although it used to be considered an entirely expressive deficit, Broca’s aphasia has repeatedly been shown to include subtle comprehension problems (e.g., Caramazza & Zurif, 1976; Mitchum & Berndt, 2008). The following two sections discuss the comprehension deficits associated with Wernicke’s and Broca’s aphasias, as these are the most prevalent in much of the research to date.

1.3.1  Comprehension Deficits in Wernicke’s Aphasia

Comprehension difficulties with spoken and written language are a hallmark of fluent aphasias, especially Wernicke’s aphasia (e.g., Davis, 2007). Auditory comprehension can be defective at the single word level (Goodglass, 1993), sometimes even more so than at the sentence level (Goodglass & Kaplan, 1983), but generally sentences are more challenging than single words, especially sentences with complex syntactic structures (Edwards, 2005). Although the mechanisms underlying poor language comprehension are not fully understood, recent research in this area has contributed significantly to our current understanding of receptive deficits in Wernicke’s aphasia. Edwards (2005) provides an overview of different observations made with regard to comprehension deficits in fluent aphasia. At the single word level, it has been shown that some individuals with fluent aphasia have difficulty distinguishing between phonemes and have poor phonological awareness. However, as Janse (2006) notes, the relationship between low-level perceptual deficits and language comprehension is far from clear-cut, especially for those with Wernicke’s aphasia. In addition, there can also be problems with relating what is heard to linguistic knowledge.
Semantic word classes can be affected differently; verbs may be more difficult to access than nouns, and word frequency and length may also exert an influence. Nouns can also be differentially affected according to their category; for example, “colors” or “body parts” can be selectively impaired (Goodglass & Kaplan, 1983). It has been demonstrated that some individuals with Wernicke’s aphasia have problems matching pictures based on common perceptual features, and that they have “conceptual difficulties in analytical and propositional thinking” (Cohen, Kelter, & Woll, 1980, p. 345). These subtle non-verbal deficits may therefore negatively impact the organization of the semantic system and cause difficulties in comprehension. Although the general hypothesis regarding Wernicke’s aphasia has been that comprehension deficits are caused by problems in the semantic domain (Davis, 2007), recent evidence has shown that syntactic capacities may not always be intact either. Some individuals with fluent aphasia find syntactically complex sentences more difficult to understand than simple canonical sentences (which do not require syntactic processing), and some appear to have difficulties with theta role assignment (Edwards, 2005). However, the underlying cause of these difficulties is not completely understood. The development of on-line measures has helped researchers identify these underlying mechanisms and pinpoint where and when breakdowns occur. On-line measures offer the advantage of capturing the mechanisms involved in language processing that occur over a very short period of time and that are highly transitory in their effects (Mitchell, 2004). Methods such as eyetracking, event-related potentials (ERPs), and self-paced reading or listening have been used regularly in psycholinguistic research (Mitchell, 2004). Priming studies have also been used extensively.
Many of these studies have pointed towards an overactivation of lexical representations combined with a lack of inhibition in individuals with Wernicke’s aphasia (Blumstein & Milberg, 2000; Janse, 2006; Wiener, Connor, & Obler, 2004). Essentially, the claim is that the comprehension deficits observed in Wernicke’s aphasia are caused by lexical items being activated too easily, including competitors (i.e., semantically or phonologically related words), and that the system is unable to suppress these competitors and correctly select the winning candidate, leading to comprehension difficulties. Overall, it appears that various factors influence comprehension in fluent aphasia (Edwards, 2005). These include lexical factors such as word class and frequency; syntactic factors such as sentence type and grammatical operations; and overactivation of lexical items combined with reduced inhibition of competitors. In addition, non-linguistic factors such as working or short-term memory and attention may influence performance on sentence-level tasks (e.g., Francis, Clark & Humphreys, 2003; Wiener et al., 2004). A discussion of working memory will follow the section on comprehension deficits in Broca’s aphasia, as it is highly relevant to both types of aphasia.

1.3.2  Comprehension Deficits in Broca’s Aphasia

Comprehension problems in Broca’s aphasia have increasingly become an area of interest to many researchers. Although it used to be considered a strictly “expressive aphasia”, a large body of research has demonstrated that language comprehension may also be affected.
It has repeatedly been shown that individuals with Broca’s aphasia have difficulty with syntax, particularly in understanding noncanonical reversible sentences involving syntactic movement, such as passives, object-gap relatives, and object wh-questions (e.g., Berndt, Mitchum, & Wayland, 1997; Caplan, Matthei, & Gigley, 1981; Hagiwara, 1993; Hickok, Zurif, & Canseco-Gonzalez, 1993), as well as binding structures such as pronouns (e.g., Edwards & Varlokosta, 2007). These observations have led to a number of hypotheses regarding the nature of these difficulties. Not surprisingly, several theories have suggested that the problem arises at the syntactic level. Caplan and Futter (1986) proposed the Linearity Hypothesis, which claims that individuals with agrammatic aphasia use a linear (i.e., canonical) agent-action-recipient order to assign thematic roles, which would naturally lead to difficulties with sentences that do not have a canonical word order. However, Grodzinsky (1986, 1989) disagreed and instead proposed the Trace Deletion Hypothesis (TDH), which suggests that people with Broca’s aphasia delete traces from underlying structural representations. Traces are thought to be left behind after syntactic movement in normal language processing. For example, in the sentence “The woman_i that the boy talked to t_i”, the phrase “the woman” has moved from the end of the sentence to the beginning, leaving behind a trace (t). Individuals with Broca’s aphasia, however, do not retain these traces, and therefore end up assigning thematic roles randomly (Davis, 2007), or by using heuristics or guessing strategies (Choy & Thompson, 2010). Similarly, the Double Dependency Hypothesis (DDH) (Mauner, Fromkin, & Cornell, 1993), another trace-based theory, proposes that traces survive, but that their relationship to their antecedent is disrupted.
In 2005, O’Grady and Lee suggested another alternative, the Isomorphic Mapping Hypothesis (IMH), which states that the deficit arises with sentences in which the order of noun phrases is not aligned with the structure of the corresponding event. For example, in the sentence “He tapped the crayon with the pencil”, the agent picks up the pencil before tapping the crayon, which is inconsistent with the noun phrase order (Davis, 2007). Others have suggested that the problem arises from slowed or delayed syntactic processing, which affects syntactic-semantic computations (i.e., the Slow-Syntax Hypothesis, or SSH) (Burkhardt, Piñango, & Wong, 2003). Still others have proposed that comprehension deficits are due to problems with real-time processing mechanisms in the lexical system. Difficulties at the lexical level may create problems with syntactic processing because of the large amount of cognitive resources that must be allocated at the single word level, leaving fewer resources available for sentence processing. The Automaticity Hypothesis (Milberg, Blumstein, & Dworetzky, 1987) suggested that agrammatic patients have lexical-semantic impairments at the automatic level of processing. It has also been proposed that overall lexical activation is lowered in Broca’s aphasia (e.g., Blumstein & Milberg, 2000), meaning that the thresholds for activating lexical items are abnormally high. Similarly, Swinney, Zurif and Nicol (1989) concluded that individuals with Broca’s aphasia do not have a normal spread of activation (i.e., the activation of a lexical item spreads only to the closest nodes, and is much slower in reaching more distant nodes). They based this conclusion on their finding that their study participants appeared to be primed only for primary meanings in their cross-modal priming study.
Complementing the foregoing explanations of aphasia is research showing that aphasic individuals are sometimes able to understand complex sentences, suggesting that there is not likely a loss of syntactic representations. Taken together, much current research points to processing capacity limitations (i.e., how much information can be effectively processed per unit of time), in the form of activation or inhibition of information and/or maintenance of information (see the following section), as providing more plausible explanations for the comprehension and production deficits observed in both fluent and non-fluent aphasia.

1.3.3  Working Memory and Comprehension Deficits in Aphasia

Explanations of comprehension difficulties in aphasia have been proposed which do not refer directly to linguistic processes, but rather to other cognitive capacities. This idea has been applied to aphasia in general, not only to specific subtypes (Davis, 2007). The general claim is that the comprehension deficits seen in aphasia are not caused by damage to the linguistic system, but rather by a reduction in overall processing capacity, which may affect some systems more than others (Davis, 2007). Working memory has been posited as a significant constraint on language comprehension. In order to build complex syntactic structures, there needs to be sufficient working memory to retain the intermediate products of computation (Caplan & Waters, 1999). It has been shown that non-aphasic individuals perform similarly to persons with aphasia in conditions where their working memory is sufficiently taxed (Miyake, Carpenter, & Just, 1994), supporting the claim that the deficits observed in aphasia are not caused by problems in the linguistic system per se, but rather by reduced overall resources. Sung et al.
(2009) have demonstrated that verbal working memory is moderately correlated with aphasia severity, and that the effects were most evident in sentence comprehension tasks with greater working memory demands. Interestingly, it has also been shown that language has an influence on working memory, such that performance increases when individuals are able to use linguistic strategies during working memory tasks (Christensen & Wright, 2010). The nature of this working memory capacity continues to be debated. As Sung et al. (2009) describe, some researchers have claimed that there is a single working memory capacity involved in all types of language processing (e.g., Just & Carpenter, 1992; King & Just, 1991), while others have proposed that different working memory or short-term memory systems are responsible for different types of linguistic information (i.e., phonological, semantic, and syntactic) (Caplan & Waters, 1999; Friedmann & Gvion, 2003; Martin & Fehler, 1990). Regardless of this ongoing debate about the underlying nature of working memory, it appears that at least some form of working memory is involved in language comprehension, and may be partly responsible for the difficulties seen in aphasia.

1.3.4  Speech Discrimination Deficits in Aphasia

There is evidence that difficulties with speech discrimination may be present across many types of aphasia. Csépe, Osman-Sági, Molnár and Gósy (2001) conducted a brain imaging study of speech perception in aphasic patients, and found that participants had deficits in processing phonetic contrasts, irrespective of aphasia type. Discrimination between pairs of phonemes appears to be more difficult when only one distinguishing feature differs between the two, but becomes easier when they are distinguished by more than one feature (e.g., place of articulation + voicing, as in /ba/ vs. /ta/) (Blumstein, Baker, & Goodglass, 1977).
The question of which phonetic dimension is most difficult for aphasic individuals to distinguish has been investigated, and the results have been mixed. Place of articulation has been found to be the most difficult by some researchers (e.g., Blumstein et al., 1977, in English; Miceli, Caltagirone, Gainotti, & Payer-Rigo, 1978, in Italian), while others have found voicing to be the hardest (e.g., Hessler et al., 2010, in Dutch; Saffran, Marin, & Yeni-Komshian, 1976, for one English speaker with pure word deafness). The most difficult dimension to discriminate may differ across languages, due to the organization of the different phonological systems. As Hessler et al. (2010) note, however, how these speech perception difficulties relate to overall deficits in real-life language comprehension remains unclear. Some studies have found little correlation between phonetic discrimination and language comprehension skills (Blumstein et al., 1977; Gandour & Dardarananda, 1982), suggesting at least a partial dissociation between the two (Becker & Reinvang, 2007), while others have found positive correlations (Tallal & Newcombe, 1978).

1.3.5  Summary

Several theories have been proposed to account for language comprehension difficulties in Wernicke’s and Broca’s aphasias. The lexical, semantic, and syntactic levels have all been examined, and each has been found to be potentially responsible for the deficits seen in both types of aphasia, particularly as these are affected by poor activation or inhibition processes. A reduction in working memory has also been demonstrated to be a possible explanation for these difficulties. In addition to comprehension deficits, some people with aphasia have been found to have problems with low-level phonetic discrimination, which may or may not be correlated with their language comprehension abilities.
1.4  Lipreading in Aphasia

Few studies have investigated lipreading abilities in the aphasic population, and those that have done so examined lipreading only in terms of syllable discrimination. Campbell et al. (1990) examined the McGurk effect in 10 control subjects and four participants with localized brain damage (two with RH damage resulting in prosopagnosia and face processing problems, and two with LH damage, one with resulting word-meaning deafness and the other with pure alexia). They found that while the control subjects produced fused responses in the audiovisual condition with incongruent stimuli, the brain-injured subjects showed inconsistent results. The subject with word-meaning deafness relied very heavily on the visual channel in the audio-visual condition, while the participant with pure alexia had great difficulty with seen speech and reported no fusion responses. The two individuals with RH damage were both able to lipread normally, despite their visual-spatial deficits. Shindo et al. (1991) found that individuals with word deafness were able to use lipreading to improve their comprehension of speech. Pure word deafness is defined as an “inability to comprehend spoken words and to repeat speech, with preserved ability to identify nonverbal sounds” where “reading, writing and speaking are also preserved” (Hayashi & Hayashi, 2007, p. 863). This disorder appears to be restricted to the auditory channel for language stimuli, and is therefore much narrower in scope than the deficits typically observed in aphasia. Three more recent studies have specifically examined lipreading in adults with aphasia. The first study (Youse, Cienkowski, & Coelho, 2004) investigated the ability of one person with aphasia to identify the syllables /bi/, /di/ and /gi/ in audio-only, visual-only, and audiovisual conditions.
The participant, JP, was an 82-year-old man who had sustained a thromboembolic CVA involving the left middle cerebral artery 9 months prior. At the time of the study, he was classified as having a mild anomic aphasia. Two non-brain-injured participants also took part in this study to serve as controls: one who was age-matched to the participant, and one who was 22 years old and representative of the population typically sampled in McGurk studies. The audiovisual condition consisted of both congruent and incongruent syllables. The incongruent tokens consisted of a mismatch between the heard and seen syllables, and were included to assess the McGurk effect. Fusion and combination responses were considered to be evidence of the McGurk effect, and therefore of integration of the two modalities. Eighteen tokens were presented in each of the audio-only and visual-only conditions, and 36 tokens were presented in the audiovisual condition. Participants were asked to listen to and/or watch each presentation and repeat aloud what had been said. The results showed that JP had difficulty in all conditions, with the worst performance in the visual-only condition. His accuracy was similar in the audio-only and audiovisual conditions, suggesting that he did not benefit from the added visual cues. For the incongruent stimuli, he produced fusion responses (he reported /di/ when presented with auditory /bi/ + visual /gi/); however, he appeared to be biased toward /di/ in all conditions, casting doubt on whether his fusion responses were truly evidence of integration. The authors concluded that JP was unable to use the visual cues provided by speechreading to enhance speech perception. The second study, by Schmid and Ziegler (2006), had two major purposes: 1) to examine the effects of visual speech in individuals with aphasia and/or apraxia of speech, and 2) to determine whether the effects of visual cues are specific to the linguistic domain.
Fourteen patients with left hemisphere damage due to stroke, trauma or tumor, and fourteen neurologically intact subjects, took part in this study. Two of the patients had apraxia of speech only, seven had aphasia only, and five had both aphasia and apraxia. All were native speakers of German. The stimuli consisted of pairs of reduplicated syllables (/apap/ - /afaf/, and /nini/ - /nyny/). To assess the non-linguistic domain, the authors paired the non-speech gestures for “kissing” and “whistling”, and rounded and unrounded alveolar clicks. The procedure consisted of a discrimination task. The stimuli were presented in four different conditions: 1) visual-only, 2) audio-only, 3) audio-visual, and 4) cross-modal matching (A x V), in which the first stimulus was presented auditorily only, and the second visually only. The subjects were asked to press one of two keys, indicating whether the two stimuli were the same or not. The results showed that the patients made more errors than the controls in all modalities. Their worst performance was in the visual-only condition. In the speech domain, they did not perform better in the audio-visual condition than in the audio-only condition, suggesting that they did not benefit from the added visual cues. They also had particular difficulty in the cross-modal condition; however, this proved to be a difficult task for some of the control participants as well. In the non-speech domain, however, the patients’ performance was better in the bimodal than in the audio-only condition, demonstrating that they were able to utilize the redundant visual and acoustic information for the non-speech gestures. The authors point out that this demonstrates that the lack of improvement in the bimodal condition for the speech stimuli did not result from a visual processing deficit.
The authors concluded that the results of their study support the claim that impaired speech perception in aphasia originates at the level of supra-modal phonological representations; in other words, the difficulties are present regardless of modality. The third study, by Hessler et al. (2010), examined the influence of speechreading on the syllable discrimination abilities of six Dutch subjects with aphasia. All had difficulties with sound discrimination based on the PALPA non-word discrimination task. Their hearing was judged to be within functional limits by their speech therapists. The stimuli consisted of monosyllabic non-words with a CVC(C) structure. Participants were asked to discriminate between pairs of syllables in three conditions: 1) audio-visual, 2) audio-only, and 3) visual-only. The pairs differed in one, two, or three feature dimensions (i.e., place, manner, and voicing), and the rimes were always identical within each pair. For example, the pair /py:m/ - /ty:m/ differed in place of articulation only; the pair /bo:f/ - /to:f/ differed in both place and voicing. The results suggested a significant difference between the three conditions, with the highest performance in the audio-visual condition. Further analyses by feature dimension, however, failed to show a difference between the audio-only and audio-visual conditions for both voicing and place of articulation. A trend toward higher performance in the audio-visual condition was found for manner of articulation, but this also failed to reach significance. The authors also found that in the audio-only and audio-visual conditions, pairs of syllables that differed in only one feature dimension were more difficult to discriminate; no difference was found between two and three dimensions. Taken together, their findings suggest that speechreading is helpful for speech discrimination when more than one feature dimension distinguishes the two syllables.
As the authors note, “the general improvement is, therefore, not due to one dimension in particular, but rather to a summation of improvement on all of them” (p. 990).

1.5  The Present Study

This study has two purposes. First, it aims to partially replicate previous research on low-level visual speech perception in aphasia. The evidence to date has generally failed to show consistent benefits of visual cues on speech perception; however, the paucity of available studies, combined with the relatively low number of participants in these studies, suggests that our understanding of this issue is rather limited. This part of the study will also serve as a comparison point for the main part of the study. The main goal of this study is to investigate the role of lipreading at the discourse level in individuals with aphasia. As mentioned in the introduction and throughout this chapter, speech perception and language comprehension have different types of processing demands. A discourse comprehension task is more representative of the type of linguistic input that people with aphasia face every day, and therefore has greater ecological validity. There are two experimental tasks in this study: a discrimination task and a discourse task. In the discrimination task, the participants will be presented with nonsense syllable pairs in two conditions, audio-only and audio-visual. In the discourse task, two levels of syntactic complexity will be used in order to determine whether lipreading has a differential effect on comprehension depending on the grammatical complexity of the passage.

1.5.1  Research Questions and Hypotheses

The following query was posed as a question, since previous research and theories of aphasia and lipreading do not provide clear indications as to expected outcomes:

1. Will a positive correlation be observed between the low-level phonetic discrimination task and the higher-level discourse comprehension task?
Even though good phonetic discrimination is often considered a prerequisite for language comprehension, the literature suggests that many individuals with aphasia have speech perception difficulties which do not necessarily translate into poor language comprehension. This may be because people with aphasia are particularly sensitive to contextual influences during language processing, which would be available in a discourse context, but not during a syllable discrimination task. On the other hand, the discourse task requires processing on a number of different levels (phonological, lexical, and syntactic) and puts greater demands on the working memory system, which may be compromised in individuals with aphasia.

The hypotheses for this study are:

1.  The participants in this study will benefit from the added visual cues provided in the audio-visual condition in both the discrimination and discourse tasks. This prediction is based on the compelling evidence in the literature that visual cues help speech perception and language comprehension in many different populations, and in a variety of processing conditions.

2.  In the discourse task, there will be an interaction between modality and grammatical complexity, such that the advantage observed in the audio-visual condition will be greater for the complex passages. Since the complex stimuli add greater processing demands, the redundancy of information provided by the visual cues is likely to be more beneficial in the complex condition.

3.  Working memory will positively correlate with performance on the discourse comprehension task. This expectation is based on the large body of research showing that working memory is involved in language comprehension.

CHAPTER 2: METHOD

2.1  Introduction

This chapter describes in detail the methodology used in this study. The subjects and the two experimental tasks are explained in the following sections.
The research designs, stimuli, and procedures for each task are outlined. Finally, the general procedure for this study is explained in detail in the last section of this chapter.

2.2  Participants

Six participants volunteered for and consented to participate in this study (see Table 1 for demographic and assessment data). They were recruited through stroke and aphasia recovery groups as well as private speech-language pathology practices in the Vancouver area. They all responded to a letter inviting them to participate. Four were male and two were female, and they ranged in age from 55 to 64 years (mean of 60.2 years). All subjects had English as a first language except one (L1: Farsi); they all spoke English fluently prior to the onset of their aphasia. All participants reported that a stroke had caused their aphasia, and the time elapsed since their stroke ranged from 11 months to 13 years (mean of 8.4 years). At the time of the study, they were all receiving weekly speech-language therapy and/or were involved in aphasia recovery groups. They had the option of either coming to the University of British Columbia (UBC) for this study or having the experimenter come to their home; 4 chose to come to UBC, and 2 preferred to have the study conducted in their home. Those who came to UBC were offered reimbursement for travel expenses. Participants also had the option of completing the entire study in one sitting or on two separate days. They all chose to complete the study in one session. Table 1 summarizes the subjects’ background information, their results on the hearing screen, and their performance on the memory, language, and apraxia testing.

2.2.1  Vision Screening

Participants’ vision was informally screened using 8 different 12-point font letters on a computer screen at approximately 40 cm (16 inches). Subjects with corrected vision were instructed to wear their glasses.
They were asked to sit on a chair facing the screen and to report the letters on the screen without leaning closer to the computer monitor. A letter chart and a blank sheet of paper were provided for those whose expressive language and/or speech deficits prevented them from verbally reporting their answers. Two participants missed one of the 8 letters, and another reported that he generally used a different pair of glasses for computer use; however, this participant was able to accurately identify all 8 letters, and therefore successfully passed the screen. Overall, all subjects were found to have adequate vision for the purposes of this study, as they were all able to read at least seven of the eight letters.

2.2.2  Hearing Screening

No participant reported wearing hearing aids; however, one participant complained of long-standing hearing loss. The hearing screen was conducted with the aid of a calibrated Maico MA 19 Screening Audiometer. Pure tones were presented to each ear independently at frequencies of 500, 1000, 2000, and 4000 hertz. Participants were asked to either close their eyes or look away from the experimenter, and to let the experimenter know if the tone was heard, either through raising their hand or saying “yes”. All frequencies were screened at 25 dB HL, since 25 dB represents the cutoff for normal hearing in adults (Bess & Humes, 1995), and this higher intensity reduces the possibility of background noise contamination, as the screenings were not conducted in a sound booth. The results indicated that 3 subjects successfully passed the hearing screen at all frequencies, while the remaining 3 participants failed at 4000 Hz bilaterally. The stimuli used in the experimental tasks for this study were presented at comfortable listening levels, thereby minimizing the effects of hearing loss to a certain extent. In light of these results, however, hearing loss became a variable of interest in the interpretation of the data.
2.2.3  Working Memory Testing

Participants’ working memory was tested using an adapted version of the Digit Ordering task (MacDonald, Almor, Henderson, Kempler, & Andersen, 2001). This particular task was selected because the stimuli (i.e., numbers) require very little language processing, and thus should be minimally affected by anomia, but nonetheless remain in the verbal domain. On the other hand, there is recent research indicating that some individuals with aphasia may find processing of numbers more challenging than content words (Messina et al., 2009), in which case digit ordering may present increased processing demands. The Digit Ordering task consisted of listening to a series of digits and repeating them back in ascending numerical order (e.g., “6, 3, 8” becomes “3, 6, 8”). The spans ranged from 2 to 6 digits, and 4 trials were presented at each level. Testing was discontinued when all 4 trials at a particular level were incorrect, and all subsequent trials at higher levels were then considered incorrect in the calculation of final scores. A number chart and a blank sheet of paper were used for participants who had difficulty expressing their answers verbally. It is important to note that the use of the number chart allowed the participants to use a visual strategy throughout this task, which is generally not available to subjects when only verbal responses are acceptable. All digit series were pre-recorded using the examiner’s voice and presented on a computer using the E-Studio application of the computer software E-Prime 2.0 Professional (Psychological Software Tools Inc., 2008). Performance on this task varied widely across participants. Results ranged from 2/20 to 20/20 (M = 9.5, SD = 6.9). Participant #6, who scored the lowest on this task (2/20), initially appeared to have significant difficulty understanding the task despite numerous examples and prompts given by the examiner.
He was able to remember series of 2 digits but was unable to rearrange them into ascending numerical order; the two items he answered correctly were presented to him in ascending order and therefore did not require any reordering. He had difficulty remembering strings of 3 digits, and his attempts at manipulating them were unsuccessful. This participant appears to be a good example of how number processing, particularly when it involves a manipulation of the sequence, may present significant challenges for some individuals with aphasia.

2.2.4  Language Testing

Subjects completed the 60-item Boston Naming Test (BNT) (Kaplan, Goodglass, & Weintraub, 2001) and the following 3 subtests of the Boston Diagnostic Aphasia Examination (BDAE) (Goodglass, Kaplan, & Barresi, 2001): Complex Ideational Material, Embedded Sentences, and the short form of the Reading Comprehension – Sentences and Paragraphs subtest. These tests were administered according to formal procedures in order to obtain information regarding the participants’ language abilities. However, since the BDAE was not administered in its entirety, this study was unable to classify participants according to specific aphasia subtypes. Overall, scores ranged from 5/60 to 58/60 (M = 35, SD = 18.8) on the BNT. On the BDAE, results ranged from 6/12 to 11/12 (M = 8.8, SD = 2.2) on the Complex Ideational Material subtest, 4/10 to 10/10 (M = 6.8, SD = 2.8) on the Embedded Sentences section, and 2/4 to 4/4 (M = 3.5, SD = 0.8) on the reading comprehension subtest.

2.2.5  Apraxia Screening

Screening for oral apraxia consisted of non-speech movements and movement combinations such as coughing and tongue protrusion. Alternating and sequential motion rates using the syllables /pa/, /ta/, and /ka/, as well as repetition of the multisyllabic words “animal”, “stethoscope”, and “statistical analysis”, were used to screen for verbal apraxia.
Overall, one participant was found to have verbal apraxia, two were determined to have both oral and verbal apraxia, and the remaining three had no apraxia.

Table 1
Summary of participants' profiles

Subject  Age  Gender  Cause of Aphasia  Time Post-Onset  Mother Tongue  Hearing Screen     Memory Task  BNT    BDAE CIM (1)  BDAE ES (2)  BDAE RC (3)  Oral apraxia?  Verbal apraxia?
P1       58   M       CVA               10.5 years       English        Failed at 4000 Hz  20/20        58/60  10/12         9/10         4/4          no             yes
P2       61   F       CVA               10.5 years       English        Passed             5/20         39/60  10/12         5/10         4/4          no             no
P3       62   M       CVA               5.5 years        English        Passed             7/20         25/60  11/12         9/10         4/4          no             no
P4       55   M       CVA               11 months        English        Failed at 4000 Hz  7/20         33/60  6/12          4/10         2/4          yes            yes
P5       64   F       CVA               13 years         Farsi          Passed             16/20        50/60  10/12         10/10        4/4          no             no
P6       61   M       CVA               10 years         English        Failed at 4000 Hz  2/20         5/60   6/12          4/10         3/4          yes            yes

(1) BDAE Complex Ideational Material
(2) BDAE Embedded Sentences
(3) BDAE Reading Comprehension

2.3  Discrimination Task

2.3.1  Overview

The purpose of this task was to partially replicate previous research findings on the influence of visual speech on syllable discrimination abilities in individuals with aphasia, and to provide a comparison point for the discourse task. Since previous research has often used non-words as stimuli for these types of studies, nonsense syllables were also chosen for the present study. In addition, the use of non-words essentially eliminates the possibility of semantic processing, thereby maximizing the difference between the discrimination and discourse tasks (i.e., maximally non-meaningful vs. maximally meaningful). The discrimination task consisted of listening to 48 pairs of syllables (24 in each of the two modality conditions) and pressing one of two buttons on a computer keyboard to indicate whether the syllables were the same or different.
The following sections provide a detailed description of the stimuli, the steps involved in the preparation of the stimuli, and the procedure for the discrimination task.

2.3.2  Stimuli Description

The syllables used for this task were adapted from Miceli et al.’s (1978) study of syllable discrimination in aphasia. The stop consonants /p, t, k, b, d, g/ were used for the onsets and combined with the rime /em/ to create the nonsense syllables /pem/, /tem/, /kem/, /bem/, /dem/, and /gem/. The pairs were created such that only place of articulation differed across the two items of each pair (i.e., both items had either voiced or voiceless onsets). Twenty-four pairs of syllables were created overall. Each syllable was paired with itself and with the other two syllables sharing the same voicing. Every possible combination was used, and each pair of identical syllables was recorded and presented twice to create an equal number of “same” and “different” syllable pairs. The “same” pairs were different recordings of the same syllables to ensure phonological encoding and processing, thereby preventing subjects from making decisions based solely on superficial phonetic information such as intonation.

2.3.3  Stimuli Preparation

A 38-year-old female native English speaker recorded the syllables, which were pronounced according to the rules of standard Canadian English phonology. The recording took place in a sound booth. A JVC GZ-MG505U camcorder was placed on a tripod facing the speaker, who wore 2 Shure TI Transmitter wireless microphones connected to the camera. Directly in front of and below the camera was a table, where a laptop computer was placed to allow the speaker to read the syllables on the screen. The speaker sat at eye level with the laptop screen, and almost at eye level with the camera. Each pair of syllables was presented in large font on the screen, one at a time, using Microsoft PowerPoint.
The speaker controlled the PowerPoint slideshow using a wireless computer mouse. The speaker was instructed to keep a constant 2-second delay between the two syllables in each pair. Using the computer software Corel VideoStudio Pro X3, each pair was cut approximately half a second before the start of the first syllable and after the end of the second syllable. Each syllable pair was then saved as a video file in WMV format, and the sound file for each video was then extracted and saved in WAV format. The WMV movie files were then used in the audio-visual condition, and the WAV sound files in the audio-only condition. The E-Studio application of the computer software E-Prime 2.0 was used to program the experiment, present the stimuli, and record participants’ responses. Two different lists were created: one with 24 video files, and one with 24 sound files. Each of these two lists was set up to present the stimuli in random order. A response by the subject was required for each subsequent syllable pair to be presented.

2.3.4  Procedure

All subjects were tested individually. They sat comfortably on a chair approximately 16 inches from the computer screen, and wore JVC HA-G33 headphones. The subjects who came to UBC were tested using a PC desktop computer running Windows XP, with a dual-core processor running at 2.8 GHz, 2 GB of RAM, Intel(R) G35 Express Chipset graphics, and an Acer X233H widescreen 23-inch LCD monitor. The subjects who chose for the experimenter to come to their home were tested on an LG R500 laptop running Windows 7, with a dual-core processor running at 2 GHz, 2.5 GB of RAM, and an NVIDIA GeForce 8600M GS graphics card. The laptop screen measured 15.4 inches. On both the desktop and laptop computers, the video display measured approximately 5 x 7 inches. The participants were told that they would be listening to pairs of syllables, and that their task was to decide whether the two syllables were the same or different.
A green sticker with the letter Y was placed on the S key of the computer keyboard, and a red sticker with the letter N was placed on the L key; participants were instructed to press the green key if the syllables were the same, and to press the red key if the syllables were not the same. These instructions were given in person by the experimenter, and were subsequently also presented in writing on the computer screen at the beginning of the task. Response time and accuracy were recorded by the E-Prime software. Four practice items were used to familiarize the subjects with the task. All six participants completed both the audio-only and the audio-visual discrimination tasks. The order of presentation of these two conditions was counterbalanced across participants.

2.4  Discourse Task

2.4.1  Overview

The main purpose of this study was to determine whether allowing for lipreading enhances the comprehension of language at the discourse level in individuals with aphasia. Two independent variables were manipulated: modality of presentation (audio-only vs. audio-visual) and syntactic complexity of discourse content (simple vs. moderately complex), resulting in 4 experimental conditions (see Table 2). A 2x2 repeated measures design was used, such that every participant was tested in all 4 experimental conditions.

Table 2
Experimental conditions

               Simple                 Complex
Audio-only     Audio-only Simple      Audio-only Complex
Audio-visual   Audio-visual Simple    Audio-visual Complex

Two dependent measures were employed: reaction time (RT) and comprehension question accuracy. The auditory moving window method (Ferreira, Henderson, Anes, Weeks, & McFarlane, 1996), a type of self-paced listening technique, was used as an on-line measure of participants’ listening/comprehension times during each discourse passage. As stated in the previous chapter, on-line measures allow us to gain insight into how processing takes place in real time, while a person is engaged in a comprehension task (Mitchell, 2004).
In the auditory moving window technique, the recorded auditory discourse input is segmented into sentences and phrases, and the subject is required to press a key to hear each subsequent segment. This allows the person to move through the spoken passage at their own pace. The main dependent measure is the time between the end of a segment and the subsequent key press, which is argued to reflect comprehension processing as it takes place at natural parsing junctures (Little, Prentice, Darrow, & Wingfield, 2005). Post-passage comprehension questions were also used to ensure that the subjects were processing the passages for meaning.

2.4.2  Stimuli Description

The stimuli for the discourse task consisted of 40 short stories that were created by the investigator of this study (see Appendix A). The stories were all about fictional characters to ensure that the subjects would be unable to answer the questions based solely on general world knowledge. All stories consisted of 4 sentences each. Two levels of syntactic complexity were used: simple and moderately complex (henceforth referred to as “complex”). Twenty stories were simple, while the remaining 20 were complex. The 20 stories in each complexity level were divided into two lists, resulting in 4 lists of 10 stories each. The rationale behind creating 2 lists for each syntactic complexity level was that every participant would be tested in both the audio-only and the audio-visual conditions at both complexity levels using different stories. The increased syntactic complexity in the complex condition was achieved by increasing the number of words per sentence and including right- and left-branching clause structures. This consequently created longer passages with more verbs per sentence. Word frequency was approximately the same in all conditions. In order to employ the auditory moving window technique, each story was segmented into sentences and clauses.
The stories were segmented in places where pauses occur naturally in running speech (e.g., clause boundaries) to ensure that the segmentation would not negatively impact comprehension. The complex passages consequently had more segments per story, as they contained more clauses. A number of parameters were carefully controlled so that the two lists within each complexity level were equivalent. The number of words and verbs per segment and per sentence were calculated and averaged, as well as the number of segments per passage. In addition, the number of right- and left-branching clauses was counted and averaged in each list, thereby providing a measure of syntactic complexity. Word frequency, measured using the Kucera-Francis written word frequency (MRC Psycholinguistic Database, 2005), was also calculated for content words only. Table 3 summarizes the stimuli characteristics of each list (S refers to simple, and C refers to complex); the goal was to create lists that were equivalent within complexity levels.

Table 3
Average stimuli characteristics of each list

List  #words/seg  #seg/passage  #words/sentence  #verbs/sentence  #verbs/passage  R-branch./passage  L-branch./passage  Word freq./passage
S-1   7.98        4.5           8.9              1.65             6.6             0                  0                  251.66
S-2   8.15        4.3           8.7              1.68             6.7             0                  0                  264.01
C-1   12.85       6.7           20.9             3.68             14.7            1.9                0.3                252.87
C-2   13.00       6.4           21.0             3.75             15.0            2.3                0.6                224.81

In addition to the auditory moving window procedure, two yes/no questions per story were used as an off-line measure of comprehension (see Appendix A). One of the two questions was considered a detail question, in that the answer to this question was explicitly stated in the passage. Names of fictional characters from the stories were never the focus of the detail questions, as they are easily forgotten and do not constitute a valid indication of comprehension.
The other question was an inference question, in which the answer was not directly mentioned in the passage but could be inferred from the story. The questions were presented one after the other following each story, simultaneously in both sound and written form, regardless of whether the passage had been in the audio-only or audio-visual condition.

2.4.3  Stimuli Preparation

The passages for the discourse task were recorded by the same female native English speaker who recorded the syllables for the discrimination task. The recording took place on the same day, in the same sound booth, and with the same equipment. The speaker sat against a neutral grey background, at eye level with the laptop computer placed on a table directly in front of and below the video camera, and controlled the PowerPoint slideshow herself with the wireless computer mouse. The stories were presented one segment at a time on the computer screen. She was instructed to speak naturally but not rapidly, use normal intonation, and avoid making unusual facial expressions or head movements. Her mean speech rate for the simple passages was approximately 135 words per minute (WPM) (List S-1 = 133.6, SD = 11.5; List S-2 = 135.4, SD = 12.6), and 161 WPM for the complex passages (List C-1 = 162.4, SD = 11.6; List C-2 = 158.8, SD = 25.2), based on a random selection of half of the stimuli. The higher speech rate in the complex condition, which arose naturally during the recording, may contribute to the increased complexity of these passages. The comprehension questions were recorded by a 31-year-old male English speaker native to Alberta. The recording took place in a sound booth with the same Shure wireless microphones and JVC video camera as used for the passages. A male speaker was chosen for the questions to ensure that there was a strong voice distinction between the stories and the questions.
As with the female speaker, he was instructed to speak naturally, relatively slowly, and with a normal rising intonation for questions. His mean speech rate was approximately 156 WPM (SD = 25.3), based on a random selection of half of the stimuli. The sound from the video recording of the questions was extracted using Adobe Premiere Pro CS5 and edited with the software Audacity 1.2.6. Each question was saved as an individual WAV file. The discourse passages were edited with Corel VideoStudio Pro X3. Each passage was split into the predetermined segments, which were then saved both as WMV video files and WAV sound files. The exact places where the cuts were made were characterized by a natural break in articulation; the speaker had completely finished articulating the last word of the first segment, but had not yet started saying the first word of the following segment. The sections were split such that the end of one segment corresponded to the beginning of the next, thereby creating smooth transitions between segments when played back to back. E-Prime 2.0 was used to program the experiment for this task. Each of the four lists was programmed into a different experiment file to easily allow subjects to take breaks between lists. Each list was used twice, once using the WAV sound files (for the audio-only condition) and once using the WMV video files (for the audio-visual condition), resulting in 8 different experiments. Two practice trials, one using sound files and one using videos, were created to allow participants to familiarize themselves with the procedures. These consisted of 4 different passages that were not used in any of the experimental lists. The program was set to present each story in random order, followed by the associated 2 comprehension questions, which were also presented in random order.
The experiment was programmed to accept user inputs (i.e., key presses by participants) only after the sound or video files had played in their entirety, to prevent subjects from accidentally pressing keys and missing parts of the passages or questions. Each story was introduced by displaying the words “NEXT STORY” on the screen for 3 seconds; the word “Questions” was displayed for 2 seconds to signal to the participants that the questions were next. A 2-second blank screen was added between each question pair to give participants time to prepare for the second question.

2.4.4  Procedure

Subjects completed the experiment individually. They sat on a chair facing the computer. The desktop and laptop computers used for this task were the same as the ones used for the discrimination task. Prior to the start of the experiment, participants were told that they would be listening to short stories consisting of 4 sentences each, and answering 2 questions about each of them. Some of the stories would be in audio form only, while others would be videos; the subjects were told that for the videos, it was important to both watch and listen to the speaker. The examiner explained to them that the stories had been broken up into sentences or parts of sentences, and that to hear each part, they had to press the NEXT button (i.e., a sticker with the word NEXT was placed on the space bar of the keyboard). They were given a concrete example by the examiner (e.g., “You will hear something like Ben went to the store. ___ (pause). To hear the next part, you will need to press on the NEXT button.”). They were told to use the green Y and the red N buttons (i.e., the S and L keys on the keyboard, as in the discrimination task) to answer the questions. The participants then completed the practice trials, and were encouraged to ask questions if they had any concerns. All subjects appeared to understand the nature of the task after only one or two of the four practice stories.
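The gating and timing logic just described — key presses are accepted only after a segment's audio has finished playing, and the reaction time of interest is the delay from segment offset to the accepted key press — can be sketched in Python. This is a minimal illustration of the measure, not the actual E-Prime implementation; the function name and the timestamps (in milliseconds) are hypothetical.

```python
def moving_window_rts(segment_end_times, key_press_times):
    """Compute auditory-moving-window RTs: the delay between the end of
    each segment's audio and the first key press that follows it.
    Presses that arrive while a segment is still playing are ignored,
    mirroring the gating used in the experiment."""
    rts = []
    presses = iter(sorted(key_press_times))
    press = next(presses, None)
    for end in segment_end_times:
        # Discard presses that occurred before the segment finished playing.
        while press is not None and press < end:
            press = next(presses, None)
        if press is None:
            break  # No valid press recorded for this segment.
        rts.append(press - end)
        press = next(presses, None)
    return rts

# Hypothetical timestamps (ms): two segments ending at 2000 and 5500 ms.
# The press at 1400 ms comes too early and is ignored.
print(moving_window_rts([2000, 5500], [1400, 2800, 6100]))  # [800, 600]
```

The same press that ends one segment's RT also triggers the next segment, which is why each accepted press is consumed before moving on.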
Following the practice items, the participants began the experiment. E-Prime presented the stimuli and recorded participants’ answers and response times with millisecond accuracy. Every participant listened to all 40 passages: for each of the two complexity levels, one list was used for the audio-only condition, and the other for the audio-visual condition. This created four possible combinations of lists for the different conditions (see Table 4).

Table 4
Possible combinations of lists across conditions

               Simple Audio  Simple A-V  Complex Audio  Complex A-V
Combination 1  S-1           S-2         C-1            C-2
Combination 2  S-2           S-1         C-1            C-2
Combination 3  S-1           S-2         C-2            C-1
Combination 4  S-2           S-1         C-2            C-1

Each participant was assigned to one of these predetermined combinations. Due to the low number of subjects, all four combinations were used for the first four participants and then recycled for the remaining two, ensuring that each combination was used at least once. The order of presentation of the four lists for each participant was randomized. However, since each experiment consisted of a different file, the order of presentation had to be randomly selected prior to the start of the experiment. With four experiments, there were 24 possible orders of presentation (i.e., 4x3x2x1). These 24 possibilities were numbered from 1 to 24; a random number generator was then used to select an order of presentation for each participant. Only 6 of the 24 possibilities were used, as there were 6 participants in this study. Participants were encouraged to take breaks between lists if they felt tired. Four of the subjects took one short break during this task, and one took two breaks.

2.5  General Procedure

Informed Consent: Two versions of the consent form were available to participants, a regular version and an aphasia-friendly version. The aphasia-friendly consent form was adapted from Kagan, Winckel, and Shumway (1996) and consisted of short phrases accompanied by pictures to facilitate comprehension.
Participants were introduced to both versions, and were encouraged to select the one most appropriate for their current reading abilities. All chose the regular version. The experimenter explained the purpose of this study in detail, as well as all information regarding the study tasks, compensation, risks, benefits, and confidentiality. Upon consenting, all subjects were given a copy of the consent form for their records. The session began by obtaining background information from the subjects, and conducting the hearing and vision screenings. The working memory, language, and apraxia testing followed. The experimental tasks were completed last. Half of the subjects began with the discrimination task, and half with the discourse task. The intensity of the auditory signal was determined during the practice phase, and was set at a comfortable listening level for each participant. Lastly, the participants were asked for feedback regarding the difficulty of the experimental tasks. Specifically, they were asked whether they had found the discrimination or the discourse task easier, and whether the added visual cues from the videos had been helpful, distracting, or had made no difference.

2.6  Analysis

Data analysis consisted of two main parts: 1) group analyses, using descriptive and inferential statistics, and 2) descriptive analyses of individual cases. A paired samples t-test was used to determine whether there was a significant difference in performance between the audio-only and audio-visual conditions for the discrimination task. A 2x2 repeated measures ANOVA was performed on both the reaction time data from the moving window procedure and the accuracy data from the comprehension questions to examine differences between the audio-only and audio-visual conditions across the two complexity levels. Follow-up pairwise comparisons were conducted using paired samples t-tests to compare performance across conditions in the discourse task.
Correlation analyses were performed between working memory scores and RTs and accuracy on the two experimental tasks. Because three of the research subjects failed the hearing screen at 4000 Hz bilaterally, participants’ performance on the discrimination and discourse tasks was examined in light of the hearing screen results. Individual case analyses involved comparisons of each participant’s performance across tasks and conditions, and examination of mediating factors such as working memory abilities, language skills, and hearing loss.

CHAPTER 3: RESULTS

3.1  Overview

This chapter begins by reporting group results for the discrimination and discourse tasks. Then, since aphasia tends to be characterized by wide variability in language skills, the second main section of this chapter examines each participant individually in a case studies approach.

3.2  Group Results

3.2.1  Discrimination Task

The data for the discrimination task consisted entirely of accuracy scores. The percent correct was calculated for each of the two conditions (i.e., audio-only vs. audio-visual) for each participant, and is reported in Table 5, along with the group mean percent correct.

Table 5
Percent correct (out of 24 trials) on the discrimination task in each condition

Participant  Audio-only    Audio-Visual
P1           75.0          83.3
P2           91.7          95.8
P3           66.7          54.2
P4           79.2          79.2
P5           100.0         100.0
P6           79.2          87.5
Mean (SD)    82.0 (12.0)   83.3 (16.2)

A paired samples t-test (PASW Statistics 18, SPSS) was used to compare accuracy scores across the two conditions. There was no significant difference in the scores between the audio-only and audio-visual conditions, t(5) = .43, p = .68. A small effect size was found, r = .19. These findings do not support Hypothesis 1, which stated that performance would be better in the audio-visual condition. A Pearson’s product-moment correlation coefficient was computed and yielded a strong correlation between the two conditions, as expected; r = 0.89, p = .01.
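As an illustration, the paired samples t-test reported above can be reproduced from the accuracy scores in Table 5 with a few lines of Python. This is a sketch for verification purposes using only the standard library; the original analysis was conducted in PASW Statistics 18.

```python
import math
from statistics import mean, stdev

# Percent-correct scores from Table 5.
audio_only   = [75.0, 91.7, 66.7, 79.2, 100.0, 79.2]
audio_visual = [83.3, 95.8, 54.2, 79.2, 100.0, 87.5]

# Paired samples t-test: t = mean(d) / (sd(d) / sqrt(n)),
# where d is the per-participant difference between conditions.
diffs = [av - ao for ao, av in zip(audio_only, audio_visual)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))

print(f"t({n - 1}) = {t:.2f}")  # t(5) = 0.43, matching the reported value
```

With n = 6 participants, the test has n - 1 = 5 degrees of freedom, which is why the result is reported as t(5).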
3.2.2  Discourse Task

Preparation of the Data for Analysis

As Ratcliff (1993) discusses, reaction time data are almost always contaminated by outlier responses: response times that fall outside the expected range and that are generated by processes not under investigation, such as fast guesses, guesses based on the subject's estimate of the usual time to respond, responses reflecting a failure to reach a decision, or lapses of attention. It was therefore necessary to trim the data to eliminate clear outliers. Tyler's (1992) guidelines for trimming RT values obtained from clinical populations were followed. First, extreme outliers, defined as any reaction time lower than 100 ms or higher than 6000 ms, were excluded and replaced by missing values. These cutoff values were selected because reaction times faster than 100 ms are unlikely to reflect true processing, and subjects were highly unlikely to require more than 6 seconds to process a sentence or clause. These exceedingly high values most likely reflected interruptions (e.g., doorbells) or lapses of attention. A total of fifty RTs were eliminated this way, representing less than 4% of the data. The means and standard deviations (SDs) per condition per participant were then calculated. For each subject in each of the four conditions separately, the mean +/- 2 SDs were taken as minimum and maximum values; any RTs falling above the maximum or below the minimum were replaced by those values. New means and SDs were obtained for each participant and condition separately, and the missing values were then replaced by the new means. Only 7 RTs were replaced this way, representing less than 3% of the data. The resulting means per subject per condition were then used for analysis.

Reaction Time Data

Table 6 presents participants' and group mean response times in each of the four conditions.
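The two-stage trimming procedure just described can be sketched as follows. This is a schematic reimplementation, not the original analysis code (the function name and data layout are illustrative); it would be applied separately to each participant's RTs in each condition:

```python
import numpy as np

def trim_rts(rts, low=100, high=6000):
    """Two-stage RT trimming (after Tyler, 1992), applied per
    participant and per condition. Schematic reimplementation."""
    rts = np.asarray(rts, dtype=float)

    # Stage 1: extreme outliers become missing values.
    rts[(rts < low) | (rts > high)] = np.nan

    # Stage 2: winsorize at the mean +/- 2 SD of the remaining RTs.
    m, sd = np.nanmean(rts), np.nanstd(rts, ddof=1)
    rts = np.clip(rts, m - 2 * sd, m + 2 * sd)  # NaNs pass through clip

    # Finally, replace missing values with the mean of the trimmed data.
    rts[np.isnan(rts)] = np.nanmean(rts)
    return rts
```

For example, `trim_rts([50, 900, 1000, 1100, 7000, 950])` discards the 50 ms and 7000 ms responses and fills both gaps with the mean of the surviving, winsorized RTs.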
Figure 1 visually represents the different group means and standard errors (SA = simple audio-only; SV = simple audio-visual; CA = complex audio-only; CV = complex audio-visual).

Table 6
Mean RTs in milliseconds for each subject in the four conditions on the discourse task

Participant    Simple A-only    Simple A-V    Complex A-only    Complex A-V
P1             823              440           909               646
P2             672              501           729               603
P3             1345             1472          1606              1377
P4             1362             1430          1231              1164
P5             951              936           1317              765
P6             784              1040          1163              905
Mean (SD)      990 (296)        970 (441)     1159 (309)        910 (306)

Figure 1. Group mean (SE) RTs across conditions on the discourse task

The Kolmogorov-Smirnov test of normality suggested a normal distribution of the data for all conditions, D(6) = .22 for simple audio-only, D(6) = .19 for simple audio-visual, D(6) = .17 for complex audio-only, and D(6) = .18 for complex audio-visual, p > .05 for all. Given the normal distribution of scores, a repeated measures ANOVA was performed to determine 1) main effects of complexity and modality, and 2) interaction effects. No main effect of complexity was found, F(1,5) = 1.12, MSE = 16080.35, p = .34. RTs were faster in the audio-visual than in the audio-only conditions (i.e., a main effect of modality), which was marginally significant, F(1,5) = 5.38, MSE = 20184.92, p = .06. The interaction between modality and complexity was also found to be marginally significant, F(1,5) = 3.97, MSE = 19915.92, p = .10 [1]. Follow-up pairwise comparisons revealed a significant difference between the complex audio-only and complex audio-visual conditions, t(5) = 3.63, p = .02, but not between the simple audio-only and simple audio-visual conditions, t(5) = .21, p = .84.
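As an arithmetic check on the modality effect reported above: in a 2x2 within-subjects design, the F for one main effect equals the square of a paired t-test computed on each participant's means collapsed across the other factor. A sketch from the Table 6 values (scipy assumed; because the table is rounded to the millisecond, the result differs slightly from the reported F(1,5) = 5.38):

```python
# Main effect of modality from the Table 6 RT means: collapse across
# complexity, then square the paired t (equivalent to the repeated
# measures ANOVA F for a factor with two levels).
from scipy import stats

simple_ao  = [823, 672, 1345, 1362, 951, 784]
simple_av  = [440, 501, 1472, 1430, 936, 1040]
complex_ao = [909, 729, 1606, 1231, 1317, 1163]
complex_av = [646, 603, 1377, 1164, 765, 905]

# Each participant's mean RT per modality, collapsed across complexity
ao = [(s + c) / 2 for s, c in zip(simple_ao, complex_ao)]
av = [(s + c) / 2 for s, c in zip(simple_av, complex_av)]

t, p = stats.ttest_rel(ao, av)
print(f"F(1,5) = {t ** 2:.2f}, p = {p:.3f}")  # close to the reported 5.38
```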
Omega squared effect size calculations, based on Field's (2005) recommendations for repeated measures ANOVA, suggested a virtually nil effect size for complexity (ω2 = .002), a small effect size for the interaction between complexity and modality (ω2 = .052), but a large effect size for modality (ω2 = .399). These findings provide support for Hypotheses 1 and 2.

Comprehension Accuracy Data

The data for the comprehension questions consisted of accuracy scores. As for the discrimination task, percent correct was calculated for each participant in all four conditions, as displayed in Table 7 for individuals and as a group. Figure 2 presents the group means across conditions.

[1] Given the small sample size, and concomitant low power, marginally significant effects (i.e., p = .06 to .10) were explored further so as to account for the possibility of a type II error.

Table 7
Percent correct (out of 20 questions) on the discourse task in the four conditions

Participant    Simple A-only    Simple A-V    Complex A-only    Complex A-V
P1             100              100           100               100
P2             80               85            95                80
P3             85               80            90                90
P4             80               95            80                90
P5             85               100           95                95
P6             75               90            55                65
Mean (SD)      84.17 (8.61)     91.67 (8.16)  85.83 (16.56)     86.67 (12.52)

Figure 2. Group mean (SE) comprehension question accuracy across conditions on the discourse task

A repeated measures ANOVA was conducted on the comprehension question data. There was no main effect of complexity, F(1,5) = .14, MSE = 116.67, p = .72, or modality, F(1,5) = 1.76, MSE = 59.17, p = .24. In addition, no significant interaction between complexity and modality was found, F(1,5) = 3.08, MSE = 21.67, p = .14.
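For reference, the ω2 values reported in this chapter follow Field's (2005) recommendations for repeated measures ANOVA. The exact repeated-measures estimator partitions additional variance components, but its logic follows the familiar general form of omega squared, which adjusts the observed effect for its degrees of freedom and the error variance:

```latex
\omega^2 = \frac{SS_{\mathrm{effect}} - df_{\mathrm{effect}}\, MS_{\mathrm{error}}}{SS_{\mathrm{total}} + MS_{\mathrm{error}}}
```

Unlike uncorrected effect size measures such as eta squared, this estimator adjusts for sampling error and is therefore more conservative, which is particularly relevant with a sample of six.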
Follow-up pairwise comparisons were conducted on account of the trend for an interaction, and these revealed a marginally significant difference between the simple audio-only and simple audio-visual conditions, t(5) = 2.09, p = .09, but not between the complex audio-only and complex audio-visual conditions, t(5) = .22, p = .83. Small (or nil) effect sizes were found for complexity (ω2 = .00), modality (ω2 = .03), and the interaction between complexity and modality (ω2 = .03). These group results do not provide support for Hypotheses 1 and 2.

3.2.3  Discrimination versus Discourse

To address Question 1 of this study, the results from the two experimental tasks were compared to determine whether there was a positive correlation between performance on the discrimination task and the discourse task, collapsed across the two modality conditions. A Pearson correlation analysis showed no significant correlation between the two tasks, r = .09, p = .87.

3.2.4  Relationship to Working Memory

Hypothesis 3 stated that working memory scores would positively correlate with performance on the discourse task. To address this hypothesis, a correlation analysis was conducted between scores on the working memory test and the question accuracy scores on the discourse task, collapsed across conditions. A strong positive correlation was observed, r = .93, p < .01. No significant correlation was found between working memory and mean RTs, r = -.09, p = .86. In addition, a correlation analysis was conducted between working memory scores and discrimination scores as a means of comparison. No significant correlation was found, r = .17, p = .74.

3.2.5  Effect of Hearing Loss

Since three subjects failed the hearing screen at 4000 Hz bilaterally, results on the hearing screen and performance on the discrimination and discourse tasks were examined to determine whether hearing loss appeared to be related to the outcomes of these tasks.
Table 8 shows each participant's hearing screen outcome and results on each task:

Table 8
Effects of hearing loss

Participant    Discrimination    Discourse    Hearing loss
P1             79                100          yes
P4             79                86           yes
P6             83                71           yes
P3             60                86           no
P2             94                85           no
P5             100               94           no

These results suggest a trend for better performance on the discrimination task among those without hearing loss. However, P3, who passed the hearing screen at all frequencies, had the worst performance on the discrimination task. This suggests a trend, but with a clear exception. No apparent relationship between hearing loss and performance on the discourse task was observed.

3.3  Case Analyses

Due to the small sample size and the differences in aphasia types and language abilities across participants, each subject's data was examined in greater detail. In particular, each case is explored for individual differences in performance across tasks and conditions, and for possible mediating factors such as hearing loss, working memory, and language skills.

3.3.1  Participant 1

P1 is a 58-year-old male who suffered a stroke 10.5 years prior to this study. P1's aphasia is characterized as very mild; his most significant communication difficulties appear to result from an apraxia of speech. He performed very well on the BDAE subtests as well as the BNT, and obtained a perfect score on the working memory test. His performance differed across the two experimental tasks. While he performed at ceiling (i.e., 100% in all conditions) on the comprehension questions of the discourse task, his score on the discrimination of syllables was much lower, between 75% (audio-only) and 83.3% (audio-visual). This discrepancy across the two tasks was observed in the group analyses and is well exemplified here. Despite difficulties with speech discrimination, P1 apparently had no trouble understanding the passages.
P1's impressions of the tasks support these results: he stated that the passages were much easier than the syllable discrimination task. Although P1 answered all the discourse comprehension questions accurately, he showed a clear pattern of audio-visual benefit in terms of reaction time from the moving window technique, with an average difference of 323 milliseconds (see Table 9).

Table 9
P1's mean RTs across conditions

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
822                  440                    909                   646

These results suggest that for this participant, sentence processing was faster in the audio-visual condition. Since he obtained a perfect score on the comprehension questions in all conditions, it is possible that he required more processing time in the absence of visual cues to reach the same level of comprehension. P1 may also have benefited from practice effects: both audio-only conditions were presented before the audio-visual conditions. Increased familiarity with the auditory moving window technique may have led to lower RTs. However, there was no continuous improvement in RTs based on the order of presentation (i.e., the simple audio-only condition was presented before the complex audio-only condition). Order of presentation was, therefore, most likely not responsible for the striking difference in RTs between the audio-only and audio-visual conditions. An examination of other participants' order of presentation and RTs across conditions revealed that RTs did not become gradually faster with each condition, suggesting that practice did not tend to lead to lower RTs. The benefits of the visual cues for the discrimination task are less clear. He obtained 75.0% in the audio-only condition, and 83.3% in the audio-visual condition.
Although his performance in the audio-visual condition was higher than in the audio-only condition, suggesting that the visual cues may have been helpful, the two scores were close enough that the difference may not be meaningful. If P1 does benefit from the visual cues in terms of real-time processing demands, however, this off-line measure may not be able to capture that type of benefit. It is important to note that P1 failed the hearing screen at 4000 Hz bilaterally, which may have contributed to his higher performance in the audio-visual condition, since individuals with hearing loss often benefit from speechreading.

3.3.2  Participant 2

P2 is a 61-year-old woman who developed aphasia following a stroke 10.5 years ago. Her aphasia was characterized by expressive deficits, particularly word finding difficulties. Based on informal conversation and standardized language testing, her comprehension abilities appeared better than her expressive skills; however, she showed some difficulties on the Embedded Sentences subtest of the BDAE (on which she obtained 5/10), which presents reversible sentences in which the subject's task is to correctly match a subordinate clause to either the subject or the object of the sentence by pointing to one of four pictures. No apraxia was observed. P2 reported that she had been having memory problems since her stroke, and her score on the working memory test reflected this (5/20). She passed the hearing screen at all frequencies. P2's RT data show the same trend as P1's, although less pronounced, with faster RTs in the audio-visual conditions (see Table 10). Also similarly to P1, her discourse accuracy scores do not show consistently greater comprehension in the audio-visual conditions, as shown in Table 11.
Table 10
P2's mean RTs across conditions

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
672                  501                    729                   602

Table 11
P2's mean comprehension question accuracy across conditions in the discourse task

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
80                   85                     95                    80

Based on the comprehension question accuracy scores, P2 performed best in the complex audio-only condition. Her scores in the other three conditions are very close, suggesting that her comprehension was similar across these conditions. It is unclear why her score is higher in the complex audio-only condition, which was anticipated to be the most difficult. One possibility relates to this condition being presented first. Perhaps she was more alert and attentive at the beginning of the experiment and showed fatigue effects in the subsequent conditions, leading to lower performance. However, the simple audio-visual condition was presented last, and it was not the condition in which she obtained the lowest score. This finding argues against a fatigue effect. In addition, there appears to be a speed-accuracy trade-off: her higher RTs in the complex audio-only condition suggest that she required more processing time, and this additional time may have allowed her to achieve better accuracy on the comprehension questions. P2 performed only slightly better in the audio-visual condition in the discrimination task, 91.7% for audio-only and 95.8% for audio-visual. Based on these results, the visual cues did not appear to meaningfully improve her syllable discrimination abilities. In addition, unlike P1, P2's speech discrimination and language comprehension abilities appear to be similar. However, P2's feedback to the examiner was that the discrimination task was more difficult.

3.3.3  Participant 3

P3 is a 62-year-old man who suffered a stroke 5.5 years ago.
His verbal expression was characterized by moderate word finding difficulties, circumlocution, and phonemic paraphasias, while his language comprehension skills appeared relatively spared based on the standardized language testing. He showed working memory deficits, scoring 7/20 on the memory test. No apraxia was observed. Personal communications with P3's current SLP revealed that P3 appears to benefit from lipreading for one very specific type of word: numbers. The working memory test consisted of recalling and manipulating numbers, but the opportunity to lipread was not provided during the test. P3's RTs in the discourse task were much higher than both P1's and P2's across all conditions, and therefore, he seemed to require more processing time overall. The pattern was also somewhat different from P1's and P2's (see Table 12).

Table 12
P3's mean RTs across conditions

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
1344                 1471                   1605                  1377

While the mean RT is lower in the audio-visual condition for the complex passages, the reverse pattern was found for the simple conditions. However, the mean difference in RTs for the simple conditions (127 ms) was small compared to the difference between the two complex conditions (229 ms); it is possible that the visual cues were more helpful to P3 in terms of processing time when the demands were greater (i.e., when the stimuli were more complex). Similar to both P1 and P2, P3's accuracy scores were comparable across all four conditions (ranging from 80% to 90%; see Table 13), suggesting that the different complexity levels and modalities did not dramatically affect his performance when measured off-line.

Table 13
P3's mean comprehension question accuracy across conditions in the discourse task

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
85                   80                     90                    90

Similar to P1, P3's performance on the discrimination task (66.7% for audio-only vs.
54.2% for audio-visual) was much lower than on the discourse task. These scores are only slightly or moderately above chance level (i.e., 50%), and his performance was worse in the audio-visual condition. As P3 passed the hearing screen at all frequencies bilaterally, hearing loss was unlikely to be a contributing factor to his poor performance. Despite significant speech discrimination difficulties, P3 did not appear to benefit from the visual cues on this task. P3's feedback to the examiner was that the discrimination task was more difficult, and that he found the visual cues helpful in both the discrimination and discourse tasks.

3.3.4  Participant 4

P4 is a 55-year-old man who developed aphasia following a stroke 11 months prior to this study. His aphasia is characterized by moderate expressive and receptive language deficits as well as some reading difficulties, as observed through informal conversation and standardized language testing. P4 also had both an oral and a mild verbal apraxia, and obtained 7/20 on the working memory task. P4 obtained the same score in both conditions in the discrimination task (79.2%), suggesting that he did not benefit from the visual cues provided by the audio-visual condition. P4 was one of the participants who failed the hearing screen at 4000 Hz bilaterally; however, the availability of visual cues did not improve his performance. On the discourse task, P4 showed a slightly different pattern than the previously discussed subjects, both in terms of RTs and accuracy. His mean RTs, which tended to be relatively high overall, were similar within complexity levels, and did not differ much across modalities. His RTs were faster in the two complex conditions (see Table 14). For accuracy, however, there appears to be a trend for better comprehension in the audio-visual conditions, as shown in Table 15.
Table 14
P4's mean RTs across conditions

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
1362                 1430                   1230                  1164

Table 15
P4's mean comprehension question accuracy across conditions in the discourse task

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
80                   95                     80                    90

P4, therefore, seems to show a different trend than the other subjects discussed thus far: slow RTs and no striking differences in processing time across conditions as measured on-line (i.e., RTs), but better comprehension in the audio-visual conditions as measured by the off-line comprehension questions. The main difference between P4 and the participants mentioned above is his significant difficulties with language comprehension. Whereas P1-P3's aphasia was characterized mainly by production deficits, P4 clearly also had significant comprehension difficulties. One possible explanation for this difference observed in both the on-line and off-line outcome measures is that participants may need to have comprehension difficulties of at least a certain severity for differences in performance to be observed off-line (see P6 for further support). He nonetheless performed relatively well on this task, scoring at least 80%. P4 mentioned to the examiner that he had found the discrimination task more difficult, and felt that the visual cues neither helped nor hindered his performance on either task.

3.3.5  Participant 5

P5 is a 64-year-old woman who suffered a stroke following an aneurysm surgery 13 years prior to this study. She is a native speaker of Farsi but was fluent in English prior to the onset of her aphasia. She scored very well on all the BDAE subtests as well as the BNT; however, she was observed to have word finding difficulties in conversational speech, often requiring several seconds to think of words. She was nevertheless generally able to eventually find the words she wanted to say.
She also scored well on the working memory test (16/20). No apraxia was observed. She passed the hearing screen at all frequencies. P5 obtained 100% in both conditions in the discrimination task, suggesting she had very good speech discrimination abilities. Her RT data for the discourse task are presented in Table 16, and her question accuracy data are displayed in Table 17.

Table 16
P5's mean RTs across conditions

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
951                  936                    1317                  764

Table 17
P5's mean comprehension question accuracy across conditions in the discourse task

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
85                   100                    95                    95

While P5's mean RTs were similar in the simple audio-only and simple audio-visual conditions, they were almost twice as long in the complex audio-only as in the complex audio-visual condition. When considering both RTs and accuracy data, an interesting pattern emerges. For the simple passages, she took the same processing time in both conditions, yet her comprehension accuracy was higher in the audio-visual condition; for the complex passages, she took more time to process the passages in the audio-only condition, and when measured off-line, this extra processing time seems to have allowed her to score equally well in both conditions. P5 was the only subject who found the discrimination task easier; she also mentioned that she felt the visual cues did not affect her performance.

3.3.6  Participant 6

P6 is a 61-year-old man who suffered a stroke 10 years ago. He presented with severe expressive deficits and moderate receptive difficulties; his comprehension appeared significantly better in informal conversation than on standardized testing.
He also showed working memory difficulties, scoring 2/20 on the memory test; however, particular difficulties with numbers and/or the ranking of numbers, or with comprehension of the task itself, may also have contributed to his low score. P6 was observed to use writing spontaneously to communicate; however, his written output was restricted to single words. P6 also had both an oral and a verbal apraxia.

Table 18 presents P6's RT data. The pattern observed suggests that P6 had longer RTs in the audio-only condition for the complex passages; however, for the simple passages, the opposite pattern is observed. These findings are difficult to interpret, as they do not show an advantage for either the audio-only or the audio-visual modality, and they also do not suggest that the complex passages were more difficult to process. However, P6's highest RTs are for the complex audio-only condition, which coincides with his accuracy data (i.e., his worst performance was also in the complex audio-only condition).

Table 18
P6's mean RTs across conditions

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
784                  1040                   1163                  904

On the off-line measure (i.e., comprehension questions), there appeared to be both a main effect of complexity (i.e., he performed better on the simple passages) and of modality (i.e., his accuracy scores were higher in the audio-visual conditions) (see Table 19). It is possible that P6 may also have benefited from practice effects, since both audio-only conditions were presented first. However, the simple audio-only condition was presented before the complex audio-only condition. There was, therefore, no continuous improvement in performance based on order of presentation.

Table 19
P6's mean comprehension question accuracy across conditions in the discourse task

Simple Audio-only    Simple Audio-visual    Complex Audio-only    Complex Audio-visual
75                   90                     55                    65

On the discrimination task, P6 performed somewhat better in the audio-visual condition (87.5% vs.
79.2%), suggesting that the visual cues may have been beneficial. It is worth noting that P6 failed the hearing screen at 4000 Hz bilaterally, which could potentially account for his better performance in the audio-visual condition, since individuals with hearing loss have repeatedly been shown to benefit from lipreading. Of the other two participants who also failed the hearing screen at 4000 Hz, one (P1) also showed this pattern, and the other (P4) did not. P6's feedback to the investigator was that the visual cues made no difference, and that the discrimination task was more difficult.

CHAPTER 4: DISCUSSION

4.1  Introduction

This study aimed to replicate previous research findings on the influence of lipreading on speech perception in individuals with aphasia, and to expand this research by exploring the role of lipreading at the discourse level. Previous studies with people with aphasia have generally failed to find consistent benefits from visual cues on phonemic discrimination tasks (Schmid & Ziegler, 2006; Youse et al., 2004), despite strong evidence that lipreading is helpful for other populations, such as hearing-impaired individuals (MacSweeney et al., 2002; Mitchell & Maslin, 2007) and second language learners (Arnold & Hill, 2001), and in a variety of circumstances, such as in noise (e.g., Neely, 1956) and in multitalker babble (Rudmann et al., 2003). The visual cues provided by speechreading are thought to provide redundancy of information, which, for individuals with aphasia, may help offset their comprehension difficulties by giving them the opportunity to derive meaning from two different sources. In an attempt to answer the question of whether individuals with aphasia benefit from lipreading, the present study examined the performance of six participants with aphasia on both a syllable discrimination task and a discourse comprehension task, in two different modality conditions, audio-only and audio-visual.
The findings presented in Chapter 3 will be discussed in the present chapter according to the question and hypotheses posed in Chapter 1. Clinical implications, limitations of the study, and further research directions will be discussed in the conclusion of this chapter.

4.2  Research Question

Will a positive correlation be observed between the low-level phonetic discrimination task and the higher-level discourse comprehension task?

This inquiry was posed as a question since the literature did not provide clear indications as to whether individuals with aphasia would show similar performance on speech discrimination and language comprehension tasks. A correlation analysis revealed no relationship between scores on the discrimination and discourse tasks (r = .09). An examination of individual cases supported this lack of correlation. In other words, participants who performed well on the discrimination task did not necessarily do well on the discourse task, and vice versa. P1, for example, obtained a perfect score on the discourse task, but had the second lowest score on the discrimination task. P3 showed a similar and even larger difference, with speech discrimination skills well below his language comprehension abilities. P6, who had the most severe language deficits based on the language assessment, performed better than P1 on the discrimination task, but obtained only 71.25% on the discourse task. Overall, three participants did better on the discrimination task, and the other three performed better on the discourse task. These results add evidence to the growing body of research that has found little correlation between speech discrimination and language comprehension skills in the aphasic population. For instance, Blumstein et al.
(1977) found that their aphasic subjects with the most severe language comprehension difficulties (i.e., those with Wernicke's aphasia) were not the ones who performed the worst on a phonemic discrimination task, and were, therefore, unable to support the notion that poor phonological processing is at the root of language comprehension problems in people with Wernicke's aphasia. Similarly, both Blumstein, Cooper, Zurif, & Caramazza (1977) and Gandour and Dardarananda (1982) found a dissociation between their participants' level of language comprehension and their ability to identify or discriminate VOT distinctions. Basso, Casati, & Vignolo's (1977) study of 50 aphasic individuals showed a striking lack of correlation between language comprehension and phonemic identification, and found that deficits in phoneme identification were more common among their non-fluent subjects with good comprehension. The results of the present study do not consistently support this finding. Although P1, who had excellent language comprehension skills, scored more poorly on the discrimination task, P5, who also had very good language comprehension abilities (and who scored very well on the discourse task), obtained 100% on the discrimination task. A larger sample size would be necessary to further test such a trend. One study, however, has found a strong positive correlation between the degree of acoustic temporal processing deficit and language comprehension difficulties (Tallal & Newcombe, 1978). The difference between their results and those of other research may lie in the nature of the language comprehension task used. They explicitly chose a task (the Token Test) that eliminates context and top-down processing in order to tap entirely into language decoding. In addition, the stimuli used (computer-generated CV speech syllables and nonspeech complex tones) may also have contributed to the differences in results.
They found that their participants had difficulty with rapidly changing acoustic stimuli, regardless of whether the stimuli were linguistic or not. It is possible that their subjects had auditory processing difficulties that were not present in other studies. Their participants were unique in that they were all ex-servicemen who had sustained missile wounds to the head, whereas most individuals with aphasia have suffered CVAs. Even for those with left hemisphere involvement, the extent and type of damage may have led to different clinical profiles. Overall, the results of the present study are mostly in line with the findings from the majority of the studies on this topic. These suggest a very weak, if any, relationship between phonetic discrimination and language comprehension skills. This can be attributed to selective impairments of the syntactic computational system (in the case where an individual has preserved speech discrimination abilities but poor language comprehension), or of the phonological decoding system (in the case where someone has poor speech discrimination skills but adequate language comprehension). An individual with poor phonological decoding skills may rely on top-down processing during discourse comprehension tasks, thereby minimizing the effects of poor speech perception abilities. The participants all provided feedback regarding the difficulty of the tasks, and all but one stated that the discrimination task was more difficult. Not all of their comments were consistent with their performance, however. For example, both P2 and P6 said that syllable discrimination was harder, and yet they both performed better on that task. P5 was the only subject who felt that the discrimination task was easier, and she did perform better on that task. It is interesting that their impressions were not always accurate reflections of their abilities.
The unfamiliarity of the discrimination task may have contributed to the overall perception (shared by all but P5) that it was more difficult. In other words, listening to nonsense syllables is not a common everyday activity, while listening to sentences and discourse certainly is.

4.3  Hypothesis 1

The first hypothesis, re-stated below, pertained to the influence of visual cues on participants' performance in the two tasks. The following two sections will address this hypothesis for each task separately.

Hypothesis 1: The participants in this study will benefit from the added visual cues provided in the audio-visual condition in both the discrimination and discourse tasks.

4.3.1  Discrimination Task

The mean percent correct for each of the two conditions (audio-only and audio-visual) was compared to determine whether the visual cues were helpful to the participants in this study. The difference between the means obtained (83% for the audio-visual condition and 82% for the audio-only condition) was not statistically significant. Based on these group results, the added visual cues did not appear to improve participants' speech discrimination abilities. An examination of individual cases also did not provide strong evidence that lipreading was dramatically helpful to any of the participants in this study for syllable discrimination. Only two participants showed a small audio-visual advantage (P1: 83% vs. 75%, and P6: 87% vs. 79%). These two subjects' aphasia profiles were drastically different (i.e., P1 had a very mild aphasia and excellent comprehension, while P6 had significant production and comprehension difficulties). On the other hand, both had an apraxia of speech (P6 also had an oral apraxia). Given that both P1 and P6 had apraxia of speech, it is of interest that research has investigated the use of visual speech by individuals with apraxia. For example, most treatment strategies for apraxia make use of visual speech.
Modelling and integral stimulation (Rosenbek et al., 1973) have become common components of apraxia treatments; these involve both the visual and auditory channels (i.e., watch, listen, say it with me) (Duffy, 2005; Wambaugh, 2002). The paucity of treatment efficacy studies makes it difficult to determine whether these approaches work, but the evidence so far appears promising (e.g., Strand & Debertine, 2000, for developmental apraxia; Wambaugh, 2002; Wambaugh, West, & Doyle, 1998; see Rose & Douglas, 2006, for a note of caution regarding multimodal approaches to apraxia treatment). If the use of visual cues is helpful in the treatment of apraxia, it may suggest that individuals with apraxia are especially sensitive to these cues, and that they are able to use the visual channel to their benefit. However, using lipreading to help with speech production may rely on a different subset of linguistic processes (i.e., those used in imitation vs. comprehension). In addition, both P1 and P6 failed the hearing screen at 4000 Hz bilaterally, which could potentially be a confounding factor. Since people with hearing loss have repeatedly been shown to benefit from lipreading to improve speech perception and language comprehension (e.g., MacSweeney et al., 2002; Mitchell & Maslin, 2007), it may be that P1 and P6's hearing loss was driving their greater reliance on and benefit from the visual cues provided in the audio-visual condition. Regarding these two possibilities (i.e., apraxia of speech and hearing loss), it is important to note that P4, who also had an apraxia and a hearing loss, did not perform better in the audio-visual condition. The presence of apraxia and/or hearing loss, therefore, does not appear to reliably account for the differences observed between the audio-only and audio-visual conditions for P1 and P6. The overall results of this experiment, then, are generally consistent with previous research. Both Youse et al.
(2004) and Schmid and Ziegler (2006) failed to find that their participants with aphasia benefited from the visual cues on their syllable discrimination or identification tasks. The third study, by Hessler et al. (2010), found that their participants performed better in the audio-visual condition. The stimulus pairs used in their discrimination task differed in either one, two, or three phonetic dimensions (i.e., manner, place, and voice). When stimulus pairs differed on only one dimension, as in the present study, no significant difference in performance between the audio-only and audio-visual conditions was found. This suggests that visual cues can be helpful, but that the benefit is only observed when stimulus pairs differ on more than one dimension. This could be due to the difficulty of the task. An easier task (i.e., discriminating between syllable pairs that differ on two or three phonetic dimensions, compared to only one) may reduce the cognitive load by providing extra redundancy for the auditory system, thereby allowing more resources to be devoted to the processing of visual speech. On the other hand, a more difficult task (i.e., discriminating between syllables that differ on only one dimension, e.g., place of articulation) may place such high demands on the auditory system that there are insufficient resources left to effectively attend to visual speech cues. This could explain why the benefits of visual speech are greater, and/or only observed, in tasks with demands that are neither too low nor too high. Based on the findings of this experiment, it can be concluded that individuals with aphasia do not appear to benefit from lipreading at a low level of speech perception, when syllable pairs differ only in place of articulation; this does not support part of Hypothesis 1. Even individuals with significant phonemic discrimination difficulties were unable to derive enough perceptual information from the visual cues to enhance their performance on this task.
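As an illustration of what differing on one vs. several phonetic dimensions means, the following sketch compares the initial consonants of the study's syllables (/p, t, k, b, d, g/) along the three dimensions named above. The feature values are standard but simplified, and the code is illustrative only, not material from the study:

```python
# Simplified phonetic features for the six stop consonants used in the
# study's syllables /pem, tem, kem, bem, dem, gem/. Illustrative only.
FEATURES = {
    "p": {"manner": "stop", "place": "bilabial", "voicing": "voiceless"},
    "t": {"manner": "stop", "place": "alveolar", "voicing": "voiceless"},
    "k": {"manner": "stop", "place": "velar",    "voicing": "voiceless"},
    "b": {"manner": "stop", "place": "bilabial", "voicing": "voiced"},
    "d": {"manner": "stop", "place": "alveolar", "voicing": "voiced"},
    "g": {"manner": "stop", "place": "velar",    "voicing": "voiced"},
}

def differing_dimensions(c1, c2):
    """Return the set of phonetic dimensions on which two consonants differ."""
    return {dim for dim in FEATURES[c1] if FEATURES[c1][dim] != FEATURES[c2][dim]}

print(differing_dimensions("p", "k"))  # differs in place only, as in the present study's pairs
print(differing_dimensions("p", "d"))  # differs in place and voicing (a "two-dimension" pair)
```

A pair such as /pem/–/kem/ contrasts only in place of articulation, whereas pairs used by Hessler et al. could additionally contrast in voicing and/or manner, giving the listener more redundant cues.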
It is possible that the lack of context in the discrimination task, which used non-words as stimuli, decreased the benefits gained from lipreading, or that the syllable pairs, which differed only in place of articulation, were too phonologically similar. Two participants did, however, perform somewhat better in the audio-visual condition. It is difficult to determine why these two subjects benefited from lipreading but not the other four, since the aphasia profiles of these two participants were drastically different. While both had some hearing loss and an apraxia of speech, which may have contributed to the difference in performance between the audio-only and audio-visual conditions, another participant with hearing loss and apraxia of speech did not present this pattern.

4.3.2  Discourse Task

The present study is likely the first to examine the influence of lipreading on language comprehension in individuals with aphasia. Because research on language comprehension in aphasia has stressed the importance of using both on-line and off-line measures, this study employed the auditory moving window technique to obtain a view of real-time language processing, along with comprehension questions to assess off-line language comprehension. These two measures allowed for a more comprehensive analysis, and revealed differences in the on-line and off-line processing of audio-visual language. The group reaction time data from the auditory moving window procedure revealed a marginally significant main effect of modality, with better performance (i.e., faster reaction times) in the audio-visual conditions, and a large effect size. This finding suggests that participants were processing the passages more rapidly when the visual modality was present. Individual case analyses showed a consistent pattern of faster RTs in the audio-visual conditions, particularly for the complex passages (to be discussed in the next section on the interaction of modality and complexity).
The group comprehension accuracy data showed a very small audio-visual advantage (91% vs. 84% for the simple passages; 87% vs. 86% for the complex passages); however, no significant main effect of modality was observed, and the effect size was small. Analyses of individual cases revealed that two participants performed consistently better in the audio-visual conditions. These participants, P4 and P6, were the two subjects with the most severe language comprehension difficulties. It may be that a certain degree of language comprehension deficit is required in order to see benefits from the visual cues in off-line tasks. Some research on on-line vs. off-line language processing appears to support this claim. Individuals with Broca's aphasia, which is characterized by production deficits but overall better comprehension, have been shown to have impaired processing on-line (e.g., Burkhardt, Avrutin, Piñango, & Ruigendijk, 2008; Poirier, Shapiro, Love, & Grodzinsky, 2009; Prather, Shapiro, Zurif, & Swinney, 1991; Utman, Blumstein, & Sullivan, 2001). Performance on off-line tasks, however, is often much better (e.g., Burkhardt et al., 2008, based on the comprehension of reflexive elements). On the other hand, Wernicke's aphasia has been associated with poorer off-line comprehension, but more preserved on-line processing (Caplan & Waters, 2003; Milberg & Blumstein, 1981). This pattern may help account for why differences in performance between the audio-only and audio-visual conditions were observed only on-line for four of the participants, whose aphasia was characterized mostly by production difficulties, and why the two participants with both production and comprehension deficits may have benefited from lipreading both on-line and off-line.
While the language subtests used in this study do not provide a sufficient basis to classify participants into specific aphasia syndromes (e.g., Broca's and Wernicke's), the results from this study followed the pattern observed in previous research based on the extent of comprehension problems. If speechreading is beneficial in the specific area of difficulty (i.e., difficulty with on-line or off-line processing), those with Broca's aphasia would likely benefit from lipreading on-line, while those with Wernicke's aphasia would gain from speechreading off-line. It is possible that the four participants who only benefited from visual speech on-line in the present study had particular difficulties with on-line processing; although they were not diagnosed as having Broca's aphasia, due to the lack of in-depth language testing, their mild comprehension difficulties are consistent with the profile of Broca's aphasia. The two participants who likely benefited from lipreading both on- and off-line may have had deficits with both on- and off-line processing.

P4 and P6 failed the hearing screen at 4000 Hz bilaterally. As in the discrimination task, hearing loss may have contributed to these comprehension accuracy findings. However, P4 did not benefit from the visual cues on the discrimination task, suggesting that hearing loss was most likely not the only contributing factor, particularly since the stimuli were presented at comfortable listening levels for all participants. The finding that lipreading appeared to be more helpful in the discourse task than in the discrimination task suggests that visual cues may be more beneficial when context is available. Youse and Cienkowski (2004), in their research with four individuals with aphasia, found that lipreading was more beneficial for real word stimuli than for non-word (syllable) stimuli, and concluded that the benefits from visual articulatory cues increase with context.
The present study adds further evidence to support this claim. Not only was lipreading more beneficial in the discourse task (which provided contextual cues) compared to the nonsense syllable discrimination task, it was also found to be more helpful in the complex passages than in the simple passages. The complex passages were longer, and therefore provided even more context to support top-down processing. As Lau (2009) describes, top-down processing leads to the use of predictive processing mechanisms, which provide access to and construction of stored internal representations before the external input is actually presented. These predictions can come in the form of pre-activation (i.e., less bottom-up processing is required to reach the activation thresholds) or pre-construction (i.e., the process of interpretation has begun even before the input has been presented). Context, then, may speed up computation, thereby facilitating disambiguation of the upcoming signal (p. 2). In the context of lipreading, this may free up resources that can then be allocated to the processing of visual cues. This may also help explain why individuals with Broca's aphasia tend to benefit substantially from context; the predictive nature of top-down processing may decrease their lexical activation thresholds, and therefore allow faster access to lexical items.

4.3.3  Summary

Hypothesis 1 was only partially supported. Participants did appear to benefit from the visual cues, but this advantage was not uniform across tasks, participants, and the two measures on the discourse task. No clear trend of better performance in the audio-visual condition was observed on the discrimination task. Only two subjects may have experienced some benefit from the visual cues; however, hearing loss or apraxia may have been a confounding factor.
In the discourse task, the audio-visual advantage was stronger and more consistent for the reaction time data, with reaction times overall faster in the audio-visual conditions. Two subjects also appeared to benefit from the visual cues, based on comprehension question accuracy. These two participants had the most significant language comprehension deficits, suggesting that a certain degree of comprehension deficit may be necessary in order to show benefits from lipreading off-line.

4.4  Hypothesis 2

Hypothesis 2: In the discourse task, there will be an interaction between modality and grammatical complexity, such that the advantage observed in the audio-visual condition will be greater for the complex passages.

This hypothesis received some support from the reaction time data. The group data revealed a marginally significant interaction between modality and complexity. Follow-up pairwise comparisons revealed a significant difference only between the complex audio-only and complex audio-visual conditions. These results suggest that the participants' real-time processing of more complex material benefited from the presence of visual cues.

The individual case analyses showed that all six participants had faster RTs in the complex audio-visual condition than in the complex audio-only condition. For the simple passages, only three of the six subjects had faster RTs in the audio-visual condition, suggesting that the benefits of lipreading were more consistent, and potentially greater, when the stimuli were more complex, and therefore more difficult to process. The complex passages may also have provided more context, suggesting that lipreading may be increasingly beneficial when more context is available. To determine the differential influence of complexity vs.
context, future studies may consider controlling for each factor independently (i.e., maintaining length and/or propositional content constant while manipulating syntactic complexity, or keeping grammatical complexity constant while manipulating length and/or propositional content of passages). The comprehension accuracy data, on the other hand, do not appear to support Hypothesis 2. The two participants who appeared to benefit from lipreading off-line did not show greater benefit in the complex condition compared to the simple condition in terms of comprehension question accuracy. The finding that speechreading tends to improve processing speed more under difficult processing conditions is not new. In their study on lipreading in noisy environments, Sumby and Pollack (1954) found that the contribution of visual speech increased under noisier conditions. Similarly, Ross et al. (2007) found that the greatest gains from visual cues were observed as levels of noise increased; however, as the signal-to-noise ratio dropped past -12 dB, the benefits of lipreading decreased. They concluded that the benefits from speechreading were greatest in a particular window of difficulty, between the extremes where listeners rely almost exclusively on the auditory signal (0 dB signal-to-noise ratio) and where they depend almost entirely on visual information (-24 dB signal-to-noise ratio). To account for this "window of difficulty", the present study used simple and moderately-complex stimuli, so as to avoid ceiling or floor effects. That is, a very complex condition may have created a situation where the passages were so difficult to process that the system would have become completely overwhelmed and the visual cues would have been only minimally helpful, as in the -24 dB signal-to-noise ratio condition mentioned in the above study.
This is possibly what occurred in Hessler et al.'s study, which found that lipreading was helpful for aphasic participants in their syllable discrimination task, but only when the syllable pairs differed on more than one phonetic dimension. Namely, the task was perhaps too difficult when the pairs differed by only one dimension, thereby minimizing the impact of the visual cues. This could also potentially account for the lack of differences found between the audio-only and audio-visual conditions on the discrimination task of the present study, since all syllable pairs differed by place of articulation only. Research has demonstrated that performance on language comprehension tasks can be related to individual differences in working memory; however, these effects are most evident when the working memory system is sufficiently challenged by task demands (Sung et al., 2009). Like visual speech processing, working memory can be considered a resource that can be recruited when external task demands increase. If the demands are not sufficiently high, the use of this resource capacity is not required, and no apparent benefit from this capacity will be observed. When the demands exceed what the system is generally able to process, the resource capacity is then recruited to handle the extra demands. This is a likely explanation for the interaction between complexity and modality observed in this study. When the passages became too difficult for the system to handle, visual speech was recruited as an additional source of information to aid real-time processing. This additional modality resulted in faster processing of sentences and phrases, and therefore faster reaction times, as captured by the auditory moving window procedure.
Because the comprehension question accuracy data showed that performance was essentially the same in the complex audio-only and audio-visual conditions, it appears that without the visual cues in the complex audio-only condition, participants had to work harder (reflected in longer RTs) to achieve the same level of comprehension.

4.5  Hypothesis 3

Hypothesis 3: Working memory will positively correlate with performance on the discourse comprehension task.

To address this hypothesis, a correlation analysis was performed between working memory scores and accuracy on the discourse task, collapsed across conditions. The strong correlation (r = .93) provides further evidence that working memory is an important constraint on language comprehension (e.g., Caplan & Waters, 1999; Just & Carpenter, 1992; King & Just, 1991). It is also consistent with findings from studies of individuals with aphasia, which indicate a positive relationship between working memory capacity and sentence comprehension ability (e.g., Friedmann & Gvion, 2003; Sung et al., 2009). To the extent that performance on the discourse task represents language comprehension abilities, these results also suggest that working memory is highly correlated with aphasia severity in terms of higher level comprehension skills. A correlation analysis was also performed between working memory scores and the discourse task reaction time data, collapsed across conditions. No significant correlation was found (r = -.09). A negative correlation would indicate that as working memory increased, reaction time decreased; however, the strength of this correlation is very weak. This suggests that unlike lipreading, which tends to benefit people with aphasia in terms of real-time processing, increased working memory is associated with improved language comprehension off-line. Research in this area has yielded different findings.
For instance, King and Just (1991) found that readers with lower working memory had slower reaction times in their moving window procedure, particularly at sentence points that were crucial for comprehension, suggesting that working memory capacity was a factor in on-line syntactic processing. Other studies have found opposing evidence. For example, DeDe, Caplan, Kemtes, and Waters (2004) found that on-line syntactic processing was not mediated by verbal working memory, but off-line sentence comprehension was. They concluded that "the failure to find a significant relationship between [verbal working memory] and on-line sentence processing in the structural models argue against the presence of significant effects of [verbal working memory] on on-line sentence processing" (p. 611). The findings of the present study are consistent with this conclusion. Off-line tasks require significant storage, computation, and retrieval of information, since portions of language input must be remembered and interpreted over a longer time. Off-line tasks generally involve some type of conscious decision making (e.g., answering questions, making accuracy judgments), which requires the system not only to remember the input but also to interpret it using a particular response strategy. This increases demands on working memory capacity. On-line processing, on the other hand, places only minimal demands on the executive system, particularly for canonical sentences, which do not require complex linguistic computations. Non-canonical sentences likely increase the processing demands, since the system must compute more complex linguistic operations on-line (e.g., syntactic structure movements). However, since off-line tasks depend on both these on-line computations and higher-level interpretations and storage of longer units of language, working memory is more likely to correlate strongly with off-line tasks.
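For readers who wish to reproduce this type of analysis, the Pearson correlations reported above can be computed as in the following sketch. The scores shown are hypothetical placeholders for illustration only, not the study's data:

```python
# Illustrative sketch of a Pearson product-moment correlation, as used in
# the analyses above. The scores below are hypothetical, not the study's data.
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical working memory scores and discourse accuracy (%) for six participants
working_memory = [2, 4, 5, 7, 9, 10]
discourse_accuracy = [60, 70, 72, 85, 88, 95]

r = pearson_r(working_memory, discourse_accuracy)
print(f"r = {r:.2f}")  # a strong positive correlation for these illustrative scores
```

In practice a statistical package would also report a significance level, which matters here given the very small sample sizes typical of aphasia research.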
4.6  Summary of Key Findings

The results of this study suggest that individuals with aphasia likely benefit from lipreading at the discourse level. This advantage may help with on-line language processing for people with varying degrees of language comprehension difficulties, and may also help with off-line comprehension for those with more significant deficits. The benefits gained from lipreading appear to be more consistent and larger with increased stimulus complexity. Context may also enhance the benefits of lipreading, which could account for the finding that lipreading was likely helpful for discourse comprehension but not syllable discrimination. No significant correlation was found between performance on the discrimination and discourse tasks, suggesting that poor speech perception cannot reliably account for language comprehension deficits in aphasia. Lastly, the strong correlation observed between working memory and scores on the discourse task provides further evidence that working memory may be an important factor in language comprehension.

4.7  Clinical Implications

The findings that lipreading may help speed up language processing and possibly enhance off-line comprehension in individuals with significant comprehension deficits could have important clinical implications, particularly if replicated in future research. Speech-language pathologists working directly on language comprehension skills with their clients now appear to have some preliminary evidence to support the use of lipreading as an additional channel to treat comprehension.
While previous research has shown that using visual speech in treatment can successfully improve low-level speech perception (e.g., Hessler & Stadie, 2008, as cited in Hessler et al., 2010; Morris et al., 1996) and speech production difficulties related to aphasia (Fridriksson et al., 2009), there appears to be no research that has investigated the use of lipreading to address comprehension problems at the discourse level. Although this study only found marginally significant results, most likely due to the small sample size, the magnitude of the effect sizes provides an indication that lipreading can benefit individuals with aphasia during language comprehension tasks, particularly when the linguistic input is more challenging to process. In addition to faster on-line processing, lipreading may also help those with more severe comprehension deficits in terms of off-line comprehension, which could have a positive impact on everyday communicative interactions. Besides speech-language pathologists, these findings may have implications for other health professionals who work with individuals with aphasia, such as physicians, nurses, physical therapists, and occupational therapists. To ensure that their clients with aphasia benefit as much as possible from their interventions, they could face their clients when sharing information with them, thereby allowing them to lipread and potentially process that information faster and/or more accurately. Because much of the information given to patients in medical settings is new and sometimes difficult to understand even for persons without aphasia, the availability of visual cues may be even more crucial for persons with aphasia, as the present study has shown that the benefits of lipreading are greater when the linguistic input is complex.
Aside from lipreading, this study also revealed a strong correlation between working memory and language comprehension, replicating many previous findings (e.g., Caplan, 2006; Caplan & Waters, 1999; Just & Carpenter, 1992; King & Just, 1991; Nakano, Saron, & Swaab, 2010). While it is impossible to specify a causal direction of influence, much research in this area has inferred that working memory supports language comprehension. As such, one strategy for speech-language pathologists wanting to work on language comprehension with their clients may be to focus on working memory skills. As mentioned in the first chapter of this thesis, working memory deficits have been postulated as a potential contributing factor to comprehension problems in aphasia (e.g., Caplan, 2006; Haarmann, Just, & Carpenter, 1997; Miyake et al., 1994). There is, therefore, accumulating evidence that working memory skills may be at least partially responsible for comprehension abilities in both aphasic and non-aphasic individuals, and that targeting working memory in therapy, if found to be deficient, may be worthwhile. A variety of tasks could be used depending on the type and extent of working memory deficits. Traditional working memory tasks using digits, words, or sentences could be implemented; alternatively, working memory could be addressed using n-back tasks (e.g., Christensen & Wright, 2010). The finding that speech discrimination and language comprehension do not appear to be significantly related, as also found in previous studies (e.g., Blumstein et al., 1977; Gandour & Dardarananda, 1982), indicates that focusing on speech discrimination in therapy may not be completely warranted for persons with higher level deficits.
People are not typically faced with the task of discriminating between isolated sounds and syllables in everyday life, and if such skills are not significantly related to language comprehension, they may not be worth treating directly if doing so does not lead to generalization to discourse. While it is true that phonemic discrimination is important at the single syllable and word level (e.g., to discriminate between minimal pairs such as bait and gate), it appears to be less crucial in discourse, which provides the opportunity for top-down support of bottom-up processing. Targeting the sentence or discourse level to address comprehension problems is, therefore, likely a better approach, even if clients show speech discrimination difficulties.

4.8  Limitations of the Study and Future Directions

The most important limitation of this study is the small sample size and resulting low power, suggesting that the results should be interpreted with caution. This study should be replicated with a larger number of participants with different aphasia profiles, which would be more representative of the aphasic population. In the selection of participants, it may be preferable to control for some of the factors that may have influenced outcomes in this study, such as excluding individuals with hearing loss or apraxia of speech to avoid the possibility of confounding factors. However, since hearing loss is more common with increased age, and the incidence of strokes and resulting aphasia also increases with age, it may be difficult to obtain a large enough sample if all individuals with hearing loss are excluded. Moreover, the findings would not generalize to much of the older population, many of whom have hearing loss. In addition, it may be valuable for future studies to address the issue of eye gaze in order to ensure that participants are indeed looking at the speaker in the discourse and discrimination tasks.
Participants who do not feel that the visual cues are helping may sometimes be tempted to look elsewhere, particularly if they are feeling fatigued. In the present study, subjects were instructed to both listen to and watch the speaker when the video was available, and were given reminders between lists. If a participant was observed not to be watching the screen, an additional reminder was given. Eye-tracking equipment may help provide more accurate data regarding where participants are looking. Future studies may consider using a different or perhaps a third level of grammatical complexity to determine whether the benefits of lipreading are mediated by task demands, as found in this study. As previously discussed, it may be possible that the gains from lipreading decrease when the complexity becomes too great for the system to handle. Similarly, using syllable pairs that differ on more than one dimension in a discrimination task may also provide additional evidence of influences from task demands. It would also be interesting to examine the effects of lipreading when the listener is given differing amounts of context (e.g., single words, individual sentences, and discourse). Finally, the importance of using both on-line and off-line outcome measures cannot be overstated. Since language difficulties are sometimes manifested only in terms of real-time processing, the use of on-line measures is crucial. However, some types of aphasia are characterized more by off-line language comprehension difficulties, which stresses the importance of also using measures of comprehension off-line. Together, these provide a more detailed and potentially more accurate account of language comprehension abilities. Future studies should therefore use both types of measures.

4.9  Conclusion

This study appears to be one of the first to examine the influence of lipreading at the discourse level in individuals with aphasia.
The results indicated that lipreading was particularly helpful in terms of real-time language processing. Overall, participants required shorter processing times in the audio-visual condition, especially when the stimuli were grammatically more complex. In addition, the two subjects with the most severe comprehension deficits benefited from the visual cues when responding to off-line comprehension questions. This study also replicated recent research on speech perception, which has typically shown that people with aphasia do not appear to benefit from lipreading on speech discrimination or identification tasks. Taken together, these findings suggest that lipreading might be more beneficial at the discourse level than at the speech perception level, and that this advantage is more evident during on-line language processing. In terms of clinical practice, these findings support the notion that multimodal treatment involving visual speech for comprehension deficits in aphasia may be a valuable option.

References

Alexander, M. P., & Benson, D. F. (1991). The aphasias and related disturbances. In R. J. Joynt (ed.), Clinical Neurology (pp. 1-58). Philadelphia: Lippincott.

Ardila, A. (2010). A proposed reinterpretation and reclassification of aphasia syndromes. Aphasiology, 24(3), 363-394.

Arnold, P., & Hill, F. (2001). Bisensory augmentation: A speechreading advantage when speech is clearly audible and intact. British Journal of Psychology, 92, 339-355.

Badin, P., Tarabalka, Y., Elisei, F., & Bailly, G. (2010). Can you 'read' tongue movements? Evaluation of the contribution of tongue display to speech understanding. Speech Communication, 52(6), 493-503.

Basso, A., Casati, G., & Vignolo, L. A. (1977). Phonemic identification defects in aphasia. Cortex, 13, 84-95.

Baynes, K., Funnell, M. G., & Fowler, C. A. (1994). Hemispheric contributions to the integration of visual and auditory information in speech perception. Perception and Psychophysics, 55(6), 633-641.
Becker, F., & Reinvang, I. (2007). Successful syllable detection in aphasia despite processing impairments as revealed by event-related potentials. Behavioural and Brain Functions, 3(6). doi:10.1186/1744-9081-3-6

Berndt, R. S., Mitchum, C. C., & Wayland, S. (1997). Patterns of sentence comprehension in aphasia: A consideration of three hypotheses. Brain and Language, 60, 197-221.

Bertelson, P., Vroomen, J., Wiegeraad, G., & de Gelder, B. (1994). Exploring the relation between McGurk interference and ventriloquism. Proceedings of the International Conference on Spoken Language Processing, Yokohama, 559-562.

Bess, F., & Humes, L. (1995). Audiology: The Fundamentals. Baltimore: Williams & Wilkins.

Blumstein, S. E., Baker, E., & Goodglass, H. (1977). Phonological factors in auditory comprehension in aphasia. Neuropsychologia, 15, 19-30.

Blumstein, S. E., Cooper, W. E., Zurif, E. B., & Caramazza, A. (1977). The perception and production of voice-onset time in aphasia. Neuropsychologia, 15, 371-383.

Blumstein, S. E., & Milberg, W. P. (2000). Language deficits in Broca's and Wernicke's aphasia: A singular impairment. In Y. Grodzinsky, L. Shapiro, & D. Swinney (eds.), Language and the brain: Representation and processing (pp. 167-192). New York: Academic Press.

Bornstein, M. H., Hahn, C. S., & Hayes, O. M. (2004). Specific and general language performance across early childhood: Stability and gender considerations. First Language, 24(3), 267-304.

Brookshire, R. H. (1987). Auditory language comprehension disorders in aphasia. Topics in Language Disorders, 8(1), 11-23.

Burkhardt, P., Avrutin, S., Piñango, M. M., & Ruigendijk, E. (2008). Slower-than-normal syntactic processing in agrammatic Broca's aphasia: Evidence from Dutch. Journal of Neurolinguistics, 21, 120-137.

Burkhardt, P., Piñango, M. M., & Wong, K. (2003). The role of the anterior left hemisphere in real-time sentence comprehension: Evidence from split intransitivity. Brain and Language, 86(1), 9-22.
Burnham, D., & Dodd, B. (1996). Auditory-visual speech perception as a direct process: The McGurk effect in infants and across languages. In S. Stork & M. Hennecke (eds.), Speechreading by Humans and Machines (pp. 103-14). Berlin: Springer-Verlag. 83  Calvert, G. A., & Campbell, R. (2003). Reading speech from still and moving faces: The neural substrates of visible speech. Journal of Cognitive Neuroscience, 15(1), 57-70. Campbell, R. (1986). The lateralization of lip-read sounds: A first look. Brain and Cognition, 5, 1-21. Campbell, R. (1992). The neuropsychology of lipreading. Philisophical Transcriptions of the Royal Society of London, 335, 39-45. Campbell, R. (1998). Everyday speechreading: Understanding seen speech in action. Scandinavian Journal of Psychology, 39, 163-167. Campbell, R., Dodd, B., & Burnham, D. (1998). Introduction. In Campbell, Dodd, & Burnham (eds.), Hearing by Eye II: Advances in the Psychology of Speechreading and Auditory-visual Speech (pp. ix-xiv). Hove, UK: Psychology Press. Campbell, R., Garwood, J., Franklin, W., Howard, D., Landis, T., & Regard, M. (1990). Neuropsychological studies of auditory-visual fusion illusion: Four case studies and their implications. Neuropsychologia, 28(8), 787-802. Caplan, D. (2006). Aphasic deficits in syntactic processing. Cortex, 42, 797-804. Caplan, D., & Futter, C. (1986). Assignment of thematic roles to nouns in sentence comprehension by an agrammatic patient. Brain and Language, 27(1), 117-134. Caplan, D., & Hildebrandt, N. (1988). Disorders of Syntactic Comprehension. Cambridge: MIT Press. Caplan, D., Matthei, E., & Gigley, H. (1981). Comprehension of gerundive constructions by Broca’s aphasics. Brain and Language, 13(1), 145-160. Caplan, D., & Waters, G. S. (1999). Verbal working memory and sentence comprehension. Behavioral and Brain Sciences, 22(1), 77–126.  84  Caplan, D., & Waters, G. S. (2003). On-line syntactic processing in aphasia: Studies with auditory moving window presentation. 
Brain and Language, 84, 222-249. Caramazza, A., & Zurif, E. B. (1976). Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. Brain and language, 3, 572-582. Choy, J. J., & Thompson, C. K. (2010). Binding in agrammatic aphasia: Processing to comprehension. Aphasiology, 24(5), 551-579. Christensen, S. C., & Wright, H. H. (2010). Verbal and non-verbal working memory in aphasia: What three n-back tasks reveal. Aphasiology, 24(6-8), 752-762. Cohen, R., Kelter, S., & Woll, G. (1980). Analytical competence and language impairment in aphasia. Brain and Language, 10, 331-347. Colin, C., & Radeau, M. (2003). Les illusions McGurk dans la parole: 25 ans de recherches. L’année Psychologique, 103(3), 497-542. Colin, C., Radeau, M., Soquet, A., Colin, F., & Deltenre, P. (2002). Mismatch negativity evoked by the McGurk-MacDonald effect: Evidence for a phonological representation within auditory sensory short term memory. Clinical Neurophysiology, 113(4), 495-506. Csépe, V., Osman-Sági, J., Molnár, M., & Gósy, M. (2001). Impaired speech perception in aphasic patients: Event-related potential and neuropsychological assessment. Neuropsychologia, 39, 1194-1208. Davis, G. A. (2007). Aphasiology: Disorders and clinical practice. New York: Pearson. DeDe, G., Caplan, D., Kemtes, K., & Waters, G. (2004). The relationship between age, verbal working memory, and language comprehension. Psychology and Aging, 19(4), 601-616.  85  Diehl, R. L., & Kluender, K. R. (1989). On the objects of speech perception. Ecological Psychology, 1(2), 121-144. Diesch, E. (1995). Left and right hemifield advantages of fusions and combinations in audiovisual speech perception. The Quarterly Journal of Experimental Psychology, 48A(2), 320-333. Dodd, B. (1987). The acquisition of lip-reading skills by normally hearing children. In B. Dodd & R. Campbell (eds), Hearing by eye: The psychology of lip-reading (pp. 163175). Hillsdale, NJ: Lawrence Erlbaum Associates. 
Duffy, J. R. (2005). Motor Speech Disorders: Substrates, Differential Diagnosis, and Management. St-Louis: Elsevier Mosby. Edwards, S. (2005). Fluent aphasia. Cambridge: Cambridge University Press. Edwards, S., & Varlokosta, S. (2007). Pronominal and anaphoric references in agrammatism. Journal of Neurolinguistics, 20, 423–444. Elphick, R. (1996). Issues in comparing the lipreading abilities of hearing impaired and hearing 15 to 16 year-old pupils. British Journal of Educational Psychology, 66, 357365. Evans, L. (1965). Psychological factors related to lipreading. Teacher of the Deaf, 63, 131136. Feld, J. E., & Sommers, M. S. (2009). Lipreading, processing speed, and working memory in younger and older adults. Journal of Speech, Language, and Hearing Research, 52, 1555-1565. Ferreira, F., Henderson, J. M., Anes, M. D., Weeks, P. A., & McFarlane, D. K. (1996). Effects of lexical frequency and syntactic complexity in spoken-language  86  comprehension: Evidence from the auditory moving-window technique. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(2), 324-335. Field, A. (2005). Discovering statistics using SPSS. London: Sage Publications. Fowler, C. A., & Rosenblum, L. D. (1991). Perception of the phonetic gesture. In I. G. Mattingly & M. Studdert-Kennedy (eds.), Modularity and the Motor Theory of Speech Perception, (pp. 33-59). Hillsdale, NJ: Lawrence Erlbaum. Francis, D. R., Clark, N., & Humphreys, G. W. (2003). The treatment of an auditory working memory deficit and the implications for sentence comprehension abilities in mild “receptive” aphasia. Aphasiology, 17(8), 723-750. Fridriksson, J., Moser, D., Ryalls, J., Bonilha, L., Rorden, C., & Baylis, G. (2009). Modulation of frontal lobe speech areas associated with the production and perception of speech movements. Journal of Speech, Language, and Hearing Research, 52, 812819. Friedmann, N., & Gvion, A. (2003). 
Sentence comprehension and working memory limitation in aphasia: A dissociation between semantic-syntactic and phonological reactivation. Brain and Language, 86(1), 23–39. Gandour, J., & Dardarananda, R. (1982). Voice onset time in aphasia: Thai. I. perception. Brain and Language, 17, 24-33. Goodglass, H. (1993). Understanding aphasia. San Diego, CA: Academic Press. Goodglass, H., & Kaplan, E. (1983). The assessment of aphasia and related disorders. Philadelphia: Lea & Febiger. Goodglass, H., Kaplan, E., & Barresi, B. (2001). Boston Diagnostic Aphasia Examination. Baltimore: Lippincott Williams & Wilkins.  87  Green, K. P. (1998). The use of auditory and visual information during phonetic processing: implications for theories of speech perception. In R. Campbell, B. Dodd & D. Burnham (eds.), Hearing by eye: Advances in the psychology of speechreading and auditoryvisual speech, (pp. 3-25). Hove, UK: Psychology Press. Green, K. P., & Gerdeman, A. (1995). Cross-modal discrepancies in coarticulation and the integration of speech information: The McGurk effect with mismatched vowels. Journal of Experimental Psychology: Human Perception and Performance, 21 (6), 1409-1426. Grodzinsky, Y. (1986). Language deficits and the theory of syntax. Brain and Language, 27(1), 135–159. Grodzinsky, Y. (1989). Agrammatic comprehension of relative clauses. Brain and Language, 37(3),480–499. Haarman, H. J., Just, M. A., & Carpenter, P. A. (1997). Aphasic sentence comprehension as a resource deficit: A computational approach. Brain and Language, 59, 76-120. Hagiwara, H. (1993). The breakdown of Japanese passives and theta-role assignment principle by Broca’s aphasics. Brain and Language, 45, 318-339. Hallowell, B., & Chapey, R. (2008). Introduction to language intervention strategies in adult aphasia. In R. Chapey (ed.), Language intervention strategies in aphasia and related neurogenic communication disorders, (pp. 3-19). Baltimore: Lippincott Williams & Wilkins. 
Hayashi, K., & Hayashi, R. (2007). Pure word deafness due to left subcortical lesion: Neurophysiological studies of two patients. Clinical Neurophysiology, 118, 863-868. Hessler, D., Jonkers, R., & Bastiaanse, R. (2010). The influence of phonetic dimensions on aphasic speech perception. Clinical Linguistics and Phonetics, 24(12), 980-996. 88  Hickok, G., Zurif, E., & Canseco-Gonzalez, E. (1993). Structural description of agrammatic comprehension. Brain and Language, 45, 371-395. Janse, E. (2006). Lexical competition effects in aphasia: Deactivation of lexical candidates in spoken word processing. Brain and Language, 97, 1-11. Jones J. A., & Munhall K. G. (1997). The effects of separating auditory and visual sources on audiovisual integration of speech. Canadian Acoustics, 2, 13-19. Jordan, T. R., & Thomas, S. M. (2007). Hemiface contributions to hemispheric dominance in visual speech perception. Neuropsychology, 21(6), 721-731. Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99(1), 122–149. Kagan, A., Winckel, J., & Shumway, E. (1996). Pictographic communication resources. Aphasia Centre: North York. Kaiser, A. R., Kirk, K. I., Lachs, L., & Pisoni, D. B. (2003). Talker and lexical effects on audiovisual word recognition by adults with cochlear implants. Journal of Speech, Language, and Hearing Research, 46(2), 390-404. Kaplan, E., Goodglass, H., & Weintraub, S. (2001). Boston Naming Test. Baltimore: Lippincott Williams & Wilkins. Kim, J., Davis, C., & Krins, P. (2004). Amodal processing of visual speech as revealed by priming. Cognition, 93(1), B39-B47. King, J., & Just, M. A. (1991). Individual differences in syntactic processing: The role of working memory. Journal of Memory and Language, 30, 580–602. Lau, E. F. (2009). The predictive nature of language comprehension (Doctoral dissertation). Retrieved from 89  Le Dorze, G., Brassard, C., Larfeuil, C., & Allaire, J. 
(1996). Auditory comprehension problems in aphasia from the perspective of aphasic persons and their families and friends. Disability and Rehabilitation, 18(11), 550-558. Lidestam, B., Lyxell, B., & Andersson, G. (1999). Speech-reading: Cognitive predictors and displayed emotion. Scandinavian Audiology,28(4), 211-217. Little, D., Prentice, K. J., Darrow, A. W., & Wingfield, A. (2005). Listening to spoken text: Adult age differences as revealed by self-paced listening. Experimental Aging Research, 31, 313-330. Lyxell, B., & Holmberg, I. (2000). Visual speechreading and cognitive performance in hearing-impaired and normal hearing children (11–14 years). British Journal of Educational Psychology, 70(4), 505-518. MacDonald, M. C., Almor, A., Henderson, V. W., Kempler, D., & Andersen, E. S. (2001). Assessing working memory and language comprehension in Alzheimer’s Disease. Brain and Language, 78, 17-42. MacSweeney, M., Calvert, G. A., Campbell, R., McGuire, P., David, A. S., Williams, S. C. R., Woll, B., & Brammer, M. J. (2002). Speechreading circuits in people born deaf. Neuropsychologia, 40, 801-807. Martin, R. C., & Fehler, E. (1990). The consequences of reduced memory span for the comprehension of semantic versus syntactic information. Brain and Language, 38, 1-20. Massaro, D. W. (1987). Speech Perception by Ear and Eye: A Paradigm for Psychological Inquiry. Hillsdale, NJ: Lawrence Erlbaum Associates. Massaro D. W., & Cohen M. M. (1993). Perceiving asynchronous bimodal speech in consonant-vowel and vowels syllables. Speech Communication, 13 (1-2), 127-134. 90  Massaro, D. W., & Stork, D. G. (1998). Speech recognition and sensory integration: A 240year-old theorem helps explain how people and machines can integrate auditory and visual information to understand speech. American Scientist, 86(3), 236-244. Mauner, G., Fromkin, V., & Cornell, T. (1993). Comprehension and acceptability judgments in agrammatism: Disruptions in the syntax of referential dependency. 
Brain and Language, 45, 340–370. McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264, 746-748. Messina, G., Denes, G., & Basso, A. (2009). Words and number words transcoding: A retrospective study on 57 aphasic subjects. Journal of Neurolinguistics, 22, 486-494. Miceli, G., Caltagirone, C., Gainotti, G., & Payer-Rigo, P. (1978). Discrimination of voice versus place contrasts in aphasia. Brain and Language, 6, 47-51. Milberg, W., & Blumstein, S. E. (1981). Lexical decision and aphasia: Evidence for semantic processing. Brain and Language, 14, 371-385. Milberg W., Blumstein, S. E., & Dworetzky, B. (1987). Processing of lexical ambiguities in aphasia. Brain and Language, 31(1), 138-150. Milner, B. (1971). Interhemispheric differences and psychological process. British Medical Bulletin, 27, 272-277. Mitchell, D. C. (2004). On-line methods in language processing: Introduction and historical review. In M. Carreiras & C. Clifton (eds.), The on-line study of sentence comprehension, (pp.15-32). New York: Psychology Press. Mitchell, T. V., & Maslin, M. T. (2007). How vision matters for individuals with hearing loss. International Journal of Audiology, 46, 500-511.  91  Mitchum, C. C., & Berndt, R. S. (2008). Comprehension and production of sentences. In R. Chapey (ed.), Language intervention strategies in aphasia and related neurogenic communication disorders, (pp. 632-653). Baltimore: Lippincott Williams & Wilkins. Miyake, A., Carpenter, P. A., & Just, M. S. (1994). A capacity approach to syntactic comprehension disorders: Making normal adults perform like aphasic patients. Cognitive Neuropsychology, 11(6), 671–717. Morris, J., Franklin, S., Ellis, A. W., Turner, J. E., & Bailey, P. J. (1996). Remediating a speech perception deficit in an aphasic patient. Aphasiology, 10, 137–158. Most, T., Rothem, H., & Luntz, M. (2009). 
Auditory, visual, and auditory-visual speech perception by individuals with cochlear implants versus individuals with hearing aids. American Annals of the Deaf, 154(3), 284-292. MRC Psycholinguistic Database. (2005). University of Western Australia, School of Psychology. Retrieved from Nakano, H., Saron, C., & Swaab, T. Y. (2010). Speech and span: Working memory capacity impacts the use of animacy but not of world knowledge during spoken sentence comprehension. Journal of Cognitive Neuroscience, 22(12), 2886-2898. Neely, K. K. (1956). Effect of visual factors on the intelligibility of speech. The Journal of the Acoustical Society of America, 28(6), 1275-1277. Nicholson, K. G., Baum, S., Cuddy, L. L., & Munhall, K. G. (2002). A case of impaired auditory and visual speech prosody perception after right hemisphere damage. Neurocase, 8, 314-322. O’Grady, W., & Lee, M. (2005). A mapping theory of agrammatic comprehension deficits. Brain and Language, 92, 91-100. 92  PASW Statistics 18, SPSS Inc., 2009. Pierce, R. S., & DeStefano, C. C. (1987). The interactive nature of auditory comprehension in aphasia. Journal of Communication Disorders, 20, 15-24. Poirier, J., Shapiro, L. P., Love, T., & Grodzinsky, Y. (2009). The on-line processing of verb-phrase ellipsis in aphasia. Journal of Psycholinguistic Research, 38, 237-253. Prather, P., Shapiro, L., Zurif, E., & Swinney, D. (1991). Real-time examinations of lexical processing in aphasics. Journal of Psycholinguistic Research, 20(3), 271-281. Psychological Software Tools. (2008). E-Prime v2.0 Release Candidate. Pittsburgh, PA: Author. Radcliff, R. (1993). Methods for dealing with reaction time outliers. Psychological Bulletin, 114(3), 510-532. Rose, M., & Douglas, J. (2006). A comparison of verbal and gesture treatments for a word production deficit resulting from acquired apraxia of speech. Aphasiology, 20(12), 11861209. Rosenbek, J. C., Lemme, M. L., Ahern, M. B., Harris, E. H., & Wertz, R. T. (1973). 
A treatment for apraxia of speech in adults. Journal of Speech and Hearing Disorders, 38, 462-472. Rosenblum, L. D. (2005). Primacy of multimodal speech perception. In D. Pisoni and R. Remez (eds.), The Handbook of Speech Perception (pp.51-78). Oxford: Blackwell Publishing. Ross, L. A., Saint-Amour, D., Leavitt, V. M., Javitt, D. C., & Foxe, J. J. (2007). Do you see what I am saying? Exploring visual enhancement of speech comprehension in noisy environments. Cerebral Cortex, 17(5), 1147-1153.  93  Rudmann, D. S., McCarley, J. S., & Kramer, A. F. (2003). Bimodal displays improve speech comprehension in environments with multiple speakers. Human Factors, 45(2), 329336. Saffran, E. M., Marin, O. S. M., & Yeni-Komshian, G. H. (1976). An analysis of speech perception in word deafness. Brain and Language, 3, 209-228. Sams, M., Surakka, V., Helin, P., & Kättö, R. (1997). Audiovisual fusion in Finnish syllables and words. Proceedings of the Auditory-Visual Speech Processing Conference, Rhodes, Greece, 101-104. Schmid, G., Thielmann, A., & Ziegler, W. (2009). The influence of visual and auditory information on the perception of speech and non-speech oral movements in patients with left hemisphere lesions. Clinical Linguistics and Phonetics, 23(3), 208-221. Schmid, G., & Ziegler, W. (2006). Audio-visual matching of speech and non-speech oral gestures in patients with aphasia and apraxia of speech. Neuropsychologia, 44, 546-555. Schwartz, J.-L., Berthommier, F., & Savariaux, C. (2004). Seeing to hear better: Evidence for early audio-visual interactions in speech identification. Cognition, 93, B69-B78. Sekiyama, K., & Burnham, D. (2008). Impact of language on development of auditoryvisual speech perception. Developmental Science, 11(2), 306-320. Sekiyama, K., Kanno, I., Miura, S., & Sugita, Y. (2003). Auditory-visual speech perception examined by fMRI and PET. Neuroscience Research, 47(3), 277-287. Shindo, M., Kaga, K., & Tanaka, Y. (1991). 
Speech discrimination and lip reading in patients with word deafness or auditory agnosia. Brain and Language, 40(2), 153-161. Smeele, P. M. T., Massaro, D. W., Cohen, M. M., & Sittig, A. C. (1998). Laterality in visual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 24(4), 1232-1242. 94  Strand, E. A., & Debertine, P. (2000). The efficacy of integral stimulation intervention with developmental apraxia of speech. Journal of Medical Speech-Language Pathology, 8, 295–300. Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212-215. Summerfield, Q. (1987). Some preliminaries to a comprehensive account of audio-visual speech perception. In B. Dodd & R. Campbell (eds.), Hearing by eye: The psychology of lip-reading, (pp. 3-51). London, UK: Lawrence Erlbaum Associates. Summerfield, Q., & McGrath, M. (1984). Detection and resolution of audio-visual incompatibility in the perception of vowels. Quarterly Journal of Experimental Psychology, 36A, 51-74. Sung, J. E., McNeil, M. R., Pratt, S. R., Dickey, M. W., Hula, W. D., Szuminsky, N. J., et al. (2009). Verbal working memory and its relationship to sentence-level reading and listening comprehension in persons with aphasia. Aphasiology, 23(7-8), 1040-1052. Swinney, D. A., Zurif, E., & Nicol, J. (1989). The effects of focal brain damage on sentence processing: An examination of the neurological organization of a mental module. Journal of Cognitive Neuroscience, 1, 25-37. Tallal, P., & Newcombe, F. (1978). Impairment of auditory perception and language comprehension in dysphasia. Brain and Language, 5, 13-24. Tyler, L. K. (1992). Spoken Language Comprehension: An Experimental Approach to Disordered and Normal Processing. Cambridge, MA: The MIT Press. Utman, J. A., Blumstein, S. E., & Sullivan, K. (2001). Mapping from sound to meaning: Reduced lexical activation in Broca’s aphasics. 
Brain and Language, 79, 444-472.  95  Wambaugh, J. L. (2002). A summary of treatments for apraxia of speech and review of replicated approaches. Seminars in Speech and Language, 23(4), 293-308. Wambaugh, J. L., West, J. E., & Doyle, P. J. (1998). Treatment for apraxia of speech: Effects of targeting sound groups. Aphasiology, 12, 731-743. Werker J. F., Frost P. E., & McGurk H. (1992). La langue et les lèvres: Crosslanguage influences on bimodal speech perception. Canadian Journal of Psychology, 46(4), 551568. Wiener, D. A., Connor, L. T., & Obler, L. K. (2004). Inhibition and auditory comprehension in Wernicke’s aphasia. Aphasiology, 18, 599-609. Woodhouse, L. (2007). Speech-reading skills and communication development in a hearing impaired cohort. (Doctoral dissertation, University of Queensland). Woodhouse, L., Hickson, L., & Dodd, B. (2009). Review of visual speech perception by hearing and hearing-impaired people: Clinical implications. International Journal of Language and Communication Disorders, 44(3), 253-270. Wright, H. H., & Newhoff, M. (2004). Priming auditory comprehension in aphasia: Facilitation and interference effects. Aphasiology, 18(5/6/7), 555-565. Youse K. Cienkowski K. (2004). The effect of context on lipreading in adults with aphasia: A follow-up study. 34th Clinical Aphasiology Conference. Park City, UT. Retrieved from Youse, K. M., Cienkowski, K. M., & Coelho, C. A. (2004). Auditory-visual speech perception in an adult with aphasia. Brain Injury, 18(8), 825-834.  96  APPENDIX A: DISCOURSE TASK STIMULI SIMPLE PASSAGES PRATICE ITEMS a.  Cynthia wants to quit smoking. // She bought a book on how to quit smoking cold turkey.// She threw away the book within 3 days,// and bought some Nicorette gum instead.// She knows herself too well.  (detail) Did Cynthia buy nicotine patches? (no)  (inf.) Was Cynthia able to quit cold turkey? (no)  b.  
Martha enjoys woodworking.// One morning, a rusty nail went through the wood and into her finger.// Later in the day, her finger became very swollen.// She worried she might be getting an infection.
(detail) Did Martha hurt herself with scissors? (no)
(inf.) Will Martha likely need medical attention? (yes)

LIST S-1

1. David wanted to go for a mountain bike ride.// It had been raining for six days in a row.// Now, even though it had finally stopped raining, the trails were all muddy.// So, David decided to read a book instead.
(detail) Did David want to go for a run? (no)
(inf.) Had the weather been nice lately? (no)

2. Ben is very nervous.// He has to write a document for work,// and his boss gave him a deadline.// It is far from being done.// Ben decided to stay up all night to finish it.
(detail) Is the document for Ben’s work? (yes)
(inf.) Did Ben still have a lot of work to do that night? (yes)

3. Erika lives in Canada.// She went to Greece for her Christmas holidays.// The temperature was between 5 and 10 degrees// and it rained the whole time.// Sadly, she spent almost all of her time in the hotel room.
(detail) Does Erika live in Greece? (no)
(inf.) Did Erika have a good time on her trip? (no)

4. Andrew works as a mechanical engineer.// He has been working for the same company in Vancouver for 17 years.// His employer offered him a higher position in Calgary.// He declined the offer.
(detail) Is Andrew a lawyer? (no)
(inf.) Is Andrew likely happy at his current job location? (yes)

5. Oliver agreed to go on his first blind date.// At the restaurant, he looked for a woman with black hair and a red dress.// He spotted her from behind// and walked over to her table.// He was shocked to see his ex-wife sitting at the table.
(detail) Did the woman have blond hair? (no)
Allison is sick.// She spent the night vomiting.// She took some medicine// and tried to sleep.// She remembered her dinner had tasted funny last night.  (detail) Did Allison take any medicine? (yes)  (inf.) Did Allison likely catch a cold? (no) 7. Luke loves to play poker.// Unfortunately, he is not a very good player.// However, his friends love to play with him.// They often win a lot of money.  (detail) Does Luke like to play poker? (yes)  (inf.) Does Luke win often at poker? (no) 8. Jill decided to buy a new car for her teenage son.// She found a used Toyota for an excellent price.// She bought it and took it home.// Two weeks later, the car broke down.  (detail) Was the Toyota expensive? (no)  (inf.) Was the car likely in bad condition? (yes) 9. Peter bought a new plant for his office.// He put it on his brand new desk.// On Friday, before heading home, he generously watered his plant.// On Monday, Peter was disappointed to find water stains on his desk.  (detail) Did Peter put his plant on his desk? (yes)  (inf.) Did Peter overwater his plant? (yes) 10. Harold decided to go bungee jumping.// The bungee instructor tied him securely to the body harness and rope// and got him all ready for the big jump.// Harold walked to the edge of the platform.// He screamed and ran back.  (detail) Did Harold wear a body harness? (yes)  (inf.) Was Harold brave enough to jump? (no) LIST S-2 1. Claire decided to go back to college.// She wanted to get a nursing degree.// She applied to the Vancouver Community College,// but she was too late.// The program was already full.  (detail) Did Claire want to get a degree in nursing? (yes)  (inf.) Was Claire able to enroll in the program? (no) 2. Sally loves to cook.// Yesterday, she decided to make a lemon pie.// Her husband tasted it and made a funny face.// Sally had forgotten to add sugar.  (detail) Did Sally forget to add flour? (no)  (inf.) Did the pie taste good? (no) 3. 
Bill visited his sister and brother-in-law yesterday.// They had dinner together.// They looked at family photos and laughed a lot.// He was back home at 2:00 A.M.
(detail) Did Bill go home before midnight? (no)
(inf.) Did Bill enjoy visiting his sister? (yes)

4. Rob and Kristen decided to go camping.// They set up their tent and went fishing.// When they came back, they saw many tears and cuts on their tent.// The campground staff sent out a bear warning.
(detail) Did Rob and Kristen go fishing? (yes)
(inf.) Did a bear damage Rob and Kristen’s tent? (yes)

5. Sandra started her new job as a waitress.// Her first night was a disaster.// She dropped a plate full of food// and spilled water on a customer.// She quit her job at the end of her shift.
(detail) Did she spill water on a customer? (yes)
(inf.) Was Sandra clumsy? (yes)

6. Quinn and his friend decided to go fishing.// Quinn bet that he would catch more fish.// They sat by the lake all afternoon.// At the end of the day, neither one had caught anything.
(detail) Did Quinn go fishing with his brother? (no)
(inf.) Did Quinn win the bet? (no)

7. Margaret went to the public library.// She browsed the fiction section,// but could not find the book she wanted.// She asked the librarian for help.// The librarian said the book had been damaged and was no longer available.
(detail) Did Margaret browse the reference section? (no)
(inf.) Did Margaret borrow the book? (no)

8. Little Matthew begged his mom to go to the zoo.// They packed a lunch and headed to the Greater Vancouver Zoo.// Matthew spent 2 hours watching the monkeys.// He begged his mom to buy him a monkey for his next birthday.
(detail) Did Matthew and his mom pack a lunch? (yes)
(inf.) Does Matthew like monkeys? (yes)

9. When Marty got to his car, he noticed a piece of paper on the windshield.// It was a parking ticket.// Marty suddenly realized this was a 30 minute parking stall.// His car had been parked there since morning.
(detail) Did Marty park his car there in the morning? (yes)
(inf.) Was Marty’s car parked for less than 30 minutes? (no)

10. Lily tried sushi for the first time yesterday.// She liked the California rolls very much.// She ordered a few more,// and ate her friend’s as well.// At the end of the night, she felt very sick.
(detail) Did Lily order California rolls? (yes)
(inf.) Did Lily likely eat too much sushi? (yes)

COMPLEX PASSAGES

PRACTICE ITEMS

a. When Sean’s computer got a virus,// he decided to go to the store and get it fixed right away.// At the store, Sean was told that it would cost him at least $500 to get his computer fixed,// since the virus had damaged many parts of the system.// Sean thought of buying a new computer instead,// since he could probably get a fairly good computer for the same price.// He therefore decided to buy himself a new laptop for under $500.
(detail) Did the virus damage many parts of the system? (yes)
(inf.) Did Sean get his computer fixed? (no)

b. Chad headed to the airport extra early to make sure he wouldn’t miss his plane.// When he got to the airport, he realized he had forgotten his passport,// and therefore decided to hurry back home and get it.// After spending a half hour looking for his passport,// he started wondering if perhaps he had dropped it on the floor somewhere at the airport.// Worried he would never find it,// he opened his suitcase to reach for his cell phone and found his passport neatly tucked under his phone.
(detail) Did Chad spend 3 hours looking for his passport? (no)
(inf.) Did Chad have his passport with him all along? (yes)

LIST C-1

1.
On Tuesday morning, Greg headed to the dentist’s office for a checkup.// He hadn’t been to the dentist in over ten years,// and decided that it was finally time to get a thorough cleaning.// After spending more than 3 hours cleaning his teeth, the staff told Greg that his mouth was full of cavities.// Greg promised himself that he would get a regular checkup every year from now on.
(detail) Did the staff spend more than 3 hours cleaning Greg’s teeth? (yes)
(inf.) Did Greg wait too long to get his teeth checked? (yes)

2. Every year, the Johnsons plant six different kinds of tomatoes in their garden.// They use them fresh in salads, soups and sauces in the summer,// and when fall comes they can the rest of the tomatoes for the winter.// Mr. Johnson also makes home-made ketchup and tomato marmalade which he sells at the local farmers’ market for a very reasonable price.// For years, the Johnsons have been known in the neighborhood as the “tomato couple”, a name they have proudly embraced.
(detail) Is Mr. Johnson’s ketchup expensive? (no)
(inf.) Do the Johnsons like their nickname? (yes)

3. Sasha, a busy 32 year old business woman with 4 young children,// decided to hire a nanny to help out around the house and take care of the kids.// The nanny had been recommended by a friend,// so Sasha felt comfortable hiring her without contacting her references.// After the first two weeks, Sasha started noticing that some items were missing from her bedroom,// including a very expensive pearl necklace.// She decided to call one of her nanny’s references,// and soon realized that she had made a big mistake by hiring her.
(detail) Did Sasha’s pearl necklace go missing? (yes)
(inf.) Was the nanny likely a thief? (yes)

4.
Ever since Sam was a young boy, he has always been a big hockey fan.// As a child, he would dress up in his older brother’s hockey shirt and helmet,// go down to the community rink in the winter// and slide down the ice on his boots while pretending to be Wayne Gretzky.// Today, Sam’s wife says that things haven’t really changed.// He still dresses up in his Canucks jersey,// drives down to GM Place during NHL seasons,// and sits in the stands, eats popcorn, and cheers for his team.
(detail) Did Sam use to pretend he was Bobby Orr? (no)
(inf.) Is Sam an NHL hockey player? (no)

5. Recently, Geoff noticed that his vision had become increasingly blurry,// and thought that it might be time to get his eyes checked again and perhaps get a new pair of glasses.// During his appointment, the optometrist told Geoff that his vision had actually gotten better.// Relieved, he bought new frames and lenses,// went home and happily told his wife that his vision had improved.// She smiled and said that it was great news,// but couldn’t help laughing at his new glasses and told him they were way too big for his face.
(detail) Did Geoff’s vision get worse? (no)
(inf.) Did Geoff’s wife like his new glasses? (no)

6. Every year, Kim worries about her teenage son Jake, especially when winter comes around.// Jake loves snowboarding,// but often chooses to go snowboarding without a helmet, even though he is well aware of the risks involved.// He insists that he is an excellent snowboarder// and that nothing will ever happen to him because he doesn’t do any of the risky moves many of his friends do.// Kim is convinced she will one day get a phone call telling her that her son is in critical condition at the hospital.
(detail) Does Jake always wear a helmet? (no)
(inf.) Did Jake have a serious snowboarding accident? (no)

7.
The Browns decided it was time to redecorate their living room,// which inevitably involved changing the colors of the walls.// They went to Home Depot// and picked 2 new colors for their living room,// a dark red and a light forest green.// They started painting the next day,// immediately pleased with the results of the dark red color.// When both colors were applied, however,// they were disappointed to see that the combination of red and green made the room look like it was decorated for the Christmas holidays.

(detail) Did the Browns buy two different colors of paint? (yes)
(inf.) Did the Browns like the look of their new living room? (no)

8. Although Anne wasn’t supposed to give birth until March,// she went into labour one month earlier while making dinner.// Panicked, she called her husband at work, who rushed home as soon as possible.// They hopped into their car and drove to the hospital,// worried that they wouldn’t make it in time.// Anne gave birth to a healthy baby boy less than 20 minutes after arriving at the hospital.

(detail) Did Anne go into labour while eating breakfast? (no)
(inf.) Did Anne and her husband arrive at the hospital in time? (yes)

9. Cassie decided that it was time for her to learn how to be a good photographer,// and therefore bought herself a new high-end digital camera.// She started practicing with her camera by going out to the park and taking pictures of people, animals, trees, and flowers.// She tried the many different buttons and options// but remained unhappy with the results.// She therefore decided to switch to the automatic mode// and was instantly pleased with how her pictures turned out.

(detail) Did Cassie practice taking pictures in the park? (yes)
(inf.) Was Cassie skilled in using all the different options on her camera? (no)

10.
Every Thursday night, Loretta goes to the community centre to play bingo.// She goes partly because she enjoys playing bingo with her friends,// but also because she secretly hopes to win money or some of the other prizes offered.// Last Thursday, Loretta almost won twice,// but each time, someone else was one step ahead of her and claimed the prize.// Because next week is Loretta’s birthday, she strongly believes it will be her lucky week,// and she’ll win for the first time since she started playing at the community centre.

(detail) Does Loretta play bingo with her friends? (yes)
(inf.) Has Loretta ever won at bingo at the community centre? (no)

LIST C-2

1. On Saturday, Mike decided to go to the pet store and buy a puppy for his seven-year-old daughter’s birthday.// Unfortunately, the only puppy available at the store was a young bulldog.// Mike knew that this was not the type of dog his daughter was hoping to get,// but he also knew that she would be very upset if she didn’t get a pet for her birthday.// He then saw a cute orange tabby kitten and hoped that his daughter would be equally happy with a cat.

(detail) Was the only puppy available at the store a bulldog? (yes)
(inf.) Did Mike buy his daughter a cat? (yes)

2. One of Mary Ann’s New Year’s resolutions was to learn to cook.// Last weekend, she bought a new seafood cookbook// and decided to try a very tasty shrimp and scallop soup.// She followed all the steps outlined in the recipe, and her soup looked similar to the picture in the book.// She cautiously took a spoonful,// and decided that her New Year’s resolution was off to a good start.

(detail) Did Mary Ann buy a new dessert cookbook? (no)
(inf.) Did Mary Ann’s soup taste bad? (no)

3.
Heather and Karl have always wanted a baby girl,// but were instead blessed with 3 young, healthy and very active boys.// They were considering having another baby and hoping for a girl,// but Karl worried that 4 kids would be too much to handle,// and that they might end up with another boy instead.// Soon after, Heather found out she was pregnant,// and the couple was full of hopes and dreams for this last child.// At the ultrasound a few weeks later, they were shocked to find out they were having twin boys.

(detail) Did the couple already have 4 boys? (no)
(inf.) Will Heather and Karl likely try again for a girl? (no)

4. As a child, Lynn always dreamed of becoming a famous singer and signing autographs for thousands of fans.// At the age of 8, she participated in a singing contest at her school,// but unfortunately finished second to last.// Determined to make it as a singer and convinced that she had a hidden talent as a performer,// she enrolled in singing classes in her community.// Eleven years later, she was named the winner of the TV show Canadian Idol// and pursued a very successful career in entertainment.

(detail) Did Lynn win the singing contest at her school when she was 8 years old? (no)
(inf.) Is Lynn a good singer today? (yes)

5. Janice wondered why her clothes kept disappearing from her closet and reappearing days later.// She sometimes became very upset in the morning when getting ready for work when she couldn’t find the clothes or shoes she was looking for.// She initially blamed her teenage daughter Katherine,// who firmly denied it and insisted she would never wear those types of clothes.// One day when shopping at the mall, Janice spotted her daughter with her friends on the other side of the store,// and realized that her daughter had lied to her about sneaking into her closet.

(detail) Did Janice spot her daughter with her friends in the park? (no)
(inf.) Was Katherine taking her mom’s clothes without her permission? (yes)

6. Tania had been wanting to go work in Mexico for a few months,// and decided that this year was finally the year she would go.// To get ready for this new adventure,// she enrolled in Spanish classes and familiarized herself with authentic Mexican dishes.// After a month of eating spicy tacos and enchiladas and struggling with her Spanish,// she decided to rethink her decision to go to Mexico.// Perhaps Australia would be a better option, she decided.

(detail) Was Tania planning on going to Mexico to work? (yes)
(inf.) Did Tania feel at home eating Mexican food and speaking Spanish? (no)

7. Fred’s dog has always been terrified of thunderstorms.// When Fred first adopted him 9 years ago, the little puppy would run under the kitchen table and bark until the storm passed,// unable to stop shaking.// Over the years, Fred’s dog learned that thunder was not as threatening as it seemed,// and that it never actually hurt him.// Now, when a thunderstorm starts, he sits in the kitchen, never barks,// but still cannot stop himself from shaking.

(detail) Does Fred’s dog sit in the kitchen during thunderstorms? (yes)
(inf.) Does Fred’s dog still fear thunderstorms as much as when he was little? (no)

8. For the past few months, Blake had been trying to convince his wife to switch to satellite TV,// insisting that it wasn’t too expensive, and that they would get a lot of different channels.// When Blake’s wife finally agreed,// he immediately called Star Choice and got it all set up.// Thrilled, he spent the next 3 weeks sitting on the couch, watching sports, movies, and reality TV shows.// After trying unsuccessfully to get her husband off the couch,// Blake’s wife called Star Choice and cancelled their contract.

(detail) Did Blake spend a lot of time watching sports? (yes)
(inf.) Did Blake’s wife regret getting satellite TV? (yes)

9.
One morning while making his bed, Lionel noticed tiny black dots on his mattress.// Confused, he picked up a flashlight to take a closer look,// and realized that they were moving around on his mattress and bedsheets.// He let out a scream,// took a step back,// and picked up the telephone to call the exterminator.// In the end, it took the exterminator close to a month to get rid of all the bedbugs that had taken over Lionel’s bed, couches, and carpet.

(detail) Were the dots on Lionel’s bed moving? (yes)
(inf.) Were the bedbugs difficult to get rid of? (yes)

10. Althea, a high school English teacher, was very excited about the new pants she had bought on sale at the mall the previous weekend.// Yesterday, she decided to wear them for the first time.// She headed to school and walked into her classroom,// put her bag down by her desk and headed over to the blackboard to write the daily schedule.// One girl raised her hand and told Althea that she had forgotten to take the price tag off her new pants,// and the other students started laughing as Althea blushed.

(detail) Did a boy in the class tell Althea about the price tag? (no)
(inf.) Was Althea embarrassed? (yes)

