THE EFFECTS OF NON-NATIVE ENGLISH ON THE LANGUAGE PROCESSING ABILITIES OF NATIVE ENGLISH SPEAKERS

by

KATHRYN DAWLINGS
B.Sc., The University of Victoria, 1999

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (School of Audiology and Speech Sciences)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April, 2002
© Kathryn Dawlings, 2002

Thesis Authorisation Form

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

ABSTRACT

Listening to people with foreign-accented English is often challenging for native English listeners. The current project aimed to investigate the effects of listening to non-native English on language processing. The roles of syntactic complexity, working memory, and familiarity with accented English in processing non-native English were considered from three theoretical views of working memory and language processing. Twenty-two native English participants listened to sentences of three levels of syntactic difficulty spoken by a native English speaker and a non-native English speaker in an on-line word monitoring experiment. Listeners' response times to monitoring for a target word in each sentence were taken as a measure of processing difficulty. The results showed that response times were slower when listening to non-native English than to native English speech. Response times were also slower for the most syntactically difficult sentence type than for the more syntactically simple sentences. The slowest response times occurred for target words that occurred early in the most syntactically difficult sentence type spoken by the non-native English speaker. Working memory scores and self-reported experience listening to accented English did not have a significant effect on response times. There was a trend towards faster response times for the non-native English input from block 1 to block 2, suggesting the presence of adaptation to the accent. These findings support a theory of language processing in which acoustic-phonetic features interact with syntactic processing.
TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements

CHAPTER 1: Literature Review
1.1 Introduction
1.2 The Effects of Non-Native Speech
1.2.1 Characteristics of Non-Native Speech
1.2.2 The Effects of Familiarity with Non-Native English
1.2.3 The Interaction of Accent and Processing Time
1.2.4 Listening to Non-Native Speech in Noise
1.2.5 Speaking Rate and Non-Native Speech
1.2.6 Summary of the Effects of Non-Native Speech
1.3 Working Memory and Language Processing: Three Views
1.3.1 Capacity theory
1.3.2 Separate Resource Theory
1.3.3 A Connectionist Approach
1.4 Adaptation
1.5 The Present Study
1.6 Research Hypotheses

CHAPTER 2: Method
2.1 Overview
2.2 The Comprehensibility Study
2.2.1 Participants
2.2.2 Comprehensibility Stimuli
2.2.3 Recordings
2.2.4 Procedure
2.2.5 Analysis
2.3 The Main Experiment
2.3.1 Participants
2.3.1.1 Speakers
2.3.1.1.1 Native English Speaker
2.3.1.1.2 Non-Native English Speaker
2.3.1.1.2.1 Differences in English and Cantonese Phonology
2.3.1.1.2.2 Phonology of the Non-Native Speaker
2.3.1.2 Listeners
2.3.2 Description of the Word Monitoring Task
2.3.3 Stimuli for the Word Monitoring Task
2.3.3.1 Syntactic Complexity
2.3.3.2 Fillers
2.3.3.3 Plausibility
2.3.3.4 Comprehension Questions
2.3.3.5 Lists
2.3.4 Recordings for the Word Monitoring Task
2.3.5 Procedure for the Word Monitoring Task
2.3.6 Working Memory
2.3.7 Language Background Questionnaire
2.3.8 Analysis

CHAPTER 3: Results
3.1 Main Effects in the Word Monitoring Task
3.2 Interactions in the Word Monitoring Task
3.3 Adaptation to Non-Native Speech

CHAPTER 4: Discussion
4.1 Introduction
4.2 Speaker
4.3 Syntactic Complexity
4.4 Target Word Position
4.5 Speaker and Syntactic Complexity
4.6 Syntactic Complexity, Target Position, and Speaker
4.7 Working Memory Capacity
4.8 Language Processing Resources
4.9 Familiarity
4.10 Adaptation to Non-Native English
4.11 Implications
4.12 Conclusions

REFERENCES
APPENDIX A: Comprehensibility Ratings
APPENDIX B: Experimental Sentences
APPENDIX C: Experimental Target Words
APPENDIX D: Filler Sentences
APPENDIX E: Comprehension Questions
APPENDIX F: Listening Span Stimuli
APPENDIX G: Working Memory Span and Familiarity Groups
APPENDIX H: Language Background Questionnaire

LIST OF TABLES

Table 1. Percent (%) of Listeners' Significant Correlations of Accent Features with Accent Ratings, Comprehensibility Ratings, and Intelligibility Ratings: A Comparison of Two Studies
Table 2. A Compilation of Sentence Types and Examples Used in the Present Summary of Studies
Table 3. Possible Combinations of Stimuli Presentation
Table 4. Conditions for which Mean Response Times were Calculated
Table 5. Mean Response Times (N = 22) and Standard Error by Speaker, Sentence Type, and Target Word Position (in msec)
Table 6. Mean Response Times (in msec) to Target Words for Early (Blocks 1-3) and Late (Blocks 11-12) Occurring Blocks of 5 Trials for the Non-Native English Sentences
Table 7. Mean Response Times (in msec) to Target Words for 4 Blocks of 15 Trials for the Non-Native English Sentences
Table 8. Three Examples of Simple Active and Conjoined Sentences

LIST OF FIGURES

Figure 1. Mean response times and standard error for the early and late target word positions across sentence types
Figure 2. Mean response times (msec) and standard error for the early and late target positions across sentence types (simple active, conjoined, center-embedded) for the NE (native English) and NNE (non-native English) speakers

ACKNOWLEDGEMENTS

Special thanks go out to Jeff Small, without whose unwavering support and guidance the completion of this project would not have been possible. I would also like to thank Barbara Purves and Barbara Bernhardt, whose thoughtful comments and suggestions were very much appreciated.

CHAPTER 1: Literature Review

1.1 Introduction

Listening to people with foreign-accented English is often challenging for native English listeners. Anecdotally, native English listeners will report that they have to listen much more carefully to non-native English speakers and that they do not understand everything that is said, although they may find that comprehension of accented speech improves over time. The current project aims to investigate the effects of listening to non-native English on the language processing of native English listeners. Previous research has focused on characteristics of second language learners' English, native English listeners' ability to transcribe and rate non-native speech, and native English listeners' comprehension of non-native narratives in off-line tasks. No research to date has focused on native English listeners' on-line language processing abilities while listening to accented speech. The current study will investigate whether the time it takes for native English listeners to process accented speech differs from the time it takes to process native speech. If there are differences in processing time, these differences may be caused by the increased demand on processing resources due to the difficulty of processing the accent. Increasing the syntactic complexity of sentences also increases the demand on working memory resources. The role of working memory is a key element in many theories of language processing. One proposal is that working memory capacity determines the amount of resources that are available for completing language processing tasks; tasks that require a lot of storage and/or processing put a large demand on working memory resources (Just & Carpenter, 1992). This study will investigate the combined effects of syntactic complexity and non-native accent on processing time in sentences and will relate these findings to individual differences in working memory. Furthermore, the time course over which the native English listener adjusts to accented speech will be investigated, as will the effect of the listener's previous exposure to non-native English. As the listener becomes more familiar with the accent, he may need to dedicate fewer resources to the decoding of the acoustic-phonetic information and may have more resources available for syntactic processing. This chapter begins with a review of the research on the effects of non-native speech on native English speakers' comprehension. It will then focus on discussions of working memory for language and adaptation to speech. Finally, the research hypotheses for this project will be presented.
1.2 The Effects of Non-Native Speech on Native English Listeners

Much of the literature concerning the effects of non-native speech on native English speakers' comprehension has been motivated by English as a second language pedagogy. Researchers have been questioning which aspects of English are essential to developing native-like proficiency in English as well as how to maximise ESL students' communicative success in their new language. The latter could theoretically be achieved by identifying a variable that, when modified, increases the success of interactions between the L2 (second language) learner and native speakers, for example, finding an optimal speaking rate for the non-native English speaker. This section addresses the characteristics of non-native English that affect English listeners' understanding. It also presents research that has investigated the effects of familiarity with non-native speech as well as the interaction of non-native speech and processing time. Furthermore, studies that have increased the language processing demand by adding background noise to non-native speech and changing speaking rate are also presented.

1.2.1 Characteristics of Non-Native Speech

Native speakers of a language have often been described as using a simplified register, or "foreigner talk," when interacting with non-native speakers. Varonis and Gass (1982) and Gass and Varonis (1984) investigated potential triggers for the use of foreigner talk. Varonis and Gass (1982) questioned whether a characteristic of the non-native speaker's talk, such as errors in grammar, leads the listener to react differently to the speaker, or whether the non-native speaker somehow demonstrates a lack of comprehension that leads to adaptations in the native speaker's speech. The latter did not seem to be the case because in a naturalistic study where non-native and native speakers asked strangers for directions, the listeners adapted their responses before the non-native speakers had the opportunity to demonstrate decreased comprehension (Varonis & Gass, 1982). This led the authors to investigate the contribution of grammar and pronunciation to native speakers' reactions to non-native speech. Non-native English speakers were recorded reading two sentences; one was grammatically correct and the other was not. Native speakers judged the pronunciation of the sentences as either "good" or "bad." They found that when the non-native speaker's pronunciation was in the middle of a good-bad scale of pronunciation, the grammaticality of the sentence had an effect on the native listeners' ratings; however, when the non-native speaker's pronunciation was at either extreme of the scale, the grammaticality of the sentence had only a minimal effect. Another group of native speakers listened to the sentences and judged them on comprehensibility on a five-point scale ranging from "I understood this sentence easily" to "I didn't understand this sentence at all." Varonis and Gass (1982) found that higher ratings of comprehensibility and the number of "good" pronunciation responses were correlated for both the grammatical and ungrammatical sentences. This study suggests that native English listeners found non-native speech easier to understand when pronunciation was considered to be good, keeping in mind that pronunciation ratings were sometimes affected by the grammaticality of the sentence. The latter occurred especially when a separate group of expert raters rated pronunciation as moderate.
Munro and Derwing continued to investigate the interaction of foreign accent, comprehensibility, and intelligibility in the speech of non-native speakers. Munro and Derwing (1995a) defined intelligibility as the extent to which the native speaker understands the message. They aimed to measure intelligibility by asking the native speaker to transcribe the L2 learner's utterances in standard orthography. This became a measure of the amount of the utterance that was actually understood by the listener. Comprehensibility was defined as the listener's perception of intelligibility and was measured by judgments on a rating scale of how easy or difficult an utterance is to understand (Munro & Derwing, 1995a). Accentedness referred to how strong the non-native speaker's foreign accent was perceived to be and was also measured on a subjective scale (Munro & Derwing, 1995a). It should be noted, however, that Munro and Derwing's use of transcriptions to measure comprehension is an off-line measure. Participants were required to listen to a sentence, hold it in memory, and then write it down. Therefore, it is impossible to isolate the influence of having to remember sentences that varied in length from 4 to 17 words, or of having to remember deviations in grammar that may occur in non-native English. At least one study (Munro & Derwing, 1995b), however, reported that the length of the sentence did not have a significant effect on intelligibility scores.

Munro and Derwing (1995b) examined the relationship between three dimensions of non-native speech: accent, intelligibility, and comprehensibility. A group of 18 native English speakers listened to a set of speech samples produced by 10 non-native English speakers (Mandarin was their first language), transcribed them orthographically, and rated them for degree of foreign accent and comprehensibility. They found moderate correlations between intelligibility and comprehensibility (i.e. perceived intelligibility) and between comprehensibility and accent (Munro & Derwing, 1995b). This is consistent with the findings described above where comprehensibility judgments were correlated with pronunciation judgments (Varonis & Gass, 1982). Although comprehensibility ratings were correlated with accent ratings, they were not equivalent (Munro & Derwing, 1995b). This means that when an utterance was rated as easy to understand, it did not mean that there was no foreign accent. Furthermore, comprehensibility scores were a much more accurate indicator of the listener's actual comprehension than were the accent scores (Munro & Derwing, 1995b). This could be because "accent" is an abstract, perceptual dimension influenced by stereotypes and social expectations. Interestingly, the native speakers sometimes rated the non-native utterances as moderately or heavily accented even when they were able to transcribe them perfectly (Munro & Derwing, 1995b). These findings suggest that being perceived as having a heavy accent does not necessarily lead to either reduced intelligibility or reduced comprehensibility. The above results were replicated in a study investigating non-native English speakers from four different first language backgrounds: Cantonese, Japanese, Polish, and Spanish (Derwing & Munro, 1997).
Accent, intelligibility, and comprehensibility were related but were not equivalent; listeners rated accent more severely than comprehensibility, which in turn suggested more severe comprehension difficulties than were actually revealed by the intelligibility scores. This pattern occurred regardless of the non-native speaker's first language. This shows that some features of accent may be highly noticeable, although they do not necessarily interfere with comprehensibility or intelligibility.

The question returns to which characteristics of the L2 learner's speech contribute to how well it is understood and judged by native English listeners. Munro and Derwing (1995b) compared intelligibility, accent, and comprehensibility scores with phonetic, phonemic, and grammatical errors, and goodness of intonation ratings in 10 non-native speakers' English utterances; the speakers' L1 was Mandarin. They found significant correlations between accent ratings and all the error types, as well as the goodness of intonation ratings, for the majority of their listeners (over 70%). This suggests that the listeners took all of the above factors into account when they made accentedness judgments. When comprehensibility ratings were compared with the linguistic measures, goodness of intonation was correlated for the majority of listeners, but the number of correlations for the other measures dropped substantially (44% for phonemic and 11% for phonetic errors). Intelligibility scores were found to be correlated with error types and intonation for only a minority of the listeners (except 0% for phonetic errors). These findings suggest that phonemic, phonetic, and grammatical errors in the stimuli have more influence on perceptions of accent than they do on perceptions of comprehensibility, whereas errors in prosody affect perceptions of both accent and comprehensibility (Munro & Derwing, 1995b).

The contributions of four accent features (grammatical errors, phonemic errors, prosody, and speaking rate) were also investigated in the English productions of speakers with four other first language backgrounds: Cantonese, Japanese, Polish, and Spanish (Derwing & Munro, 1997). A comparison of the results of these two studies is presented in Table 1. As in the above research with Mandarin (Munro & Derwing, 1995b), the accent features were not correlated with intelligibility scores for very many listeners; there were, however, far fewer correlations between accent features and both accent ratings and comprehensibility ratings in this study than in the 1995 study with Mandarin speakers. Instead, only a small majority of listeners in the 1997 study had correlations between grammatical errors and accent and comprehensibility ratings (50% and 54% respectively). Furthermore, in the 1997 study, a minority of listeners had correlations between intonation scores and accent and comprehensibility ratings, as opposed to the over 80% majority that had significant correlations in the 1995 study. Results from the 1997 study suggest that grammatical errors are the most salient accent feature. However, as this was true for only a small majority of listeners and because of the differences in the results of the two studies, the influence of accent features on accent and comprehensibility judgments is not clear.
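The "percent of listeners with significant correlations" measure compared in Table 1 can be made concrete with a small sketch. The code below is not taken from the thesis or from Munro and Derwing's analyses; the number of speakers and listeners, the rating data, and the .05 criterion are hypothetical, and it shows only the general form of a per-listener correlational analysis.

import numpy as np
from scipy.stats import pearsonr

def percent_significant(feature_scores, ratings_by_listener, alpha=0.05):
    """Correlate one accent feature (one value per speaker) with each listener's
    ratings of those speakers, and report the percentage of listeners whose
    correlation reaches significance."""
    significant = 0
    for ratings in ratings_by_listener:           # one row of ratings per listener
        r, p = pearsonr(feature_scores, ratings)  # correlation across speakers
        if p < alpha:
            significant += 1
    return 100.0 * significant / len(ratings_by_listener)

# Hypothetical data: 10 speakers rated by 18 listeners.
rng = np.random.default_rng(0)
grammar_errors = rng.integers(0, 6, size=10)                        # errors per utterance
accent_ratings = rng.integers(1, 10, size=(18, 10)).astype(float)   # 9-point accent scale
print(percent_significant(grammar_errors, accent_ratings))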
The researchers note, however, that the non-native speakers in the 1997 study had lower English proficiency levels than the speakers from the 1995 study, which led to more grammatical and phonemic errors and less native-like prosody; for example, there were 2.52 grammatical errors per utterance in the 1997 study compared to 0.6 errors in the 1995 study (Derwing & Munro, 1997). Derwing and Munro (1997) suggest that the abundance and range of errors may have had a cumulative effect on accent and comprehensibility judgments that was not reflected here. They also hypothesise that the listeners may not have been able to apply a single set of criteria when rating the non-native sentences because of the potential difficulty of being consistent across four language backgrounds (Derwing & Munro, 1997).

Table 1
Percent (%) of Listeners' Significant Correlations of Accent Features with Accent Ratings, Comprehensibility Ratings, and Intelligibility Ratings: A Comparison of Two Studies

                      Accent Rating     Comprehensibility Rating     Intelligibility Rating
Accent Feature        1995a    1997b    1995a    1997b               1995a    1997b
Phonemic score        78       15       44       15                  28       8
Phonetic score        72       -        11       -                   0        -
Grammar score         78       50       56       54                  17       15
Intonation score      89       27       83       35                  22       8

a Munro and Derwing, 1995b, N = 18
b Derwing and Munro, 1997, N = 26

Anderson-Hsieh and Koehler (1988) evaluated the speech of three non-native English speakers (Chinese was their L1) on a seven-point scale of nativeness of segmentals, syllable structure, and prosody for three speaking rates. The speakers were fairly consistent in these speech characteristics across speaking rates. Higher ratings (i.e. more native-like) of segments, syllable structure, and prosody in the L2 learners' speech generally correlated with higher comprehension scores by the native English listeners. Comprehension scores were obtained through multiple-choice questions about passages (310 to 475 syllables long) read aloud by the non-native English speakers. Ratings and comprehension scores from one of the three speakers suggest, however, that prosodic deviance may affect comprehension to a greater degree than segmental deviance (at the fast rate only) (Anderson-Hsieh & Koehler, 1988).

A further study by Anderson-Hsieh, Johnson, and Koehler (1992) was consistent with the latter finding. They studied the relationship between these components of pronunciation—prosody, segmentals, and syllable structure—and judgments of pronunciation in narrative samples of 60 ESL learners from 16 language backgrounds. Segmental and syllable structure errors were calculated and overall prosody was rated on a nativeness scale. Measures of components of prosody (stress, rhythm, phrasing, and intonation) were also rated and were all highly correlated, and so the "overall prosody" rating was used. Overall prosody was found to be the most strongly correlated with pronunciation scores; segmental errors and syllable structure errors were also found to be significantly correlated but to a lesser degree. The results for two broad language groups (East Asian and Indian Subcontinent) suggested, however, that the relative effect of prosodic deviance on pronunciation, compared to segmental and syllable structure errors, may be dependent on the native language of the speaker (Anderson-Hsieh, Johnson, & Koehler, 1992).
Namely, although both language groups showed the strongest correlation between prosody and pronunciation scores, this correlation was significantly lower for the East Asian group. These results, however, point to the influence of prosody on judgments of pronunciation in L2 learners, regardless of the speaker's first language background.

The above studies suggest that some features of speech may affect native English listeners' perceptions of accent strength and understandability. In particular, prosody has been most strongly correlated with accent ratings, comprehensibility ratings (Munro & Derwing, 1995b), and pronunciation scores (Anderson-Hsieh, Johnson, & Koehler, 1992). Fewer correlations have been noted between pronunciation and both segmental errors and syllable structure errors (Anderson-Hsieh, Johnson, & Koehler, 1992), and between accent and comprehensibility ratings and phonemic errors, phonetic errors, and grammatical errors (Derwing & Munro, 1997; Munro & Derwing, 1995b). However, the above research suggests that these features do not regularly affect native English listeners' actual off-line comprehension. Good intelligibility scores in spite of the presence of accent features may arise in orthographic transcription tasks because the listener is able to use "top-down" processing, or is integrating information from higher levels of analysis by "replaying" the sentence. In this way, the listener may have time to resolve comprehension difficulties that are a result of deviations in prosody or phonology. In on-line comprehension tasks, more immediate effects of accent features on language comprehension may be revealed.

1.2.2 The Effects of Familiarity with Non-Native English

Gass and Varonis (1984) investigated the effects of familiarity on native speakers' comprehension of non-native speech. They examined the effects of familiarity with topic, non-native speech in general, a particular non-native accent, and a particular non-native speaker. Native English listeners transcribed sets of sentences that occurred before and after they heard a short narrative. The narrative provided a context for the following sentence set in some conditions. Furthermore, the non-native speaker sometimes varied between the narrative and the post-narrative sentence set. Gass and Varonis found that familiarity with all of the above factors tended to have a facilitating effect on native speaker comprehension, with the most important factor (and the only statistically significant one) being familiarity with topic. Participants' self-reported familiarity with various accents has also been found to be correlated with how correctly they were able to transcribe sentences spoken by second language learners (Derwing & Munro, 1997). Native English participants' familiarity with accents in general also predicted their success at identifying the first language of the speaker as Japanese, Cantonese, Polish, or Spanish (Derwing & Munro, 1997). Cantonese was the easiest accent to identify, followed by Spanish, Polish, and Japanese; listeners were more likely to mistake the speakers of the two Asian languages or the two Indo-European languages for each other (Derwing & Munro, 1997). Familiarity with accented speech has not been significantly correlated with language processing measures in all studies, however. Munro and Derwing (1995a) did not find any significant effect of familiarity with accented speech (regular contact vs.
little or no contact with non-native speakers) on sentence verification response times, verification scores, transcription scores, comprehensibility ratings, or accent ratings. Generally, however, these findings suggest that both speaker variables such as pronunciation and grammar (Varonis & Gass, 1982) and listener variables such as familiarity (Gass & Varonis, 1984; Derwing & Munro, 1997) may influence the comprehension of non-native speech.

1.2.3 The Interaction of Non-Native Speech and Processing Time

Munro and Derwing (1995a) hypothesised that the finding that native listeners sometimes assigned low comprehensibility scores to accented speech could be due, in part, to increased processing difficulty for accented speech; they suggested that this could lead to increased processing time. They hypothesised that accented speech segments may require a greater amount of time to be recognised if they differ significantly from category prototypes. Hawkins and Warren (1994) suggest that less reliable acoustic information may not give sufficient local cues to activate the correct phonemes adequately, and in turn, to activate the correct lexical item out of an array of phonetically similar lexical choices. However, Hawkins and Warren (1994) purport that many other information sources are available even in the face of acoustic-phonetic deviance. These higher-level factors include the listener's knowledge of the sound inventory in his language. The ability to recognise the correct word can depend on the number of competing phonemes in the language. For example, English has few velar phonemes (k, g, ŋ) but many alveolars (t, d, n, s, z, l, r), and so if a phone is clearly velar, the chances of making the correct choice among velars are greater. Word recognition may also depend on the possibility of competing lexical choices; for example, the /g/ in target can only be a /g/ in order to form an English word, whereas the /g/ in rig could be /b/ or /d/ and form the words rib and rid respectively. Another factor is the knowledge that a listener may have of the idiosyncrasies of a particular speaker's pronunciation (Hawkins & Warren, 1994). The listener may know through experience with a speaker that, for example, he consistently substitutes /f/ for /θ/ or, more subtly, that he has a lateral /s/. Finally, other higher-level factors are the context surrounding the word (including its predictability), its grammatical category, and its recency (Hawkins & Warren, 1994). Therefore, it appears that word recognition systems are able to gather information from many sources to lead to successful word activation even when the acoustic information is not entirely clear. However, Hawkins and Warren's (1994) factors are based on acoustic-phonetic variability among native English speakers in normal conversational speech. The effects of larger degrees of acoustic degradation or phonetic deviance, such as may be present in L2 learners' speech, are not tackled. There may be a point where higher-order factors do not provide enough information for lexical selection to be performed reliably. Or, it may take more processing time for an adequate amount of information to be gathered to sufficiently activate a lexical item. Munro and Derwing (1995a) suggest that miscomprehension of the word, and awareness of the miscomprehension, may result in a need for special top-down processing (such as "replaying" the message) which could also increase the processing time.
A speaker's message may be understood, but the listener may have had to work hard to do so. This is not captured in off-line assessments because final comprehension is measured, not the process of comprehension. When a listener is aware that more effort was required to understand a message, he may perceive the message to be more difficult to understand and may therefore rate it as less comprehensible.

Munro and Derwing (1995a) investigated the effect of a foreign accent on sentence processing time. Native English speakers made accuracy judgments on a set of true/false statements that were spoken by both native English and non-native English (Mandarin) speakers. They also rated accentedness and comprehensibility. The results showed that the native English utterances were verified correctly significantly more often than the Mandarin utterances. Furthermore, the response latencies were greater when the listeners were evaluating the Mandarin utterances than the English utterances. Utterances that were rated to be low in comprehensibility had longer response latencies; however, there was no significant relationship found between accentedness ratings and response latencies. This suggests once again that a perceived strong accent alone does not necessarily affect language processing. However, non-native English utterances, especially those with low comprehensibility, seem to require more time to process than native English utterances.

One point to note about the above study is that Munro and Derwing consider the sentence verification task in this study to be an on-line task. They allowed the listener to hear each sentence one time only, stating that "if the listener heard a statement for a second time, he or she might not evaluate it on-line, but rather from memory, and the response latency would be meaningless" (p. 293). Arguably, the above task could be considered an off-line task because the participants had to integrate acoustic and semantic information and make an evaluative judgment after processing the sentence. Thus, their findings showing a relationship between verification response times and comprehensibility may not reflect the immediate demands of non-native speech on comprehension.

1.2.4 Listening to Non-Native Speech in Noise

In a further study, Munro (1998) investigated the effects of processing non-native English in noise. Native listeners report anecdotally that non-optimal listening conditions can make speech especially difficult to understand, for example, when talking in a noisy room or on the telephone (Munro, 1998). Adding background noise (signal-to-noise ratio of 7.9 dB) to a sentence verification task simulated a non-optimal environment for listening to non-native English. Native English listeners heard statements spoken by both native and non-native English speakers (Mandarin) with and without background noise. The listeners transcribed the sentences and then verified them as true, false, or unknown. The Mandarin-accented phrases were correctly verified significantly less often than the native English phrases both with and without background noise; additionally, the Mandarin-accented phrases themselves were verified correctly less frequently in the [+noise] than in the [-noise] condition. Munro adds that the addition of noise had a greater effect on stimulus verification than did the presence of the Mandarin accent alone. The Mandarin-accented utterances, however, were more greatly affected by the addition of noise than were the native English utterances.
The mean drop in verification scores for the Mandarin-accented sentences was 32% compared to a mean drop of 23% for the native English sentences; likewise, intelligibility scores dropped by 28% and 19% respectively (Munro, 1998). This suggests that the increased demands on processing resources due to the addition of noise had a greater impact on the Mandarin-accented sentences because the available resources were already being designated to processing the accent itself.

Recently, a series of experiments by Bürki-Cohen, Miller, and Eimas (2001) investigated native speakers' ability to detect word-initial phonemes in non-native English words. They questioned whether acoustic-phonetic alterations in non-native English speech would lead to differences in language processing. Bürki-Cohen et al. (2001) hypothesised that alterations in the non-native English speech would result in the native English listeners needing to hear more of the target word in order to identify the initial phoneme. Native English participants listened to 80 monosyllabic target words (half were high frequency and the other half low frequency) spoken by two female speakers—one native English speaker and one native speaker of Swiss German with a "moderate" accent. The word-initial phonemes were /b/, /p/, /d/, and /t/, and the words were presented in a background of multi-talker babble with a signal-to-noise ratio of -3 dB. The results showed that when native English participants listened to words produced by a non-native English speaker presented in degraded listening conditions, the overall reaction times for word-initial phoneme identification increased. Furthermore, there was an effect of word frequency, which led the authors to suggest that the listeners were using postlexical information (i.e. the rest of the word) to complete the phoneme identification task. When listening to words spoken by a native English speaker, word frequency did not have an effect. Furthermore, there was neither an increase in reaction time nor use of postlexical information when the non-native English speech was not presented in noise (Bürki-Cohen et al., 2001). These results suggest that the language processing system was able to compensate for the effects of listening to a moderate, intelligible accent to such a degree that reaction times in a phoneme monitoring task were not significantly increased. However, with the added demands imposed by background noise, the language processing system slowed down and the effects of non-native language input were notable.

1.2.5 Speaking Rate and Non-Native Speech

Anderson-Hsieh and Koehler (1988) presented anecdotal reports by students and faculty at Iowa State University suggesting that an increase in speaking rate in non-native speakers may play a critical role in comprehension of this speech. Likewise, when the listeners in Derwing and Munro's (1997) study were asked to comment on the factors that they felt interfered with their comprehension, 5 out of 13 reported that some of the speakers spoke too fast. For 10 of the 26 listeners, significant negative correlations were observed between speaking rate and comprehensibility ratings, but not between speaking rate and intelligibility scores (Derwing & Munro, 1997). Anderson-Hsieh and Koehler (1988) studied the effect of speaking rate on native speaker comprehension of spoken passages. In this study, one native English speaker and three native speakers of Chinese, with varying proficiency in English, read passages at three different speaking rates.
Native listeners listened to the passages and then took a listening comprehension test (multiple-choice questions) and rated the speech samples for degree of foreign accent and speaking rate ("too slow" to "too fast"). Comprehension scores were found to be lower for non-native passages than the native ones; furthermore, comprehension scores corresponded to the speaker's degree of foreign accent. Comprehension scores also decreased significantly from the regular rate to the fast rate. This decrease was most dramatic for the speaker who had the most pronounced accent. The speakers with the most pronounced accents were also perceived by the listeners as speaking faster even when they were not (Anderson-Hsieh & Koehler, 1988). The above findings suggest that native speakers of English may prefer to hear accented speech produced at a slower rate than native speech. Munro and Derwing (1998) hypothesise that non-native speakers may have time to allot more resources to produce accurate articulations when they talk at a slower rate. Furthermore, they suggest that listeners will have more time to process accented speech if it is presented at a slower rate.

ESL learners have been noted to speak at significantly slower rates than native speakers (Munro & Derwing, 1995a). Munro and Derwing (1998) investigated whether the slower rates which are typical of ESL learners are optimal from the native English listener's point of view. The expectation was that slow L2 speech would be considered to be less accented and more comprehensible by native speakers. However, when native speakers rated passages spoken by 10 high-proficiency L2 speakers (their L1 was Mandarin), they rated the slow passages to be significantly more accented and less comprehensible than those spoken at a normal rate (the speaker's natural rate). The L2 speakers' normal rate was already slower than native English speakers' normal rate. The native listeners did not rate a further reduction of the L2 speakers' rate to be optimal. The authors noted that asking the L2 subjects to speak slowly sometimes resulted in pronunciation errors and intonation irregularities that did not occur at their normal speaking rates. To keep pronunciation constant, rate was then manipulated in the above study by a digital speech compressor-expander, and native listeners rated the passages on a scale of "too slow" to "too fast." Mandarin speakers' English productions received ratings closest to the middle of the scale ("just right") when they were listened to at unmodified rates; increasing the rate to an average native English rate and decreasing the rate by 10% both had negative effects on the English listeners' perceptions of the acceptability of the rate (Munro & Derwing, 1998). However, listeners' preference for unmodified rates may also reflect the unnaturalness of the digitally modified speech, regardless of the rate. In all, native English listeners generally preferred to hear non-native speech at somewhat slower rates than native speech; however, there were no results supporting further reductions in the rate of non-native speech as a means of gaining greater favour with native listeners (Munro & Derwing, 1998).

It appears that L2 speakers may naturally speak at a rate that optimises their accent and comprehensibility. The above study did not measure intelligibility or make any other measurements of how well the L2 passages were processed, or of differences in processing accuracy at different presentation rates.
However, given previous research, it seems probable that intelligibility scores would have followed the same patterns as the comprehensibility scores (e.g., Munro & Derwing, 1995b). Contrary to Munro and Derwing's (1998) hypothesis, the native listeners rated comprehensibility higher at the non-native speakers' normal rates than at their slow rates. They suggested that instead of giving the L2 learners more time to form accurate articulations, decreasing rate sometimes led to increased reading articulation errors and intonation errors (Munro & Derwing, 1998). On the flip side, instead of giving more time to process accented speech, the slower presentation rate may have created extra demands on the processing resources available to the listener. A potential increase in pronunciation irregularities with slowed speech may also have increased the processing demands.

1.2.6 Summary of the Effects of Non-Native Speech

The main findings of the research concerning the effects of non-native English on native English listeners will be summarised here. Native English listeners understand English spoken by L1 speakers more successfully than English spoken by L2 learners. They are able to correctly judge utterances as true or false significantly more often, and more quickly, when the utterances are spoken by native English speakers than when they are spoken by non-native English speakers (Munro & Derwing, 1995a; Munro, 1998). They have higher listening comprehension scores after listening to narratives read by native English speakers than by non-native English speakers (Anderson-Hsieh & Koehler, 1988). Furthermore, native English listeners' ability to transcribe spoken utterances orthographically - a measure of their actual comprehension of the utterance - is significantly greater for native English utterances than for the utterances of L2 learners (Munro & Derwing, 1995a, 1995b; Munro, 1998).

Seen from a processing resources perspective, native English listeners seem to have more difficulty with language processing activities when they are composed of accented English than when they are composed of native English. For example, when extra processing resources are being directed towards fitting the non-native phonemes, prosody, and/or lexical choices into native English prototypes, fewer resources are readily available for other aspects of processing in the language task. The above tasks, such as sentence verification, listening comprehension, and orthographic transcription, take longer or are not completed as successfully. Additional demands such as listening in noise (Munro, 1998) or at increased or decreased rates (Anderson-Hsieh & Koehler, 1988) may increase the demands on processing resources even further, compounding the demands of accent alone, and resulting in larger drops in performance on language tasks.

Native English listeners can consistently rate L2 speech samples on scales that represent their degree of accent and their comprehensibility (or perceived understandability). These scores can be compared with intelligibility scores that represent actual comprehension through orthographic transcription. Accent and comprehensibility ratings are not equal but they are correlated with each other (Munro & Derwing, 1995b; Derwing & Munro, 1997); for example, an utterance can be rated as moderately accented and also be considered to be easy to understand.
Furthermore, comprehensibility ratings are more strongly correlated with actual comprehension (intelligibility scores) than are accent ratings (Munro & Derwing, 1995b). Comprehensibility ratings have also been related to the time taken before making a verification response after hearing a statement spoken by an L2 learner—low comprehensibility leads to longer response latencies (Munro & Derwing, 1995a). There has been no relationship found between accent ratings and response latencies. Significant correlations between the non-native speaker's speaking rate and comprehensibility score have also been reported (Derwing & Munro, 1997) such that increased speaking rates led to decreased comprehensibility. From these findings, it appears that subjective measures of comprehensibility are related more closely to actual comprehension and processing time than are subjective measures of accentedness.

Several studies looked at the characteristics of the non-native English in an attempt to understand which components affect native English listeners' perception and comprehension. The frequency of correlations and the degree of correlations between accent features and listeners' language measures may be influenced by the L2 learner's proficiency in English (Derwing & Munro, 1997) and the first language background (Anderson-Hsieh, Johnson, & Koehler, 1992). Prosody has been found to be the most strongly correlated with pronunciation scores (Anderson-Hsieh, Johnson, & Koehler, 1992) and with both accent ratings and comprehensibility ratings (Munro & Derwing, 1995b). This was not found, however, in all studies (see Derwing & Munro, 1997). Correlations, although to a lesser degree, have also been noted between pronunciation and both segmental errors and syllable structure errors (Anderson-Hsieh, Johnson, & Koehler, 1992), and between accent and comprehensibility ratings and phonemic errors, phonetic errors, and grammatical errors (Derwing & Munro, 1997; Munro & Derwing, 1995b). These accent features, however, were not found to be correlated very reliably with intelligibility scores (Derwing & Munro, 1997; Munro & Derwing, 1995b), which suggests that accent features may not interfere with actual comprehension during short, off-line processing tasks.

Some listener variables have also been investigated. Anecdotally, listening to accented English sometimes becomes easier over time. Familiarity with the topic of the L2 learners' English utterances has been found to improve native English listeners' comprehension significantly (Gass & Varonis, 1984). Experimental variations in experience listening to speakers with accented English, including a particular speaker or a speaker with the same L1, also tended to facilitate native listeners' comprehension (Gass & Varonis, 1984). In the longer term, self-reported familiarity with various accents was related to increased ability to both transcribe L2 learners' speech and to identify the learners' L1 background (Derwing & Munro, 1997).

The present study aims to further investigate native English listeners' ability to process non-native English.
Previous studies have focused on determining native English listeners' subjective reactions to non-native speech (e.g., accent and comprehensibility ratings), their off-line processing abilities when listening to non-native input (e.g., transcribing sentences and answering true/false or comprehension questions), and characteristics of the non-native speech (e.g., rate and error patterns) that influence subjective reactions and off-line processing. The present study targets a gap in the literature by addressing changes in native English listeners' on-line processing abilities when listening to non-native English speech compared to native English speech. Accented English has been found to affect native English listeners' ability to report what has been said and to understand more fully the content of the utterance after the utterance has been heard. This study will investigate the extent to which non-native English input affects the sentence processing capabilities of native English listeners using a real-time measure. Processing resources will be targeted in this study as a tool for explaining on-line sentence processing differences between accented and native English input. Specifically, it is proposed that non-native English input will tax the listener's processing system to such a degree that language processing itself will become challenged. The following section presents three views of working memory resources and language processing and discusses the predictions of each view vis-à-vis the variables in this study.

1.3 Working Memory and Language Processing: Three Views

Working memory is defined as a limited-capacity memory system that can store and process information simultaneously, for short durations, while a task is being performed (Baddeley, 1992). Working memory is argued to play a role in many human cognitive functions, such as reasoning and problem solving, and it has shaped theories of language comprehension. Just and Carpenter (1992) have developed the capacity theory of language comprehension, supported by a computational simulation model, that centers on the limited capacity of working memory. They hypothesise that an individual's working memory capacity directly constrains language comprehension processes and that individual differences in working memory capacity are a primary contributor to differences in language comprehension abilities. Just and Carpenter argue that all language comprehension tasks draw on a single pool of working memory resources and that information from different levels of language processing (such as syntax and semantics) interacts and draws on the same set of resources. They measure working memory capacity for language through tasks such as the reading span task (Daneman & Carpenter, 1980), where participants are required to recall sentence-final words after reading a set of unrelated sentences. Just and Carpenter believe that this task simultaneously draws on the processing and storage resources of working memory because the individual continues to process sentences while storing words for recall. Individuals are classified as high-, medium-, or low-span by their success on the reading span task. According to Just and Carpenter's (1992) theory, differences in performance on language comprehension tasks between high- and low-span participants are evidence of the constraints imposed on the language processing system by the capacity of working memory.
Furthermore, if added processing demands imposed by one level of language (e.g., acoustic-phonetic decoding) affect comprehension at another level (e.g., syntax), this suggests that both levels are relying on the same pool of processing resources—i.e. there is a single working memory for language.

Caplan and Waters (1999) also advocate the role of working memory in language comprehension as a space for both storage and processing. Contrary to Just and Carpenter (1992), however, they posit that verbal working memory does not function as a single system but is instead divided into "interpretive processing," which is the unconscious extraction of meaning from the linguistic signal, and "post-interpretive processing," which is using the meaning to accomplish other cognitive tasks, such as planning actions and storing information in long-term memory. Caplan and Waters (1999) argue that the reading span task often used to determine high- and low-capacity individuals does not assess the working memory used for language processing because it requires consciously controlled processing. They therefore predict, and provide experimental evidence, that there should be no interactions between individual differences measured by the reading span task and language processing. Namely, individuals who differ on the reading span task do not show differences in language interpretation tasks. Furthermore, Caplan and Waters (1999) theorise that taxing working memory capacity at an unconscious language processing level will not directly affect processing ability at a conscious level.

The debate concerning the modularity of working memory for language, and subsequent issues concerning the aspects of working memory that are targeted during experimental tasks, have fueled much of the current literature on language processing. More recently, MacDonald and Christiansen (2002) proposed a connectionist approach to language processing that minimises the importance of the above debate. In their view, language is processed by the passing of activation through a multi-layer computational network. Instead of postulating working memory as an independent entity, MacDonald and Christiansen argue that working memory falls out from properties of the network, such as efficiency at passing activation based on experience with the input (i.e. familiarity with different patterns of activation). In such a network, capacity and knowledge are inseparable, such that processing is not functionally separated from long-term knowledge of language. That is, patterns and flow of activation serve both to represent that knowledge and to process it in real time. Individual differences in linguistic knowledge and processing capacity are explained by two factors. First, MacDonald and Christiansen (2002) argue that high-span individuals have greater experience with language, which leads to increased processing proficiency. Second, they claim that biological differences, such as differences in the ability to form and access phonological representations, influence the quality and quantity of language experience that an individual acquires.

The next section reviews research that supports the capacity theory and how the hypotheses of the present study fit into this model. The hypotheses will then be looked at briefly in terms of the other two views of working memory.
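Before turning to each view in detail, the connectionist alternative can be made more concrete with a small sketch. The code below is not MacDonald and Christiansen's actual simulation; it is a generic Elman-style simple recurrent network with hypothetical layer sizes and untrained random weights, included only to illustrate how, in such an architecture, the weights that embody linguistic knowledge and the evolving hidden activation that carries earlier words forward are the same machinery, with no separate working-memory store.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 10, 20, 10                    # hypothetical localist word codes
W_ih = rng.normal(0, 0.1, (n_hidden, n_in))           # input -> hidden weights
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))       # context (previous hidden state) weights
W_ho = rng.normal(0, 0.1, (n_out, n_hidden))          # hidden -> output (next-word prediction)

def step(word_vec, prev_hidden):
    """One processing step: the new hidden state blends the current word with the
    context carried forward from earlier words; 'memory' for the earlier part of
    the sentence is just this evolving activation pattern."""
    hidden = np.tanh(W_ih @ word_vec + W_hh @ prev_hidden)
    output = W_ho @ hidden                             # activation over possible next words
    return output, hidden

hidden = np.zeros(n_hidden)
for t in range(3):                                     # a hypothetical three-word sentence
    word = np.eye(n_in)[rng.integers(n_in)]            # one-hot code for the current word
    prediction, hidden = step(word, hidden)

On this view, "capacity" is not a quantity that can be measured apart from the weights: how much of the earlier sentence the hidden state preserves depends on how the weights have been shaped by experience with similar input.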
1.3.1 Capacity theory

Just and Carpenter (1992), hereafter "JC," define capacity as the maximum amount of activation available in working memory to support both storage and processing of language. When an element from long-term memory has an activation level that is above some minimum threshold value, the element becomes part of working memory. Language comprehension requires that the listener be able to retrieve and store words presented early in an utterance and relate them to words and phrases that come later. If a language task requires more activation than is available, a trade-off between storage and processing can occur. Working memory resources that are available for activation of elements can be scaled back, resulting in a decrease in processing speed, or there can be a deallocation of old elements in working memory, which then become "forgotten." The language comprehension system can therefore be challenged by language tasks that have high resource demands. The allocation of resources in processing syntax during sentence comprehension has been supported by experimental results. Caplan and Waters (1999) refer to research measuring eye-fixation durations, self-paced word-by-word reading times, phrase-by-phrase reading times, lexical decision times, and self-paced listening times where increases in processing time have been found at points in sentences where increased demands on processing resources are predicted due to syntactic demands (e.g., Ferreira et al., 1996; King & Just, 1991). JC argue that individual differences in working memory capacity constrain language comprehension by determining the amount of processing resources available for language tasks. In this view, working memory capacity differences lead to individual differences in the ability to process portions of sentences that are particularly demanding on resources. A summary of the research that JC use to support this view follows.

Table 2
A Compilation of Sentence Types and Examples Used in the Present Summary of Studies

Sentence Type                                        Example
1 Reduced relative clause - unambiguous              The evidence examined by the lawyer shocked the jury.
2 Reduced relative clause - ambiguous                The defendant examined by the lawyer shocked the jury.
3 Unreduced relative clause                          The evidence that was examined by the lawyer shocked the jury.
4 Center-embedded subject-object relative clause     The reporter that the senator attacked admitted the error.
5 Center-embedded subject-subject relative clause    The reporter that attacked the senator admitted the error.
6 Ambiguous sentence - frequent interpretation       The experienced soldiers warned about the dangers before the midnight raid.
7 Unambiguous sentence                               The experienced soldiers spoke about the dangers before the midnight raid.
8 Ambiguous sentence - infrequent interpretation     The experienced soldiers warned about the dangers conducted the midnight raid.

Ferreira and Clifton (1986) found that even when readers were presented with non-syntactic disambiguating information in a syntactically ambiguous sentence, they continued to be "led down the garden path." The sentences used are presented in Table 2, for example, (1) "The evidence examined by the lawyer shocked the jury" and (2) "The defendant examined by the lawyer shocked the jury." These sentences are reduced relative clauses because they omit the complementizer and verb ("who was" or "that was") of the relative clause. Initially, (2) is ambiguous. "The defendant examined" could temporarily be interpreted as containing a main verb ("The defendant examined the courtroom") or given the correct interpretation as a relative clause. When the head noun is inanimate, as in (1), the ambiguity is not present because only animate nouns can be followed by the main verb "examined." The encounter with the following by phrase should mark an area of processing difficulty if the reader is taking the main verb interpretation. Ferreira and Clifton (1986) found that the animacy cue did not influence reading times and therefore argued that syntactic processing was modular and not influenced by other sources of information. JC repeated the experiment with high- and low-span groups and included sentences with unreduced relative clauses such as (3) "The evidence that was examined by the lawyer shocked the jury." They found that high-span readers had shorter reading times at the by phrase when reading sentences with an inanimate head noun than with an animate head noun, for both reduced and unreduced relative clauses. The low-span readers did not show a difference in reading times. JC explained these results in terms of a capacity difference between the two groups, suggesting that only the high-span readers had the capacity to take pragmatic information into account. The interaction between the animacy cue and syntactic processing supports the existence of a common capacity for language processing.

Center-embedded subject-object relative clauses, such as (4), "The reporter that the senator attacked admitted the error," make large demands on working memory capacity because the embedded clause interrupts the main clause, requiring that elements of the clause be maintained in working memory throughout this time. Also, one of the nouns ("reporter") is the subject of the main clause and also the object of the embedded clause and therefore receives multiple thematic roles (agent and theme). This contrasts with subject-subject relative clauses, such as (5), "The reporter that attacked the senator admitted the error," where the head noun plays the same role in both clauses. King and Just (1991) found that all three groups (high-, medium-, and low-span) had longer reading times on the more demanding object-relative sentences than the subject-relative sentences. Furthermore, all groups showed an increase in reading time at the verb of the embedded clause ("attacked") and at the verb of the main clause ("admitted"), where the processing load is the greatest. The increase in reading time was greatest for the low-span readers in the object-relative sentences. Low-span readers also had poorer comprehension accuracy when tested on their interpretation of the sentences. This suggests that readers with smaller working memory capacities were the most susceptible to demanding language processing tasks.

When a sentence is syntactically ambiguous, a comprehender may represent a single interpretation (such as the most likely one) or he may select all possible interpretations until the correct one is confirmed. If both interpretations were constructed, it follows that this would demand additional capacity (JC). MacDonald, Just, and Carpenter (1992) theorised that all comprehenders initially construct multiple representations and that each representation has an activation level that corresponds to its frequency, syntactic complexity, and pragmatic plausibility.
They propose that the individual's working memory capacity determines how long each interpretation can be maintained, such that a low-span comprehender may lose the less preferred interpretation and the high-span comprehender may not. MacDonald et al. (1992) used ambiguous sentences such as (6), "The experienced soldiers warned about the dangers before the midnight raid." Here, the interpretation of "warned" as either a main verb or as a past participle in a reduced relative construction is ambiguous until the period at the end of the sentence. In contrast, (7) is unambiguous, "The experienced soldiers spoke about the dangers before the midnight raid." They found that only high-span participants took longer to read ambiguous sentences, with the increase in reading time being greatest at the last word of the sentence. MacDonald et al. (1992) argue that high-span readers were maintaining both representations of the syntactic ambiguity and paid the price by slowing down in processing. In contrast, the low-span readers chose only the more frequent interpretation, and it turned out to be the right one. In (8), "The experienced soldiers warned about the dangers conducted the midnight raid," the correct interpretation is the less frequent one—"warned" is a reduced relative instead of a main verb. In these sentences, high-span readers again have slower reading times because they maintain both interpretations. Low-span readers, however, do not construct the less frequent reduced relative interpretation and therefore perform at near chance level when asked comprehension questions concerning the sentences (MacDonald et al., 1992). Working memory demands have also been manipulated during comprehension tasks through imposing extrinsic memory loads, such as a series of words or digits that must be remembered (JC). King and Just (1991) found that when participants were required to remember one or two words while listening to subject-object and subject-subject relative clauses (e.g., sentences [4] and [5]), their accuracy at answering comprehension questions decreased. Accuracy was higher for the subject-subject relative clauses than the subject-object relative clauses. Accuracy was also higher for the high-span participants than the low-span participants. JC argue that maintaining an extrinsic load competes for the resources used in sentence processing. Comprehension accuracy decreases because of three factors: individual differences in capacity, the linguistic difficulty of the sentence, and/or additional extrinsic memory loads (JC). The experimental conditions and hypotheses in the present study can be discussed in terms of the capacity theory of comprehension. Participants in this study were asked to monitor for target words in sentences spoken by non-native and native English speakers. The syntactic complexity of the sentences varied such that the sentences imposed varying degrees of processing demands on working memory resources for language. Target words that occur in positions of high syntactic complexity were predicted to incur longer response latencies. Decoding acoustic-phonetic information would also require processing resources. When decoding accented English, which may deviate substantially from standard English phonetic prototypes or prosodic contours, the demand on processing resources should increase.
According to the capacity theory, both decoding the acoustic information and processing the syntactic information would require resources from a common source—working memory capacity for language. If processing non-standard acoustic information pushes the working memory system near its maximum, insufficient resources would remain available to process the syntactic information. Language processing would slow down, which would result in longer response times in the word monitoring task when listening to sentences spoken by non-native compare to native English speakers. In particular, as the syntactic complexity of the sentence increases and the resource demands increase, response latencies will also increase. Participants with high working memory capacities should have more resources available to decode difficult acoustic information and therefore have a greater amount of resources left to process syntactic information than would participants with low spans. Lowspan participants should show an increase in response times as syntactic complexity increases, and this should be compounded by the additional demands of interpreting  31 degraded acoustic input. Individual differences between high- and low-span participants in their abilities to process syntactic information in the face of high loads on perceptual decoding would show that both these language processing parameters relied on a single pool of working memory resources. 1.3.2  Separate Resource Theory Caplan and Waters (1999) propose that working memory is not composed of a set of  resources that is common to all language processing tasks. Instead, working memory is specialised for different language tasks. Namely, interpretive aspects of sentence comprehension, such as assigning syntactic structure and using it to determine meaning, rely on a specialised part of verbal working memory. Caplan and Waters (1999), hereafter "CW," reinterpret many of the studies presented by Just and Carpenter (1992), as well as present their own and others' findings to support their view. CW draw upon two sources of evidence. First, they argue that the individual differences in verbal working memory capacity, as measured by tasks such as the reading span task, are not related to the efficiency of sentence interpretation because they each draw on separate working memory resources. Second, they argue that holding a load in verbal short-term memory and interpreting sentences concurrently do not interfere with each other, again because the two tasks use separate processing resources. CW argue that in reports by King and Just (1991) and Just and Carpenter (1992) there is a lack of statistical proof for certain interactions between reading span and syntactic complexity. King and Just (1991) measured the reading times of high- and low-capacity readers for subject-object and subject-subject center-embedded sentences. Examples are sentences (4) and (5) respectively, in Table 2. King and Just (1991) report that the greatest  32 reading time differences between individuals occurred in the most syntactically difficult area of the subject-object relative clause. C W argue that King and Just (1991) and Just and Carpenter (1992) did not report sufficient statistical analyses to show that the difference in reading times between high- and low-span readers was specifically localised to the area of increased processing load. Other studies have failed to find differences between high- and low-span individuals in sentence processing tasks. 
C W conducted a study using an auditory moving window task where the participants press a key to hear successive constituents of a sentence; although the group of 100 subjects had increased listening times for the embedded verb of subject-object center-embedded sentences, this effect was no different for low-capacity and high-capacity participants. Two continuous lexical decision tasks (one visual, one auditory) also resulted in statistically significant increases in reading time on the verb in subject-object as compared to subject-subject relative clauses; again, this effect was no larger in low-span than in high-span participants. As described above, MacDonald et al. (1992) investigated high- and low-span participants' processing of syntactically ambiguous sentences such as sentences (6) and (8) in Table 2. C W reported that, in general for sentences such as (8), "group differences in reading times and accuracies were not statistically significant and differences in reading times were not found while subjects were reading the ambiguous portions of the sentence" (p. 81). Waters and Caplan (1996b, as cited in CW) also compared the ability of high- and low-capacity participants to process three types of ambiguous sentences. They found that although the ambiguous sentences were more difficult to interpret than the nonambiguous controls, there was no effect of working memory capacity. As reported earlier, MacDonald  33 et al. (1992) found that high-span readers took longer to process syntactically ambiguous sentences such as (6) because they construct both interpretations and choose the correct one when they reach the last word of the sentence. CW argue that there is not enough evidence to support this claim. Reading times increased only at the last word and not throughout the ambiguous region. Furthermore, low-span participants made more errors in answering comprehension questions which may indicate a speed-accuracy trade-off for different capacity groups in decision making. In a replication of the MacDonald et al. (1992) study using the same materials, CW failed to find differences between high- and low-capacity participants. CW critique Just and Carpenter's (1992) findings concerning the interaction of semantic and syntactic information through sentences that were initially ambiguous depending on the animacy of the first noun (sentences [1] and [2], Table 2). According to CW, the results show that high- and low-span groups had similar degrees of modularity of syntactic processing from semantic processing because both groups had longer reading times for reduced relative sentences than unreduced relative sentences regardless of the animacy of the first verb. Just and Carpenter's (1992) study also showed that high-span readers had longer reading times for relative clauses with animate first nouns (both reduced and unreduced) than inanimate first nouns whereas low-span readers did not. CW suggest that this result is puzzling because high-capacity individuals were unable to use a clear syntactic cue, that was, in the case of unreduced clauses with animate nouns to disambiguate the sentence. Caplan and Waters (1995) reanalysed data from a series of experiments by Miyake, Carpenter, and Just (1994) that reported interactions between span groups and sentence type.  34 These experiments used RSVP technique (rapid serial visual presentation) and required the participants to indicate the actor or answer questions about sentences. 
In their reanalysis, they found that individuals with different working memory capacities did not perform differently as a function of syntactic complexity. They argue that low-span individuals perform more poorly on sentences with two propositions compared to one proposition and that this caused the group-by-sentence-type interaction. In this case the task requirements, not the difficulty processing syntax, caused the differences between high- and low-span participants. Retaining information about the actor of two propositions in memory was more difficult for low-capacity individuals than high-capacity individuals. CW suggest that increasing the perceptual demands of the task may make certain post-interpretive processes more difficult, but will not affect abilities to process syntactic structure. In the research presented above, CW claim that studies showing interactions supporting the capacity theory lack the support of clear statistical analyses. Other researchers disagree. MacDonald and Christiansen (2002) argue that although Just and Carpenter (1992) "sometimes overinterpreted marginal data.. .the bulk of the evidence suggests that there are real though sometimes subtle individual differences in linguistic processing abilities within the normal population" (p. 3-4). Likewise, Miyake, Emerson, and Friedman (1999) suggest that more powerful statistical techniques in the studies in question would likely lead to statistically significant results for interactions which have shown nonsignificant trends. CW looked at research in age-related changes in sentence comprehension to investigate further the specialisation of working memory capacity. CW refer to studies that have shown that working memory declines with age. However, CW also cite studies that show no effects of age on the ability to process syntactic information. CW argue that studies  35 that do show decreased comprehension of more complex syntactic structures as a function of age can be explained by making a distinction between interpretive and post-interpretive processes. They suggest that the tasks that result in effects attributed to aging require complex post-interpretive processing such as retaining and reordering large amounts of material in memory. The lack of evidence for the effects of aging on the ability to process syntax suggests that changes in working memory capacity do not affect syntactic processing (CW). According to CW's theory of specialised resource pools, verbal memory loads that are imposed external to the comprehension task, such as digit spans (the maximum number of numbers that can be remembered), will not interfere with the sentence comprehension task itself. In a sentence-picture matching task, Waters, Caplan, and Rochon (1995, as cited in CW) found that there was an effect of maintaining a concurrent verbal load on the accuracy of the participants' responses; however, there was no effect of syntactic complexity and no interaction of syntactic complexity with load. This experiment was replicated with an enactment task whereby the participants were required to enact the thematic roles in sentences; again there was no interaction of load and syntactic complexity (Waters & Caplan, submitted, as cited in CW). 
CW suggest that some studies showing the effect of digit load on syntactic processing have done so only under task conditions where the stimuli elicit higher order interactions or where the complexity of the response (e.g., a sentence-plausibility judgment) changes as a result of changes in the syntactic complexity of the stimuli. CW report that studies that require the final word of a sentence to be recalled as their measurement of digit load have found an interaction between digit load and syntactic complexity. CW suggest that digit load and syntactic processing interact depending on the  36  relationship between the recall task and the sentence processing task. When the digit load is presented before the sentence and is recalled after the sentence has been processed, there is no interference between the two. However, when sentence processing is interrupted by the digit load task, such as when the words to remember are distributed throughout the sentence processing task, there is an effect of digit load size on the ability to process syntactically complex sentences. C W argue that this is because the attentional shifts required by the interrupted tasks interfere with the participants' abilities to structure complex sentences, not because these tasks are competing for working memory resources from the same source. Although they do not believe that syntactic processing and span tasks share working memory resources, C W propose that operations on the propositional content of a sentence do. Digit load has been found to interfere with processing of sentences with two propositions more than sentences with one proposition in acceptability judgments (Waters, Caplan, & Hildebrandt, 1987 as cited in CW), sentence-picture-matching tasks (Waters et al., 1995), and enactment tasks (Waters & Caplan, submitted). This suggests that using the propositional content to plan and execute actions, or matching this information with knowledge in semantic memory, shares resources with span tasks (CW). C W argue that the literature supports a working memory system for sentence processing that is separate from the working memory measured by standard tasks, such as reading span tasks and memory loads. They theorise that the specialisation of the working memory system for syntactic processing could extend to other operations involved in language interpretation, including acoustic-phonetic conversion, lexical access, recognition of intonational contours, and discourse level semantic values. C W suggest that all of these processing operations are integrated because they generally operate unconsciously, quickly,  37 and accurately and "they always compute items within the same restricted set of representational types" (p.93). Because of this integration and because they are all highly practiced cognitive functions, Caplan and Waters (1996) and Waters and Caplan (1996a) propose that one working memory resource system is used by all these different types of processes, as measured on-line (as cited in CW). CW acknowledge that many of the arguments made to support their theory of language processing are based on null findings. Specifically, the null findings were "the failure to find differences between different capacity groups in syntactic processing, to find effects of a concurrent verbal memory load on syntactic processing, or to find differential effects of load on syntactic processing in low- versus high-capacity subjects" (p.86). 
Although repeated occurrence of null findings increases their strength, they should be interpreted cautiously because they indicate an interaction that is absent, not one that is present. CW argue that the experimental designs in many of the reported studies were sufficiently statistically powerful because they did find interactions involving the number of propositions, which they consider to be a post-interpretive variable, and other postinterpretive factors. Miyake et al. (1999), however, argue that the experimental designs reported by CW did not have enough statistical power to detect theoretically relevant interactions and that this led to null findings. They claim that CW's "choice of designs and analyses reduces statistical power so much that obtaining nonsignificant interactions is not much of a hurdle for [Caplan and Waters'] hypothesis and, hence, does not provide much support for it" (p. 109). Caplan and Waters' (1999) view of a working memory resource dedicated to language interpretation makes predictions for the present study that differ from Just and  38 Carpenters' view (where working memory for language is a single resource regardless of whether the language task is largely unconscious, as in Caplan and Waters' view of "language interpretation," or conscious). Processing sentences spoken by non-native speakers of English will increase the processing load imposed by interpretative operations, such as acoustic-phonetic conversion and recognition of intonational contours, due to deviations from native English that non-native English speakers may produce in these areas. The increased demands from these operations could limit the resources that are available for processing syntactic information. Consequently, participants' difficulty in processing syntactically complex sentences compared to less complex sentences may be exacerbated when the input is non-native English. This effect is not different from what would be predicted based on Just and Carpenter's (1992) capacity theory. However, whereas Just and Carpenter would predict that the participants' capacity scores, as measured by the listening span test, would interact with their ability to process non-native English sentences of different syntactic complexities, Caplan and Waters would not. Caplan and Waters profess that interpretive processes draw on a separate working memory resource from verbal working memory tasks such as the listening span task. As a result, the listening span scores should not interact with performance on on-line sentence processing tasks. 1.3.3  A Connectionist Approach In a connectionist approach to language comprehension, processing is implemented  by the passing of activation through a multi-layer network; the complexity of the input, the properties of the network, and experience determine the network's capacity to process information (MacDonald & Christiansen, 2002). In this approach, "working memory" exists  39  only as a property of the network itself, not as a separate space or storage entity. Variations in working memory capacity are therefore variations in network architecture and experience, one domain of experience being language processing. Thus, language and working memory are inseparable because they both result from the interaction of architecture and experience (MacDonald & Christiansen, 2002). MacDonald and Christiansen (2002), hereafter "MC," refer to studies that have investigated the effects of experience on comprehension through cognitive models of visual word recognition. 
Activation is passed from a layer of orthographic representation units to a layer of phonological units via sets of intermediate "hidden" units. The orthographic units that are activated and the weight of the connection between units determine how much activation will be passed. Prior learning experience sets the weights such that greater experience leads to stronger connections and faster, more accurate word recognition. Words with irregular orthography-phonology correspondences rely more on prior experience than regular words because they are not helped by experience with words that have similar spelling-sound correspondences (MC). MC argue that this frequency x regularity interaction varies with overall exposure to language and contributes to individual differences. To support this claim, MC refer to research that showed that good readers were able to read aloud irregular words just as well as regular words, unless the irregular words were very low frequency. Comparatively, poor readers had longer latencies for irregular over regular words except for very high frequency irregular words. This could occur because good readers read a great deal and have acquired enough experience with irregular words that their retrieval and pronunciation can be computed quickly. MC claim that differential exposure to language  40 leads to individual differences in processing ability that in turn results in differential frequency x regularity interactions. MC extend the frequency x regularity interaction to sentence processing. They argue that subject-subject relative clauses (e.g., sentence [5], Table 2) are relatively regular in their word order because their structure is similar to frequently used simple active one-clause sentences. Interpreting subject-relatives will therefore benefit from the reader's experience with simple sentences. On the other hand, word order and thematic role assignment in subject-object relative clauses (e.g., sentence [4], Table 2) is more irregular; therefore, experience with simple sentences will not be as beneficial. MC suggest that findings showing that high-span individuals are more successful than low-span individuals with object-relatives (such as King and Just, 1991) reflect differences in experience with language between individuals, not differences in working memory capacity for language. They claim that high-span participants tend to read more and have more experience with relative clauses as a result. MC investigated the role of experience in language processing through computational simulations using simple recurrent networks (SRNs). They found that the network's performance on processing object- and subject-relative clauses early in training compared to late in training roughly corresponded to the pattern of results that King and Just (1991) found for low- and high-span individuals, respectively. More training resulted in fewer errors at the critical main verb region and there were higher error rates with object-relatives than with subject-relatives. Furthermore, there was a larger effect of training for the object-relative ("irregular") at the main verb than the subject-relatives ("regular"). Both types of relatives were equally frequent in the networks' training material; however, the network became  41 superior at processing subject-relatives because of the networks' abilities to generalise to less frequent sentence structures as a result of experience with similar, more common, simple sentences. 
Additional experience then aided processing of object-relatives. In this way, an individual's capacity to process language is not independent from his knowledge and emerges from network architecture and experiential factors. MC interpret the findings of the study on syntactic ambiguity resolution by MacDonald et al. (1992) (described above) in terms of a connectionist framework. Whereas Just and Carpenter (1992) purport that individual differences in ambiguity resolution arise as a result of individual differences in working memory capacity, MC suggest that individual differences in ambiguity resolution reflect variations in the amount of exposure to language input through language related activities, such as reading (i.e. high-span individuals read more, have a greater experience with language input as a result, and are thus more able to process language forms efficiently). Variations in exposure to language input lead to variations in sensitivity to probabilistic constraints that guide ambiguity resolution (such as whether it is more plausible for experienced soldiers to warn someone or be warned). They suggest that due to their increased experience with language, high- span participants are more sensitive to subtle probabilistic constraints and can compute alternative syntactic interpretations more efficiently. Low-span participants do not compute complex constraints rapidly and instead rely more on frequency information. MC also approach issues in pronominal reference in terms of their connectionist model. Just and Carpenter (1992) attribute high-span participants' ability to determine the referent of pronouns further back in discourse than low-span participants to the high-span participants' larger working memory capacities. MC, however, suggest that "resolution of  42  pronominal reference is a constraint satisfaction process in which several syntactic, semantic, and discourse-level constraints can guide computation of the correct pronoun-antecedent pairing" (p. 17). Tasks that measure pronominal reference tend to use discourse where the referent is distant, which rarely occurs in non-experimental English. MC argue that this situation is equivalent to resolving an ambiguity with a low frequency, complex interpretation and that high-span comprehenders are more likely to pick up on subtle constraints that guide the interpretation of uncommon structures. MC claim that biological differences among individuals also influence their language processing skills. They argue that some individuals have more precise phonological representations that interact with experience and contribute to individual differences in language processing ability. This argument is investigated through tasks that impose an extrinsic memory load on language comprehension. They propose that substantial activation of phonological representations is necessary for maintaining a set of unrelated words because phonological representations must be activated in order to say the words out loud for the load task, and they are also important in written and spoken sentence comprehension. Phonological information is likely more crucial when the comprehender has less experience with the input; likewise, there appear to be individual differences in the "precision" of phonological representations due to experience and biological factors. Phonological activation appears to be part of articulatory planning processes in speech production as well as a critical component of comprehension. 
MC argue, therefore, that extrinsic load and comprehension are related because they both involve activation of phonological representations. Furthermore, they suggest that phonological activation required by the load task and the phonological activation required in language  43  comprehension interfere with each other. Therefore, the larger the load size, the more competition for phonological activation; additionally, challenging sentence types rely more on the activation of phonological representations during processing than simple sentence types because they are less frequently processed (MC). The effect of sentence type in load experiments reported by Just and Carpenter (1992) is attributed to the degree of involvement of phonological representations that different sentence types require. The effect of reading span is explained through this approach. According to MC, low-span individuals have more limited language processing skills and therefore require more phonological activation to help them process difficult syntactic structures (e.g., subject-object relatives) than do high-span individuals. This leads to more competition for phonological activation from the extrinsic load task for low-span individuals. MC suggest that the representation of phonological information differs in individuals. To support this claim, MC refer to studies showing that the accuracy of phonological representations depends on reading experience as well as some biological differences. For example, neonates have shown differences in evoked potential responses to speech stimuli which were linked to verbal abilities five years later; furthermore, pre-reading children have shown individual differences in phonological skills that correlated with sentence comprehension and later reading abilities. MC suggest that biological differences could affect the amount of experience that an individual then seeks out in language processing (such as reading) that could compound individual differences due to biological factors alone. The present study can be examined vis-a-vis the connectionist approach. MC view the differences between high- and low-span individuals as being largely related to their differing experience with language. High-span participants are more proficient at processing  44  complex syntactic structures because they have acquired a greater familiarity with complex syntactic forms and are therefore better able to deal with irregular or rare structures. MC believe that the tasks that determine reading or listening span are subject to the same effects of experience and that it is the experience with language processing that leads to the correlations between span tasks and comprehension tasks. The connectionist approach, therefore, would predict that high-span participants would perform better on processing complex syntactic structures than would low-span participants; however, this would not be due to working memory capacity differences between the two groups, but rather because of differences in proficiency at processing language forms based on experience with language. Experience with listening to non-native English could result in greater abilities to process sentences spoken by non-native English speakers. 
More exposure to non-native English where acoustic-phonetic information, such as phonemic accuracy and intonational contours, may deviate substantially from native English standards, could lead to the development of stronger, faster connections between this input and its representational counterpart in some individuals. Experienced listeners develop more flexibility in their criteria for English phonemic prototypes and may be faster or more successful at matching non-standard or degraded English input to linguistic representations. Listeners who report more experience listening to non-native English would therefore have faster listening times to sentences spoken by non-native speakers than would listeners with little experience. Finally, MC's model would predict that individuals who had greater experience with both processing language and with processing accented English input would have stronger connections and faster processing in their language processing system than individuals with little experience. They would therefore be better able to process syntactically complex  45 sentences, accented English, and the two combined. 1.4  Adaptation Anecdotally, listeners report that they are better able to understand accented English  over time, after they get used to the speaker's accent. Even when listening to other native English speakers, native English listeners need to adjust to the specific spectral characteristics of the speaker. When native English speakers listen to other native English speakers, they contend with a lot of variability in the acoustic signal, yet they are able to consistently map the acoustic input onto phonological representations. In other words, there is no one-to-one correspondence between acoustic cues and phonological units; furthermore, the acoustic signal contains more than one cue that may identify the target phoneme. There are many factors that contribute to the variation in the acoustic signal for a given phoneme. The size and shape of the speaker's vocal tract create acoustical variation. For example, vowels are most readily identified by bands of energy in frequency ranges that result from the physical properties of the vocal tract. Peterson and Barney (1952, as cited in Dupoux & Green, 1997) showed that different speakers produce formants with different frequency values for the same target vowel; furthermore, these formant values overlapped between vowel categories. Other characteristics of speech can also cause variations in the acoustic signal. For example, the speaker's rate of speech can cause variations in acoustic cues that are used to identify the sound, such as voice-onset time, transition duration, and vowel duration (Dupoux & Green, 1997). Speech sounds are also influenced by the sounds that are produced directly before and after, and sounds often overlap with each other when they are coarticulated (Yeni-Komshian, 1993).  46 Yet, despite the acoustical variations created by different talkers, or the same talker at different times, native English speakers are able to perceive speech accurately. This results because the speech perception system is able to normalise for these variations. During perceptual normalisation, "listeners compensate for acoustic-phonetic variability in the speech signal and derive a standardized abstract phonetic representation which can then be matched to canonical forms stored in long-term memory" (Sommers, Nygaard & Pisoni, 1994, p. 1314). 
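One concrete way to picture what such compensation could involve is a talker-relative rescaling of raw acoustic measurements. The sketch below uses Lobanov-style z-scoring of formant values, which is only one standard technique from the phonetics literature and not the mechanism claimed by the studies cited here; the formant values are invented for illustration.

```python
# Minimal sketch of one common talker-normalisation technique (Lobanov z-scoring).
# Offered only to make "compensating for talker variability" concrete; the formant
# values below are invented and do not come from the present study.
from statistics import mean, stdev

def lobanov(formants_hz):
    """Express each formant measurement relative to that talker's own mean and spread,
    so vowels from talkers with different vocal tracts become comparable."""
    m, s = mean(formants_hz), stdev(formants_hz)
    return [(f - m) / s for f in formants_hz]

# Hypothetical F1 values (Hz) for the same three vowels produced by two talkers.
talker_a_f1 = [300.0, 550.0, 750.0]
talker_b_f1 = [390.0, 700.0, 950.0]

print([round(z, 2) for z in lobanov(talker_a_f1)])
print([round(z, 2) for z in lobanov(talker_b_f1)])
# After rescaling, the two talkers' vowels fall on roughly the same scale even
# though their raw F1 values barely overlap.
```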
Studies have investigated listeners' ability to normalise to changes in different dimensions of the acoustic signal that vary with changes in talker and rate. Sommers et al. (1994) propose that increasing the demand on the perceptual normalisation system by varying the talker and the rate effectively increases the processing resources that are required for word recognition. They found that when words in a list were spoken by multiple talkers, word recognition scores declined compared to when the list was spoken by a single talker. These results were consistent with a previous study by Mullennix, Pisoni, and Martin (1988). When the word recognition task contained words spoken at variable speaking rates, word recognition scores also declined. Furthermore, when both talker and speaking rate varied simultaneously, the word recognition scores were worse than when either dimension was varied alone (Sommers et al., 1994). Sommers et al. (1994) proposed that normalising for changes in talker characteristics and rate require processing resources that decrease the resources available for phonetic identification and therefore affect word recognition scores. To further investigate perceptual normalisation, Dupoux and Green (1997) presented sentences that were compressed to 38% and 45% of their original durations, in four sets of five sentences (20 sentences in total). Participants were asked to recall the words in the  47 sentences. Dupoux and Green found that participants improved in their ability to recall the sentences after exposure to only 5 or 10 compressed sentences. Participants required more time to adjust to the more highly compressed sentences (38%). Furthermore, participants' performance did not return to baseline after an intervening change in talker occurred after the first 10 sentences. A small decline occurred in the first two sentences of the new talker; however, the listeners' ability to recall the sentences recovered and was comparable to sets produced by the previous talker (Dupoux & Green, 1997). Intervening uncompressed sentences had similar effects. Although there was a small decline in performance, the intervening uncompressed sentences also did not cause performance to return to baseline (Dupoux & Green, 1997). Dupoux and Green (1997) suggest that some kind of perceptual learning must be occurring whereby "the perceptual system may extract a set of phoneticphonological parameters that work best in a given situation and store them for later use" (p. 926). Pisoni (1993) also suggests that acoustic information gained through experience listening to a particular speaker is stored somewhere in long term memory. He trained listeners to recognise voices over a period of nine days so that the voices became familiar. The listeners were better able to recognise words in noise when they were spoken by the familiar voices than when they were spoken by new, unfamiliar voices (Pisoni, 1993). Pallier et al. (1998) suggest that the phonological properties of different languages, including syllable structure and stress, determine how speakers of that language use the acoustic signal to determine meaning. Pallier et al. (1998) found that when languages had similar phonology, such as Spanish and Catalan, (but not different phonology, such as French and English) adapting to compressed speech in one of the languages led to similar performance in the other language on sentence recall. 
This occurred for both speakers who  48  were bilingual in Spanish and Catalan and for monolingual speakers of one of the languages. Comprehension of the sentences was therefore not necessary for normalisation to compressed speech (Pallier et al. 1998). In the present study, the acoustic-phonetic characteristics of the non-native English speech often deviate greatly from native English speech. The listener must match the acoustic information that identifies a particular phoneme with forms stored in memory. Decoding a new non-native English voice could very well require more processing resources than decoding a new native English voice because it varies more extremely from the native English phonological prototypes. Competition for processing resources could result in longer response latencies in the experimental task. This may be particularly evident for the sentences in the word monitoring task that already require more resources because of complexities in syntactic structure. The present study will investigate the time course of normalisation to the non-native English accent compared to the native English speech. The studies concerning normalisation to compressed speech and talker variability report that performance on language tasks improves and plateaus by the fifth to tenth sentence. If, as Dupoux and Green (1997) suggest, parameters for a particular situation or voice are stored in memory, listeners who have experience listening to native Cantonese speakers producing English sentences may have an advantage over listeners with little experience. Experienced listeners may be able to retrieve information from memory, match the current acousticphonetic properties, and adapt more quickly to the non-native speech. Furthermore, listeners who frequently listen to non-native English speakers in general (i.e. from different language background than Cantonese) may have a greater tolerance for acoustic information that  49 approximates the target, and therefore be more able to identify phonemes spoken with an accent. 1.5  The Present Study The present study investigated native English listeners' ability to process language  when spoken by non-native English speakers. Many of the previous studies of non-native English processing have used off-line measures, whereas the present study used an on-line measure. The word monitoring task is an on-line task that permits real-time measurement of language processing. Tyler (1992) claims that on-line language processing tasks tap into intermediate representations of a spoken utterance. Namely, as a sentence is heard word by word, intermediate representations are constructed that reflect automatic and unconscious processing of the sentence. Off-line language processing tasks tap into the final representation of the utterance. Tyler (1992) states that the final representation is "a pragmatically coherent utterance in which the details of the intermediate representations have been lost" and that "it is only this final representation that the listeners can, in principle, gain conscious access to" (p. 4). Using the word monitoring task, native English listeners in the present study were presented with a target word on the computer screen followed by a sentence heard binaurally. They were requested to press a key on the keyboard as quickly as possible upon hearing the target word. The time between presentation of the word and pressing the button was recorded as a measure of processing time. 
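The task itself was implemented in E-Prime (described in Chapter 2). Purely as a schematic sketch of the trial logic, and not a reproduction of that implementation, a single word-monitoring trial can be pictured as follows; the target word, its onset time, and the simulated keypress latency are all invented values, and the actual timing reference is defined by the procedure reported in the Method chapter.

```python
# Schematic sketch only: the real experiment was run in E-Prime, and audio playback
# and keypress collection are simulated here so the sketch stays self-contained.
import random

def run_trial(target_word, target_onset_s):
    # 1. Show the target word (stand-in for the real on-screen display).
    print(f"Monitor for: {target_word}")
    # 2. Play the sentence; the listener's keypress is simulated rather than collected.
    keypress_s = target_onset_s + random.uniform(0.25, 0.60)
    # 3. Record the latency between the target and the keypress, in milliseconds.
    return (keypress_s - target_onset_s) * 1000.0

rt_ms = run_trial("bully", target_onset_s=1.8)   # invented example values
print(f"Response time: {rt_ms:.0f} ms")
```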
The target word appeared in one of two positions in the sentence—early or late. Examples of sentences of each syntactic type follow in order of increasing syntactic complexity (target words are underlined).

1. Simple active: The children beside the bully quickly lied to the principal about the fight.
2. Conjoined: The children dreaded the bully and lied to the principal about the fight.
3. Center-embedded subject-object: The children who the bully threatened lied to the principal about the fight.

The role of working memory capacity in processing accented speech was considered through three views of working memory: capacity theory, separate resource theory, and a connectionist approach. Working memory capacity was measured using a modified version of the Daneman and Carpenter (1980) listening span procedure. Experience listening to accented speech was determined through a language background questionnaire. Changes in processing time as a function of syntactic complexity, target word position, working memory capacity, and familiarity with accented English were investigated. Finally, changes over time in processing time were explored with respect to adaptation to accented speech.

1.6 Research Hypotheses

In this section, the research hypotheses for this study will be presented along with the rationale and theory behind each hypothesis.

1. A main effect of speaker will be observed such that response times will be longer when listening to sentences spoken by the non-native English speaker than by the native English speaker.
Rationale: Decoding the non-native English input will impose greater processing demands on the language processing system than decoding native English because of deviations in acoustic-phonetic parameters. This will effectively slow down the language processing system (reallocation of resources to lower level processing), which will be reflected as an increase in the time required for sentence processing.
Theory: This effect is predicted by the capacity theory (Just & Carpenter), the separate resource approach (Caplan & Waters), and the connectionist approach (MacDonald & Christiansen).

2. There will be a main effect of syntactic complexity, with response times being longer for more syntactically complex sentences.
Rationale: Processing syntactically complex sentences increases the demands on the language processing system, which is reflected as an increase in the time required to process the sentences.
Theory: This effect is predicted by the capacity theory, the separate resource approach, and the connectionist approach.

3. A main effect of working memory capacity (as measured by the listening span task) on sentence processing will be observed. Listeners with high spans will have shorter response latencies compared to listeners with low spans.
Rationale: Listeners with high spans have greater capacities to process language than listeners with low spans. High-span listeners will therefore be able to process sentences more quickly.
Theory: This effect is predicted by both the capacity theory and the connectionist approach, but not by the separate resource approach. The separate resource approach proposes that the listening span task draws upon a pool of working memory resources that is separate from those supporting unconscious language interpretation.

4.
A two-way interaction between syntactic complexity and speaker will be observed such that response latencies will be longest when listening to syntactically complex sentences spoken by the non-native English speaker.  Rationale: Decoding non-native English input and processing syntactically complex sentences both place increased demands on the language processing system. Longer response latencies result when both variables occur simultaneously. Theory: This interaction is predicted by the capacity theory, the separate resource approach, and the connectionist approach. 5. (a) A two-way interaction between target word position and syntactic complexity is predicted. Response times will be longer for late position target words in the syntactically difficult sentences than in the syntactically simple sentences. Response times for early target words will not differ across sentence types, (b) A three-way interaction between target word position, syntax, and speaker will also be observed such that response times will be longer for the late position target words in center-embedded subject-object sentences presented by the non-native English speaker. Rationale: The late target word occurs in a position that has differing degrees of processing demand according to the syntactic complexity of the sentence. The highest processing demand is for the verbs in the center-embedded subject-object sentences; therefore, the longest response times will occur for the late target word in the center-embedded sentences. Processing demands are increased further with the addition of a non-native English accent. In the late target word position, processing demands will be highest when the centerembedded subject-object clause is spoken by a non-native English speaker and lowest when a syntactically simple sentence is spoken by a native English speaker. Theory: This interaction is predicted by the capacity theory, the separate resource theory, and the connectionist approach.  53  6. A two-way interaction between listening span and syntactic complexity is predicted. Response latencies will be shorter to more complex sentences for listeners with high spans compared to listeners with low spans. Rationale: Listeners with high spans have greater language processing capacities and are  therefore better able to cope with processing tasks that are high in processing demands, such as processing syntactically complex sentences. Theory: This interaction is predicted by the capacity theory and the connectionist approach,  but not the separate resource approach. The separate resource approach proposes that the listening span task and processing syntax use separate working memory resources. 7. (a) A two-way interaction between listening span and speaker will be observed such that listeners with low spans will show longer response times than high span listeners when listening to sentences spoken by the non-native English speaker compared to the native English speaker, (b) A three-way interaction between listening span, speaker, and syntactic complexity is also predicted. Listeners with low spans will show longer response latencies than high span listeners for sentences with accented speech when the sentences are syntactically complex. Rationale: Listeners with low spans will have greater difficulty processing non-native  English input than listeners with high spans because they have more limited language processing capacities. 
Their language processing capacities are therefore more challenged by the increased demand on processing imposed by the non-native English accent. Processing demands are increased further when the sentences are syntactically complex. Low span listeners are less able to meet the combined processing needs of accent and complex syntax than are their high span counterparts.  54 Theory: These interactions are predicted by the capacity theory and the connectionist approach, but not the separate resource approach. The separate resource approach proposes that the listening span task taps a separate pool of resources from processing syntax and decoding acoustic-phonetic information. The former involves conscious post-interpretive processing and the latter two unconscious interpretive processing. 8. (a) A two-way interaction between familiarity and speaker will be observed such that listeners with low familiarity with accented speech will show longer response times than high familiarity listeners when listening to sentences spoken by the non-native English speaker, (b) A three-way interaction between familiarity, speaker, and syntactic complexity is also predicted. Listeners with low familiarity will show longer response latencies than high familiarity listeners for accented sentences that are syntactically complex. Rationale: The language processing system of listeners who are familiar with accented input is more practiced at processing non-native English input. These listeners will therefore be more effective at sentence processing than listeners who are unfamiliar with non-native English. With the added demand on language processing systems imposed by syntactically complex sentences, listeners who are familiar with accented English will be less affected because they have only one challenge to deal with instead of two. Unfamiliar listeners are faced with the challenges of decoding unfamiliar acoustic-phonetic input as well as processing syntactically difficult sentences. Theory: These interactions are predicted by the connectionist approach, but not the capacity theory nor the separate resource approach. Neither the capacity theory nor the separate resource approach consider the influence of learning on the language processing system.  55 9. There will be a two-way interaction between trial block and speaker such that response latencies will be longer for non-native English sentences presented early than for sentences presented late. Rationale: Adaptation to the non-native English speaker will occur. The adaptation process  will impose additional demands on the language processing system. The language processing system will slow down, which will be reflected as an increase in response time for sentence processing. Early trials will be most affected in the adaptation process which, over time, will result in a decrease in response latencies.  56 CHAPTER 2: Method 2.1  Overview This chapter outlines the experimental designs and procedures used in this study. It  starts with a description of the comprehensibility study used to select the native and nonnative English speakers. It then describes the main experiment including the participants, the word monitoring task, the working memory task, the language background questionnaire and the analysis. 2.2  The Comprehensibility Study The goal of the comprehensibility study was to choose the two speakers—one native  English speaker and one non-native English speaker—who would record the stimuli for the word monitoring task in the main experiment. 
Comprehensibility is a measure of how easy or difficult a listener perceives a speaker to be to understand. Comprehensibility ratings are more strongly correlated with actual comprehension (Munro & Derwing, 1997; 1995b) and with processing time (Munro & Derwing, 1995a) than are ratings of accentedness. Although the two are correlated, accentedness ratings are not equivalent to comprehensibility ratings, which means that a speaker can be rated as moderately accented and also be rated as easy to understand. The aim here was to find a non-native English speaker who was rated as difficult to understand in order to ensure that the non-native English stimuli in the word monitoring task placed significant processing demands on the listener. Furthermore, the comprehensibility study ensured that there were no anomalous characteristics of the native English speaker that affected her perceived intelligibility.

2.2.1 Participants

Five non-native English speakers and three native English speakers recorded sentences for the comprehensibility task. The non-native English speakers were all females, ranging in age from 20 to 50 years old (20, 23, 25, 28, 50), who were recruited from the University of British Columbia and the surrounding community. All five non-native speakers were born in China and spoke Cantonese as their first language. Although each participant reported some basic English language instruction in their home country, they had first been immersed in an English-speaking environment and began to use English on a daily basis when they came to Canada, between 1.5 months and 6.5 years before the recording date. None of the non-native English speakers was bilingual in English and Cantonese. The three native English speakers were females aged 23 to 31 years old. All spoke English as their first language, did not speak any Cantonese, and were not bilingual in any other language. Eight participants were recruited as listeners for the comprehensibility test. The eight female participants ranged in age from 23 to 42 years old, with an average age of 27 years. All were graduate students at the University of British Columbia. None of the participants reported any history of speech/language or hearing difficulties. None of the participants were speakers of Cantonese, although some were fluent in other languages. The participants reported different degrees of familiarity with listening to accented speech.

2.2.2 Comprehensibility Stimuli

Sixty-four sentences were created to be used in the comprehensibility judgments. These sentences consisted of eight sentences of each of eight sentence types. The sentences ranged in length from 6 to 14 words. The sentences were divided into eight lists, one list for each speaker. Each speaker spoke different sentences in the comprehensibility judgment task so that the listeners would not become familiar with the sentences. Furthermore, within one list, no two sentences had any repeated content words. This was to ensure that a particular speaker did not say the same content word twice and thus potentially make the word easier for the listeners to understand the second time. Two extra sentences were recorded by each speaker to be used as examples for the comprehensibility judgment task.

2.2.3 Recordings

For each speaker, the ten sentences were recorded during one recording session. All recordings were made in a sound booth through a microphone connected directly to a computer.
The stimuli were recorded using the Cool Edit 2000 program (Syntrillium Software Corporation, 2000) through one channel at 44,100 samples per second with 32 bit resolution. Both the speaker and the experimenter wore JVC HA-D30 headphones during the recordings for audio feedback. The recordings were edited within the Cool Edit 2000 program as follows. The sentences were trimmed on either end and 0.5 seconds of silence was added to the beginning and the end of the sentence. The sentences were then passed through a bass cut filter that attenuated the frequency range from 0 Hz to approximately 400 Hz by up to approximately 8 decibels. This filter helped to decrease any background noise that was in the recordings due to the equipment. The sentences were then converted to 16 bit resolution so that they could be imported into the E-Prime software for the experiment. Finally, to boost the volume of the stimuli, the sentences were amplified to 65% of the maximum volume at which clipping would occur. 2.2.4 Procedure The comprehensibility stimuli were presented to the eight native English speaking participants using E-Prime Version 1.0 software (Psychology Software Tools, 2002). The participants were seated in a sound booth and randomly heard the 64 sentences binaurally  59 through JVC HA-D30 headphones. Instructions and prompts were presented to the participant via the computer monitor. The participants were instructed to judge how easy or difficult they found the sentence to understand on a scale that ranged from 1 to 9; a score of 1 corresponded to "extremely easy to understand" and a score of 9 corresponded to "extremely difficult or impossible to understand." After hearing each sentence, the participants circled their response on the sheet provided. Each sentence was heard only once and could not be repeated. Participants worked at their own pace and there was no time limit. They were prompted to press the space bar when they were ready to hear the next sentence. The 64 experimental sentences were preceded by two examples so that the participants could become familiar with the task and with the rating scale. The participants were prompted to ask any questions after the examples. This task took approximately 15 minutes to complete. After finishing the experimental task, the participants were asked to complete a short questionnaire on their language background and their experience listening to non-native English. 2.2.5 Analysis The median comprehensibility rating for each speaker was calculated. From these results, one non-native English speaker and one native English speaker were chosen to record the stimuli for the main experiment. On the 9-point scale, the native English speaker was required to have a median of 1 (extremely easy to understand) in order to qualify for the main experiment. The non-native English speaker was required to have a median of at least 4, and could not have a median of 8 or 9 (extremely difficult or impossible to understand). The median ratings for the five non-native English speakers and three native English speakers are shown in Appendix A. The non-native English speaker 2 was selected for the  60 main experiment. Her median comprehensibility score was 6, with a range of 2 to 9. The non-native English speakers 1,3, and 5 did not have median comprehensibility scores that were high enough to include them in the main experiment. 
Although the non-native English speaker 4 had the highest median comprehensibility score, and was therefore rated the most difficult speaker to understand, she was not selected for the main experiment because she had some difficulty reading the sentences out loud. The three native English speakers in the study all received comprehensibility medians of 1 with a range of 1 to 2. The native English speaker 2 was selected for the main experiment. 2.3  The Main Experiment The main experiment consisted of three parts: a language background questionnaire, a  word monitoring task, and a working memory task. The following sections describe the participants, the development and organisation of the stimuli, and an explanation of the procedure for the word monitoring task. The working memory task and the language background questionnaire are then described. 2.3.1 Participants 2.3.1.1 Speakers The speakers for the main experiment were determined by the comprehensibility study described above. One native English speaker and one non-native English speaker participated. 2.3.1.1.1 Native English Speaker The native English speaker was a 22-year-old female whose first language was English. She did not speak any additional languages. She reported no history of speech/language or hearing difficulties.  61 2.3.1.1.2 Non-Native English Speaker The non-native speaker was a 28-year-old female whose first language was Cantonese. Her native city was Guangzhun city in Guangdong Province in China. She learned some basic English in high school and college, but reported that she first started using English on a daily basis when she moved to Vancouver just over five months before the recordings were made. This participant was also fluent in Mandarin. She reported no history of speech/language or hearing difficulties. In order to describe the components of this non-native English speaker's accent, 15 sentences that were recorded by the non-native English speaker for the word monitoring task (described below) were transcribed. Of these sentences, there were five sentences of each of three sentence types. The following section describes differences in the phonology systems of the Cantonese and English languages, and relates how these differences may affect the language skills of Cantonese learners of English. Then, a description of the phonology and word stress of the non-native English speaker in the present study ensues. 2.3.1.1.2.1  Differences in English and Cantonese Phonology  Differences in the phonological systems of English and Cantonese may contribute to the pronunciation difficulties that Cantonese learners of English experience. Significant differences between the two languages occur in the consonant and vowel inventories, syllable structure, tone, and rhythm. These differences are reviewed by Chan and Li (2000) and are summarised below. English has 24 consonants and Cantonese has 19. Like English, Cantonese has six plosives; however all plosives in Cantonese are voiceless. Instead of voicing, they are distinguished by aspiration: /p, t, kl are aspirated and /b, d, g/ are unaspirated. In Cantonese,  62 word-final stops are unreleased and only the stops /p, t, kl can occur in syllable-final position. This leads to difficulties formulating the contrast between voiced and voiceless stops in word final position for Cantonese learners of English. Cantonese has two additional stops, /k , w  g /, which are labio-velar stops. 
w  There are three fricatives in Cantonese and all are voiceless: labio-dental If I, alveolar /s/, and glottal /h/. The affricates in Cantonese are the alveolar /ts/ and /dz/ which are both voiceless and, like the plosives, are distinguished by aspiration. Fricatives and affricates in Cantonese may only occur in syllable-initial position. The comparatively large inventory of fricatives in English often creates difficulties for Cantonese learners of English. English fricatives are often substituted with the nearest sound in the Cantonese inventory. For example, the voiced labio-dental fricative NI is often substituted with [f] in word-final position and with [w] in word-initial position. The voiced alveolar fricative Izl is replaced by the voiceless alveolar fricative [s]. The dental fricatives /0, 6 / tend to be substituted with the alveolar fricatives [t, d] or with the labio-dentalfricative[f]. The fricative [s] is frequently substituted for the phoneme ///; on the other hand, learners tend to substitute [f] in place of Isl in front of rounded front vowels, such as lui. For example, learners will pronounce "soup" /sup/ as [Tup]. Finally, the English affricates /tf, cty are often replaced with their alveolar counterparts [ts, dz] or are produced with lip-spreading. English and Cantonese have the same three nasals /m, n, rj/ as well as the lateral IV. In Cantonese, however, IV may not occur after vowels or in word-final position. This contrasts with the two English allophones: clear III (before vowels) and dark III (after vowels). Dark III may be substituted with [u] for Cantonese learners of English. In syllableinitial position, learners may also vary between Inl and IV or produce IV with nasalisation.  63 The phoneme kl is not in the Cantonese inventory and is frequently substituted with [1] or [w] by learners of English. The approximants /j/ and /w/ occur in both the Cantonese and English inventories. Because there are no consonant clusters in Cantonese, learners of English will tend to use deletion to reduce the number of consonants occurring together, or they will use epenthesis to break up the cluster by inserting a vowel. In word-final clusters, alveolar consonants /l, t, d/ are most commonly deleted. Past tense on verbs in English is often marked with the "ed" ending which often forms a IXl or Id/ word-final cluster. For example, "touched" and "hugged" both have word-final clusters. Cantonese learners of English may reduce this cluster by dropping the "ed" ending. The English vowel system is composed of 12 vowels; the Cantonese inventory has eight vowels, although there are 13 vowel allophones. Notably, the Cantonese vowel system includes front rounded vowels that do not occur in English and may replace English central vowels. Many Cantonese learners of English also have difficulty with the open front vowel /ae/, which does not occur in Cantonese, and may be substituted with /e /. Furthermore, long and short vowel distinctions are often difficult for learners because the long and short vowel distribution in Cantonese is allophonic, not phonemic. Both English and Cantonese have many diphthongs; however, the three possible second vowels differ across languages. Cantonese learners of English may replace English diphthongs with short vowels, or separate the two vowels in the diphthong by a glottal stop. Cantonese, like all Chinese languages, is a tonal language. 
Each syllable in Cantonese has a contrastive pitch that is part of the pronunciation of the syllable and distinguishes one word from another (Li & Thompson, 1987). English is an intonational  64 language and although changes in tone, or intonation, can register changes in attitude and intended meaning, changing a tone does not usually result in a change in lexical meaning. As  Cantonese speakers use pitch changes at the syllable level, it is often difficult for them to maintain English stress and rhythm patterns that occur over phrases or sentences. In English, stressed syllables tend to occur at regular intervals. The much shorter and faster unstressed syllables fill in the time between these intervals. In Cantonese, syllables are all given about the same amount of time and are not reduced. Cantonese learners of English tend to pronounce syllables in English for an equal length of time and in their stressed (unreduced) forms which greatly changes the intonation pattern of the connected speech. Furthermore, they tend to place a pause at word boundaries in English rather than to link the last sound of one word with the beginning of the next. 2.3.1.1.2.2  Phonology of the Non-Native Speaker  Fifteen sentences spoken by the female Cantonese learner of English were selected out of the 240 sentences that she recorded. The sentences were transcribed and analysed to give a brief description of the phonology and word stress patterns used by this speaker. Many of her phonological patterns are consistent with the description above. The Cantonese speaker was able to produce the voiced/voiceless distinction in English stop consonants fairly consistently in all word positions. Voiceless stops were produced with a considerable amount of aspiration. There were a few occasions where the voiceless alveolar stop [t] was used in place of the alveolar flap in intervocalic position (e.g., "suddenly"). Although there were few substitutions for stops, stops were frequently omitted in word final position. Since all the recorded sentences were in past tense, there were 25 opportunities for word final It, dJ to occur in the 15 transcribed sentences. Fifteen verbs  65 required a IM or Idl ending immediately following the last phoneme of the word (e.g., "changed") and the other ten verbs required the "ed" ending (e.g., "shouted"). Out of the 25 cases where a word final It, dl was obligatory to mark past tense, the stop was present eight times. Five of these were in /rd/ or /nd/ endings (e.g., "angered", "frightened") and three occurred in "ed" endings (e.g., "decided"). For six of the verbs requiring the "ed" (/ad/) ending, the speaker substituted the schwa /a/ with the vowel [i] and deleted the final Idl (e.g., "hated" was [heit i]). For the verbs that required an immediate IM or Idl, the speaker either h  omitted the ending (e.g., "reviewed" was [rivju]) or inserted a short vowel and then omitted the final stop (e.g., "changed" was [tfeincrjT]). English fricatives were sometimes difficult for this speaker. The most notable substitution was production of the plosive [d] in place of the voiced and voiceless interdental fricatives lb, QI. This substitution occurred primarily in word-initial position (e.g., "the," "that"), although it also occurred in syllable-initial position (e.g., "mother," "author"). Interestingly, the phoneme Ibl was correctly produced 9/15 times when it occurred in wordinitial position as the first phoneme of the sentence in the word "the." 
Production of the fricative NI was variable; it was produced correctly 5/8 times in syllable-initial position. In mispronounced words, NI was substituted with [w] (e.g., "movie" was [muwi]). The voiced fricative Izl tended to be replaced with the voiceless fricative [s] that is in the Cantonese inventory; for example, the plural phoneme in "teenagers" was produced with an Is/. Production of /// was also variable and was replaced with [s] in the word "shouted" but was correctly produced in the word "sharply." Finally, the affricates /tf, QV were frequently well produced by this speaker.  66  The Cantonese learner of English in this study was able to produce Iri correctly over 75% (37/49) of the time. The phoneme Iri was used in word-initial (e.g., "reviewed") and word-final positions (e.g., "editor") as well as in consonant clusters (e.g., "street," "turned"). The speaker tended to use r-colouring (giving an r quality to the vowel) when Iri appeared post-vocalically which is also common in native English speakers (e.g., "car," "her"). The most common error pattern was to delete the Iri, particularly when it occurred in syllablefinal (e.g., "carpenter") or word-final (e.g., "mother") positions. Another error pattern was to substitute an [1] (e.g., "briefly" was /balifli/) or to substitute a alveolar lateral flap [J] which is a phoneme somewhere between an IV and an Iri (e.g., "story" was [stDJi]). Many consonant clusters were well produced by the non-native English speaker in this study, for example, /fr/, /pi/, /st/, /sir/, /tr/, /pr/, and /rnd/. When cluster reductions occurred, the most common solutions were to delete the final consonant or to use epenthesis. For example, the vowel /a/ was inserted to break up the cluster /pi/ in "sharply" to pronounce it as [Tarpali]. Consonant deletion and epenthesis were often used to break up the consonant clusters formed by past tense markers on many English verbs. Deletion of a consonant occurred in the word "child" where the III was deleted and the phoneme Idl was syllabified ([tfaida]). The cluster /kw/, as in "quit," was also often reduced to [k]. In general, English vowels were well produced by the Cantonese learner of English in this study. However, many of the vowels that are reduced to schwa in spoken English were produced in their strong, unreduced forms by this participant. Furthermore, many vowels were nasalised to a greater extent than is common in native English speech. The vowel that was the most difficult for this speaker was the unrounded open back vowel lal which was substituted with either / A / , /or/, or the rounded open back vowel lol. The latter vowel may be  67 present as a result of exposure to British English. Diphthongs that are not in the Cantonese inventory, such as hil, were sometimes replaced with short vowels, for example, "jointly" was pronounced as [djentati]- Furthermore, the short front vowel 111 was sometimes produced as the front rounded vowel [y] that occurs in Cantonese (e.g. "quit" was /k yt/). On h  the other hand, the presence of [y] in "quit" may be the result of coalescence of the rounded labio-velar /w/ and the front vowel I'll. In English, word stress is often determined by the rhythm and stress patterns of the phrase or sentence. In Cantonese, however, each syllable is given about the same amount of time in the sentence. The speaker in this study displayed a common tendency among Cantonese learners of English to pronounce syllables equally in English sentences. 
This affected stress at both the word and the sentence levels. As described above, this speaker often failed to reduce vowels that are unstressed by native English speakers. Furthermore, she would produce two or more vowels in a word with equal stress. Therefore, a word like "editor" which would be transcribed as /'edatar/ by many native English speakers, was pronounced as ['e'ditar] by the speaker in this study where equal stress fell on the first and second syllables. Function words that are often reduced to their weak forms in connected speech by native English speakers were often stressed by the Cantonese speaker. For example, the sentence, "The player who the new coach admired reviewed the game plan" was produced as [5a'plei'er 'hu da'n u 'k otf a'mid 'ri'vju da 'germ 'p laen]. Here, the words J  h  h  "who" and "new" were stressed and took up equal time in the sentence. The compound "game plan" was also stressed equally. Finally, both syllables were stressed in the words "player" and "reviewed." All of these deviations from typical English stress resulted in giving the sentence a staccato and monotone sound.  68 The above description has shown that the Cantonese learner of English who participated in this study displayed many of the phonological and word stress patterns that are typical in Cantonese speakers who are learning English. 2.3.1.2 Listeners The 22 listeners were all native speakers of English with an average age of 25.1 years (SD = 4.35, range = 19 to 34 years) and an average of 16.3 years of education (SD = 1.99, range = 13 to 22 years). Ten of the participants were male and 12 were female. The participants were recruited from the University of British Columbia and the surrounding community. None reported any speech/language or hearing disorders and all reported normal or corrected-to-normal vision. None of the listeners were fluent in Cantonese or Mandarin, although six were fluent in other languages (French, German, Dutch, and Spanish). 2.3.2 Description of the Word Monitoring Task Word monitoring was selected to measure the effects of non-native English on sentence processing. The word monitoring task had two parts. First, the participant was presented with the written target word in isolation. The participant then heard a sentence that contained the target word. The participant was required to press the spacebar as quickly as possible upon hearing the target word in the sentence. Response latencies between the presentation of the target word in the sentence and response to the target word were recorded as measures of processing time. Kilborn and Moss (1996) state that "monitoring latencies are considered to provide a sensitive measure of contextual and lexical factors operating in real time" (p. 689). Word monitoring has been used in previous studies to show the effect of semantic, syntactic, and pragmatic contextual support on word recognition (Marslen-Wilson & Tyler, 1980). Syntactic/semantic violations (such as verb-argument violations) and  69 morphological violations have been linked with increased response latencies in word monitoring tasks (Marslen-Wilson, Brown, & Tyler, 1988; Tyler, 1992). Furthermore, Haarmann and Kolk (1994) have shown effects of subject-verb agreement violations in simple sentence types versus complex sentence types for participants with Broca's aphasia. In the present study, syntactic complexity was manipulated in the word monitoring paradigm. 
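To make the dependent measure concrete, the sketch below shows in Python how a single monitoring trial can be represented and scored. This is an illustrative analogue only: the actual experiment was implemented in E-Prime with audio playback and millisecond keyboard timing, and the data structure, function names, and example times here are assumptions introduced for exposition.

```python
# Minimal sketch of how a word-monitoring trial can be represented and scored.
# Names and timing values are illustrative; the real task logged the spacebar
# press time directly in E-Prime.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Trial:
    target_word: str          # word shown in isolation before the sentence
    sentence: str             # sentence containing the target word
    target_onset_ms: float    # acoustic onset of the target word in the recording


def target_rt_ms(trial: Trial, spacebar_ms: Optional[float]) -> Optional[float]:
    """Monitoring latency: spacebar press time minus target-word onset.

    `spacebar_ms` is the press time measured from sentence onset; None means
    the listener never responded before the end of the sentence. Presses made
    before the target word are treated as invalid.
    """
    if spacebar_ms is None:
        return None                      # miss
    rt = spacebar_ms - trial.target_onset_ms
    return rt if rt >= 0 else None       # anticipation / false alarm


# Hypothetical onset and press times for one trial:
trial = Trial("decided",
              "The carpenter who the plumber insulted decided to quit.",
              target_onset_ms=2140.0)
print(target_rt_ms(trial, spacebar_ms=2655.0))   # -> 515.0
```

The point of the sketch is simply that the dependent measure is the interval between the onset of the target word in the sentence and the spacebar press, with anticipations and misses treated as invalid responses.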
Three levels of syntactic complexity were chosen: simple active sentences, conjoined sentences, and center-embedded subject-object sentences. Two target word positions were used—early and late. Tyler (1992) states that the target word in a word monitoring task should "immediately follow the linguistic manipulation of interest" (p. 278). Therefore, the late target word occurred directly after the point in the sentence that would have the highest demand on processing resources due to the syntactic difficulty of the sentence. The main reason for having an early target word was so that the listeners would not be able to predict where the target word would appear in each sentence. The early target word occurred in a position in the sentence that was not expected to have differing degrees of demand on processing resources across sentence types. Tyler (1992) reported that response latencies in word monitoring tasks became progressively faster the later the target word appeared in the sentence when the sentence was normal and anomalous prose, but not when the sentence was a scrambled string (Marslen-Wilson & Tyler, 1975, 1980). They argued that semantic and syntactic structure develops across the sentence, which allows the listener to identify the word more readily the later it occurs. 2.3.3 Stimuli for the Word Monitoring Task Two hundred and forty sentences were created as stimuli for the word monitoring task. Sixty sentences were created for each of three syntactic structures—center-embedded  70 subject-object sentences, conjoined sentences, and simple active sentences. These syntactic forms were chosen because they have differing degrees of syntactic complexity. Centerembedded subject-object clauses are the most complex, followed by the conjoined and then active clauses. Sixty filler sentences were also created so that the listener would remain naive to the experimental sentence types. Finally, comprehension questions were formulated for each sentence. 2.3.3.1 Syntactic Complexity King and Just (1991) argue that center-embedded relative clauses are examples of syntactic structures that place heavy demands on working memory. Center-embedded subject-object sentences were chosen as the most syntactically difficult sentences for this experiment. The sentences ranged in length from 9 to 13 words with an average of 10.6 words per sentence. An example of a center-embedded subject-object sentence is: (a) The carpenter who the plumber insulted decided to quit. This is a subject-object construction because the subject of the first clause, "the carpenter," is the object of the embedded clause ("the plumber insulted the carpenter"). This syntactic construction imposes a high processing demand because the listener is required to hold the representation of the carpenter during processing of the embedded clause so that it is available for processing as the subject of the main clause. The noun phrase the carpenter is also assigned a thematic role for each verb; it is agent when it is the subject and theme when it is the object. When the noun phrase is assigned the role of theme, it violates the normal canonical thematic role assignment whereby the first noun phrase is the agent. The centerembedded clause causes an increase in demand on processing resources; therefore, the second verb in the sentence, occurring immediately after the center-embedded clause was  71 selected as the late target word. In the above example, the late target word is decided. 
In the experimental stimuli, the second verb occurred in the 7th to 9th word positions. The early target word position was chosen in a place where the demand across syntactic types was not expected to differ significantly. In the center-embedded construction, the early target word was chosen as the noun that preceded the first verb; in the above example, the early target word is plumber. In the experimental stimuli, the early target occurred at the 5th to 7th word positions. In all cases, the early target appeared two words before the late target. Both nouns that preceded the late target word, decided, were semantically plausible subjects for that verb. Here, both carpenter and plumber have the capacity to decide. Target words were all one to four syllables long.

Both the conjoined sentences and the simple active sentences were constructed based on the center-embedded subject-object clauses. This was done so that the same target words would appear in the same word positions in all three levels of syntactic complexity. The content of the sentences was kept constant as much as possible across sentence types. The conjoined sentences ranged in length from 9 to 13 words with an average of 10.6 words per sentence. An example of a conjoined construction is:

(b) The carpenter insulted the plumber and decided to quit.

The conjoined construction is less syntactically complex than the center-embedded clause. Although there are two clauses and therefore two verb-argument structures to construct, these clauses can be processed independently. The processing demands are therefore less than in the center-embedded constructions where parts of both clauses are being processed simultaneously. Furthermore, the noun phrase the carpenter is assigned the same thematic role by both verbs (i.e., agent). The assignment of thematic roles in the conjoined sentences follows the canonical pattern for each clause whereby the first noun phrase is the agent. The late target (decided) was in the 7th to 9th word position, and the early target (plumber) appeared in the 5th to 7th word position. In some constructions of the conjoined sentences, the first verb could not remain the same as the first verb in the center-embedded sentences because it compromised the semantic plausibility of the sentences. In these cases, a verb was used in the conjoined sentences that was comparable in syllable length and frequency in English and that retained comparable meaning whenever possible.

The simple active sentences were also created to retain the early and late target words in the same positions as the center-embedded subject-object sentences. These sentences also ranged in length from 9 to 13 words with an average of 10.6 words per sentence. For example:

(c) The carpenter and the plumber suddenly decided to quit.
(d) The pencil beside the student slowly rolled off the desk.
(e) The family's most annoying neighbours finally sold the house.

These sentences were formed by adding a conjunction to the noun phrase of the main clause, as in (c), the carpenter and the plumber, including a prepositional phrase with the noun phrase, as in (d), the pencil beside the student, or using a possessive noun phrase, as in (e), the family's most annoying neighbours. In (c), the early target, plumber, and the late target, decided, occur in the 5th and 7th word positions as they do in the center-embedded sentence and the conjoined sentence.
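As a concrete illustration of these positional constraints, the short Python sketch below checks one item triplet built from examples (a)-(c): the early target should precede the late target by exactly two words, and the late target should occupy the same word position in every version. The checking functions are my own additions for exposition; they are not part of the thesis materials.

```python
# Hedged sketch: verify the target-position constraints for one item triplet.
ITEM = {
    "embedded":  "The carpenter who the plumber insulted decided to quit",
    "conjoined": "The carpenter insulted the plumber and decided to quit",
    "active":    "The carpenter and the plumber suddenly decided to quit",
}
EARLY, LATE = "plumber", "decided"


def word_position(sentence: str, word: str) -> int:
    """1-based position of `word` in `sentence` (simple whitespace tokenisation)."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    return words.index(word.lower()) + 1


def check_item(item: dict, early: str, late: str) -> dict:
    positions = {name: (word_position(s, early), word_position(s, late))
                 for name, s in item.items()}
    # Early target exactly two words before the late target in every version.
    assert all(late_pos - early_pos == 2 for early_pos, late_pos in positions.values())
    # Late target in the same word position across the three versions.
    assert len({late_pos for _, late_pos in positions.values()}) == 1
    return positions


print(check_item(ITEM, EARLY, LATE))
# {'embedded': (5, 7), 'conjoined': (5, 7), 'active': (5, 7)}
```

A script along these lines could be run over all 60 item sets to confirm that every triplet satisfies the positional constraints before recording.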
An adverb was added before the verb in order to keep the early and late targets separated by two words. The simple active sentences contained only one clause and therefore required the construction of only one verb-argument structure. Consequently, this syntactic structure was predicted to impose the least demands on sentence processing resources. A complete list of the experimental sentences used in this study is in Appendix B.

Whenever possible, the number of syllables and words before the late target (e.g., decided in (a), (b), and (c)) was kept constant across the three sentence types. The number of syllables before the late target was always within one syllable. For example, the number of syllables before the target word decided in (a), (b), and (c) was 11. The number of syllables occurring before the early target could not be kept constant between the active sentences and the other sentence types. The number of words occurring before both the early and late targets was equal over all three sentence types except for eight sentences (7, 14, 20, 34, 44, 47, 52, 54). In these constructions, an extra function word, such as from, was added as required by the preceding verb. The words before the target words in each sentence were acoustically dissimilar to the target in order to prevent false alarms during the monitoring task. Target words were never used twice, such that there were 60 different target verbs (late targets) and 60 different target nouns (early targets). Furthermore, the target words did not recur as non-target words in other sentences. There are three cases out of 540 where a verb is repeated; these verbs are not target words, and in these three cases the word appears in a consecutive list, not in the same list (see section 2.3.3.5). A list of the experimental target words is in Appendix C.

2.3.3.2 Fillers

Sixty filler sentences were created so that the participants would not become accustomed to sentence types or to the position of the target word within the sentence. The filler sentences ranged from 8 to 15 words in length with an average length of 10.8 words per sentence. Of these 60 sentences, seven each were of the three types in the experimental sentences (subject-object center-embedded, conjoined, and simple active); however, in the filler sentences, the target word position ranged from the 2nd to the 11th word. Target words never appeared as the first or the last word in the sentence. Target words only appeared in the 5th position if they were verbs and in the 7th position if they were nouns, to counterbalance the early and late target classes in the experimental stimuli. Seven object-subject center-embedded sentences were also included in order to introduce sentences where the words who and that did not appear in the same pattern as the subject-object clauses. For example,

(f) The teacher congratulated the student who easily won the award.

The remaining 32 filler sentences were of varying sentence structure types. As well as varying the word position of the target word, nouns and verbs were balanced in the filler sentences so that target verbs appeared in early sentence positions and target nouns appeared in late sentence positions. There were 35 target nouns and 25 target verbs. This was done to ensure that participants did not try to guess the position of the target in the sentence according to the class of the target. Adverbs appeared before (non-target) verbs in 33% of the filler sentences.
This was to counterbalance the adverbs that appeared before the target verb in 33% of the experimental sentences (the simple active sentences). In this way, an adverb would not serve as a cue to the participants that the target word was following. None of the target words in the filler sentences were the same as the targets in the experimental sentences. Furthermore, the target words did not appear in any non-target positions in any of the experimental or filler sentences, as much as possible. Target words ranged in length from one to five syllables. The filler sentences are in Appendix D. 2.3.3.3 Plausibility The 240 experimental and filler sentences were divided into three lists. Nine native  75 English speakers read these sentences (three raters per list) and rated the plausibility of the sentences on a scale from 1 to 7. A rating of 1 corresponded to a "very unlikely" sentence that described a very bizarre or unexpected event that was not likely to occur. A rating of 7 corresponded to a "very likely" sentence that described a very ordinary event that had a high probability of occurring in everyday life. Fifteen of the 240 sentences received a rating of 3 or below by two or more of the readers and were reformulated or discarded. New sentences were given ratings above 3 by at least two new readers before they were included in the word monitoring task. 2.3.3.4 Comprehension Questions Comprehension questions were constructed for each of the experimental and filler sentences. The questions were included to encourage the participants to process the meaning of the sentences and not just monitor for the target word. The comprehension questions were yes/no questions where half of the questions had yes as the correct answer and half of them had no as the correct answer. Furthermore, for the sentences that had two verbs such as the subject-object center-embedded and the conjoined sentences, half of the questions were written around the first verb and half around the second verb. For example, for the subjectobject center-embedded sentence: (g) The students who the bully threatened lied to the principal. The following are the four possible questions: 1) Verb 1, Yes: Did the bully threaten the students? 2) Verb 1, No: Did the students threaten the bully? 3) Verb 2, Yes: Did the students lie to the principal? 4) Verb 2, No: Did the bully lie to the principal?  76 If the question was formulated around the first verb in the conjoined sentence, it was formulated around the second verb in the center-embedded sentence and vice versa. In addition, the no questions were varied such that half of them used different verbs (or predicates) from the original sentence and half of them changed the NP or PP that acted as an argument to the verb. For example, the no question in (4) above changed the argument for the verb lie from students to bully in order to formulate a negative sentence. A change in the predicate of the sentence would also formulate a negative question, such as in (5) below. 5) Verb 2, no: Did the students confess to the principal? In this question, the verb was changed from lie to confess. Out of the 120 sentences that each participant heard (60 experimental and 60 fillers), the comprehension questions appeared after 30 sentences, or 25% of the stimuli. 
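As a rough illustration of how this question schedule can be realised, the sketch below assumes one pre-written question per sentence, with yes/no answers and first- versus second-verb probes balanced when the questions were authored, and then attaches questions to a random 25% of the trials. The trial and question content is invented, and in the actual experiment the selection was handled at run time by the E-Prime software.

```python
# Hedged sketch: attach comprehension questions to 25% of the trials.
import random

# One question per sentence; 'answer' and 'probed_verb' were balanced across
# items when the questions were written (half yes / half no, half verb 1 / half verb 2).
trials = [{"sentence_id": i,
           "question": f"Question for sentence {i}?",
           "answer": "yes" if i % 2 == 0 else "no",
           "probed_verb": 1 if (i // 2) % 2 == 0 else 2}
          for i in range(120)]

n_questioned = len(trials) // 4                       # 25% of 120 trials = 30
questioned_ids = set(random.sample(range(len(trials)), n_questioned))

for i, trial in enumerate(trials):
    trial["ask_question"] = i in questioned_ids

print(sum(t["ask_question"] for t in trials))         # -> 30
```

Because the 25% sample is drawn anew for each run, each participant ends up answering a different subset of questions, as in the actual experiment.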
Kempler, Almor, and MacDonald (1998) established that this ratio of yes/no comprehension questions in their language processing study with patients with Alzheimer's Disease was sufficient for encouraging the participants to attend to the content of the sentences. In a study that measured eye movements during reading, Rayner, Carlson, and Frazier (1983) asked their participants to paraphrase the sentence that they had read after about every five sentences to ensure that they were understanding the sentences. In the 102 sentences in their study, the comprehension checks occurred after approximately 20% of the stimuli. The comprehension questions are in Appendix E. 2.3.3.5 Lists There were six possible combinations of stimuli presentation developed for the word monitoring task. In each combination, half of the sentences were spoken by the native English speaker and half by the non-native English speaker. Table 3 shows the six possible  77  combinations of lists that a participant could encounter. Table 3 Possible Combinations of Stimuli Presentation Combination  Non-native English speaker  Native English speaker  1  List A  ListD  2  ListD  List A  3  ListB  ListE  4  ListE  List B  5  ListC  List F  6  ListF  ListC  For example, in Combination 1, the listener would hear List A spoken by the non-native English speaker and List D spoken by the native English speaker. In Combination 2, this is reversed and List A is spoken by the native English speaker whereas List D is spoken by the non-native English speaker. The experimental stimuli were therefore divided into six lists—List A through List F. Each list contained 30 experimental sentences, 10 sentences of each syntactic type. As described above, the experimental sentences were formulated in 60 sets of three. The target words remained in the same word position in each sentence type. The three sentences below show an example of a set of sentences. (h) Simple active: The children beside the bully quickly lied to the principal. (i)  Conjoined:  The children dreaded the bully and lied to the principal.  (j)  Center-embedded:  The children who the bully threatened lied to the principal.  Each set of three corresponding sentences was divided such that they did not occur in the same list. Of the 10 sentences of each type in a list, corresponding comprehension questions for half of them had no responses and the other half had yes responses. Furthermore, the  78  conjoined and center-embedded sentences were divided such that half of the no questions were formulated around the first verb and the other half were formulated around the second verb. The six lists of 30 experimental sentences were grouped in pairs. Each pair of lists contained all 60 experimental sentences—i.e. one sentence from each set of three corresponding sentences. Therefore, there were three possible pairs of lists: List A and List D; List B and List E; and List C and List F. Each list was recorded by both the native and non-native English speaker leading to six possible combinations. In each combination, the participant heard 30 sentences by the native speaker and 30 sentences by the non-native speaker, but heard only one sentence from any set. The filler sentences were divided into two lists—Fillers A and Fillers B. Sentences of each type were divided as equally as possible across the two lists. An equal number of yes and no questions were included in each list. One set of filler sentences was included with each list. 
Lists A, B, and C included Fillers A and Lists D, E, and F included Fillers B. Therefore, each combination of lists contained the complete set of filler sentences. 2.3.4 Recordings for the Word Monitoring Task The non-native English speaker and the native English speaker each recorded the 240 sentences for the word monitoring task. The recordings were conducted over two recording sessions. All recordings were made in a sound booth to keep background noise to a minimum. The speakers read the sentences through a microphone connected directly to a computer. The stimuli were recorded using the Cool Edit 2000 program (Syntrillium software Corporation, 2000) through one channel at 44,100 samples per second with 32 bit resolution. Both the speaker and the experimenter wore JVC HA-D30 headphones during the  79 recordings for audio feedback. The speakers were instructed to read the sentence over before each sentence was recorded. If the speaker was unsure about how to pronounce any of the words, she was encouraged to ask the experimenter. Furthermore, if the speaker or the experimenter was displeased with the quality or the content of any of the recordings, the sentence was re-recorded. The recordings were edited with the Cool Edit 2000 software. Each sentence was trimmed on either end and 0.5 seconds of silence was added to both the beginning and the end of the sentence. Using spectrographic analysis, the time of the beginning of the target words in each sentence was determined. Segmenting successive segments in continuous speech is challenging because there is no one-to-one correspondence between the acoustic signal and a particular phoneme. Furthermore, segments are influenced by the segments that precede and follow them. The beginning of words in this study were determined as consistently as possible. For words that began with stop consonants, the beginning of the word was taken to be the time of the burst. Words beginning with fricatives were demarcated by the beginning of high frequency friction in the spectrogram. The beginning of vowels and glides were determined to be at the center point of formant transitions. The times were recorded to the nearest millisecond. These measurements were used to determine the response times from the beginning of the target word to when the participant responded to the word by pressing the spacebar in the word monitoring task. In order to ensure that each sentence was the same volume, all 480 sentences for the word monitoring task were joined together and each section of the combined waveform was amplified by its average root mean square power. The combined waveform was then normalised to 96% of the maximum volume at which clipping would occur. The waveform  80 was passed through a bass cut filter that attenuated the frequency range 0 Hz to 400 Hz by approximately 8 decibels. The waveform was then converted into 16 bit resolution so that it could be imported into the E-Prime software for the word monitoring task. Finally, the waveform was divided back into the 480 sentences. 2.3.5 Procedure for the Word Monitoring Task The word monitoring task was developed using E-Prime Version 1.0 software (Psychology Software Tools, 2002). The listeners were seated in a quiet room facing a 21" monitor and wore JVC HA-D30 headphones. After a plus sign (+) appeared on the screen for one second to warn the listener that the trial was about to begin, the target word appeared on the screen for two seconds. 
The target word was centered on the screen in 36 point bold New Courier font, with black type on a white background. After a pause of 0.5 seconds, the sentence was presented binaurally over headphones at a comfortable listening level. The participants were instructed to press the spacebar as soon as they heard the target word in the sentence. The time between the beginning of the sentence and the spacebar press was recorded by the software. After the sentence, another 0.5 second pause occurred. In 25% of the trials, the second pause was followed by a yes/no comprehension question. The question was presented in the center of the screen in 24 point bold New Courier font, with black type on a white background. The listeners were instructed to press a "Y" or "N" key on the keyboard to register their response. The participants were given as much time as needed to answer the question, which remained on the screen until their response. The question was followed by a 0.5 second pause before the next trial began. The sentences were presented randomly and the questions appeared in 25% of the trials as determined by the E-Prime software. Therefore, each participant answered a different set of comprehension questions.  81 In the word monitoring task, the sentences were presented in two blocks. Each block consisted of one list of 30 experimental sentences intermixed with one list of 30 filler sentences. As described in Section 2.3.3.5, there were six possible combinations of lists in the word monitoring task and each participant heard one combination. For example, in Combination 3, the listener was presented with List B by the non-native English speaker and List E by the native English speaker. The E-Prime software randomly assigned a combination of lists to each subject as well as randomly assigning each list to a block. For example, Participant 1 and Participant 2 could both be assigned Combination 3 but Participant 1 may hear List B in the first block and List E in the second block whereas Participant 2 may hear List E in the first block and List B in the second. The 120 experimental trials were preceded by a set of six practice items that included two practice comprehension questions so that the participants could become familiar with the task. The practice trials were presented by both native and non-native English speakers who were not the same speakers in the experimental trials. The participants were given the opportunity to ask questions concerning the task after the practice items before the experimental trials began. The experimenter checked at this time that the volume was at a comfortable listening level. The participants were given the opportunity to take a break after the first block of trials was presented. 2.3.6 Working Memory Working memory was determined by administering the listening span task. The listening span task is an adaptation by Daneman and Carpenter of the reading span task and has been found to be correlated with this measure of working memory (Daneman & Carpenter, 1980; Daneman & Merikle, 1996). The listening span task was composed of 100  82 sentences that were presented to the participant binaurally through JVC HA-D30 headphones via E-Prime Version 1.0 software (Psychology Software Tools, 2002). A copy of the sentences is in Appendix F. The sentences were divided into 5 span levels—levels 2 to 6. Level 2 contained sets of two sentences and the level 6 contained sets of six sentences. There were five sets of sentences at each span level. 
Starting at level 2, the participants listened to two sentences and were asked to judge whether each sentence was plausible by responding "Yes" or "No" out loud. Half of the sentences were plausible, and the other half were not. The participant controlled the presentation of the next sentence by pressing a key on the keyboard. After all the sentences in a set were presented, the participant was asked to recall the final word in each of the sentences in the set. The test was terminated when the participant failed to recall all the final words in four out of five sets at a particular level. Listening span was determined to be the highest level that the participant correctly recalled all the words in three out of five sets of sentences. A score of 0.5 was added if the participant correctly recalled all the words in two out of five sets at the next level. Participants who received a score of 5-6 were assigned to the High span group, a score of 4-4.5 to the Medium span group, and a score of 2-3 to the Low span group. Individual working memory scores and span groups are shown in a table in Appendix G. 2.3.7 Language Background Questionnaire The language background questionnaire was developed to determine the participants' first language and fluency in additional languages as well as to assess their familiarity with listening to English spoken by non-native speakers. Participants were asked how often (ranging from never to several times a day) they spoke in English with people who speak English with an accent in general and with a Chinese accent in particular. Furthermore,  83 participants were asked to describe the types of these interactions such as short greetings or extended conversations. A copy of the questionnaire is in Appendix H. Answers to the questions in the language background questionnaire were used to divide the listeners into two groups: the Familiar group who was experienced listening to accented English, and the Nonfamiliar group who was not. The scale used in questions 5 and 6 (how often they listened to accented English) was assigned an ordinal ranking such that the response "never" was assigned 0 and the response "several times a day" was assigned 6. Those participants who received a combined total of 9 or above for these two questions was assigned to the Familiar group. All others were assigned to the Non-familiar group. Questions 9 and 10 (what types of interactions they most commonly had) were also assigned an ordinal ranking where "short greetings" was assigned 0 and "social interactions" was assigned a score of 3. None of the participants in the Familiar group received a score of 0 for either of these two questions. A table showing scores on the familiarity questionnaire and familiarity groups is in Appendix G. 2.3.8 Analysis The interval between the beginning of the sentence and the spacebar presses was recorded by the E-Prime program with millisecond accuracy, giving a measure of sentence reaction time (Sentence RT). The interval between the beginning of the sentence and the onset of the target word, which had been previously determined through spectrographic analysis, was subtracted from the Sentence RT to determine the target word response time (RT). Negative reaction times occurred when the participants did not respond before the end of the sentence and when they responded before the presentation of the target word, and these were excluded from the analysis at this point. Fifty-one responses (3.9%) were negative  84 response times. 
Mean response times were calculated for each participant for the 12 experimental conditions shown in Table 4. Response times that were more than, or equal to, two standard deviations above and below the mean for each condition were considered to be outliers and were omitted from further analysis. In total, 88 responses (6.9%) were outliers. Mean response times for each condition were then recalculated. Table 4 Conditions for which Mean Response Times were Calculated Condition 1 2 3 4 5 6 7 8 9 10 11 12  Sentence type Simple active Simple active Simple active Simple active Conjoined Conjoined Conjoined Conjoined Center-embedded Center-embedded Center-embedded Center-embedded  subject-object subject-object subject-object subject-object  Speaker Non-native English Non-native English Native English Native English Non-native English Non-native English Native English Native English Non-native English Non-native English Native English Native English  Target position Early target Late target Early target Late target Early target Late target Early target Late target Early target Late target Early target Late target  A 2 x 2 x 3 repeated measures analysis of variance (ANOVA) was performed comparing the two speakers (native English and non-native English), two target word positions (early and late) and three sentence types (simple active, conjoined, and centerembedded subject-object sentences). Working memory capacity and familiarity with accents were between-subjects variables. An alpha level of .05 was used for all statistical tests. Scores on the listening span task were used to divide the participants into three groups based in their working memory capacity. Participants in the High group (N = 9) had listening spans of 5-6, the Medium group (N = 7) had spans of 4-4.5, and the Low group (N = 6)had spans of  85 2-3. Answers on the Language Background Questionnaire were used to assign participants to one of two familiarity groups-—Familiar (N = 11) and Non-familiar (N = 11). Participants were required to answer yes/no questions following 25% of the experimental sentences and 25% of the filler sentences. Correct answers received a score of 1 and incorrect answers received a score of 0. Question accuracy, the percent of correct responses, was calculated for each subject for the questions that followed sentences spoken by the non-native and native English speakers. A repeated measures ANOVA with speaker (non-native speaker vs. native speaker) as the within-subjects factor was performed to determine the effect of speaker on question accuracy. In order to investigate normalisation to the non-native speaker over time, the trials for each participant were arranged in chronological order. The trials in the normalisation analysis included both experimental and filler trials. Response times had been previously corrected (see above) such that negative response times and response times that were more than two standard deviations from the mean were omitted. Response times for each participant were grouped into blocks in order to investigate normalisation to speaker over time. Two block sizes were used; blocks containing five trials (e.g., block 1 = trials 1-5, block 2 = trials 6-10 etc.), and blocks containing 15 trials (e.g., block 1 = trials 1-15, block 2 = trials 16-30 etc.). A repeated measures ANOVA design with block as the within-subjects variable was used to determine the effects of block on response times.  
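The trimming and aggregation steps described in this section can be summarised in a short analysis sketch. The code below is a minimal pandas/statsmodels illustration of the logic, not necessarily the software actually used: it substitutes a synthetic trial log for the real data, drops negative latencies, removes responses two or more standard deviations from each participant's condition mean, aggregates to the 12 condition means, and runs the within-subjects ANOVA. The between-subjects factors (span group and familiarity) are omitted here because statsmodels' AnovaRM handles within-subjects factors only.

```python
# Hedged sketch of the response-time cleaning and ANOVA described in 2.3.8.
# The trial log below is synthetic; in practice it would come from the
# experiment software (participant, condition labels, target-word RT in ms).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for participant in range(1, 23):                      # 22 listeners
    for speaker in ["native", "non-native"]:
        for stype in ["active", "conjoined", "embedded"]:
            for position in ["early", "late"]:
                for _ in range(5):                     # 5 trials per cell
                    rt = rng.normal(500 if speaker == "native" else 600, 80)
                    rows.append([participant, speaker, stype, position, rt])
df = pd.DataFrame(rows, columns=["participant", "speaker",
                                 "sentence_type", "position", "rt_ms"])

# 1. Discard invalid latencies (negative RTs: responses made before the
#    target word, or no response before the end of the sentence).
df = df[df["rt_ms"] >= 0]

# 2. Remove outliers: responses two or more SDs from the mean of each
#    participant x speaker x sentence type x position cell.
cells = ["participant", "speaker", "sentence_type", "position"]
cell_mean = df.groupby(cells)["rt_ms"].transform("mean")
cell_sd = df.groupby(cells)["rt_ms"].transform("std")
df = df[(df["rt_ms"] - cell_mean).abs() < 2 * cell_sd]

# 3. One mean RT per participant per condition (12 cells per participant).
means = df.groupby(cells, as_index=False)["rt_ms"].mean()

# 4. 2 (speaker) x 3 (sentence type) x 2 (position) repeated-measures ANOVA.
anova = AnovaRM(means, depvar="rt_ms", subject="participant",
                within=["speaker", "sentence_type", "position"]).fit()
print(anova.anova_table)
```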
CHAPTER 3: Results

3.1 Main Effects in the Word Monitoring Task

The main effect of sentence type was significant, F (2, 19) = 19.93, MSE = 116,650, p < .001 (see Table 5). The center-embedded subject-object sentences had longer response times than both the simple active and conjoined sentences. However, the simple active sentences had longer response times than the conjoined sentences even though they were assumed to be a less syntactically difficult sentence type. Follow-up pairwise comparisons showed that the differences between the mean response times of each pair of sentence types were significant: the center-embedded subject-object sentences had longer response times than the conjoined sentences (p < .001) and the simple active sentences (p = .004); and the simple active sentences had longer response times than the conjoined sentences (p = .009).

The main effect of speaker was significant, F (1, 20) = 241.16, MSE = 3,027,044, p < .001, such that there were longer response times for the non-native English speaker than the native English speaker (see Table 5). The main effect of target word position was significant, F (1, 20) = 17.31, MSE = 103,584, p < .001, with longer response times to target words in early (M = 522, SE = 12) than in late position (M = 482, SE = 14).

A main effect of speaker on question accuracy (% of questions accurately answered) was marginally significant, F (1, 21) = 4.149, MSE = 1090, p = .054. The mean question accuracy was 83.0% (SD = 4.2) for the questions pertaining to sentences spoken by the non-native English speaker and was 93.0% (SD = 2.3) for the questions following sentences spoken by the native speaker of English.

Table 5
Mean Response Times (N = 22) and Standard Error by Speaker, Sentence Type, and Target Word Position (in msec)

                               Sentence type
              Simple active           Conjoined             Center-embedded
Speaker       Early  Late   All       Early  Late   All     Early  Late   All      All
Native         406    383    398       370    350    354     433    415    431      395
              (20)   (19)   (13)      (20)   (16)   (12)    (22)   (18)   (15)     (12)
Non-native     600    598    598       600    570    578     713    552    649      609
              (27)   (26)   (20)      (22)   (26)   (18)    (28)   (38)   (22)     (16)
All            503    491    498       485    460    468     573    483    540
              (21)   (16)   (12)      (19)   (19)   (14)    (20)   (28)   (17)

The main effect of span group was not significant. Response times on the word monitoring task were not different between the High span group (N = 9, M = 506, SD = 78), the Medium span group (N = 7, M = 506, SD = 46), and the Low span group (N = 6, M = 492, SD = 34), F (2, 19) = .122, MSE = 5127, p = .886. Listening span scores for the High span group (span = 5-6) and the Medium group (span = 4-4.5) were both quite high compared to the Low group (span = 2-3). Therefore, another analysis was conducted comparing only two groups, the High group vs. the Low group. No significant main effect was observed in this reanalysis, F (1, 13) = .172, MSE = 8615, p = .685.

3.2 Interactions in the Word Monitoring Task

There was a significant two-way interaction between target word position and syntactic complexity, F (2, 19) = 8.09, MSE = 35,336, p = .001. Response times were significantly longer for early target words in center-embedded subject-object sentences (see Figure 1).

Figure 1. Mean response times and standard error for the early and late target word positions across sentence types.
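The two-way pattern shown in Figure 1 can be redrawn from the group means and standard errors in the All row of Table 5 (collapsed across speakers). The short matplotlib sketch below does this; it is an illustrative reconstruction, and the chart style is my own choice rather than that of the original figure.

```python
# Redraw the position x sentence-type means from Table 5 (All-listeners row).
import matplotlib.pyplot as plt
import numpy as np

sentence_types = ["simple active", "conjoined", "center-embedded"]
early_ms, early_se = [503, 485, 573], [21, 19, 20]
late_ms, late_se = [491, 460, 483], [16, 19, 28]

x = np.arange(len(sentence_types))
width = 0.35
fig, ax = plt.subplots()
ax.bar(x - width / 2, early_ms, width, yerr=early_se, capsize=4, label="early target")
ax.bar(x + width / 2, late_ms, width, yerr=late_se, capsize=4, label="late target")
ax.set_xticks(x)
ax.set_xticklabels(sentence_types)
ax.set_xlabel("Sentence type")
ax.set_ylabel("Mean response time (ms)")
ax.legend()
plt.show()
```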
A significant three-way interaction between target word position, syntactic complexity, and speaker was observed, F (2, 19) = 8.24, MSE = 40,259, p = .001, indicating that the longer response times found for early target words in center-embedded subject-object sentences were restricted to the non-native English condition (see Figure 2). No other interactions involving syntactic complexity, speaker, and target word position were significant.

Listening span group was used as a between-subjects factor in the repeated measures ANOVA. There were no significant interactions involving listening span when categorised as high, medium, and low or when categorised as high and low. Familiarity was also a between-subjects factor in the repeated measures ANOVA. There were no significant interactions found involving the familiarity variable.

Figure 2. Mean response times (msec) and standard error for the early and late target positions across sentence types (simple active, conjoined, center-embedded) for the NE (native English) and NNE (non-native English) speakers.

3.3 Adaptation to Non-Native Speech

A repeated measures ANOVA indicated a marginally significant difference in mean response times across blocks of five trials in the non-native speaker condition, F (4, 18) = 2.29, MSE = 44,132, p = .067. Table 6 shows the mean response times for early and late blocks of five trials. Because blocks of five trials may not be sufficient to observe robust changes in response times over time, another analysis was conducted in which the trials were divided into four blocks of 15 trials. Table 7 shows the mean response times for each block. The main effect of block again did not reach significance, F (3, 19) = 1.97, MSE = 13,471, p = .128, though there was a trend towards shorter response times from block 1 to block 2.

Table 6
Mean Response Times (in msec) to Target Words for Early (Blocks 1-3) and Late (Blocks 11-12) Occurring Blocks of 5 Trials for the Non-Native English Sentences

Block    Trial Numbers    Mean    Standard Deviation
1        1-5              548     132
2        6-10             617     169
3        11-15            590     148
11       50-55            619     155
12       56-60            517     118

Table 7
Mean Response Times (in msec) to Target Words for 4 Blocks of 15 Trials for the Non-Native English Sentences

Block    Trial Numbers    Mean    Standard Deviation
1        1-15             585     91
2        16-30            534     115
3        31-45            544     70
4        46-60            577     92

CHAPTER 4: Discussion

4.1 Introduction

This study examined the effects of listening to non-native English on the language processing abilities of native English listeners. In the discussion that follows, the effects of non-native English speaker, syntactic complexity, and target word position will be explored. The role of working memory in processing accented English will be discussed, as will the effects of familiarity with listening to accented English on processing of non-native English. Additionally, the native English listeners' ability to adapt to the non-native English over time will be considered. Finally, clinical implications and future research directions will be suggested.

4.2 Speaker

The present study found that when native English listeners monitored for words in the sentences spoken by the non-native English speaker, their response times were longer than when they were listening to sentences spoken by the native English speaker. This supports hypothesis 1 (chapter 1).
Non-native English can differ substantially from native English pronunciation as the ESL learner attempts to produce native-like phonological elements. A description of the sound segment and word stress patterns of the Cantonese learner of English in this study was presented in chapter 2. As with many Cantonese learners of English, the non-native English speaker in this study had difficulty with word-final voiced stops, interdental and palato-alveolar fricatives, /r/, and consonant clusters. Deviations in English word stress patterns also characterised her speech; multi-syllabic words often had syllables of equal stress, and weak words were frequently stressed in sentences. Decoding the non-native English input imposed greater demands on the language processing systems of the native English listeners, leading to an increase in response times.

Before comparing these findings with the results of other studies, it is important to revisit the distinction between on-line and off-line measures of language processing. Whereas off-line tasks tap into the final representation of an utterance, on-line measures tap into intermediate representations of the utterance. The present study has provided a first glance at the on-line processing of non-native English speech by native English listeners. Greater demands were placed on the listeners' language processing systems when they decoded non-native than native English. Previous studies have used off-line measures to study the effects of non-native English. These studies have found comparable results: native English listeners understood English spoken by native speakers more successfully than English spoken by non-native speakers. When sentences are spoken by native compared to non-native English speakers, native English listeners correctly judge sentences as true or false more often and more quickly (Munro & Derwing, 1995a; Munro, 1998), have higher listening comprehension scores (Anderson-Hsieh & Koehler, 1988), and are better able to orthographically transcribe utterances (Munro & Derwing, 1995a, 1995b; Munro, 1998).

Participants in the present study were presented with yes/no comprehension questions after 25% of the sentences in the word monitoring task. The analysis of the accuracy of responses to the comprehension questions provided a measure of the off-line comprehension of the native versus non-native English. The mean question accuracy for the questions following the native English input was 93% compared to 83% for questions following the non-native English sentences. Although this effect was only marginally significant, it reflects a trend of decreased comprehension of non-native English that is consistent with the off-line studies presented above. The motivation for including comprehension questions in the present study was to ensure that the participants attended to the content of the sentences rather than simply waiting for the target word to appear. The question accuracy scores suggest that this was, in fact, the case; correct responses were well above chance for both the native English and non-native English speakers. Anecdotally, many of the participants reported that they had to listen much more carefully to the non-native English speech and that, for the non-native English sentences, it was much more difficult both to attend to the meaning of the sentence (in order to answer the yes/no question) and to monitor for the word in the sentence.
One participant reported that she felt unable to do both tasks at once: she either listened to the content or monitored for the word. Another participant reported that she was able to answer the questions spoken by the native English speaker without thinking about it, but with the non-native English speaker, she often had to "replay the sentence in her head" in order to answer the question. This suggests that "top-down" processing may have been involved in determining the meaning of the non-native English sentences. The listener may have integrated information from higher levels of analysis, such as syntactic and semantic levels, in order to identify missing sounds or words.

4.3  Syntactic Complexity

The present study found that response times in the word monitoring task were longer for the more syntactically complex sentences (the subject-object sentences) than for the less syntactically complex sentences (the simple active and conjoined sentences), thereby confirming hypothesis 2 (chapter 1). This suggests that processing syntactically complex sentences increases the demand on the language processing system, which is reflected as an increase in the time required to process the sentence. These results are consistent with King and Just (1991), who found that subject-object relative clauses had longer response times in on-line reading tasks than did other, less syntactically complex sentence types.

The center-embedded subject-object sentence requires that the listener hold the representation of the first noun phrase, process the embedded clause, and then have the first noun phrase available to process as the subject of the main clause. Furthermore, the subject of the main clause is also the object of the embedded clause, so this noun phrase receives a thematic role in each clause (the latter assignment violating the normal "canonical" pattern of thematic role assignment). The sentence is more complex than the conjoined sentence where, although there are two separate clauses and the subject of the first clause must be stored and also processed as the subject of the second clause, the assignment of thematic roles follows the canonical pattern for each clause. The center-embedded sentence is also more difficult than the simple active sentence, which has only one clause.

Surprisingly, however, the simple active sentences had significantly longer response times than the conjoined sentences, suggesting that the simple active sentences placed heavier demands on language processing than the conjoined sentences. This finding is contrary to findings from other studies. For example, Caplan and Hildebrandt (1988) found that people with aphasia were significantly better able to enact the thematic roles in active sentences (e.g., The elephant hit the monkey) than in conjoined sentences (e.g., The elephant hit the monkey and hugged the rabbit). The finding in the present study is difficult to explain. In the present study, the simple active sentences were formed to match the content of the center-embedded sentences and the position in the sentence of both the early and the late target words. In order for this to occur, one of three changes was made to the subject noun phrase, as described in chapter 2. Table 8 shows examples of pairs of simple active and conjoined sentences. In each of the simple active examples, the subject noun phrase is more complex than in its conjoined counterpart.
The noun phrase in sentence (1) consists of two conjoined nouns; the noun phrase in sentence (2) includes a prepositional phrase ("beside the student"); and there is a possessive noun phrase in sentence (3) ("the family's most annoying neighbours"). The increased complexity of the subject noun phrase may have led to increased demands on processing resources for this sentence type. A study by Small, Kemper, and Lyons (2000) also showed that active sentences were more difficult to process than expected. In this study, participants with Alzheimer's disease had significantly more errors repeating active sentences than passive sentences, even though passive sentences are typically more difficult for this population. In their study, the active sentences included a prepositional phrase branching from the subject noun phrase, which made them similar to the active sentence (2) in the present study (Table 8). This may have increased their processing difficulty. Furthermore, Caplan and Hildebrandt (1988) found that dative sentences (e.g., The elephant gave the monkey to the rabbit), where an extra prepositional phrase adds a thematic role to the sentence, were more difficult than the active sentences.

Table 8
Three Examples of Simple Active and Conjoined Sentences

1. Simple active: The carpenter and the plumber suddenly decided to quit.
   Conjoined: The carpenter angered the plumber and decided to quit.
2. Simple active: The pencil beside the student slowly rolled off the desk.
   Conjoined: The pencil dropped from the student and rolled off the desk.
3. Simple active: The family's most annoying neighbours finally sold their house.
   Conjoined: The family detested the neighbours and sold their house.

Another characteristic of the simple active sentences that differed from their conjoined counterparts is the presence of an adverb before the main verb (e.g., "suddenly"). This adverb in the simple active sentences served to keep the early and late target words equal distances from each other across sentence types. For example, in sentence (1), "plumber" and "decided" are the 5th and 7th words in both sentence types. However, the presence of the adverb before the verb, which was also the late target word, may have increased the processing demands at that point in the sentence.

One important point is that no previous research has shown increased response times in a word monitoring task as a function of a hierarchy of syntactic complexity. Only one study has investigated syntactic complexity in a word monitoring paradigm. Haarmann and Kolk (1994) studied sensitivity to subject-verb agreement violations using a word monitoring task with two levels of syntactic complexity in Dutch. In their study, 20 Dutch listeners did not have longer response times to the target word in complex sentences (embedded) than in simple sentences (conjoined). Furthermore, 15 participants with Broca's aphasia also did not show a significant difference in response times between the conjoined and embedded sentences. However, Haarmann and Kolk did find a marginally significant effect of subject-verb agreement violations when they occurred in the simple sentences (conjoined), but not when they occurred in the complex sentences (embedded), in some conditions for participants with Broca's aphasia. These results and the findings of the present study suggest that the word monitoring task can be sensitive to the effects of grammatical complexity on language processing.
4.4  Target Word Position

Longer response times were found in this study for target words that occurred in the early position than for those that occurred in the late position. This is consistent with previous research that has found faster response times in word monitoring tasks the later the target word appears in the sentence (Marslen-Wilson & Tyler, 1975, 1980). This decreased response time may result because word recognition is aided by the syntactic and semantic structure that develops over the course of the sentence.

In this study, the early target words were nouns and the late target words were verbs. Nouns were more frequent in the sentences than were verbs; therefore, there were more potential candidates to choose from when the target word was a noun, potentially making the target nouns more challenging to access than the target verbs. However, verbs also appeared more than once in many of the experimental and filler trials, and the target verbs were such that they could logically appear in more than one position in the sentence. Furthermore, cues as to the word class of an upcoming word were present for both the target nouns and the target verbs; for example, the determiner "the" would alert the listener to an upcoming noun, and a noun phrase followed by the adverb "slowly" would be a cue for a following verb.

A further question that arises is whether characteristics of the stimuli from each word class made the early target nouns more difficult to process and identify than the verbs. A post hoc analysis of word frequency and lexical density was performed via an on-line lexical database based on the 20,000-word Hoosier Mental Lexicon (Sommers, 2002). Word frequency (Kucera & Francis, 1967) is a measure of occurrences per million. If a word is high frequency, identification of the word should be faster (see Ferreira et al., 1996). Lexical density is the number of words that can be created from a target word by substitution, addition, or deletion of one phoneme from the word. If a word has a high lexical density, identification should slow down because there is more competition from surrounding neighbours in the lexicon. The early target nouns (N = 43) had a mean frequency of 57.7 per million (SD = 69.3) and the late target verbs (N = 52) had a mean frequency of 127.8 per million (SD = 219.9). The difference between these means was statistically significant, t (93) = -2.01, p = .047. Two of the verbs had very high word frequency values. When these two verbs were omitted from the analysis, the late target verbs (N = 50) had a mean frequency of 89.5 per million (SD = 105.8). In this case, the difference between the means failed to reach significance, t (91) = -1.68, p = .095. Mean lexical density was also higher for the target verbs (N = 52, M = 10.7, SD = 11.0) than for the target nouns (N = 43, M = 6.7, SD = 7.8). The difference between the mean lexical densities was significant, t (93) = -2.02, p = .046.

In summary, the target verbs were both higher frequency and higher density than the nouns. Thus, the effects of these two characteristics may have counteracted each other. The opposite pattern occurred for the nouns: the nouns had lower mean lexical density, which should have made their selection faster, but they were also lower frequency than the verbs, which should have led to slower access.
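Because lexical density is defined purely in terms of one-phoneme edits, the measure itself is easy to illustrate. The sketch below is a toy illustration only; it does not use the Hoosier Mental Lexicon database cited above, and its miniature lexicon and phonemic transcriptions are hypothetical.

# A toy illustration of the lexical density measure described above: the
# number of lexicon entries reachable from a target word by substituting,
# adding, or deleting exactly one phoneme.
def one_phoneme_apart(word_a, word_b):
    """True if word_b differs from word_a by exactly one phoneme
    substitution, addition, or deletion."""
    la, lb = len(word_a), len(word_b)
    if abs(la - lb) > 1 or word_a == word_b:
        return False
    if la == lb:  # substitution
        return sum(a != b for a, b in zip(word_a, word_b)) == 1
    shorter, longer = (word_a, word_b) if la < lb else (word_b, word_a)
    # addition/deletion: removing one phoneme from the longer form must
    # leave the shorter form
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def lexical_density(target, lexicon):
    """Number of one-phoneme neighbours of target in the lexicon."""
    return sum(one_phoneme_apart(target, entry) for entry in lexicon)

# Hypothetical phonemic transcriptions, one tuple of phoneme symbols per word.
toy_lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ae", "p"),
               ("k", "ae", "t", "s"), ("ae", "t"), ("d", "aa", "g")]
print(lexical_density(("k", "ae", "t"), toy_lexicon))  # -> 4 (bat, cap, cats, at)

Applied to a full phonemically transcribed lexicon, this neighbour count is the kind of density value that the figures reported above for the target nouns and verbs reflect.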
Therefore, insofar as word frequency and lexical density influence lexical access, the target nouns in the present study should not have presented an advantage over the verbs in ease or speed of access.

Word stress patterns may also have contributed to the differences in accessing the target nouns versus verbs. In multi-syllabic words in English, stress is most frequently placed on the first syllable. Because of their regularity, words that start with a strong (stressed) syllable are accessed more readily than words that start with a weak (unstressed) syllable (Cutler, Dahan, & van Donselaar, 1997). In the present study, although the mean number of syllables of the target nouns (N = 60, M = 2.1, SD = .9) did not differ significantly from the mean number of syllables of the verbs (N = 60, M = 1.8, SD = .8), the verbs had a higher proportion of multi-syllabic targets that started with a weak syllable. Sixty-three percent (20/32) of the multi-syllabic verbs began with a weak syllable compared to 17% (7/41) of the nouns. This finding suggests that the nouns may have been easier to process than the verbs; however, this is contrary to the results of the present study, where the early targets (nouns) had longer processing times than the late targets (verbs).

4.5  Speaker and Syntactic Complexity

It was hypothesised that response times would be longest for the subject-object sentences when spoken by the non-native English speaker (see hypothesis 4, chapter 1). This interaction was not significant. Although there were main effects of both speaker and syntactic complexity, as reported above, they did not interact. However, when target position was included, this hypothesis received support, as discussed in the next section.

4.6  Syntactic Complexity, Target Position, and Speaker

The hypothesis that there would be an interaction between target position and syntactic complexity was supported (see hypothesis 5, chapter 1). However, the interaction did not occur in the predicted direction. Response times were predicted to be longer for late target words in the most syntactically difficult sentence type. Instead, response times were longer for the early target words in the subject-object sentences. The target words occurred in positions that had differing degrees of processing demand according to the syntactic complexity of the sentence. Examples of each sentence type follow (target words are underlined):

(a) simple active: The carpenter and the plumber suddenly decided to quit.
(b) conjoined: The carpenter angered the plumber and decided to quit.
(c) center-embedded: The carpenter who the plumber angered decided to quit.

In the center-embedded sentence (c), the late target word occurred in a position where sentence processing demands are higher than in the simple and conjoined sentences. King and Just (1991) found that the highest processing demands in the center-embedded sentences occurred at the verbs. Therefore, a late target word was expected to yield a slower response time in the center-embedded sentence than both the early target word in the same sentence and the late target word in the simple and conjoined sentences. Processing demands and response times at the early target word were not expected to differ much over sentence types. However, it was the early target word position that resulted in a significant change in response time across sentence types. Response times for the late target word were more stable across sentence types.
These results suggest that the demands on language processing in the region of the early target word were higher for the center-embedded sentence type than for the simple and conjoined sentence types. The increase in processing demand could be incurred by the beginning of the relative clause, marked by "who" or "that" in the center-embedded sentences; the early target is the subject noun phrase of the embedded clause, which directly follows the relative clause marker. This differs from the other sentence types, where the early target word is the object of the verb that immediately precedes it (conjoined sentences) or the second noun in a complex noun phrase (simple sentences). In other words, listeners may have been cued by the relative pronoun that a complex sentence was being presented. This may have led them to reallocate processing resources to sentence comprehension, leaving less for word monitoring (and thus slower response times at the early target).

A three-way interaction between target word position, syntactic complexity, and speaker was also hypothesised (see hypothesis 5, chapter 1) and supported by the results. However, the results from this interaction were opposite to the predictions with respect to target position. It was expected that response times would be longest for late target words, in syntactically complex sentences, spoken by the non-native English speaker. Instead, early target words in the center-embedded sentences spoken by the non-native English speaker resulted in the slowest response times. Nevertheless, this interaction shows that the non-native English input increased the demands on processing resources. This increase in demand had the greatest effect on language processing when the language processing demands were already high due to the syntactic complexity of the sentence and the position of the target word.

4.7  Working Memory Capacity

It was hypothesised that there would be a main effect of working memory capacity, as measured by the listening span task (see hypothesis 3, chapter 1). Scores on the listening span task were used to categorise the participants as belonging to one of three groups: the high-span group, the medium-span group, and the low-span group. High-span listeners were predicted to have shorter response times in the word monitoring task as a result of greater capacities to process language. In the present study, the main effect of working memory span was not significant. The effect of working memory span was also not significant when only the high- and low-span groups were used to ensure that the listening span scores were maximally different.

It was hypothesised that listeners with high working memory capacity would be better able to cope with the added processing demands produced by both the increased syntactic complexity of the sentences and the non-native English input. Listeners with low listening span scores were expected to have slower response times for the center-embedded sentences than the listeners with high spans (see hypothesis 6, section 1.6). Listeners with low spans were also expected to have slower response times when listening to sentences spoken by the non-native English speaker than the native English speaker (see hypothesis 7, section 1.6).
Furthermore, the interaction of syntactic complexity and non-native English with working memory capacity was expected to result in slower response times for the participants with low spans for the center-embedded sentences spoken by the non-native English speaker (see hypothesis 7, section 1.6). However, none of these interactions was significant.

The lack of interactions with working memory capacity scores in this study adds fuel to the debate on the role and nature of working memory in language processing. The present results are consistent with Caplan and Waters' (1999) separate resource theory, whereby working memory is specialised for different language tasks. In this view, the listening span task draws on a working memory resource that is dedicated to conscious processing because participants are required to consciously hold verbal information in memory. The on-line word monitoring task, however, is viewed as an interpretive language processing task because participants are unconsciously processing linguistic information, and it therefore draws on a separate working memory resource. Because the listening span task and the word monitoring task draw on different working memory supplies, having a greater capacity in one will not increase the supply available to the other. Other studies have also failed to find differences in performance on language processing tasks between high-span and low-span participants (e.g., Waters & Caplan, 1996b; Caplan & Waters, 1995, 1999). Lack of an effect of working memory on response times in a word monitoring task has also been found in a study of language processing in children. Montgomery (2000) found that children with specific language impairment and normally developing children showed a correlation between working memory and sentence comprehension in an off-line comprehension task, but not in an on-line word monitoring task.

Two other views of working memory, Just and Carpenter's capacity theory and MacDonald and Christiansen's (2002) connectionist approach, do not appear to accommodate the absence of effects of working memory span on language processing found in the present study. Just and Carpenter (1992) propose that all language processing tasks draw on a single pool of working memory resources. Therefore, participants with high working memory capacities, as measured by the listening span task, would have a greater amount of working memory resources available for all language processing tasks than would participants with low capacities. When the demands imposed by a language task are high, differences in speed or accuracy of language processing become apparent between high- and low-span participants. Differences in language processing with respect to working memory span have been reported in the literature (e.g., Ferreira & Clifton, 1986; King & Just, 1991; Just & Carpenter, 1992; MacDonald et al., 1992). MacDonald and Christiansen's (2002) connectionist approach to language processing also supports differences in performance on language processing tasks as a result of individual working memory spans. In this view, however, working memory span does not measure the size of an individual's working memory, but instead reflects an individual's experience with language. Experience with language can arise because of the frequency of certain language forms, such as simple sentences, and because of individual experience, such as reading.
A language processing network's capacity to process information is determined by characteristics of the network and by experience with language input that strengthens connections within the network. Support for the effects of experience has been found (e.g., MacDonald & Christiansen, 2002). The lack of correlation in the present study may be explained from a connectionist view by noting the different processing demands and patterns of activation required in processing complex sentences (word monitoring task) versus holding words over time for later recall (listening span task). In this view, experience with one task does not generalise to the other.

Although the present study did not find differences in response times on the word monitoring task with respect to listening span scores, some caution is necessary before rejecting the capacity theory and the connectionist approach in favour of the separate resource theory of language processing. The distribution of listening span scores for the participants in the present study was not normal; the majority of participants had scores above the mid-way score. With more participants and additional measures of working memory capacity, there may have been sufficient power for effects of working memory to reach significance.

4.8  Language Processing Resources

Although the nature of working memory for language processing is under debate (whether it is a single resource, modular, or a property of a connectionist network), the existence of a language processing capacity is not. Just and Carpenter and Caplan and Waters agree that working memory for language processing is limited by the amount of activation available to support storage and processing; MacDonald and Christiansen believe that working memory is limited as a result of limits in the architecture of the network. If a language task requires more working memory resources than are available, or if the language processing network is not experienced with the language task, the result is slower or less complete language processing. The results of the present study support this view. The results will first be discussed with respect to the capacity theory (Just & Carpenter) and the separate resource theory (Caplan & Waters) and will then be looked at in terms of the connectionist approach (MacDonald & Christiansen).

When participants listened to sentences spoken by the non-native English speaker, their response times on the word monitoring task were slower than when the sentences were produced by the native English speaker. Increases in the syntactic difficulty of the sentences also increased the response times. Decoding acoustic-phonetic information requires processing resources, as does processing the syntax and semantics of the sentence. When the speaker is a non-native English speaker whose pronunciation differs from that of standard English speakers, listeners may need to dedicate a greater proportion of working memory resources to the task of matching the acoustic input to phonological forms. Response times also varied with respect to the position of the target word in the sentences. This reflected the varying demand on working memory resources at different points within a particular sentence. Late target words required fewer processing resources to detect because the preceding semantic and syntactic context possibly provided a head start in activation.
When an early target was presented in the most syntactically difficult sentence type, processing demands at that point increased to a greater degree than for the less complex sentence types. Decoding non-native English acoustic-phonetic input further increased the language processing demands at this point in the sentence. Both the capacity theory and the separate resource hypothesis agree that decoding acoustic information and processing syntactic information draw on a common pool of working memory resources. Therefore, the tasks of decoding non-native English input, monitoring for a word, and processing syntax would compete for the same pool of working memory resources. Demands imposed by one aspect of the task would affect other aspects of the task. The most difficult target word position, the most difficult sentence type, and the most difficult acoustic-phonetic input push working memory capacity to its limits, thereby resulting in slower processing times and longer response latencies in the word monitoring task.

The connectionist view of language processing would attribute the effects of speaker in the present study to experience with language. Listeners have more experience decoding standard English, and so the connections between acoustic input and phonological correspondences are stronger, resulting in faster processing. Acoustic segments that deviate from their phonological prototype have weaker or fewer connections with the target phoneme and therefore take longer to process. Listeners also have more experience with simple syntactic forms in their everyday language environment than with complex syntactic forms such as center-embedded subject-object sentences (Dick & Elman, 2001). Language processing networks develop as a result of this experience and have stronger and faster activation pathways for simple and conjoined sentences. Again in the connectionist framework, words that occur later in the sentence will have faster activation times because the network uses the syntactic and semantic information provided by the preceding context to predict upcoming words.

Target word identification, syntactic processing, and decoding of acoustic input are all handled, in the connectionist approach, by a common multi-layer network. Inexperience with one of the above features will slow down processing. Inexperience with two or three of the features will slow processing down even more. Therefore, when the listeners heard syntactically difficult sentences spoken by the non-native English speaker and had to monitor for the early target word, their language processing networks were plagued by weaker connections and language processing was slowed.

4.9  Familiarity

Familiarity with listening to accented speech was hypothesised to interact with speaker such that listeners who reported that they were familiar with accented speech would have faster response times to the non-native English input than would the listeners who were not familiar with accented English (see hypothesis 8, chapter 1). This interaction was not significant. Furthermore, it was hypothesised that the listeners who were highly familiar with non-native English accents would have faster response times than listeners with low familiarity when listening to accented sentences that were syntactically complex. The interaction between familiarity, speaker, and syntactic complexity was also not significant.
The familiarity variable was included in this study to investigate the experience-based connectionist approach to working memory and language processing. Experience listening to accented English is a specific form of experience that has not been tested by a connectionist language processing model. Connectionist models have tended to focus on broader measures of experience, such as a greater experience with language due to increased reading. Furthermore, it is experience with the content of sentences, such as their syntactic forms and word regularities, that has been studied, not experience with properties of the acoustic structure of sentences, such as the acoustic variation found in accents. However, it follows from this model that experience with non-native English input would strengthen the connections between acoustic parameters and phonological correspondences that deviate more greatly from the standard, or prototypical, English forms, leading to faster processing. Greater experience listening to non-native English could enlarge the range of acceptable forms of a target phoneme and strengthen connections between the attempted phoneme and its target, thereby increasing the speed of activation of targets and decreasing the demand on the language processing system.

The absence of an effect of familiarity with non-native English input in the present study does little to support the connectionist approach to working memory and language processing. However, caution must be used in interpreting these null results. Familiarity groups (Familiar or Non-familiar) were determined by a language background questionnaire that was completed by the participants at the outset of the experimental session. Listeners were asked to report the frequency of their interactions with non-native English speakers (in general, and with Cantonese speakers specifically) and the type of exchanges that they had, from brief greetings to social conversations. These are subjective measures of experience listening to accents. Furthermore, criteria for determining familiarity group were not available from previous research. The difference in reported language experience was not always great between a participant classified in the Familiar group and one classified in the Non-familiar group.

Failure to find effects of familiarity on the processing of non-native English speech is consistent with previous research by Munro and Derwing (1995a), who also judged familiarity on a subjective, participant-reported scale (regular vs. little/no contact with non-native speakers). However, other studies have found effects of experience with accented English input in off-line tasks. Self-reported familiarity with various accents was correlated with participants' ability to transcribe non-native English and to identify the first language of the speaker (Derwing & Munro, 1997). Gass and Varonis (1984) found that listeners' familiarity with non-native speech in general, with a particular non-native accent, and with a particular non-native speaker all tended to lead to increased comprehension of non-native speech as measured by an off-line sentence transcription task. This suggests that experience with non-native English gained through exposure during the experiment may be a way of manipulating familiarity with accents experimentally. In such a case, listeners' experience with non-native English could be better controlled, and effects of familiarity in on-line language processing tasks might become apparent.
The following section discusses the effects of a short window of experience with the accent on performance in the word monitoring experiment.

4.10  Adaptation to Non-Native English

Acoustic variation is present in all standard speech, but it can be far greater in non-native English speech. Over time, native English listeners may come to recognise particular acoustic patterns in the non-native English input and be able to match them more easily to forms stored in memory. Both Pisoni (1993) and Dupoux and Green (1997) suggest that acoustic information is extracted through experience listening to a particular speaker and stored for later use. As a consequence, the demands on working memory resources created by perceptual analysis decrease, more resources are available for language processing, and language processing speeds up.

It was hypothesised that response times would be longer for trials that occurred early in the experiment compared to trials that occurred late (see hypothesis 9, chapter 1). There was a marginally significant difference in response times between blocks of five trials that were presented towards the beginning of the experiment and blocks that were presented towards the end. The mean response times for these blocks, however, did not reflect a steady improvement in response times over time. Blocks of five trials were initially chosen because research concerning adaptation to compressed speech has shown that normalisation occurs between the 5th and 15th trial items. However, it was proposed post hoc that adaptation to this non-native English speaker may take place more gradually and that blocks of five trials may not be sensitive enough to capture changes over time. The data were therefore reanalysed in four blocks of 15 trials. Again, the results were not significant; however, there was a trend towards shorter response times in the second 15 trials than in the first 15 trials, suggesting that listeners were adapting to the non-native English speech early on in the experiment.

There are two patterns of interest in the mean response times by block in the present study. First, mean response times tended to increase between the first five trials and the second five trials, such that listeners were faster at monitoring for the word initially than they were five trials later. One explanation for this is that the participants were being extra vigilant at the beginning of the experimental task and were attending to the task to a greater degree in the initial trials than in subsequent trials. The other pattern of interest is that response times tended to increase again in the last few trials. In other words, listeners became faster at monitoring for the word by the middle of the experiment, but then became slower again towards the end. This pattern may be explained as a decrease in attention to the task and/or an increase in fatigue towards the end of the experiment. In fact, some participants reported that they found their attention wandering towards the end of the task.

Studies investigating normalisation to accented English have not been reported in the literature. However, the tendency towards adaptation to the non-native English speaker seen in the present study is consistent with research concerning other forms of acoustic variability. Word recognition scores declined when the speaker varied and when the speaking rate varied (Sommers et al., 1994). Dupoux and Green (1997) found that listeners adapted to compressed speech over time.
Furthermore, Pisoni (1993) found that when listeners were trained to recognise particular voices, their word recognition scores in noise were better for the voices they knew than for new voices. These studies, however, measured the effects of acoustic variability and adaptation in off-line tasks. The present study suggests that on-line measures may also be sensitive to normalisation to acoustic variability over time.

4.11  Implications

This study points to a number of directions for future research. First, it shows that on-line language processing measures can be used to investigate native English listeners' comprehension of non-native English speech. Future studies could continue to probe the influence of variations in acoustic-phonetic input, such as accented English, on real-time language processing. Second, the word monitoring task revealed differences in language processing as a result of the syntactic complexity of the sentences. Syntactic complexity differences have previously been manipulated in other language processing paradigms, such as the moving window paradigm; however, the present study shows that the word monitoring paradigm can also be employed to investigate language processing where syntactic complexity manipulations are a variable of interest. The findings need to be replicated, however, since the locus of the effects of syntactic complexity in this study differed from predictions. Third, although the results of this study did not show effects of familiarity with non-native English, anecdotal reports suggest that comprehension of accented speech may become better for some people with more exposure to the accent. Future studies could manipulate experience by exposing some participants to a particular non-native accent or non-native speaker (e.g., Gass & Varonis, 1984). Finally, the present study shows that on-line language processing tasks may be sensitive to normalisation to new input. Future research could investigate adaptation to accented English while keeping the stimuli in the on-line task constant instead of varying the syntactic complexity, as in the present study.

Clinical implications also arise from the results of this study. The present study has shown that the language processing system is challenged by increases in the processing difficulty of the input, whether that be due to acoustic variability, such as is present in accented English, or to changes in syntactic complexity. When language input is difficult to process in more than one way, there is an even larger effect on the language processing system's ability to process the incoming speech quickly and completely. In the present study, we saw that when three variables of the input (target word position, syntax, and speaker) were at their most difficult levels, listeners' ability to process the sentences was slowed. It has been suggested that some clinical populations, such as people with aphasia or dementia and children with developmental language delays, may have decreased ability to process language because of reduced processing resources. Haarmann, Just, and Carpenter (1997) developed a computer model of sentence comprehension that simulated a reduction in available processing resources; this model led to a pattern of language comprehension that was consistent with language comprehension patterns found in people with aphasia.
Deficits in working memory have also been suggested to underlie language processing difficulties for people with Alzheimer's disease (e.g., Kempler et al., 1998; Small et al., 2000) and for children with language impairments (e.g., Montgomery, 2000). The present study showed that it is possible to decrease language processing efficiency in non-clinical populations when processing demands are high. It follows that the combined effects of input variables, such as degraded acoustic input and complex language forms, may tax the language processing systems of clinical populations to an even greater degree. Conversely, controlling these variables may increase the processing resources that are available for language processing tasks.

4.12  Conclusions

This study has shown that real-time sentence processing is affected by both acoustic features of the input and syntactic characteristics of the sentence. Non-native English phonological and intonational patterns increased processing demands such that word monitoring response times increased and question accuracy decreased. Syntactically complex sentences also placed higher demands on language processing and yielded longer response times. Word monitoring was affected by the position of the target word in the sentence, with early target words resulting in longer response times. Interactions between target word position, syntactic complexity, and speaker yielded the slowest response times for early target words in complex sentences spoken by the non-native English speaker. This suggests that syntactic processing and acoustic-phonetic decoding are tackled by a common language processing system. Demands incurred by one component of processing affect the system's ability to process other language components. All three theories of working memory presented in this study can explain these results. From the viewpoints of the capacity theory and the separate resource theory, these language components share a common and limited set of processing resources; acoustic-phonetic decoding and syntactic processing compete for these resources. The connectionist model proposes that these effects are a reflection of the existence and strength of connections in a language processing network constructed largely by experience.

The lack of effect of working memory span on response times in the word monitoring task lends support to Caplan and Waters' separate resource theory of working memory, which advocates separate working memory resources for interpretive and post-interpretive language processing tasks. Furthermore, familiarity with accented English also failed to have a significant effect on response times. This finding is contrary to the connectionist approach to language processing, which proposes that experience with language input is integral to the development of a highly functioning language processing system. The findings for both working memory and familiarity with accent should be interpreted with caution, however, because the participants were not evenly distributed across the possible range of scores for either variable, and additional measures of each variable need to be considered. Participants' response times tended to improve from the first 15 to the second 15 trials, showing that adaptation to the non-native English accent may have occurred.
As the participants' language processing systems became better able to identify target phonemes and word stress patterns that differed from native English prototypes, more language processing resources were available for the word monitoring task. This suggests that listeners are able to deal with variability in acoustic input over time.

This study has shown that an on-line language task reveals the effects of listening to non-native English speakers on the language processing abilities of native English listeners. Furthermore, it has shown that the word monitoring task can reveal differences in processing of sentences of different syntactic complexities. Finally, it shows that on-line language processing tasks are sensitive to normalisation to new input.

REFERENCES

Anderson-Hsieh, J., & Koehler, K. (1988). The effect of foreign accent and speaking rate on native speaker comprehension. Language Learning, 38(4), 561-613.

Anderson-Hsieh, J., Johnson, R., & Koehler, K. (1992). The relationship between native speaker judgments of nonnative pronunciation and deviance in segmentals, prosody, and syllable structure. Language Learning, 42(4), 529-555.

Baddeley, A.D. (1992). Working memory. Science, 255, 556-559.

Bürki-Cohen, J., Miller, J.L., & Eimas, P.D. (2001). Perceiving non-native speech. Language and Speech, 44(2), 149-169.

Caplan, D. & Hildebrandt, N. (1988). Disorders of syntactic comprehension. Cambridge, MA: The MIT Press.

Caplan, D. & Waters, G. (1995). Aphasic disturbances of syntactic comprehension and working memory capacity. Cognitive Neuropsychology, 12, 637-649.

Caplan, D. & Waters, G. (1996). Syntactic processing in sentence comprehension under dual-task conditions in aphasic patients. Language and Cognitive Processes, 11, 525-551.

Caplan, D. & Waters, G. (1999). Verbal working memory and sentence comprehension. Behavioral and Brain Sciences, 22, 77-126.

Chan, A.Y.W. & Li, D.C.S. (2000). English and Cantonese phonology in contrast: Explaining Cantonese ESL learners' English pronunciation problems. Language, Culture, and Curriculum, 13(1), 67-85.

Cool Edit 2000 [Computer software]. (2000). Phoenix, AZ: Syntrillium Software Corporation.

Cutler, A., Dahan, D. & van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

Daneman, M. & Carpenter, P.A. (1980). Individual differences in comprehending and producing words in context. Journal of Memory and Language, 25, 1-18.

Daneman, M. & Merikle, P.M. (1996). Working memory and language comprehension: A meta-analysis. Psychonomic Bulletin and Review, 3(4), 422-433.

Derwing, T.M. (1996). Elaborative detail: Help or hindrance to the NNS listener? Studies in Second Language Acquisition, 18(3), 238-297.

Derwing, T.M. & Munro, M.J. (1997). Accent, intelligibility, and comprehensibility: Evidence from four L1s. Studies in Second Language Acquisition, 19(1), 1-16.

Dick, F. & Elman, J.L. (2001). The frequency of major sentence types over discourse levels: A corpus analysis. Center for Research in Language Newsletter, 13(1), 3-18.

Dupoux, E. & Green, K. (1997). Perceptual adjustment to highly compressed speech: Effects of talker and rate changes. Journal of Experimental Psychology, 23(3), 914-927.

E-Prime Version 1.0 [Computer software]. (2002). Pittsburgh, PA: Psychology Software Tools, Inc.

Ferreira, F. & Clifton, C. (1986). The independence of syntactic processing. Journal of Memory and Language, 25, 348-368.
Ferreira, F., Henderson, J.M., Anes, M.D., Weeks, P.A. & McFarlane, D.K. (1996). Effects of lexical frequency and syntactic complexity in spoken language comprehension: Evidence from the auditory moving window technique. Journal of Experimental Psychology: Learning, Memory and Cognition, 22, 324-335.

Gass, S. & Varonis, E.M. (1984). The effect of familiarity on the comprehensibility of nonnative speech. Language Learning, 34(1), 65-89.

Haarmann, H.J. & Kolk, H.H.J. (1994). On-line sensitivity to subject-verb agreement violations in Broca's aphasics: The role of syntactic complexity and time. Brain and Language, 46(4), 493-516.

Haarmann, H.J., Just, M.A. & Carpenter, P.A. (1997). Aphasic sentence comprehension as a resource deficit: A computational approach. Brain and Language, 59, 76-120.

Hawkins, S. & Warren, P. (1994). Phonetic influences on the intelligibility of conversational speech. Journal of Phonetics, 22, 493-511.

Just, M.A. & Carpenter, P.A. (1992). A capacity theory of comprehension: Individual differences in working memory. Psychological Review, 99(1), 122-149.

King, J. & Just, M.A. (1991). Individual differences in syntactic processing: The role of working memory. Brain and Language, 41, 275-288.

Kempler, D., Almor, A. & MacDonald, M.C. (1998). Teasing apart the contribution of memory and language impairment in Alzheimer's Disease: An on-line study of sentence comprehension. American Journal of Speech-Language Pathology, 7(1), 61-67.

Kempler, D., Almor, A., Tyler, L.K., Anderson, E.S. & MacDonald, M.C. (1998). Sentence comprehension deficits in Alzheimer's disease: A comparison of off-line vs. on-line sentence processing. Brain and Language, 64, 297-316.

Kilborn, K. & Moss, H. (1996). Word monitoring. Language and Cognitive Processes, 11(6), 689-694.

Li, C.N., & Thompson, S.A. (1987). Chinese Dialect Variations and Language Reform. In T. Shopen (Ed.), Languages and their Status (pp. 295-335). Philadelphia: University of Pennsylvania Press.

MacDonald, M.C. & Christiansen, M.H. (2002). Reassessing working memory: A comment on Just & Carpenter (1992) and Waters & Caplan (1996). Psychological Review, 109(1), 35-54.

MacDonald, M.C., Just, M.A. & Carpenter, P.A. (1992). Working memory constraints on the processing of syntactic ambiguity. Cognitive Psychology, 24, 56-98.

Marslen-Wilson, W.D. & Tyler, L.K. (1975). Processing structure of spoken language comprehension. Nature, 257, 784-786.

Marslen-Wilson, W.D. & Tyler, L.K. (1980). The temporal structure of spoken language understanding. Cognition, 8, 1-71.

Marslen-Wilson, W.D., Brown, C., & Tyler, L.K. (1988). Lexical representations in language comprehension. Language and Cognitive Processes, 3, 1-21.

Miyake, A., Carpenter, P. & Just, M. (1994). A capacity approach to syntactic comprehension disorders: Making normal adults perform like aphasic patients. Cognitive Neuropsychology, 11, 671-717.

Miyake, A., Emerson, M.J., & Friedman, N.P. (1999). Good interactions are hard to find. Behavioral and Brain Sciences, 22, 108-109.

Montgomery, J.W. (2000). Relation of working memory to off-line and real-time sentence processing in children with specific language impairment. Applied Psycholinguistics, 21, 117-148.

Mullenix, J.W., Pisoni, D.B., & Martin, C.S. (1988). Some effects of talker variability on spoken word recognition. Journal of the Acoustical Society of America, 85(1), 365-378.

Munro, M.J. (1998). The effects of noise on the intelligibility of foreign-accented speech. Studies in Second Language Acquisition, 20(2), 139-154.

Munro, M.J. & Derwing, T.M. (1995a). Processing time, accent, and comprehensibility in the perception of native and foreign-accented speech. Language and Speech, 38(3), 289-306.

Munro, M.J. & Derwing, T.M. (1995b). Foreign accent, comprehensibility, and intelligibility in the speech of second language learners. Language Learning, 45, 73-97.

Munro, M.J. & Derwing, T.M. (1998). The effects of speaking rate on listener evaluations of native and foreign-accented speech. Language Learning, 48(2), 159-182.

Pallier, C., Sebastian-Galles, N., Dupoux, E., Christophe, A., & Mehler, J. (1998). Perceptual adjustment to time-compressed speech: A cross-linguistic study. Memory and Cognition, 26(4), 844-851.

Pisoni, D.B. (1993). Long-term memory in speech perception: Some new findings on talker variability, speaking rate, and perceptual learning. Speech Communication, 13, 109-125.

Rayner, K., Carlson, M. & Frazier, L. (1993). The interaction of syntax and semantics during sentence processing: Eye movements in the analysis of semantically biased sentences. Journal of Verbal Learning and Verbal Behavior, 22, 358-374.

Salthouse, T.A. (1991). Theoretical perspectives on cognitive aging. Hillsdale, NJ: Erlbaum.

Sommers, M.S. (2002). Speech and Hearing Lab Neighborhood Database. Available on the Internet: http://128.252.27.74/Neighborhood/SearchHome.asp.

Sommers, M.S., Nygaard, L.C., & Pisoni, D.B. (1994). Stimulus variability and spoken word recognition. I. Effects of variability in speaking rate and overall amplitude. Journal of the Acoustical Society of America, 96(3), 1314-1324.

Small, J.A., Kemper, S. & Lyons, K. (2000). Sentence repetition and processing resources in Alzheimer's disease. Brain and Language, 75(2), 232-258.

Tyler, L.K. (1992). Spoken language comprehension: An experimental approach to disordered and normal processing. Cambridge, MA: MIT Press.

Varonis, E.M., & Gass, S. (1982). The comprehensibility of non-native speech. Studies in Second Language Acquisition, 4, 114-136.

Waters, G. & Caplan, D. (1996a). The capacity theory of sentence comprehension: A reply to Just and Carpenter. Psychological Review, 103, 761-772.

Waters, G. & Caplan, D. (1996b). Processing resource capacity and the comprehension of garden path sentences. Memory and Cognition, 24, 342-355.

Waters, G. & Caplan, D. (submitted). The effect of a digit load on syntactic processing in subjects with high and low working memory capacity.

Waters, G.S., Caplan, D. & Hildebrandt, N. (1987). Working memory and written sentence comprehension. In M. Coltheart (Ed.), Attention and Performance XII: The psychology of reading. Erlbaum.

Waters, G.S., Caplan, D. & Rochon, E. (1995). Processing capacity and sentence comprehension in patients with Alzheimer's disease. Cognitive Neuropsychology, 12, 1-30.

Yeni-Komshian, G. (1993). Speech perception. In J. Gleason & N. Ratner (Eds.), Psycholinguistics (pp. 89-131). Fort Worth: Harcourt Brace College Publishers.

APPENDIX A: Comprehensibility Ratings

Median Comprehensibility Ratings and Ranges for the Non-Native English and Native English Speakers

Speaker                  Median   Range
Non-native English 1     3        1-9
Non-native English 2     6        2-9
Non-native English 3     3        1-8
Non-native English 4     7        2-9
Non-native English 5     3        1-8
Native English 1         1        1-2
Native English 2         1        1-2
Native English 3         1        1-2

a Speaker selected for the word monitoring experiment.
APPENDIX B: Experimental Sentences

Simple Active Sentences

1. The children beside the bully quickly lied to the principal about the fight.
2. The prince and the dragon both waited for the princess outside the castle.
3. The electrician with the wire easily fixed the light.
4. The choir and the band eagerly pleased the crowd.
5. The vacationers near the seashore gladly rented a room for the week.
6. The graffiti by the gang really upset the child.
7. The queen beside the peasant loudly scolded the king for being soft-hearted.
8. The detective for the woman always examined the evidence carefully.
9. The ring on the teenager really impressed his girlfriend.
10. The dog outside in the storm abruptly woke up the family.
11. The man in the sports car always amazed the boys.
12. The secretary for the doctor usually ordered take-out food for lunch.
13. The caterer for the hostess thankfully received a bonus for the work.
14. The bear ahead of the hiker slowly made his way toward the campground.
15. The carpenter and the plumber suddenly decided to quit.
16. The girl in the hospital fully trusted the doctor.
17. The biologist and the gardener carefully studied the plants in the greenhouse.
18. The family's most annoying neighbours finally sold their house.
19. The painting of her husband greatly delighted the wife.
20. The pencil beside the student slowly rolled off the desk.
21. The pianist and the conductor gladly attended the performance.
22. The lawyer for the defendant angrily called for a mistrial.
23. The captain with the current weather report calmly sailed the ship to safety.
24. The child beside the guard safely crossed the busy street.
25. The server and the chef quickly threw the food into the trash.
26. The man with the rude clerk secretly devised a plan to get revenge.
27. The player and the new coach briefly reviewed the game plan.
28. The dog and the coyote swiftly drank some water.
29. The employee and the manager truly enjoyed the party.
30. The new puppy for the child happily accompanied the family home.
31. The editor and the author jointly changed the ending to the story.
32. The director and the actor often extended the rehearsal longer than planned.
33. The family on the trip completely bored their friends with the photos.
34. The boy with the bunny quickly ran after his father.
35. The candidate beside the reporter quickly denied the error after the meeting.
36. The book from the babysitter briefly entertained the toddler.
37. The architect and the historian beautifully designed the hall.
38. The catcher with many fans skillfully caught the foul ball.
39. The airplane behind the mechanic almost failed the brake test.
40. The journalist and the boy quickly reported the good news to the town.
41. The patient's very last visitor briefly spoke to the nurse.
42. The car full of teenagers sharply turned onto the street.
43. The victim and the thief both talked to the detective.
44. The money from the bank just supported the businessman.
45. The rower and the swimmer easily qualified for the finals.
46. The steak with the barbecue sauce clearly satisfied the customer.
47. The pollution from the factories nearly killed the fish.
48. The jogger in the heat almost slowed to a walk.
49. The chauffeur in the traffic jam slowly drove the president home.
50. The music by the orchestra always inspired the dancers.
51. The artist and the curator both left the exhibit before the show ended.
52. The key from the locksmith only opened the front door.
53. The exam from the new professor really worried the class.
The mailman near the dog quickly entered the house for safety. The horses and the goats calmly grazed in the nearby field. The child at the movie suddenly shouted for her mother. The nurse and the therapist quickly selected the restaurant for dinner. The florist and the worker carefully typed the addresses into the computer. The psychologistfromthe clinic almost lost the client's file. The volunteers and the spokeswoman certainly helped the event run smoothly.  Conjoined Sentences 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24  The children dreaded the bully and lied to the principal about the fight. The prince fought the dragon and waited for the princess outside the castle. The electrician replaced the wire and fixed the light. The choir accompanied the band and pleased the crowd. The vacationers enjoyed the seashore and rented a room for the week. The graffiti encouraged the gang and upset the child. The queen yelled at the peasant and scolded the king for being soft-hearted. The detective believed the woman and examined the evidence carefully. The ring tempted the teenager and impressed his girlfriend. The dog howled at the storm and woke up the family. The man raced the sports car and amazed the boys. The secretary disappointed the doctor and ordered take-out food for lunch. The caterer complimented the hostess and received a bonus for the work. The bear startled the hiker and made his way toward the campground. The carpenter angered the plumber and decided to quit. The girl feared the hospital but trusted the doctor. The biologist respected the gardener and studied the plants in the greenhouse. The family detested the neighbours and sold their house. The painting alarmed the husband but delighted the wife. The pencil dropped from the student and rolled off the desk. The pianist praised the conductor and attended the performance. The lawyer enraged the defendant and called for a mistrial. The captain obeyed the current weather report and sailed the ship to safety. The child watched the guard and crossed the busy street.  123 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60  The server ignored the chef and threw the food into the trash. The man cornered the rude clerk and devised a plan to get revenge. The player approached the new coach and reviewed the game plan. The dog chased the coyote and drank some water. The employee greeted the manager and enjoyed the party. The new puppy befriended the child and accompanied the family home. The editor questioned the author and changed the ending to the story. The director deceived the actor and extended the rehearsal longer than planned. The family enjoyed the trip but bored their friends with the photos. The boy picked up the bunny and ran after his father. The candidate attacked the reporter and denied the error after the meeting. The book relaxed the babysitter and entertained the toddler. The architect respected the historian and designed the hall. The catcher astonished the fans and caught the foul ball. The airplane dismayed the mechanic and failed the brake test. The journalist informed the boy and reported the good news to the town. The patient welcomed the visitor and spoke to the nurse. The car avoided the teenagers and turned onto the street. The victim found the thief and talked to the detective. The money arrived from the bank and supported the businessman. The rower motivated the swimmer and qualified for the finals. 
The steak improved with barbecue sauce and satisfied the customer. The pollution leakedfromthe factories and killed the fish. The jogger disliked the heat and slowed to a walk. The chauffeur avoided the traffic jam and drove the president home. The music thrilled the orchestra and inspired the dancers. The artist shocked the curator and left the exhibit before the show ended. The key arrived from the locksmith and opened the front door. The exam excited the new professor but worried the class. The mailman fled from the dog and entered the house for safety. The horses followed the goats and grazed in the nearby field. The child hated the movie and shouted for her mother. The nurse invited the therapist and selected the restaurant for dinner. The florist assisted the worker and typed the addresses into the computer. The psychologist rushedfromthe clinic and lost the client's file. The volunteers recruited the spokeswoman and helped the event run smoothly.  Center-Embedded Subject-Object Sentences 1 2 3 4 5 6  The children who the bully threatened lied to the principal about the fight. The prince who the dragon fought waited for the princess outside the castle. The electrician who the wire shocked fixed the light. The choir who the band accompanied pleased the crowd. The vacationers who the seashore relaxed rented a room for the week. The graffiti that the gang painted upset the child.  124 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52  The queen who the peasant displeased scolded the king for being soft-hearted. The detective who the woman believed examined the evidence carefully. The ring that the teenager wore impressed his girlfriend. The dog who the big storm terrified woke up the family. The man who the sports car injured amazed the boys. The secretary who the doctor disappointed ordered take-out food for lunch. The caterer who the hostess complimented received a bonus for the work. The bear who the hiker startled made his way toward the campground. The carpenter who the plumber angered decided to quit. The girl who the hospital scared trusted the doctor. The biologist who the gardener respected studied the plants in the greenhouse. The family who the neighbours irritated sold their house. The painting that the husband finished delighted his wife. The pencil that the student borrowed rolled off the desk. The pianist who the conductor praised attended the performance. The lawyer who the defendant despised called for a mistrial. The captain who the current weather report cautioned sailed the ship to safety. The child who the guard watched crossed the busy street. The server who the chef ignored threw the food into the trash. The man who the rude clerk irritated devised a plan to get revenge. The player who the new coach admired reviewed the game plan. The dog who the coyote chased drank some water. The employee who the manager greeted enjoyed the party. The new puppy who the child befriended accompanied the family home. The editor who the author trusted changed the ending to the story. The director who the actor deceived extended the rehearsal longer than planned. The family who the trip delighted bored their friends with the photos. The boy who the bunny adored ran after his father. The candidate who the reporter attacked denied the error after the meeting. The book that the babysitter read entertained the toddler. The architect who the historian respected designed the hall. 
The catcher who the fans applauded caught the foul ball. The airplane that the mechanic inspected failed the brake test. The journalist who the boy informed reported the good news to the town. The patient who the visitor comforted spoke to the nurse. The car that the teenagers avoided turned onto the street. The victim who the thief stabbed talked to the detective. The money that the bank loaned supported the businessman. The rower who the swimmer motivated qualified for the finals. The steak that the barbecue sauce improved satisfied the customer. The pollution that the factories produced killed the fish. The jogger who the heat bothered slowed to a walk. The chauffeur that the traffic jam annoyed drove the president home. The music that the orchestra produced inspired the dancers. The artist who the curator shocked left the exhibit before the show ended. The key that the locksmith carved opened the front door.  125 53 54 55 56 57 58 59 60  The exam that the new professor scheduled worried the class. The mailman who the dog chased entered the house for safety. The horses who the goats followed grazed in the nearby field. The child who the movie frightened shouted for her mother. The nurse who the therapist invited selected the restaurant for dinner. The florist who the worker assisted typed the addresses into the computer. The psychologist who the clinic hired lost the client's file. The volunteers who the spokeswoman recruited helped the event run smoothly.  126 APPENDIX C: Experimental Target Words  1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42  Early Target  Late Target  bully dragon wire band seashore gang peasant woman teenager storm car doctor hostess hiker plumber hospital gardener neighbours husband student conductor defendant weather report guard chef clerk coach coyote manager child author actor trip bunny reporter babysitter historian fans mechanic boy visitor teenagers  lied waited fixed pleased rented upset scolded examined impressed woke amazed ordered received made decided trusted studied sold delighted rolled attended called sailed crossed threw devised reviewed drank enjoyed accompanied changed extended bored ran denied entertained designed caught failed reported spoke turned  43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60  thief bank swimmer sauce factories heat traffic jam orchestra curator locksmith professor dog goats movie therapist worker clinic spokeswoman  talked supported qualified satisfied killed slowed drove inspired left opened worried entered grazed shouted selected typed lost helped  128 APPENDIX D: Filler Sentences  61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103  The movie that my brother saw compelled him to travel. The outrageous stories that the drunken host told amused some guests and offended others. The clowns that the children saw performed stunts at the circus. The pedestrian who the cyclist passed crossed the street illegally. The musician who the bride booked cancelled at the last minute. The girl who the boy disliked happily played on the swings in the park. The painter who the roofer recommended provided excellent references. The farmers furiously made threats to the politicians who disregarded their concerns. The photographer gladly aided the model who needed a portfolio. The clerk thanked the officer who kindly closed the window in the office. 
The teacher congratulated the student who won the award. The illustrator criticized the publisher who abandoned the project. The musician passionately kissed the surgeon who brought the flowers to dinner. The salesperson phoned the bookseller who wondered about the deal. The soccer players won the match and happily celebrated in the clubhouse. The tourist misread the sign and parked his car in a tow-away zone. The woman shyly thanked the man and took his seat on the bus. The grandfather loved flowers and planted tulips outside his house. The scholar remembered historical details and easily passed the test. The shopper returned the jacket and quickly bought a sweater instead. The father treasured the drawing from his daughter and took it to work. The waitress at the busy cafe hated the picky customer. The construction workers often disrupted the sleep of the residents nearby. The news report on television suddenly interrupted the talk show. The politician contested the accusations in his speech. The dentist explained the procedure to the woman in the waiting room. The friends from camp invited the girl to their house. The diver located many tropical fish and coral reefs. The toddler loudly screamed when the baby grabbed the rattle. The president resigned when the police exposed the scandal. The officers watched carefully when the gang neared the park. The grandson fell asleep while the grandmother quietly read stories aloud. The husband went to the movie but his wife stayed home. The farmer decided to buy some young dairy cows. The picnickers could not find the entrance to the city park. The traveler paid extra for a hotel because he had not made a reservation. The photographer used all his film because the sunset was so beautiful. The kitchen table was cleared so the children could finish their project. The children stared at the dinosaurs in the museum. The leak under the sink slowly caused the wood to rot. The hockey fan listened closely to the radio to find out which team was winning. The attendant at the information desk showed them the schedule. The girl dreamed of becoming a famous opera singer.  129 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120  The library downtown ordered many new books on birds. The bride suddenly realised that she had left the rings at home. The renovations cost the family more than they could afford. The winter was so cold that the pipes in the cabin burst. The producer was angry that the cameraman arrived late for work. The grade five class cleaned the garbage off the field. The bus driver talked to the passengers during the trip. The hikers all agreed that good equipment was essential. The mother made chicken soup for her sick daughter. The girls decorated the hall with balloons for the dance. The grandma who had ten grandchildren loved to knit them sweaters. The pirate quietly stole the jewels and vanished into the night. The kids eagerly ate the cookies that were baked by their aunt. The tomatoes were the juiciest the chef had ever tasted. The puzzle confused the boy because the pieces looked the same. The view from the mountain convinced the artist to paint again. The apple pie served with tea was the most popular item on the menu.  A P P E N D I X E : Comprehension Questions  Simple Active Sentences 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42  Did the children lie to the principal? Did the prince wait for the princess outside the castle? 
Did the electrician fix the light? Did the choir please the crowd? Did the vacationers rent a room? Did the graffiti upset the child? Did the queen scold the king? Did the detective examine the evidence carefully? Did the ring impress his girlfriend? Did the dog wake up the family? Did the man amaze the boys? Did the secretary order take-out food for lunch? Did the caterer receive a bonus? Did the bear make his way towards the campground? Did the carpenter decide to quit? Did the girl trust the doctor? Did the biologist study the plants in the greenhouse? Did the neighbours sell their house? Did the painting delight the wife? Did the pencil roll off the desk? Did the pianist attend the performance? Did the lawyer call for a mistrial? Did the captain sail the ship to safety? Did the child cross the busy street? Did the server throw the food into the trash? Did the man devise a plan to get revenge? Did the player review the game plan? Did the dog drink some water? Did the manager enjoy the party? Did the new puppy accompany the family home? Did the editor accept the ending to the story? Did the director shorten the rehearsal? Did the family entertain their friends with the photos? Did the father run after the boy? Did the reporter deny the error? Did the book entertain the babysitter? Did the architect design the community center? Did the catcher drop the ball? Did the airplane almost fail the engine test? Did the journalist report the disaster? Did the patient speak to the nurse? Did the car miss the turn onto the street?  131 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60  Did the victim refuse to talk to the detective? Did the businessman steal the money? Did the rower lose the race? Did the chicken satisfy the customer? Did the factories package the fish? Did the jogger increase his pace? Did the chauffeur drive the diplomat home? Did the dancers compose the music? Did the artist stay to the end of the show? Did the key open the car door? Did the exam worry the professor? Did the mailman leave the house? Did the goats graze on the nearby mountain? Did the child whisper to her mother? Did the nurse select the restaurant for breakfast? Did the florist type the addresses onto the labels? Did the psychologist forget the client's file? Did the volunteers cause trouble at the event?  n n n n n n n n n n n n n n n n n n  Conjoined Sentences 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25  Did the children dread the bully? Did the prince fight the dragon? Did the electrician replace the wire? Did the choir accompany the band? Did the vacationers enjoy the seashore? Did the graffiti encourage the gang? Did the queen yell at the peasant? Did the detective believe the woman? Did the ring tempt the teenager? Did the dog howl at the storm? Did the man race the sports car? Did the secretary disappoint the doctor? Did the caterer compliment the hostess? Did the bear startle the hiker? Did the carpenter anger the plumber? Did the girl trust the doctor? Did the biologist study the plants in the greenhouse? Did the family sell their house? Did the painting delight the wife? Did the pencil roll off the desk? Did the pianist attend the performance? Did the lawyer call for a mistrial? Did the captain sail the ship to safety? Did the child cross the busy street? Did the server throw the food into the trash?  
VI VI VI VI VI VI VI VI VI VI VI VI VI VI VI V2 V2 V2 V2 V2 V2 V2 V2 V2 V2  y y y y y y y y y y y y y y y y y y y y y y y y y  132 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60  Did the man devise a plan to get revenge? Did the player review the game plan? Did the dog drink some water? Did the employee enjoy the party? Did the new puppy accompany the family home? Did the author question the editor? Did the actor deceive the director? Did the family dislike the trip? Did the boy put the bunny in a cage? Did the reporter attack the candidate? Did the book relax the toddler? Did the architect dislike the historian? Did the catcher disappoint the fans? Did the airplane crush the mechanic? Did the boy inform the journalist? Did the patient ignore the visitor? Did the car hit the teenagers? Did the detective find the thief? Did the money come from customers? Did the swimmer motivate the rower? Did the customer dislike the steak? Did the factories package the fish? Did the jogger increase his pace? Did the chauffeur drive the diplomat home? Did the dancers inspire the orchestra? Did the artist stay to the end of the show? Did the key open the car door? Did the exam worry the professor? Did the mailman leave the house? Did the horses gallop in the nearby field? Did the mother shout for her child? Did the therapist select the restaurant for dinner? Did the florist type the addresses onto the labels? Did the psychologist forget the client's file? Did the volunteers cause trouble at the event?  V2 V2 V2 V2 V2 VI VI VI VI VI VI VI VI VI VI VI VI VI VI VI V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2  y y y y y n n n n n n n n n n n n n n n n n n n n n n n n n n n n n n  Center-Embedded Subject-Object Sentences 1 2 3 4 5 6 7 8  Did the children lie to the principal? Did the prince wait for the princess outside the castle? Did the electrician fix the light? Did the choir please the crowd? Did the vacationers rent a room? Did the graffiti upset the child? Did the queen scold the king? Did the detective examine the evidence carefully?  V2 V2 V2 V2 V2 V2 V2 V2  y y y y y y y y  133 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54  Did the ring impress his girlfriend? Did the dog wake up the family? Did the man amaze the boys? Did the secretary order take-out food for lunch? Did the caterer receive a bonus? Did the bear make his way towards the campground? Did the carpenter decide to quit? Did the hospital scare the girl? Did the gardener respect the biologist? Did the neighbours irritate the family? Did the husband finish the painting? Did the student borrow the pencil? Did the conductor praise the pianist? Did the defendant despise the lawyer? Did the weather report caution the captain? Did the guard watch the child? Did the chef ignore the server? Did the clerk irritate the man? Did the new coach admire the player? Did the coyote chase the dog? Did the manager greet the employee? Did the child befriend the puppy? Did the author change the ending to the story? Did the actor delay the rehearsal? Did the family entertain their friends with the photos? Did the boy get chased by his father? Did the candidate admit the error? Did the book entertain the babysitter? Did the architect design the community center? Did the catcher drop the ball? Did the airplane fail the engine test? Did the boy report the good news? Did the visitor speak to the nurse? 
Did the car miss the turn onto the street? Did the thief talk to the detective? Did the businessman steal the money? Did the swimmer qualify for the finals? Did the barbecue sauce ruin the steak? Did the factories clean up the pollution? Did the jogger like the heat? Did the chauffeur annoy the president? Did the dancers produce the music? Did the artist shock the curator? Did the locksmith change the lock? Did the professor postpone the exam? Did the mailman chase the dog?  V2 V2 V2 V2 V2 V2 V2 VI VI VI VI VI VI VI VI VI VI VI VI VI VI VI V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 V2 VI VI VI VI VI VI VI VI VI  y y y y y y y y y y y y y y y y y y y y y y n n n n n n n n n n n n n n n n n n n n n n n n  134 55 56 57 58 59 60  Did the horses follow the goats? Did the movie calm the child? Did the nurse invite the therapist? Did the florist assist the worker? Did the clinic fire the psychologist? Did the volunteers recruit the spokeswoman?  VI VI VI VI VI VI  n n n n n n  Filler Sentences 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97  Did the movie discourage him from traveling? Did the stories amuse some guests? Did the children miss the clown show? Did the cyclist pass the pedestrian? Did the musician cancel a month in advance? Did the girl play on the swings? Did the roofer provide references? Did the politicians disregard the farmers? Did the model aid the photographer? Did the clerk thank the officer? Did the teacher win the award? Did the publisher abandon the project? Did the musician bring flowers to dinner? Did the salesperson phone the bookseller? Did the soccer players lose the match? Did the tourist misread the sign? Did the woman take his seat on the train? Did the grandfather plant tulips? Did the scholar forget historical details? Did the shopper return the jacket? Did the father take the photograph of his daughter to work? Did the waitress hate the customer? Did the residents disrupt the construction workers? Did the news report interrupt the talk show? Did the politician admit to the accusations? Did the dentist explain the procedure? Did the girl invite her friends to her house? Did the diver locate many fish? Did the baby give the toddler the rattle? . Did the police expose the scandal? Did the gang watch the officers? Did the grandson fall asleep? Did the wife go to the movie? Did the farmer decide to buy cows? Did the picnickers find the entrance to the park? Did the traveler pay extra for a hotel? Did the photographer have any film left?  n y n y n y n y n y n y n y n y n y n y n y n y n y n y n y n y n y n y n  135 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120  Did the children work on their project? Did the dinosaurs stare at the children? Did the leak cause the wood to rot? Did the hockey fan watch TV? Did the attendant show them the schedule? Did the girl dream of becoming an actress? Did the library order books on birds? Did the bride realise that she had left the flowers at home? Did the renovations cost a lot? Did the pipes survive the winter? Did the cameraman arrive late for work? Did the class throw garbage on the field? Did the bus driver talk to the passengers? Did the hikers disagree about the equipment? Did the mother make chicken soup? Did the girls take down the balloons? Did the grandma love to knit sweaters? Did the pirate get caught? Did the aunt bake the cookies? Did the chef taste the strawberries? Did the puzzle confuse the boy? 
Did the view discourage the artist?
Did the apple pie get ordered often?
y n y n y n y n y n y n y n y n y n y n y n y

APPENDIX F: Listening Span Stimuli

1. The house quickly got dressed and went to work.
2. I took a knapsack from my shovel and began removing the earth.
3. The lamp bucked and sent the horse tumbling to the ground.
4. The cop spent a good half-hour questioning his trusted friend.
5. People are given by money at Christmas time.
6. She worked quickly but quietly while the others were asleep.
7. It was a foggy day and everything was dripping wet.
8. The girl was awakened by the gusts of rain blowing against the house.
9. The story started as a joke, but soon got out of hand.
10. He quickly put the carrot in the ignition and started the car.
11. The murky swamp slipped into the waters of the crocodile.
12. The castle sat nestled in the refrigerator above the tiny village.
13. It wasn't all her fault that her marriage was in trouble.
14. When he reached the top of the heart, his mountain was pounding.
15. The barn raged through the abandoned old fire.
16. With a frown of pain, the old ranger hung up his hat forever.
17. The man fidgeted nervously, once again checking his watch.
18. Clouds of cigar smoke wafted into the open eraser.
19. Convictions for all offenses increased from the turn of the century.
20. He was pleased to receive so much love and attention.
21. The warrior was completely clothed from head to toe with deadly spikes.
22. The oven stretched over the rapidly moving bridge.
23. I couldn't believe he fell for the oldest book in the trick.
24. The scrap yard outside the old cabin was filled with discarded metal junk.
25. Torrential rains swept over the tiny deserted island.
26. They waited at the water's edge, the raft bobbing up and down.
27. I let the potato ring and ring, but still no answer.
28. The red wine looked like blood on the white carpet.
29. The children put on their closets and played in the snow.
30. He stood up and yawned, stretching his arms above his head.
31. The young girl wandered slowly down the winding path.
32. The purpose of the course was to learn a new language.
33. The sock set the table, while I made dinner.
34. At some life, everyone ponders the meaning of point.
35. The bars roared and began banging on the ape of the cage.
36. Being sued for malpractice was the doctor's main concern.
37. The shampoo was vibrant with music, theatre, and dance.
38. The class homework was done by everyone in the history.
39. Thick foliage surrounded him, and the air was heavy and still.
40. The deserted calendars rocked mournfully, driven by the wind and the tide.
41. The men were all gathered for a training flight near the base.
42. The sudden grizzly bear caused the noise to look in our direction.
43. The coral reefs support an infinite variety of beautiful marine life.
44. The crowd parted, waiting for someone to pass through.
45. As the flower talked about its busy life, it began to cry.
46. An eerie breeze suddenly chilled the warm, humid air.
47. As the ideas flowed, I jotted them down on some water.
48. The flash was dark, lit only by the occasional room of lightning.
49. He stepped back as the ghoul moved forward.
50. The robber bounded across the bridge and entered the dimly lit garage.
51. Three of the pillows were dead and he was next.
52. My escape out of the telephone was blocked by a wire fence.
53. She turned around and sucked in a startled breath.
54. They ran until their lungs felt like they were going to burst.
55. The additional evidence helped the verdict to reach their jury.
56. No one ever figured out what caused the crash to plane.
57. His eyes were bloodshot and his face was pale.
58. As a full-time university student, he studied hard.
59. The CN Tower raced across the sailboat to the finish line.
60. Somewhere in the deepening twilight, a loon sang its haunting evening song.
61. The fish glided majestically into the deepening recipe and was gone.
62. Gender roles persist because their roots are deep.
63. His men now flatly refused to continue with the journey.
64. The forest passed and the dead echo regained its quiet.
65. The letter burned until all that remained was a bit of ash.
66. The thought of going back in there made my skin crawl.
67. The wind started as a distant whisper, but soon began to howl.
68. They ran like the wind but they would never get away.
69. She couldn't wait to go to the zoo to visit her cheese.
70. I waited a few hours, holding my breath, watching the loud silence.
71. Trails are supposed to stay on the hikers, but they usually don't.
72. He stormed out without giving me so much as a backward glance.
73. The paperclip was flaked white and red with sunburn.
74. Returning with an eagle, a branch breaks to land at its nest.
75. A television droned from the dark interior of the apartment.
76. They talked about what the world would be like after the war.
77. His mouth was twisted into an inhuman smile.
78. Silverware clunked, drawers slammed, and closet doors were wrenched open.
79. A welt was forming on his bottle where the forehead made contact.
80. I'd been naive to think he would fall into my trap.
81. The piercing yellow eyes glowed hauntingly in the mist.
82. The beach hung down over the window, filtering the moonlight from outside.
83. These operations are only done as a last resort.
84. The first impression is often a lasting one.
85. The throat tightening around her arm turned her scream into a croak.
86. The soap hovered over the elephant, waiting to attack.
87. They watched in silence as a brilliant carpet dipped behind the horizon.
88. The rumbling of the distance faded into the feather.
89. The sun had gone and the evening skies were tinted purple.
90. Opposite the chimney doorway was the yawning cabin mouth.
91. Usually the visual images are the ones people remember best.
92. She crept towards the door, following the moving shadow.
93. A deafening cheer rose up from the kids watching the parade.
94. A blue-uniformed security guard moved quickly out of the dog.
95. She wore a huge, white dress bigger than a camping tent.
96. He popped the sandwich into the VCR and watched the movie.
97. A hush seemed to have fallen over the entire park.
98. The umbrella grabbed its bat and stepped up to the plate.
99. The starving hamburger bit into the juicy man.
100. The hurricane left a path of destruction through the tiny town.

APPENDIX G: Working Memory Span and Familiarity Groups

Participant   Working Memory Span(a)   Span Group   Familiarity Score(b)   Familiarity Group
1             2                        Low          12                     Familiar
2             2                        Low          11                     Familiar
3             2.5                      Low          5                      Non-familiar
4             2.5                      Low          2                      Non-familiar
5             3                        Low          7                      Non-familiar
6             3                        Low          2                      Non-familiar
7             4                        Medium       2                      Non-familiar
8             4                        Medium       7                      Non-familiar
9             4                        Medium       9                      Familiar
10            4.5                      Medium       9                      Familiar
11            4.5                      Medium       12                     Familiar
12            4.5                      Medium       7                      Non-familiar
13            4.5                      Medium       12                     Familiar
14            5                        High         8                      Non-familiar
15            5                        High         10                     Familiar
16            5                        High         2                      Non-familiar
17            5                        High         9                      Familiar
18            5                        High         8                      Non-familiar
19            6                        High         12                     Familiar
20            6                        High         7                      Non-familiar
21            6                        High         9                      Familiar
22            6                        High         10                     Familiar

a. Working memory span as measured by the Listening Span test.
b. Total score determined by the Language Background Questionnaire.
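The group labels in the table follow from simple cutoffs on the two scores: a three-way split on listening span and a two-way split on familiarity. The sketch below reproduces the grouping shown above; the exact cutoffs (span below 4 = Low, 4 to 4.5 = Medium, 5 and above = High; familiarity score of 9 or more = Familiar) are inferred from the table rather than stated in the text, so they should be read as assumptions.

```python
# Illustrative reconstruction of the Appendix G grouping. The thresholds below
# reproduce every row in the table, but they are inferred, not quoted from the thesis.
def span_group(listening_span: float) -> str:
    if listening_span < 4:
        return "Low"
    if listening_span < 5:
        return "Medium"
    return "High"

def familiarity_group(familiarity_score: int) -> str:
    return "Familiar" if familiarity_score >= 9 else "Non-familiar"

# Example: participant 10 (span 4.5, familiarity score 9)
print(span_group(4.5), familiarity_group(9))  # -> Medium Familiar
```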
APPENDIX H: Language Background Questionnaire

1. Age:

2. First Language:

3. Other Languages: Please list any other language(s) that you know, your age when you first began learning the language(s), and how many years of experience you have with the language(s).

Language        Age when first learned        Years of experience

4. Are you bilingual in any language(s)? If yes, which language(s)?

Please check the box that most closely corresponds to your answer for the following questions:

5. In general, how often do you talk in English with people who speak English with an accent?
• several times a day
• once a day
• a few times a week
• once a week
• once a month
• occasionally
• never

6. How often do you talk in English with people who speak English with a Chinese accent?
• several times a day
• once a day
• a few times a week
• once a week
• once a month
• occasionally
• never

7. Do you have the following types of interactions with people who speak English with an accent? Please circle Yes or No.
Yes/No  short greetings (e.g., saying "how are you?" while passing by)
Yes/No  brief conversations/transactions (e.g., between a customer and server at a restaurant or in a store)
Yes/No  extended conversations (e.g., group projects, conversational language partners)
Yes/No  social interactions (e.g., talking with friends)

8. Do you have the following types of interactions with people who speak English with a Chinese accent? Please circle Yes or No.
Yes/No  short greetings (e.g., saying "how are you?" while passing by)
Yes/No  brief conversations/transactions (e.g., between a customer and server at a restaurant or in a store)
Yes/No  extended conversations (e.g., group projects, conversational language partners)
Yes/No  social interactions (e.g., talking with friends)

9. Of those described above, which type of interactions do you most commonly have with people who speak English with an accent?
• short greetings
• brief conversations/transactions
• extended conversations
• social interactions

10. Of those described above, which type of interactions do you most commonly have with people who speak English with a Chinese accent?
• short greetings
• brief conversations/transactions
• extended conversations
• social interactions

11. Please describe any other situations when you speak with or listen to non-native speakers of English.
