
SPEECH AUDIOMETRY IN CANTONESE AND OTHER NON-NATIVE ENGLISH SPEAKERS: THE USE OF DIGITS AND CANTONESE WORDS AS STIMULI

by

CARRIE K. SIU
B.A., University of British Columbia, 2001

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF MEDICINE (Audiology and Speech Sciences)

THE UNIVERSITY OF BRITISH COLUMBIA
May 2005
© Carrie K. Siu 2005

ABSTRACT

The purpose of the present study was to investigate validity and accuracy issues with the use of English speech audiometry on non-native English speakers. Two widely used tests of speech audiometry, Speech Recognition Threshold (SRT) and Word Recognition Score (WRS), were measured on 45 participants with English as a non-native language. The effects of test stimuli (English words versus English digits versus Cantonese words) and the correlations between language background factors (length of residence, age of exposure, years of instruction, birthplace, first language, preferred language, home language, daily language use, English TV and internet use) and performance on SRT and WRS were analyzed. English digit pairs were found to be a more accurate measure of the hearing threshold for English speech than English words, but Cantonese words elicited the lowest audiometric thresholds from the Cantonese-speaking participants. Age and birthplace were found to correlate significantly with the extent to which speech audiometric performance was affected by the language of the test stimuli. An analysis of the differences in English and Cantonese speech acoustic spectra was provided, and the implication that hearing levels measured using English speech-based stimuli might not reflect real-life impairment for non-native English speakers was discussed.
Clinical implications include being cautious in applying test results to real-life impairment for non-native English-speaking clients, so as to avoid over-estimating the need for amplification and misdiagnosing the nature of hearing loss. When administering speech audiometry to non-native English speakers, familiarization with the test materials before SRT testing, the use of digit pairs as SRT stimuli, and the use of subjective questionnaires to assess listening need and impairment are recommended.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Speech Audiometry
    1.1.1 Speech Recognition Threshold (SRT)
    1.1.2 Word Recognition Score (WRS)
    1.1.3 Standardized Test Materials
  1.2 Issues with Non-Native English Speakers
    1.2.1 Speech Perception in a Non-Native Language
    1.2.2 Clinical Issues
    1.2.3 Speech Audiometry on Non-Native English Speakers
  1.3 Cantonese Speech Characteristics
  1.4 The Current Project
2 Method
  2.1 Participants
  2.2 Stimuli
  2.3 Procedure
3 Results
  3.1 Performance in SRT
    3.1.1 Comparison between CID-SRT and Digit-SRT
    3.1.2 Comparison among Cant-SRT, Digit-SRT, and CID-SRT
    3.1.3 Effect of Hearing Loss on SRT
    3.1.4 Effect of Language and Background Factors on SRT
  3.2 Performance on WRS
    3.2.1 WRS across Languages and Conditions
    3.2.2 Effect of Hearing Loss on WRS
    3.2.3 Effect of Language and Background Factors on WRS
4 Discussion
  4.1 SRT
  4.2 WRS
  4.3 Language and Background Factors
  4.4 Implications
    4.4.1 Theoretical Implications
    4.4.2 Clinical Implications
  4.5 Limitations and Further Directions
5 References
6 Appendices
  Appendix 1 Interview Script
  Appendix 2 CID W-1 Spondee Word List
  Appendix 3 Digit Pairs List
  Appendix 4 NU-6 List 3A Word List
  Appendix 5 Cantonese Word Lists
  Appendix 6 Case History Questions
  Appendix 7 Reference List of Speech Audiometry in Other Languages and of the Use of Interpreters
  Appendix 8 Suggestions on Assessing Non-Native English Speakers

LIST OF TABLES

Table 3.1 Mean PTA and SRT (in dBHL) across stimuli for all participants
Table 3.2 Mean PTA and SRT (in dBHL) across stimuli for Cantonese speakers
Table 3.3 Mean SRT (in dBHL) across stimuli for normal-hearing ears and hearing-impaired ears
Table 3.4 Descriptive statistics on language background factors of participants
Table 3.5 Correlation between language factors and SRT
Table 3.6 Correlation between hearing (PTA) and SRT and language predictors
Table 3.7 Mean WRS across stimuli and conditions
Table 3.8 Paired t-test results of WRS across stimuli and testing conditions
Table 3.9 Correlation between language factors and WRS

LIST OF FIGURES

Figure 3.1 PTA and SRT obtained from right and left ears
Figure 3.2 Effect of SRT stimuli on SRT, compared to PTA, for Cantonese-speaking participants
Figure 3.3 Mean WRS across stimuli and conditions
Figure 3.4 WRS performance across languages and conditions for normal-hearing and hearing-impaired participants

ACKNOWLEDGEMENTS

This project would not have happened without the support of the following people.

My supervisor, Dr. Stefka Marinova-Todd: thank you for sharing your knowledge and resources, and for losing sleep and food over my endless questions, tables, and figures!

My committee members, Dr. Navid Shahnaz and Sharon Adelman: thank you for your knowledgeable and unique insights, for sharing your valuable resources, and for the after-hour mind-boggling discussions over this project!
The volunteers who generously donated their time and interest to this project; Mom and Dad, who supported me unconditionally throughout this project; audiologists Christine Cheung, Praise Chow, Lena Wong, and Vanessa Chan, who provided me with valuable insights into their clinical practices; my aunt, Alice Fok, who spent her valuable hours connecting me to audiologists in Hong Kong; the School of Audiology and Speech Sciences, which provided me with the facilities to implement this project; and Patrick, my private tech support, practice audience, and solid post against which I leaned countless times throughout this endeavour.

1 Introduction

Speech audiometry, a series of hearing tests that use words or sentences as stimuli, is routinely administered by audiologists as part of their audiologic evaluation battery. The word and sentence stimuli used in speech audiometry are language-based and are often spoken by a native speaker of a particular language. In the United States, where 80% of audiologists are English speakers and where 14% of residents do not speak English (Ramkissoon & Khan, 2002), the need for research on the reliability of speech audiometric test results with non-native English speakers has long been acknowledged (Beverly-Ducker, 2003). A survey conducted by the Canadian Association of Speech-Language Pathologists and Audiologists (CASLPA) in 2003 showed that among its 423 registered audiologists, 93% spoke English as their first language and the rest spoke French as their first language; that is, all registered audiologists in Canada spoke either English or French as their first language. However, the 2001 Census conducted by Statistics Canada revealed that 18% of Canadians spoke neither English nor French as their first language. This percentage increased to 40% in the Greater Vancouver metropolitan area of British Columbia.
The under-representation of racial minority members in the audiology profession is similarly seen in the United States, where 7% of certified audiologists identified themselves as belonging to racial minority groups, compared to 25% of the general U.S. population (ASHA, 2005). These statistics demonstrate the need for a language- and culture-sensitive approach to conducting speech audiometry on non-native English speakers. When hearing tests using English speech stimuli are administered to people for whom English is not the first language, knowledge and proficiency of English, in addition to hearing sensitivity, may contribute significantly to test performance. The present study investigated the language background factors that may affect non-native English speakers' performance in two widely used speech audiometric tests - Speech Recognition Threshold (SRT) and Word Recognition Score (WRS) - administered with English words, English digits, and, for the Cantonese-speaking participants, in their first language. The differences in test performance according to the different speech stimuli were correlated with the participants' language background factors, with the goal of examining the way these factors determine the validity of English speech audiometry on non-native English speakers. The language background factors studied were age of first exposure to English, length of residence in an English-speaking country, amount of daily use of English and the native language, language first acquired, preferred language for daily communication, language used at home, years of formal English instruction, and the use of English TV and internet. Results from this study are summarized into suggestions regarding the use of appropriate speech stimuli to obtain accurate hearing thresholds from non-native English speakers with different language backgrounds.
1.1 Speech Audiometry

Speech audiometry was defined as "any method for assessing the state or ability of the auditory system of an individual, using speech sounds as the response evoking stimuli" (Lyregaard, Robinson, & Hinchcliffe, 1976, cited in Lyregaard, 1997, p. 35). A recent survey of Canadian audiologists showed that 96% routinely employed speech recognition threshold (SRT) testing, 85% routinely employed word recognition measures, and 76% used the word recognition score percentage (WRS) in their regular assessments (DeBow & Green, 2000). The advantage of speech audiometry, as opposed to pure tone audiometry, is its high face validity - the applicability of test scores to real-life functioning. Speech tests used to measure hearing are intended primarily to be representative of everyday speech and thus give an indication of the real-life impairment imposed by a hearing loss. Therefore, speech audiometry results are often used to monitor rehabilitation progress and hearing aid fitting as well as to assess occupational functioning (Lutman, 1997).

Materials used for speech audiometry range from phonemes and nonsense syllables to meaningful words and sentences. The more meaningful the stimuli are, the higher the redundancy of the information each speech stimulus provides. In other words, when speech audiometry is conducted using meaningful speech as stimuli (e.g., sentences), it is more likely that cognitive and linguistic factors, in addition to peripheral hearing sensitivity, contribute to test performance. On the other hand, nonsense speech stimuli primarily measure hearing sensitivity because knowledge of the language contributes little to the auditory processing of such stimuli. Although this seems to suggest that using nonsense speech as stimuli would result in a more sensitive test of hearing than using meaningful speech as test stimuli, this is not necessarily the case.
The use of nonsense speech sacrifices face validity; as nonsense speech does not appear in real life, performance on tests using nonsense speech stimuli does not necessarily reflect an individual's impairment due to his/her hearing loss. Meaningful words are most often employed because they strike a good balance between redundancy and face validity, compared to sentences or nonsense syllables. SRT and WRS are two common measures in speech audiometry that use meaningful words as stimuli. For these reasons, SRT and WRS were the chosen measures for the present study.

1.1.1 SRT

Speech Recognition Threshold (SRT) testing is often conducted to measure the hearing threshold for speech, to confirm results obtained from pure tone audiometry, and to establish a base intensity level for word recognition testing (Rupp, 1980). Stimuli used in SRT measurement are spondees, or bisyllabic words with equal stress on each syllable, such as duckpond, greyhound, and baseball. Spondees are presented auditorily at steadily decreasing intensity levels; the lowest intensity at which half of the stimuli presented are correctly repeated is the SRT. The accuracy of the SRT can be determined by comparing it to the pure tone average (PTA), the average of the pure tone audiometric thresholds obtained at 500, 1000, and 2000 Hz. The SRT and PTA are generally required to be within 5-10 dBHL of each other in order for either to be considered accurate and reliable.

1.1.2 WRS

Word Recognition Score (WRS) is often measured to reflect an individual's impairment in real life, to predict potential success with hearing aids, and to monitor progress in an aural rehabilitation program (Garstecki, 1980).
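The SRT-PTA agreement check described above amounts to a simple comparison, which can be sketched in a few lines of Python. This is an illustrative sketch only: the function names are invented for this example, and the 10 dB tolerance is one end of the 5-10 dBHL rule of thumb cited above, not a fixed clinical standard.

```python
def pure_tone_average(thresholds_dbhl):
    """PTA: mean of the pure tone thresholds at 500, 1000, and 2000 Hz (in dBHL)."""
    if len(thresholds_dbhl) != 3:
        raise ValueError("expected thresholds at 500, 1000, and 2000 Hz")
    return sum(thresholds_dbhl) / 3

def srt_agrees_with_pta(srt_dbhl, pta_dbhl, tolerance_db=10):
    """Treat the SRT as consistent with the PTA when within the tolerance."""
    return abs(srt_dbhl - pta_dbhl) <= tolerance_db

pta = pure_tone_average([20, 25, 30])  # -> 25.0 dBHL
print(srt_agrees_with_pta(20, pta))    # -> True: 20 dBHL is within 10 dB of 25.0 dBHL
```

An SRT that falls outside the tolerance would prompt the clinician to question the reliability of one of the two measures, which is exactly the confound at issue when non-native listeners are tested with English spondees.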
WRS is assessed using monosyllabic words that are phonemically balanced; that is, each phoneme in the specified list of words appears as frequently as it does in everyday speech. A list of 50 words is presented at normal conversational level (45 dBHL), or at a level most comfortable for the individual with hearing loss, to assess the ability to recognize words when hearing sensitivity is not a concern. DeBow and Green's (2000) survey indicated that most Canadian audiologists used half-lists (25 words) for the sake of efficiency, and research has shown the reliability of half-lists to correlate highly with that of full lists (Grubbs, 1963, cited in Stockdell, 1980, p. 104). WRS in background noise is a realistic measure of impairment and is often used for communication handicap assessment and hearing aid success evaluation, as well as for central auditory processing evaluation (Garstecki, 1980).

1.1.3 Standardized Test Materials

To ensure good validity and accuracy, all speech stimuli should meet four criteria: 1) familiarity, 2) phonetic dissimilarity, 3) representativeness of everyday speech, and 4) auditory homogeneity (von Hapsburg & Pena, 2002). That is, the words should be relatively simple and acquired early in life; they should comprise a wide range of speech sounds that occur realistically in daily speech; and their acoustic properties, such as intensity, frequency, and duration, should be relatively uniform. Standardized stimuli for SRT and WRS are available; among the most widely used are the Central Institute for the Deaf (CID) W-1 spondees for SRT, and the W-22 and Northwestern University No. 6 (NU-6) word lists for WRS (Ramkissoon, Proctor, Lansing, & Bilger, 2002). Two of these commonly used word lists, the CID W-1 spondees and NU-6, were used in the present study.
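The WRS itself is simply the percentage of presented words repeated correctly, whether a full 50-word list or a 25-word half-list is used. A minimal sketch (the function name is invented for illustration):

```python
def word_recognition_score(num_correct, num_presented):
    """WRS: percentage of the presented words that were repeated correctly."""
    if not 0 <= num_correct <= num_presented:
        raise ValueError("correct count must be between 0 and the number presented")
    return 100.0 * num_correct / num_presented

print(word_recognition_score(23, 25))  # half-list of 25 words -> 92.0
```

Note that with a 25-word half-list each word carries 4 percentage points, so half-lists trade some score resolution for the efficiency reported by DeBow and Green (2000).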
When the CID and NU-6 word lists are administered to non-native English speakers, criteria (1) and (3) for good accuracy and validity become questionable; test scores may be confounded by English competency and thus cannot be interpreted without accounting for language factors. For testing WRS in noise, a wide range of standardized testing methods and CD-recorded test materials are available in English, such as the Speech-Perception-In-Noise test (SPIN), developed by Kalikow, Stevens, and Elliott in 1977, and the Hearing-In-Noise Test (HINT), developed by Nilsson, Soli, and Sullivan in 1994 (cited in Mendel & Danhauer, 1997, p. 73). Standardized materials in Cantonese, however, were not available, and since the purpose of the present study was to compare performance on English stimuli against Cantonese stimuli, word recognition in noise testing was performed using cafeteria noise recorded and calibrated by Auditec of St. Louis. By using the same noise stimuli, English and Cantonese speech-in-noise testing could be administered under the same conditions.

1.2 Issues with Non-Native English Speakers

The problem of using speech audiometry on non-native English speakers is particularly evident in the increasingly multicultural society of North America. The 2001 Census indicated that 32% of Canadians did not speak English at home (Statistics Canada, 2001). The 2001 U.S. Census indicated that 14% of U.S. residents did not speak English (Ramkissoon & Khan, 2002). Since the majority of audiologists in North America were English speakers (Ramkissoon & Khan, 2002), the multicultural clientele posed a challenge for the monolingual English-speaking audiologist in terms of testing, management, counselling, and general communication and rapport-building with clients.
In a 2003 ASHA newsletter, Beverly-Ducker reported a need to prepare for and respond to the increasing racial, ethnic, and linguistic diversity of current and projected caseloads; a need to develop culturally and linguistically sensitive assessment tools; and a need to conduct research on the reliability of speech audiometric test results with non-native English speakers (Beverly-Ducker, 2003). The present study attempted to address this last need.

1.2.1 Speech Perception in a Non-Native Language

In addition to the normal sound encoding and decoding mechanisms involved in speech perception in a native language (L1), speech perception in a second language (L2) depends on the degree of familiarity with the L2 words and the degree of semantic and phonetic similarity to words in L1 (Meador, Flege, & Mackay, 2000). Meador et al. (2000) examined how age of arrival in Canada and the amount of continued L1 use affected perception of L2 sentences in noise; they found that "early" Italian bilinguals - those who arrived in Canada before age 7 - who seldom used Italian after they arrived obtained significantly higher word recognition scores in noise than "late" bilinguals - those who arrived after age 14 - who used Italian relatively often. They also found that cross-language phonetic differences between L1 and L2 contributed to word recognition in L2, independent of age of acquisition, L1 use, and length of residence in Canada. For example, as some word-initial English consonants (/z/, /h/) do not occur in word-initial position in Italian, Italian listeners were found to err more on English words with these initial consonants. In addition to the amount of L1 and L2 usage and the phonetic and semantic proximity between L1 and L2, a myriad of background factors, such as age of L2 acquisition, length of residence in an L2 environment, and years of formal L2 instruction, have been shown to correlate significantly with speech perception in L2.
Mayo, Florentine, and Buus (1997) investigated the relation between age of L2 acquisition and perception of L2 speech in noise. The Speech-Perception-In-Noise test (SPIN) in English was administered; early bilingual Mexican-Spanish-speaking listeners who acquired fluent English before age 6 (fluency was assessed by an interview with the bilingual first author of the study) were found to be able to correctly repeat more words at a higher noise level and to benefit more from context than late bilinguals who acquired fluent English after age 14. This study demonstrated a significant relationship between age of L2 acquisition and L2 speech perception in noise. Furthermore, a recent study examined the cross-language lexical priming effect on early and late Spanish learners of English (Silverberg & Samuel, 2004). Despite controlled levels of English proficiency, only early learners of English benefited from lexical priming in Spanish. That is, the Spanish speakers who began learning English before age 7 demonstrated better word recognition in English when words matched in meaning in Spanish were presented before the English targets. Those who began learning English at or after age 7 did not show any benefit from L1 priming. From these results, the authors hypothesized that for early bilinguals, L1 and L2 share the same conceptual level, while for late bilinguals, L1 and L2 have different conceptual systems. Based on the findings from these past studies, the present study examined the possible effects of such language factors as age of acquisition, length of residence, L2 usage, and the participants' first language on the speech audiometry outcomes of adult learners of English as an L2.
1.2.2 Clinical Issues

Although most clinicians were aware of the limitations of English speech tests for non-English-speaking clients, they still used them because of their availability, longevity, research support, and, most importantly, compatibility with their own language (Ramkissoon & Khan, 2002). Those who used alternatives to speech audiometry used subsets of the English standardized word lists; however, research showed that using a smaller list than the standardized list resulted in better SRTs due to familiarization with the test stimuli, thus sacrificing test validity (Ramkissoon, Proctor, Lansing, & Bilger, 2002). Another alternative was to administer speech audiometry in the client's language. The problems with this approach were that bilingual audiologists were few and the validity of these tests in other languages was questionable (von Hapsburg & Pena, 2002). Research on speech test normative data often did not specify the language of participants and did not take their linguistic background into account in the interpretation of results (Stoppenbach et al., 1999; Heckendorf, Wiley, & Wilson, 1997). Among audiology researchers and clinicians, there was a general lack of understanding of how linguistic variability might affect performance on speech audiometry tests (von Hapsburg & Pena, 2002).

1.2.3 Speech Audiometry on Non-Native English Speakers

Some researchers have attempted to address the issue of speech audiometry on non-native English speakers. Ramkissoon et al. (2002) compared the SRTs of native English speakers and non-native speakers using digit pairs versus standardized bisyllabic words (spondees) as stimuli.
They assessed the accuracy of the two stimulus types by comparing the SRT obtained to the pure tone average (PTA), and found that, compared to bisyllabic words, digit pairs more accurately measured the hearing threshold for speech for non-native English speakers. In their conclusion, the authors addressed the limitation of not being able to use digit pairs with new learners of English, who might not know even basic English. The present study attempted to replicate the Ramkissoon et al. (2002) study in order to relate the linguistic background of non-native English speakers to their SRT accuracy using digit pairs versus CID spondees.

Other studies that addressed the validity of clinical procedures provided by monolingual audiologists to multilingual clients involved speech-in-noise testing. The performance of non-native English speakers on speech discrimination in noise tests was shown to be poorer than that of native English speakers, and the difference in performance was shown to be larger in older bilinguals. Poorer word recognition performance in noise compared to native speakers was shown in Spanish, French, Hebrew, and Japanese speakers (von Hapsburg & Pena, 2002). These past research studies confirmed the significant effect of noise on speech discrimination for bilingual speakers. However, participants' speech discrimination ability in noise in their native language was not considered in either the analysis of results or participant recruitment. According to the holistic view of bilingualism, a bilingual's two languages interact to form a unique linguistic system (Grosjean, 1997, cited in von Hapsburg & Pena, 2002, p. 205). Therefore, in bilinguals the two languages are constantly engaged in speech perception, and perception in either language should be affected by the combined linguistic system. The present study involved a pioneering investigation of speech perception in noise in the native language in Cantonese-English bilinguals.
Non-native English speakers perform with varying degrees of success on English measures, and the language background factors that account for their individual differences in speech audiometry performance have not been adequately studied. The present study attempted to address these issues by relating the language background of non-native English-speaking participants to their word recognition performance in English and in their native language.

Aside from language proficiency, language preference has also been shown to contribute to speech test performance in English. Native Spanish speakers who preferred using English performed below normal standards on the Staggered Spondaic Word test (SSW) administered in Spanish, but within the normal range in English (von Hapsburg & Pena, 2002). Thus, the present study included the language preference of participants in the analysis of results. In addition, the language status, language history, stability, and demand for use of participants were also investigated. These four factors have all been shown to have direct or indirect implications for non-native participants' knowledge of English (von Hapsburg & Pena, 2002).

1.3 Cantonese Speech Characteristics

Originating from two distant language families, Indo-European and Sino-Tibetan respectively, English and Cantonese exhibit many linguistic differences. The most common mistakes Cantonese speakers make in English speech stem from these differences, as exhibited in final consonant omission, consonant deletion in consonant clusters, omission of inflectional morphemes, and failure to detect syllabic stress (Cheng, 2002).
The main differences between the linguistic features of Cantonese and English are as follows: 1) each Cantonese word represents a single syllable, while many English words are multi-syllabic; 2) only seven consonants appear in word-final position in Cantonese, while 24 consonants occur word-finally in English; 3) there are no consonant clusters in Cantonese, while they are common in English; and 4) Cantonese has no grammatical markers such as the plural /s/, past tense /ed/, or articles (Cheng, 2002), while English depends extensively on grammatical markers to signify meaning. Knowledge of these characteristics of Cantonese may help speech and hearing clinicians differentiate between real linguistic errors and accent differences in Cantonese speakers' production of English.

Differences in sentence processing strategies between English and Chinese may also contribute to the test performance of Chinese speakers on hearing tests that use sentences as stimuli, such as the commonly used Speech-Perception-In-Noise test (SPIN). In the SPIN, the listener is instructed to repeat the final word of each of a series of sentences. For some of these sentences (high context), the sentence-final word can be predicted from the context of the rest of the sentence; for others (low context), the final word cannot be easily predicted. For the high context sentences in the SPIN, sentence processing strategies may contribute to test performance. Liu, Bates, and Li (1992) investigated patterns of transfer in the sentence processing strategies displayed by early and late English-Chinese and Chinese-English bilinguals. They found that Chinese speakers tended to use animacy-based strategies - interpreting based on sentence meaning - while English speakers tended to use word order strategies.
For example, when asked to identify the subject in the sentence "Are kissing the cows the rock", most English speakers chose the rock as the subject, following the rule that a noun after a verb is an object. Most Chinese speakers, however, chose the cows as the subject, adhering to the fact that cows can kiss but rocks cannot. The researchers also found that late bilinguals tended to exhibit forward transfer, transferring processing strategies from the earlier-acquired language to the later-acquired language, while early bilinguals exhibited a variety of transfer patterns. Liu et al.'s (1992) study showed that language processing involves the complex interaction of various language background factors, among which are age of exposure to the second language and daily language use. These two factors were among the language background factors included in the present study.

1.4 The Current Project

The present study aimed to expand upon previous research on issues relating to speech audiometry and non-native English speakers by establishing the most appropriate auditory stimuli for accurate measurement of SRT and WRS for this group, and by incorporating the language background of the non-native English-speaking participants to explain any observed differences in their speech audiometry results. The first part of the study replicated Ramkissoon et al.'s (2002) study, the goal of which was to determine whether digit pairs or spondees are more accurate indicators of hearing thresholds for non-native English speakers. The second part pertained only to the Cantonese-speaking participants and aimed to determine which speech audiometry stimuli (i.e., English spondees, English digits, and Cantonese spondees) lead to the most accurate measure of hearing sensitivity in this group. The third part investigated the Cantonese speakers' word recognition performance in quiet and noise using Cantonese versus English stimuli.
Lastly, the language background factors that correlated with non-native English speakers' varying levels of performance on SRT and WRS were investigated. The present study investigated three research questions: 1) Is there a test stimulus effect on performance in speech audiometry for non-native English speakers? That is, is performance on the same test better when certain test stimuli are used than others? 2) If a test stimulus effect were found to be present, does this stimulus effect exist only for some non-native English speakers and not for others? 3) What are the language background factors that determine the extent to which test performance will differ depending on the use of different test stimuli? It was hoped that through an empirical understanding of the language background factors of non-native English speakers that could contribute to their performance in speech audiometry administered in English, the sensitivity and validity of speech audiometry on non-native English speakers could be increased. I also hoped that future research on speech perception of bilinguals could benefit from the present findings and be able to more effectively incorporate the language background of participants into the interpretation of results.

2 Method

2.1 Participants

Participants were recruited by word of mouth, by email, and by advertisements posted around the campus of the University of British Columbia and at various libraries and community centres across the Vancouver Lower Mainland. Forty-five non-native English speakers participated. Among them, 30 spoke Cantonese as their first language. The first languages of the remaining 15 participants were Tagalog, Japanese, German, Bulgarian, Punjabi, Mandarin Chinese, and French. Their ages ranged from 19 to 69, with a mean age of 48 for the Cantonese-speaking group and 37 for the group who spoke other languages.
Gender was evenly divided among the Cantonese speakers (15 males; 15 females) and nearly so among the speakers of other languages (7 males; 8 females). Twenty-nine of the participants had normal hearing; 8 had hearing losses in the high frequencies (2000-4000 Hz) in one or both ears; 1 had a hearing loss in the low frequencies (250-1500 Hz); and 7 had hearing losses in both low and high frequencies. As all participants were adults, "normal hearing" was defined as pure tone audiometric thresholds of 25 dBHL or lower from 250 to 4000 Hz.

2.2 Stimuli

Stimuli for the English SRT were 18 randomly chosen spondees from the CID W-1 word list (Appendix 1). Stimuli for the digit-SRT test were compiled in the same way as in the study conducted by Ramkissoon et al. (2002). That is, two individual numbers from "1" to "9", excluding "7", were paired. The number "7" was excluded because it has two syllables; paired with another digit, it would produce digit pairs with more than two syllables, violating the restriction that all spondee stimuli must have two syllables. The resulting pairing produced 56 digit pairs, none containing a repeated number. To match the number of stimuli for the CID spondees, 18 pairs were randomly selected from these 56 pairs for use in the present study (Appendix 2). Stimuli for the WRS were the 50 words of NU-6 List 3A (Appendix 3). This list, one of the six lists developed by Tillman and Carhart in 1966, consisted of 50 phonemically balanced monosyllabic words. That is, the phonetic composition of the words in each list was intended to resemble that of a sample of 100,000 words in newsprint (Garstecki, 1980). The NU-6 was adapted from the initial phonemically balanced (PB) word list developed by the Harvard University Psychoacoustic Laboratory (PAL) during World War II to assess speech communication in wartime (Stockdell, 1980).
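For readers who wish to verify the pairing arithmetic, the construction described above can be sketched as follows. This is illustrative code only, not the materials used in the study; the function names are my own.

```python
from itertools import permutations
import random

# Digits "1" to "9" excluding the two-syllable "7" leave 8 usable digits.
DIGITS = [d for d in range(1, 10) if d != 7]

def build_digit_pairs():
    """All ordered pairs of distinct digits: 8 * 7 = 56 two-syllable items."""
    return [(a, b) for a, b in permutations(DIGITS, 2)]

def sample_stimuli(pairs, n=18, seed=1):
    """Randomly draw n pairs to match the 18 CID W-1 spondees."""
    return random.Random(seed).sample(pairs, n)

pairs = build_digit_pairs()      # 56 pairs, none with a repeated digit
stimuli = sample_stimuli(pairs)  # 18 test items
```

Ordered pairs of 8 distinct digits yield 8 x 7 = 56 items, matching the count reported above.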
For this study, two 25-word lists - the first and last 25 words of NU-6 List 3A - were randomized during testing. Half-lists (25 words), instead of full lists of 50 words, were used because 1) research has shown that the reliability of half-lists correlates highly with that of full lists (Grubbs, 1963, cited in Stockdell, 1980, p. 104) and 2) a survey conducted in 2000 indicated that this was the list of choice of most Canadian audiologists (Debow & Green, 2000). Therefore, half-lists were used to increase the efficiency and clinical applicability of this study. Stimuli for the Cantonese SRT and WRS were the Cantonese spondees and monosyllabic words in the list attached in Appendix 4. Since standardized Cantonese word lists were not available at the time of the present study, this word list of unknown source was used. It was discussed with three native Cantonese speakers with backgrounds in linguistics, and was deemed to adequately represent all Cantonese speech sounds in a phonemically balanced manner. All stimuli were spoken by a Cantonese-English bilingual female speaker and were digitally recorded with computer software with an intensity monitor via a visual absolute decibel scale. To one-third of the participants (N=15), stimuli were presented via monitored live voice, and to the rest (N=30) via recorded speech. Independent-samples t-tests were conducted to investigate any significant differences in the dependent variables - PTA, CID-SRT, Digit-SRT, Cantonese-SRT, and English and Cantonese WRS in quiet and noise - between the group of participants who received stimuli presented through monitored live voice and the group who received stimuli presented through CD recording.
No significant differences between the two groups at the .05 level were found in any dependent variable; therefore, stimulus presentation mode had no effect on the variability of the participants' performance, and results from the two groups were combined for the rest of the analyses. Since a goal of the present study was to relate language background to performance in speech audiometry for non-native English speakers, a language background profile of each participant was compiled into an interview script (Appendix 5) based on the recommendations made by von Hapsburg and Pena (2002). Von Hapsburg and Pena (2002) recommended that research on non-native speech perception should account for the following language background factors: language preference, language status, language history, language stability, and demand for use. Questions on all these factors were included in the interview script.

2.3 Procedure

Participants' language background was first investigated using an interview format, following the questions in the interview script attached (Appendix 5). A typical audiologic case history was then taken (Appendix 6). Pure tone audiometry was then conducted based on ASHA's guidelines (ASHA, 1978) to obtain bilateral pure-tone air conduction thresholds at 500, 1000, 2000, and 4000 Hz for each participant. The PTA was calculated for later comparison to the SRT. Speech recognition threshold testing was conducted following the standard guidelines published by ASHA in 1979, starting with familiarization, instructions, and an orientation-attending phrase (ASHA, 1979). The three stimulus sets used - CID W-1 spondees, English digit pairs, and Cantonese spondees (for Cantonese speakers only) - were randomized in the order presented to participants. Word recognition testing was conducted by presenting 25 words from NU-6 List 3A to both ears at a suprathreshold level, i.e.
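The PTA referred to throughout is the conventional three-frequency average of the pure tone thresholds. A minimal sketch of that calculation (my own illustrative code, not part of the study protocol; the audiogram values are hypothetical):

```python
def pure_tone_average(thresholds_dbhl):
    """Three-frequency PTA: mean of pure tone thresholds (dBHL) at
    500, 1000, and 2000 Hz, the values later compared against SRT."""
    return sum(thresholds_dbhl[f] for f in (500, 1000, 2000)) / 3.0

# Hypothetical audiogram for one ear (frequency in Hz -> threshold in dBHL)
ear = {500: 10, 1000: 15, 2000: 20, 4000: 30}
pta = pure_tone_average(ear)  # (10 + 15 + 20) / 3 = 15.0 dBHL
```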
the higher of 45 dBHL or the MCL (Most Comfortable Level, for use with hearing-impaired participants with high pure tone thresholds). Participants were instructed to repeat aloud the words heard, and the percentage of correctly repeated words was recorded. Word recognition in noise testing was performed binaurally via two CD players (JVC Model XL-PG5 and Sony Model SCD-XE670), with the recorded NU-6 words routed through one CD player and the Auditec-calibrated cafeteria noise routed through the second player. Twenty-five NU-6 List 3A words along with background cafeteria noise were presented to both ears via supra-aural earphones (TDH 50-P, Telephonics). The two half-lists were randomized in the order of presentation. The words were presented at the same intensity as when testing in quiet, while the noise was presented at an intensity 5 dB lower than the presentation level of the words. That is, a signal-to-noise ratio of +5 dB was obtained. For the Cantonese-speaking participants, in addition to word recognition in English, word recognition in Cantonese was tested using the same procedure as in English. A 25-word list was presented at the same intensity as described above, in quiet and in noise. The noise used was the same track of cafeteria noise as used in English testing, presented at a signal-to-noise ratio of +5 dB. Two recorded Cantonese word lists were used and their order of presentation was randomized among participants. The order of English versus Cantonese testing was also randomized. Except for the language background interview, all parts of testing were performed using a diagnostic audiometer (Grason-Stadler GSI 61) in a sound-treated booth meeting ambient noise requirements (ANSI, 1979). At the end of testing, each participant was briefed on their test results and the possible implications for the findings of the present study.
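The scoring and presentation-level arithmetic just described can be sketched in a few lines. This is an illustration of the stated rules only (25-word half-lists, noise 5 dB below the speech level); the function names are my own.

```python
def wrs_percent(n_correct, list_len=25):
    """Half-list word recognition score: each of 25 words is worth 4%."""
    return 100.0 * n_correct / list_len

def noise_level(speech_level_dbhl, snr_db=5):
    """Cafeteria noise is presented snr_db below the speech level,
    giving a signal-to-noise ratio of +snr_db dB."""
    return speech_level_dbhl - snr_db

score = wrs_percent(21)   # 21/25 words correct -> 84.0%
noise = noise_level(45)   # words at 45 dBHL -> noise at 40 dBHL
```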
3 Results

First, the study aimed to replicate Ramkissoon et al.'s (2002) finding that non-native English speakers achieved more accurate SRTs when stimuli were digit pairs compared with CID W-1 spondee words. The accuracy of the SRT was assessed by comparison to the PTA. In addition, for the Cantonese-speaking participants, the SRT obtained using Cantonese spondees was compared against that obtained using CID W-1 spondees and digit pairs. Second, this study aimed to investigate Cantonese speakers' performance in word recognition in quiet and in noise in English versus Cantonese. Finally, the study aimed to investigate the language background factors that contributed to the participants' varying levels of performance in English speech audiometry. The effects of hearing loss on SRT and WRS were also analyzed, as hearing sensitivity was one of the background variables. To address the first and second research objectives, paired t-tests were performed with the different test stimuli and conditions as the within-subject factor. The paired t-tests analyzed for any significant difference in the mean SRT and mean WRS among the different stimulus materials. The third research goal was addressed with Pearson and Spearman correlational analyses, which examined the relationship between the various language factors and the significant findings in SRT and WRS. Partial correlational analyses were also performed to control for the effect of hearing loss.

3.1 Performance in SRT

SRT was obtained from both ears of each participant, resulting in 90 individual SRT values for each test stimulus. Any difference between the left and right ears was investigated using an independent-samples t-test (equal variances assumed); results indicated no significant differences between the left and right ears for any of the dependent variables - PTA, CID-SRT, Digit-SRT, and Cant-SRT (p>.05).
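The within-subject comparisons reported in this chapter all rest on the paired t statistic, t = mean(d) / (sd(d) / sqrt(n)) computed over the pairwise differences d. A minimal sketch with made-up SRT values (not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched samples."""
    d = [a - b for a, b in zip(x, y)]  # pairwise differences
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))
    return t, n - 1

# Hypothetical SRTs (dBHL) for ten ears, measured with both stimulus sets
cid_srt = [15, 10, 20, 5, 25, 10, 15, 30, 10, 20]
dig_srt = [10, 10, 15, 5, 20, 10, 10, 25, 10, 15]
t, df = paired_t(cid_srt, dig_srt)  # positive t: CID-SRT higher (poorer)
```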
However, since 5-dB increments were used during SRT testing, a statistically significant difference might have been overlooked. In general, this limitation applies to all analyses of SRT findings. Any significant differences within 5 dB of one another may not generalize to the clinic, while the absence of a significant difference might have been due to the 5-dB increments used during SRT testing. This limitation will be further discussed in the last chapter. Nevertheless, the present findings on SRT revealed several important trends. As shown in Figure 3.1, thresholds obtained from the right ear were consistently lower (better) than those obtained from the left ear, especially so for the tests with speech stimuli. This interesting supplemental finding, though not statistically significant, nevertheless lends support to the well-documented right ear advantage in speech perception. Due to left cerebral lateralization of language processing and stronger contralateral transmission of sensory information from the periphery, speech stimuli have been shown to be more efficiently processed when presented to the right ear than the left (Mildner, Stankovic, & Petkovic, 2004). Despite past speculations that the right ear advantage is less prominent during speech perception in a non-native language, due to presumably more diffuse cerebral representation of the foreign language (Mildner et al., 2004), the present results indeed showed a right ear advantage across all speech stimuli (in the L1 and L2). However, differences between ears were not significant; accordingly, results from all 90 ears, left and right combined, were used in all SRT analyses.
The independent-samples t-test (equal variances not assumed) was also performed to investigate any differences in SRT performance between the Cantonese-speaking group and the non-Cantonese-speaking group; results indicated no significant difference in the mean SRT obtained using English words, t(88)=-0.8, p>.05, or that obtained using digit pairs, t(88)=-1.2, p>.05. These results suggested that participants' first language (Cantonese versus other languages) did not play a significant role in SRT performance, and results from the Cantonese-speaking participants and those who spoke other languages were combined in all SRT analyses.

Figure 3.1 PTA and SRT obtained from right and left ears [bar chart of PTA, CID-SRT, Digit-SRT, and Cant-SRT by ear; graphic not reproduced]

3.1.1 Comparison between CID-SRT and Digit-SRT

To investigate any significant differences in SRT obtained using CID spondees versus SRT obtained using digit pairs, a paired t-test was performed for CID-SRT and Digit-SRT. The dependent variables were the SRTs obtained using CID spondees and digit pairs. Analyses were first performed for the group as a whole; then independent analyses were performed for the normal hearing versus hearing-impaired groups (Section 3.1.3). Mean SRT values, compared to PTA, are presented in Table 3.1. Paired t-tests revealed that subjects had a significantly lower mean SRT when the stimuli were digit pairs than when they were English spondees, t(89)=4.7, p<.0001. That is, measured hearing sensitivity for speech was better when the speech stimuli were English digits than when they were English words. When compared to pure tone averages, paired t-tests revealed a significant difference between PTA and CID-SRT, t(89)=-3.0, p<.005. No significant difference was found between PTA and Digit-SRT, t(89)=1.1, p>.05.
Correlational analyses were subsequently performed, and results confirmed a higher correlation between Digit-SRT and PTA (r=.95, p<.001) than between CID-SRT and PTA (r=.94, p<.001). Therefore, digit pairs were found to be a more accurate measure of hearing sensitivity than CID W-1 spondees.

Table 3.1 Mean PTA and SRT (in dBHL) across stimuli for all participants (N=90)
          Mean   Median   SD     Range
PTA       11.5   5.0      16.1   -5 to 90
CID-SRT   13.2   10.0     16.8   -5 to 85
Dig-SRT   10.9   5.0      15.4   -5 to 85

3.1.2 Comparison among Cant-SRT, Digit-SRT, and CID-SRT

Table 3.2 and Figure 3.2 show the mean SRT values and PTA obtained by the Cantonese-speaking participants. The difference between Cant-SRT and CID-SRT was found to be significant, t(59)=5.9, p<.001, as was that between Cant-SRT and Digit-SRT, t(59)=2.9, p<.01. The Cantonese-speaking participants obtained a significantly lower mean SRT, or better measured hearing, when stimuli were Cantonese spondees compared to English spondees, t(59)=5.9, p<.001, and English digit pairs, t(59)=2.9, p<.01. These significant differences confirmed a stimulus effect on performance in speech recognition threshold testing among our participants; that is, SRT values differed depending on the test stimuli used.

Table 3.2 Mean PTA and SRT (in dBHL) across stimuli for Cantonese speakers (N=60)
          Mean   Median   SD     Range
PTA       13.5   8.0      17.0   -5 to 90
CID-SRT   14.2   10.0     17.0   -5 to 85
Dig-SRT   12.3   5.0      16.5   0 to 85
Cant-SRT  10.7   5.0      16.5   -5 to 85

Figure 3.2 Effect of SRT stimuli on SRT, compared to PTA, for Cantonese-speaking participants (N=60) [bar chart of mean PTA, CID-SRT, Dig-SRT, and Cant-SRT; graphic not reproduced]

When compared to the mean pure tone average (M=13.5), a significant difference was found between PTA and Cant-SRT, t(59)=4.1, p<.001, and between PTA and Digit-SRT, t(59)=2.0, p<.05; while no significant difference was found between PTA and CID-SRT, t(59)=-1.0, p>.05.
Correlational analyses supported the paired t-test findings. Correlation was highest between PTA and CID-SRT (r=.96, p<.001), followed by PTA and Digit-SRT (r=.954, p<.001), and lastly between PTA and Cant-SRT (r=.95, p<.001). Therefore, SRT obtained using CID spondees or digit pairs as stimuli was found to be a more accurate measure of hearing sensitivity than SRT obtained using Cantonese stimuli for the Cantonese-speaking participants. This stimulus effect on the accuracy of SRT was found for all participants, Cantonese or non-Cantonese. In general, for all participants, digit pairs resulted in a more accurate measure of hearing sensitivity than English spondees. For the Cantonese-speaking participants only, both digit pairs and CID W-1 spondees resulted in more accurate measures of hearing sensitivity than Cantonese stimuli.

3.1.3 Effect of Hearing Loss on SRT

Within the group as a whole (Cantonese and non-Cantonese), those with normal hearing (pure tone thresholds lower than 25 dBHL between 500 and 4000 Hz) were separated from those with a hearing loss (a pure tone threshold of 25 dBHL or greater at any frequency tested). Paired t-tests were performed for each group to investigate any significant differences between CID-SRT and Digit-SRT as well as between CID-SRT and Cant-SRT. The mean SRT values for each group are presented in Table 3.3. Both the normal hearing and hearing-impaired groups demonstrated significantly lower SRTs when stimuli were Cantonese spondees than when they were CID W-1 spondees (normal hearing: MCID=5.8, MCant=3.3, t(35)=3.3, p<0.005; hearing-impaired: MCID=27.4, MCant=22.2, t(22)=5.7, p<0.001). When stimuli were changed from CID spondees to digit pairs, however, only the hearing-impaired group demonstrated a significant difference in the SRT obtained (MCID=27.4, MDig=23.5, t(22)=4.7, p<0.001). The normal hearing group did not show a significant difference between CID-SRT and Digit-SRT, p>0.05.
This finding indicated that the presence of hearing loss increased the stimulus effect on SRT; the hearing-impaired participants demonstrated significant differences in the SRT obtained when stimuli were changed from words to digits as well as from English to Cantonese, whereas the normal hearing participants demonstrated a significant difference only when stimuli were changed from English to Cantonese. This finding also indicated that the effect on SRT of changing the stimuli from words to digits was less robust than that of changing the stimuli from one language to another (non-native to native, or vice versa), as the former effect applied only to the hearing-impaired participants, while the latter applied to both normal hearing and hearing-impaired participants.

Table 3.3 Mean SRT (in dBHL) across stimuli for normal hearing ears (N=67)² and hearing-impaired ears (N=23)
                  CID-SRT   Digit-SRT   Cant-SRT
Normal hearing    5.8       5.1         3.3
Hearing-impaired  27.4      23.5        22.2
² For Cant-SRT, N=42 for normal hearing and N=18 for hearing-impaired.

Table 3.4 Descriptive statistics on language background factors of participants (N=45)
                       Mean   Median   Std. Dev.   Range
PTA (dBHL)             11.5   8.00     16.1        -5 to 90
LoR (yrs)              12.3   11.5     8.1         1 to 46
Age1stExpo (yrs old)   16.1   8.1      5.4         2 to 31
YrsInstr (yrs)         10.4   10.5     4.7         2 to 22
Age (yrs old)          44.4   54.5     15.1        19 to 69

3.1.4 Effect of Language Background Factors on SRT

The relationship between the language background factors of participants and performance in SRT was investigated through Pearson correlational analysis for the continuous language background variables and through Spearman correlational analysis for the categorical variables. Results for the group as a whole, before controlling for hearing loss, will first be presented. After that, results from partial correlational analyses with hearing controlled will be presented.
Continuous language background variables included age (Age), length of residence (LoR), age of first exposure (Age1stExpo), years of formal instruction (YrsInstr), frequency of English television viewing (TV), and whether the English-language internet is used at all (EngInternet). Categorical language and background variables included birthplace (Birthplace), first language (FirstLang), preferred language for communication (PreferredLang), language currently used at home (Home), and daily language use (Usage). The means and ranges of the continuous language and background factors are presented in Table 3.4. Results from the correlational analyses are presented in Table 3.5. Age and EngInternet were significantly correlated with SRT obtained using all three stimuli (CID spondees, digit pairs, and Cantonese spondees). LoR and Usage were significantly correlated with both CID-SRT and Digit-SRT. The rest of the language background factors either showed no significant correlations with the outcomes or were correlated only with SRT obtained using one type of stimulus. As the measured hearing threshold for speech is directly related to hearing sensitivity, partial correlational analyses were performed to control for the effect of hearing loss (as defined by PTA) on the relation between the language background factors and SRT. Results revealed that after controlling for hearing loss, none of the language and background factors was significantly correlated with SRT (p>.05 for all predictors). Post-hoc correlational analyses were performed to investigate the relationship between PTA and the language background variables that were found to significantly correlate with SRT - LoR, Age, EngInternet, and Usage.
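A first-order partial correlation of this kind can be computed directly from the zero-order coefficients. The sketch below is my own illustration of the formula, not the study's statistical output, and the example coefficients are hypothetical:

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical example: a predictor correlates with SRT (r=.40) largely
# through its correlation with PTA (r=.45), while PTA correlates with
# SRT at r=.94; controlling for PTA shrinks the correlation toward zero.
r = partial_r(0.40, 0.45, 0.94)
```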
As Table 3.6 reveals, all were significantly correlated with hearing sensitivity (PTA), thus supporting the finding that when hearing sensitivity was controlled, no single language background factor was significantly correlated with SRT obtained using any stimuli. That is, the significant correlations in Table 3.5 were in fact correlations between hearing sensitivity and SRT, not between any language or background variables and SRT. To summarize the findings on the effect of language and background factors on SRT: none of the language background factors showed a significant correlation with SRT after hearing (PTA) was controlled. The language background variables - LoR, Age, EngInternet, and Usage - were significantly correlated with SRT because of their correlation with hearing sensitivity, which, as expected, significantly correlated with SRT obtained using any stimuli (Table 3.6).

Table 3.5 Correlation between language factors and SRT (before controlling for the effect of hearing loss)
              CID-SRT   Digit-SRT   Cant-SRT
Age           .636**    .612**      .545**
LoR           .366**    .276**      .053
Age1stExpo    .268**    .174        .198
TV            -.184     -.217*      -.224
EngInternet   -.381**   -.380**
FirstLang     -.178     -.221*      n/a³
Usage         .290**    .338**      -.092
Home          .050      .149        -.257*
YrsInstr      -.204     -.102       -.151
Birthplace    -.139     -.195       -.021
PrefLang      .184      .186        -.147
** Correlation is significant at the 0.01 level (2-tailed).
* Correlation is significant at the 0.05 level (2-tailed).
³ FirstLang is not applicable for Cant-SRT because all participants tested with Cantonese spondees had Cantonese as their first language.

Table 3.6 Correlation between hearing (PTA) and SRT and language predictors
CID-SRT    .943**        LoR           .341**
Digit-SRT  .950**        Age           .658**
Cant-SRT   .947**        EngInternet   -.455**
                         Usage         .351**
** Correlation is significant at the .01 level (2-tailed).
Table 3.7 Mean WRS across stimuli and conditions (N=30)
            Mean   Median   SD     Range
EngQuiet    90.7   94.0     10.8   60.0-100.0
CantQuiet   98.1   100.0    4.4    80.0-100.0
EngNoise    81.2   84.0     14.4   40.0-100.0
CantNoise   94.8   96.0     5.9    80.0-100.0

Figure 3.3 Mean WRS across stimuli and conditions (N=30) [chart of English and Cantonese WRS in quiet and noise; graphic not reproduced]

Table 3.8 Paired t-test results of WRS across stimuli and testing conditions
                        t        df   Sig. (2-tailed)
EngQuiet - CantQuiet    -4.6**   29   .000
EngNoise - CantNoise    -6.2**   29   .000
EngQuiet - EngNoise     5.9**    29   .000
CantQuiet - CantNoise   3.8**    29   .001

3.2 Performance in WRS

3.2.1 WRS across languages and conditions

WRS testing was conducted binaurally on all participants, but only the Cantonese speakers were tested with the Cantonese stimuli. As the goal of the study was to compare performance for stimuli in the native language against those in the non-native language, only results from the 30 Cantonese-speaking participants were analyzed. Table 3.7 and Figure 3.3 show the mean values of the Cantonese-speaking participants' performance in WRS in quiet (Q) and noise (N), in English (Eng) and in Cantonese (Cant). Paired t-tests were performed to investigate significant differences in WRS within language (EngQ vs EngN, CantQ vs CantN) and within condition (EngQ vs CantQ, EngN vs CantN); results are presented in Table 3.8. For all participants, significant differences in WRS existed both across languages (English versus Cantonese) and across test conditions (quiet versus noise). Performance was significantly better when stimuli were in Cantonese than in English for both quiet and noise conditions. Significantly better performance was also demonstrated when testing was administered in quiet, as opposed to in noise, for both Cantonese and English stimuli.
3.2.2 Effect of Hearing Loss on WRS

The effect of hearing loss was also investigated by comparing the group with normal hearing to the group with hearing impairment. An independent-samples t-test (equal variances not assumed) was performed to account for the different sample sizes; significant differences in WRS performance between normal hearing and hearing-impaired participants were present only when stimuli were in English, for both quiet and noise conditions (quiet: t(11.3)=-2.6, p<.05; noise: t(13.1)=-3.3, p<.005), and were absent in both Cantonese conditions, p>.05. Therefore, word recognition at a suprathreshold level was affected by hearing sensitivity only when stimuli were in English. When stimuli were in the participants' native language (Cantonese), the presence of hearing loss had no significant effect on suprathreshold word recognition. These results, once again, confirmed a stronger stimulus effect on auditory speech discrimination in the non-native language for non-native speakers of English. Results on WRS are summarized in Figure 3.4.

3.2.3 Effect of Language and Background Factors on WRS

Pearson and Spearman correlational analyses were performed to investigate the relationship between the language background factors of participants and their performance in the four WRS conditions (English quiet, English noise, Cantonese quiet, Cantonese noise). Correlation coefficients are presented in Table 3.9. Age, Birthplace, and Age1stExpo showed significant correlations consistently across stimuli, except for one or two test conditions. Older participants performed less well in WRS in all conditions except Cantonese in noise. Compared to those born in Hong Kong, those born in Mainland China performed less well in all word recognition measures, though less significantly in CantQ.
Compared to those first exposed to English at a later age, those exposed at an earlier age obtained significantly higher WRS when tested in both quiet and noise in English only, but not in Cantonese.

Figure 3.4 WRS performance across language and conditions for normal hearing and hearing-impaired participants [graphic not reproduced]

Partial correlational analyses were performed to control for the effect of hearing sensitivity; results suggested that after controlling for hearing (PTA), Age was significantly correlated only with EngN, r(24)=.48, p<.05, and not with the other testing conditions. Therefore, older listeners performed more poorly in speech discrimination in noise in a second language, and this effect could not be fully explained by the decrease in hearing sensitivity that accompanies aging. Birthplace remained significantly correlated with EngQ: r(27)=-.57, p<.01, EngN: r(27)=-.49, p<.01, and CantN: r(27)=-.53, p<.01, and became significantly correlated with CantQ as well, r(27)=-.38, p<.05. Those born in Hong Kong performed significantly better in speech discrimination than those born elsewhere in China. For Age1stExpo, correlations became non-significant with all WRS conditions (p>.05) after controlling for hearing.

Table 3.9 Correlation between language factors and WRS
              EngQ     CantQ    EngN     CantN
Age           -.462*   -.456*   -.660**  -.251
Age1stExpo    -.379*   -.199    -.530**  -.280
Birthplace    -.542**  -.281    -.399*   -.443*
TV            .257     .115     .263     .292
YrsInstr      .330     .041     .269     .205
LoR           -.229    .061     -.073    .054
EngInternet   .318     .119     .182     .178
PrefLang      .096     -.103    -.047    -.070
Home          .065     .061     .002     .032
Usage         -.204    -.013    .062     .233
** Correlation is significant at the 0.01 level (2-tailed).
* Correlation is significant at the 0.05 level (2-tailed).
The relationship between Birthplace and the significant differences in WRS due to test stimuli was further investigated through paired t-tests; results revealed no significant effect of Birthplace on the stimulus effect in WRS. That is, Cantonese speakers who were born in Hong Kong demonstrated significant differences in WRS when stimuli were in English versus Cantonese, in the same way as those born in Mainland China. To summarize the findings on WRS: mean suprathreshold word recognition was significantly better when stimuli were in Cantonese than when they were in English. This applied to testing in quiet and in noise, and to both normal hearing and hearing-impaired listeners. The presence of hearing loss affected performance in WRS when stimuli were in English, but not when stimuli were in Cantonese. Age and Birthplace were significantly correlated with WRS performance. Older listeners performed less well than younger listeners in word recognition in noise in English, but not in Cantonese or in quiet. Cantonese speakers born in Hong Kong performed better in word recognition in both languages, both in quiet and in noise.

4 Discussion

4.1 SRT

The present study confirmed Ramkissoon et al.'s (2002) finding that using digit pairs as stimuli resulted in SRT measures that more closely approximated the pure tone average (PTA) than did using CID spondees as stimuli. The results also supported Ramkissoon et al.'s (2002) conclusion that, compared to CID spondees, digit pairs should facilitate more accurate SRT testing for non-native speakers of English. Results from the Cantonese participants, in addition, suggested that PTA might not accurately reflect hearing sensitivity for speech in all languages. Although CID spondees resulted in significantly higher (poorer) SRTs than Cantonese spondees, SRT obtained using CID spondees more closely approximated PTA than SRT obtained using Cantonese spondees.
That is, CID spondees more accurately measured the hearing threshold for speech than Cantonese spondees for the Cantonese participants, despite the better hearing sensitivity measured with Cantonese spondees. The definition of PTA - the average of pure tone thresholds at 500, 1000, and 2000 Hz - however, was derived from the acoustic spectrum of English speech sounds, the majority of which lie within 500-2000 Hz. The lower thresholds obtained using Cantonese spondees compared to the PTA suggest that the frequencies 500-2000 Hz may not be an accurate representation of Cantonese speech sounds. This speculation will be further discussed. A review of the literature revealed three major differences between the English and Cantonese phonetic systems. First, vowels have a more dominant representation in Cantonese than in English (Cheung & Cheung, 1994). In addition to the vowels and diphthongs⁴ present in English, Cantonese has 5 diphthongs and 3 triphthongs that do not exist in English. As vowels consist of harmonics of fundamental frequencies in the range of 80 to 300 Hz, and the lower formants F1 and F2 are the two most significant cues for vowel identification (Palmer & Shamma, 2004), the vowel dominance in Cantonese phonetics suggests that, compared to English speech, Cantonese speech perception may depend more on the low frequencies, i.e., those below the PTA range. Second, most consonants occur in word-initial positions in Cantonese; 70% of Cantonese words do not have a coda⁵ (Leung, Law, & Fung, 2004). Word-initial consonants tend to be more salient, as their acoustic properties are often partially represented in the vocalic portion of a word. Since each word consists of one syllable in Cantonese, consonants are never embedded between vowels and do not occur in clusters⁶ within a word.
This suggests that the consonants that lie within the PTA range of 500-2000 Hz - [p], [h], [g], [ch], and [sh] (AAA, 1991) - are less critical for speech discrimination in Cantonese than in English.

⁴ A diphthong is a lengthened vowel that spans 2 phonetic units; a triphthong is one that spans 3.
⁵ A linguistic term for the end unit of a syllable, or word-final consonant.
⁶ A consonant cluster is a combination of consonants that represents a single unit, e.g. "spl" in "splash".

Finally, the last major distinction between English and Cantonese speech is the use of tones to differentiate word meanings. Cantonese contains 6 tones that signify meaning differences, 3 of which have additional variations realized only in short syllables with a voiceless stop coda (Cutler & Chen, 1997). Tonal perception relies mainly on low frequency processing, and all lexical tones in Cantonese have fundamental frequencies below 400 Hz (Dayle & Wong, 1996). Therefore, the three major distinctions of Cantonese from English - vowel dominance, word-initial consonants, and tonal distinctions - may be the reasons that the PTA taken from 500, 1000, and 2000 Hz may not be an accurate estimate of hearing sensitivity for Cantonese speech. Of interest was the finding that the presence of hearing loss may increase the likelihood that SRT obtained using stimuli in participants' first language will differ from that obtained using English stimuli, while the likelihood of a difference between CID-SRT and Digit-SRT was not affected by the presence of a hearing loss. This finding further supports the speculation that hearing loss impairs functioning in different languages differently.

4.2 WRS

A significant stimulus effect due to language was found in the WRS results. Both in quiet and in noise, the performance of the Cantonese-speaking participants in suprathreshold word recognition was significantly better when stimuli were in their first language than in English.
A decrease in word recognition performance in noise in a non-native language has been documented in previous research (von Hapsburg & Pena, 2002). Most past studies compared non-native speakers' performance to that of native speakers. Despite efforts to match individual participants' characteristics between the native and non-native groups, inter-subject variability nevertheless could not be entirely eliminated. The present study compared performance between a native and a non-native language within subjects, thus eliminating the effect of inter-subject variability on the results. As the same speech and noise stimuli and conditions were applied to all participants, the demonstrated decrease in word recognition performance in English compared to Cantonese could thus be attributed to the language of the stimuli, and not to individual variability.

Another interesting finding was that for both normal hearing and hearing-impaired Cantonese listeners, noise had little effect on WRS when stimuli were in their native language. Stoppenbach et al. (1999) determined word recognition norms for NU-6 word lists and found the average WRS in noise at a +5 signal-to-noise ratio (SNR) to be 86%. The mean WRS in noise at the same SNR for the Cantonese-speaking participants in the present study was 95%. Stoppenbach et al. (1999) did not specify whether their participants spoke English as their native language, in which case their lower word recognition norm in noise might be explained by a language confound; since no such confound was mentioned in their study, however, it is reasonable to assume that their participants were native English speakers. Therefore, the higher WRS in noise demonstrated by the Cantonese participants when stimuli were in their native language was more likely due to differences in the acoustic properties of English and Cantonese.
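One way to gauge whether the 95% versus 86% difference exceeds measurement noise is to model a word recognition score as a binomial variable over a 50-word NU-6 list, in the spirit of Thornton and Raffin (1978). The sketch below is purely illustrative and is not an analysis performed in the study; the helper name is hypothetical:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more words correct."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# If a listener's true recognition probability matched the 86% norm,
# how often would they score 48/50 (96%) or better on a single list?
p_tail = binom_tail(50, 48, 0.86)
print(round(p_tail, 3))
```

Under this rough model, a 96% score on one list would occur only about 2% of the time for a listener whose true performance matched the 86% norm, which is consistent with reading the Cantonese-stimulus advantage as a real effect rather than list-to-list measurement noise.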
In an effort to keep experimental conditions consistent, the same recorded cafeteria noise was used in the measurement of both English and Cantonese WRS. As the background speech in the cafeteria noise was in English, a release of masking7 effect might have resulted during Cantonese WRS measurement because the signal and noise were in different languages. As discussed earlier, Cantonese speech seems to span a frequency range different from English speech, enabling a release of masking when the competing background noise was in English while the signal was in Cantonese. The type of noise used further enabled a release of masking: unlike other speech noise such as four-talker babble, the background speech in cafeteria noise is less prominent than the background restaurant commotion.

7 Release of masking is the decrease in the effect of a competing noise on the perception of the signal due to a change in the composition of the competing noise.

Applying these results to real life, it could be deduced that Cantonese speakers in English-speaking societies may be less affected by background English speech, and that Cantonese hearing-impaired people may experience less social impairment than their English-speaking counterparts. A review of the literature supports this view. Dayle and Wong (1996) reported a mismatch between pure tone screening test results and perceived hearing impairment among Cantonese speakers. Among those aged 49-83 years, 78% of those who failed a 25-dB pure tone screening reported slight or no hearing difficulties, and 69% reported no problems hearing during conversations.
One hypothesis the researchers proposed, which supports the findings of the present study, was that lexical tones in Cantonese have fundamental frequencies below 400 Hz; thus Cantonese speakers with high frequency hearing loss, as is common with most sensorineural hearing losses, will still be able to utilize tonal distinctions in Cantonese speech perception. Another hypothesis the researchers proposed was that Cantonese people are culturally accustomed to speaking loudly and to living in a noisy environment; thus the hearing-impaired may indeed experience fewer difficulties in real life.

4.3 Language and Background Factors

After controlling for the effect of hearing loss, no individual language factor exhibited a significant correlation with SRT differences across stimuli (CID spondees, digit pairs, and Cantonese spondees). Although it could indeed be true that the language background factors investigated in the present study had minimal effect on measured hearing threshold for speech, a more plausible reason may be a limitation of the present study design: insufficient sample size for each of the language background factors being investigated. Except for first language, gender, and age, the rest of the 13 language background factors were not controlled for during participant recruitment. This resulted in unbalanced sample sizes and uneven distribution within individual linguistic factors; for example, only 2 participants watched TV less than once in 2 weeks (TV=0) while 32 watched it more than twice a week (TV=3), rendering comparison across categories within individual variables unreliable.

The relationship between language factors and WRS was similarly limited, but two background variables demonstrated significant effects. The age of participants was negatively correlated with WRS in noise in English, even after hearing sensitivity and other interacting language variables were controlled for.
Older Cantonese speakers demonstrated poorer word recognition scores in noise in English compared to younger Cantonese speakers. Since hearing and other language factors were controlled and the effect was seen only in English, a possible explanation for the poorer English WRS in noise demonstrated by the older participants may be a less efficient auditory processing system due to aging.

Listeners often rely on the redundancy in speech during speech perception in noise (Assmann & Summerfield, 2004). Redundancy is "any characteristic of the language that forces spoken messages to have more basic elements per message, or more cues per basic element, than the barest minimum necessary for conveying the message" (Coker & Umeda, cited in Assmann & Summerfield, p. 231). To be able to access the additional cues that speech provides, knowledge of and familiarity with the phonotactic rules of the language are important. This knowledge needs to be learned in a non-native language, and is likely not as accessible as in the native language. Therefore, in addition to the normal speech perception mechanisms involved in word recognition in a first language, additional cognitive resources and processing are needed to access this learned knowledge when perceiving speech in a second language and in adverse conditions such as noise. The reduced efficiency of an aging cognitive system renders general cognitive processing less efficient, thus leading to impaired word recognition under adverse conditions, especially in a non-native language.

Birthplace was the other background factor that significantly correlated with WRS. Unlike age, which affected only English WRS in noise, birthplace was related to WRS in all conditions. Cantonese speakers born in Hong Kong demonstrated significantly better word recognition than those born elsewhere in China.
This effect existed across both languages (English and Cantonese) and testing conditions (quiet and noise). The better performance in English exhibited by Hong Kong-born participants may be explained by the different English education systems in Hong Kong and the rest of China. In Hong Kong, most children start learning English in kindergarten, while in Mainland China, formal English instruction starts in secondary school (Grade 7-8). The earlier exposure to English among Hong Kong Cantonese may be the reason that their performance in English word recognition was better than that of the Mainland Chinese participants. However, the present study found no significant correlation between age of exposure and performance in speech audiometry in a non-native language; therefore, the finding that Hong Kong Cantonese speakers performed better than Mainland Cantonese speakers in English word recognition may be explained by additional aspects of the educational systems in the two regions (e.g., whether teachers were native speakers of English, amount of English instruction per week, opportunities to hear and speak English on a daily basis, etc.) that were not examined in the context of the present study. Other studies have shown significant effects of age of second language acquisition on second language processing. Silverberg and Samuel (2004) found that early Spanish learners of English represent and access words in English differently from late learners. Mayo et al. (1997) found that early Spanish learners of English performed better in speech perception in noise in English than late learners.

Another reason for the better performance in Cantonese word recognition by Hong Kong-born participants may be the slight dialectal variations between Hong Kong Cantonese and the Cantonese spoken in Mainland China.
Spoken by 64 million people around the world, Cantonese is spoken not only in Southern China and Hong Kong, but also in Malaysia, Vietnam, Macao, Singapore, and Indonesia (UCLA, n.d.). Four dialects exist in Cantonese: Guangzhou, Yuehai, Taishan, and Gaoyang. While the Guangzhou dialect is spoken in Hong Kong and Guangzhou city, the other three dialects are spoken in the rest of China and in other countries. As the speaker of the Cantonese stimuli was born in Hong Kong, her speaking style and accent reflect those of Hong Kong Cantonese, which may differ slightly from Cantonese spoken elsewhere. Also, although Cantonese was the first language of all Cantonese-speaking participants, those born in Mainland China learned Mandarin Chinese at a young age and most spoke only Mandarin at school and at work in China, whereas Cantonese is the dominant language at home, school, and work in Hong Kong. Therefore, knowledge of and familiarity with Hong Kong Cantonese might have contributed to the better Cantonese word recognition performance demonstrated by the participants born in Hong Kong.

These explanations for the better word recognition performance demonstrated by Cantonese speakers born in Hong Kong imply possible indirect effects of age of second language exposure and daily language use. Birthplace itself was likely not a contributing factor to performance in speech audiometry, as its effect was shown only in the Cantonese participants and not in others. Also, to my knowledge no past literature has demonstrated an effect of birthplace on L2 speech perception.
4.4 Implications

The major findings of the present study are: 1) digit pairs as SRT stimuli more accurately measured hearing threshold for English speech than CID W-1 spondees; 2) for the Cantonese participants, Cantonese spondees elicited better performance in hearing threshold measurement than English stimuli, but were not as accurate when compared to the pure tone average; 3) for the Cantonese participants, performance in monosyllabic word recognition was significantly better when stimuli were in Cantonese than when they were in English, an effect that existed across testing conditions (quiet and noise) and despite the presence of hearing loss; 4) for the Cantonese participants, the presence of hearing loss affected word recognition in English significantly more than word recognition in Cantonese; 5) older Cantonese speakers performed significantly more poorly in word recognition in noise in English than their younger counterparts, even with hearing sensitivity controlled for; and lastly, 6) Cantonese speakers born in Hong Kong demonstrated higher word recognition scores in both Cantonese and English compared to those born in Mainland China.

These findings provide answers to the research questions of the present study: 1) a stimulus effect does exist in speech audiometry with non-native English speakers; 2) this stimulus effect is more prominent in some non-native English speakers than in others; and 3) age and birthplace are the background factors that determine the extent to which performance in English speech audiometry is compromised by the language of the test stimuli. The theoretical and clinical implications of these findings are discussed below.

4.4.1 Theoretical Implications

Previous studies that investigated word recognition in a second language have obtained variable results. Meador et al.
(2000) found age of arrival and daily language use to contribute to differences in word recognition scores among their Italian bilingual participants. Those who were exposed to English later in life and those who continued using Italian exhibited poorer English word recognition. The length of residence in an English-speaking environment was also investigated, but was found to have little effect on word recognition scores. On the other hand, Bradlow and Pisoni (1999) found length of residence to contribute significantly to differences in word recognition scores when words were "hard", characterized as having a dense lexical neighborhood8. Years of instruction and age of arrival were also investigated and were found to have no significant effect on "hard" word recognition. Age of arrival, however, had an effect on hard word familiarity. An explanation for the finding that non-native listeners performed poorly in word recognition in noise was provided: due to a lack of experience with the sound system of the non-native language, additional acoustic-phonetic cues are necessary for non-native listeners to comprehend speech, and these cues are lost in the presence of background noise.

8 A dense lexical neighborhood is when a word has many similar sounding "neighbors".

Van Wijngaarden, Steeneken, and Houtgast (2002) investigated the effect of cross-language phonetic differences on word recognition in a non-native language. Their findings supported the importance of knowledge of the phonotactic rules of the non-native language as a contributing factor to word recognition in that language. They also stressed the relation between the native language and the target non-native language as another important contributor. Although the present study found no significant effect of first language on word recognition performance, any potential effect might have been masked by the small sample size.
While the Cantonese sample had a sufficient size of 30, the sample size for each of the other languages ranged from 1 to 4. Although no significant relationship was found between first language and performance in speech audiometry for the group of 45 participants as a whole, findings from the Cantonese participants supported a first language effect on speech audiometry in a non-native language. That is, results from speech audiometry in English did not accurately measure hearing sensitivity for speech in Cantonese, as indicated by the difference in scores obtained using English versus Cantonese stimuli.

The model of a central pool of reallocable cognitive resources, detailed in Pichora-Fuller, Schneider, and Daneman (1995), is supported by the present findings. Two of the major findings are that age and hearing impairment contribute to poorer speech recognition in adverse conditions, namely in a non-native language and in noise. These findings echo past findings that elderly listeners performed significantly more poorly in adverse listening conditions (noise, fast speech, etc.) than younger listeners, and that the decrease in performance could not be fully accounted for by the decrease in hearing sensitivity with age.

The model of a central pool of reallocable cognitive resources centres on the theory that one pool of limited mental resources is responsible for all cognitive processing. When the signal is degraded, more resources are required for listening, depleting the working memory resources available for linguistic and cognitive processing. This model adequately explains the finding that older and hearing-impaired listeners performed more poorly in speech perception in a non-native language and in noise. As resources are allocated to processing auditory signals in noise, fewer resources are available for applying the learned phonotactic and semantic cues of a second language to speech perception.
Aging and hearing loss add to the burden on this pool of limited resources, thus further impairing performance.

4.4.2 Clinical Implications

The present study found the language of the test stimuli to significantly affect test performance in speech audiometry for non-native English speakers. As consistency between SRT and PTA is an important indicator of pseudohypacusis9, and testing in a non-native language more likely results in a discrepancy between SRT and PTA, pseudohypacusis may be misdiagnosed when insufficient knowledge of the non-native language is the true cause of the SRT-PTA discrepancy. Therefore, clinicians serving multicultural clients should remain aware that they are measuring not only their clients' hearing sensitivity but also their language ability.

9 A kind of hearing loss with a non-organic source, as exhibited in inconsistent test results.

The interesting supplemental finding that hearing-impaired listeners performed differently from normal hearing listeners only in their non-native language (English) has significant clinical implications. As most clients at an audiology clinic have some degree of hearing loss, extra caution should be taken in applying and interpreting English WRS for non-native speakers of English, because a hearing loss increases the confounding effect of language on their performance in English WRS.

While SRT may over-estimate the need for amplification for non-native English speakers, WRS may under-estimate the potential benefit from amplification. WRS assesses speech discrimination ability after hearing loss has been compensated for; a high WRS indicates higher potential benefit from amplification. The lower WRS exhibited by non-native English speakers due to stimuli being in their non-native language may falsely suggest less potential benefit from amplification.

Another purpose of WRS testing is differential diagnosis.
A poor word recognition score is indicative of a cochlear source of hearing impairment, and a language-depressed score may therefore exaggerate the apparent sensorineural nature of the hearing loss of non-native English speakers. In addition, any potential retrocochlear involvement, as exhibited by poorer WRS at high intensity levels, may be overlooked due to the inability to achieve a PBmax10. As well, another indicator of retrocochlear hearing loss is abnormally poor word recognition; poor word recognition scores exhibited by a non-native English speaker may be attributed to a language problem by the clinician, and as a result a true tumor may be overlooked. Therefore, it is recommended that speech audiometry not be used alone for differential diagnosis with non-native English speakers, but be conducted along with other objective test measures such as acoustic reflexes, bone conduction testing, and auditory brainstem responses (ABR).

10 Maximum word recognition score; as intensity is increased above that at which this score is obtained, WRS would decrease in the case of retrocochlear involvement.

Two background factors emerged as more reliable predictors of performance in speech audiometry: age and birthplace (for Cantonese speakers only). For elderly clients and for Cantonese speakers born outside Hong Kong, it is more likely that speech audiometry using English stimuli will result in inaccurate measurement of hearing sensitivity. In such cases, alternative strategies, such as training interpreters to administer part of the testing or using digit pairs instead of CID spondees for SRT testing, should replace traditional speech audiometric testing. References on strategies for the use of interpreters and on speech audiometric test materials in other languages are included in Appendix 7.
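The SRT-PTA consistency check discussed in this section can be made concrete with a small sketch. The PTA formula (the mean of thresholds at 500, 1000, and 2000 Hz) is as defined earlier in this chapter; the 10-dB agreement cutoff below is a common clinical rule of thumb and is an assumption here, not a value from the study, and the function names are illustrative:

```python
def pure_tone_average(thresholds):
    """PTA: mean of pure tone thresholds (dB HL) at 500, 1000, and 2000 Hz."""
    return sum(thresholds[f] for f in (500, 1000, 2000)) / 3

def srt_pta_consistent(srt, thresholds, tolerance=10):
    """True if the SRT agrees with the PTA within a tolerance.
    The 10 dB default is an assumed rule-of-thumb cutoff, not a
    value taken from this study."""
    return abs(srt - pure_tone_average(thresholds)) <= tolerance

# Hypothetical audiogram (dB HL). A non-native listener's English SRT
# can sit well above the PTA without any non-organic loss being present.
audiogram = {250: 15, 500: 20, 1000: 25, 2000: 30, 4000: 45}
print(pure_tone_average(audiogram))       # → 25.0
print(srt_pta_consistent(40, audiogram))  # → False: flags a discrepancy
```

The point of the passage above is that such a flag can be a false alarm: for a non-native listener, an elevated English SRT may reflect unfamiliarity with the test language rather than pseudohypacusis.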
An informal investigation of clinical practices in speech audiometry in Hong Kong and in Vancouver was conducted by contacting audiologists currently working in each city. Responses were received from three individual audiologists in each city. Audiologists in Vancouver handle non-native-English-speaking clients by 1) allocating more time for appointments, 2) conducting Speech Awareness Threshold measures instead of Speech Reception Threshold measures, 3) eliminating speech audiometry from the test battery, 4) implementing the full speech audiometry battery in English when they deem their clients to know sufficient English, and 5) relying on clients' families or friends as interpreters during case history and counselling sessions.

In Hong Kong, standardized speech audiometry materials are not widely available and speech audiometry is not always implemented. Most audiology clinics develop and use their own word lists, which are usually not standardized. Only two word lists have been developed based on empirical research, and only one has been published. The published list comprises monosyllabic words developed by Lau and So (1988) and contains little standardization information; the other is a standardized Cantonese Hearing-In-Noise Test (CHINT) developed by Wong and Soli (in press). The former list has not been recorded, and the latter test has only recently become available.

Despite knowledge and awareness of the limitations of English speech audiometry with non-native English speakers, it seems most often the case that alternative tests of hearing for speech are not readily available. The least that the monolingual clinician should attempt, however, is to adopt a communication style that better accommodates the multicultural client's needs.
Such a style involves listening patiently to the client's imperfect English, constantly ensuring mutual understanding throughout the session, adopting a non-judgmental attitude towards differences, speaking clearly and free of jargon and colloquialisms, and treating multicultural clients with the same respect as all other clients. When interpreting results, care should be taken in applying native English norms to the non-native-English client. As the present study has shown, the same degree of measured hearing loss may indicate different degrees of impairment to different people. This is especially true when the client's daily life involves a language different from the one in which he/she was tested. The best way to help multicultural hearing-impaired clients may be to trust and rely on the clients' own perception of their hearing impairment and needs. The use of subjective measures, such as hearing handicap profiles, is recommended in addition to objective testing. Other suggestions for assessing the speech recognition and discrimination of non-native English speakers are provided in Appendix 8.

4.5 Limitations and Future Directions

A major limitation of the present study was the ambiguity in interpretation of the SRT findings due to the 5-dB increments used during testing. Significant differences might have been overlooked, while apparent findings could be artifacts of the large step size used during measurement and might not have been truly statistically significant. The SRTs obtained were all within 5 dB across stimuli; this small discrepancy could be due to the relatively high English proficiency of the participants of this study, who had lived in an English-speaking country for an average of 12 years and had been educated in English for an average of 10 years.
For non-native English speakers with lower English proficiency, the discrepancy in SRT due to the stimulus effect would likely be larger, reaching clinical significance. Therefore, the stimulus effect revealed in this study is still clinically applicable. Nevertheless, to increase the accuracy and clinical applicability of results, it is recommended that future research on SRT testing use 2-dB step sizes to more accurately capture any statistically significant differences.

As mentioned earlier, insufficient sample size and an excessive number of uncontrolled independent variables were two major limitations of the present study. Due to the insufficient sample size, all hearing-impaired participants were categorized into one group for comparison against the normal hearing group. Within the hearing-impaired group, however, different degrees and configurations of hearing loss could have led to different test results. A larger number of participants with a controlled range of linguistic background factors, as well as of hearing sensitivity, would have increased the reliability of the results. Future research on this topic can explore the trends revealed in the present study in a more specific and quantitative manner. It is recommended that in a replication of this research, only a few linguistic factors be targeted and all other background variables be controlled; a representative sample of participants for each controlled linguistic factor should also be obtained. Sufficient samples should also be obtained for each of the different degrees and configurations of hearing loss to enable comparison of speech audiometric results across the different types of losses.

Another experimental limitation was talker familiarity. The listening advantage gained from being familiar with a talker's voice and speaking style has been documented in numerous studies (cited in Bradlow & Pisoni, 1999, p. 2075).
All speech stimuli used in the present study were spoken by one speaker; some participants were acquaintances of this speaker and some were not. Those who were more familiar with the speaker might have perceived her speech more easily than those who did not know her. An independent samples t-test comparing the group who were acquainted with the speaker to the group who were not was performed on the dependent variables; results indicated a significant difference between the groups on one dependent variable, Digit-SRT, t(38.5) = -2.1, p < .05, although no significant difference was found for the other dependent variables. As Digit-SRT was found to be the most accurate measure of hearing sensitivity for English speech for non-native English speakers, the effect of talker familiarity on the reliability of the present finding could not be ruled out. It is thus recommended that future research use standardized recorded speech stimuli from outside sources if available.

Another potential source of error in internal validity was the use of a non-standardized Cantonese word list. Because no standardized Cantonese word lists were available, a word list of unknown source was used in the present study. The mean Cantonese WRS of 95.0% in noise was high compared to English norms for native English speakers (Stoppenbach et al., 1999), suggesting that the Cantonese WRS-in-noise measure might not be as sensitive as the English WRS-in-noise measure. However, the possibility that Cantonese speech may possess distinctive qualities that render it more perceptually salient in noise also could not be ruled out. A worthwhile project for future research would be to analyze the frequency content and acoustics of Cantonese speech.
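The talker-familiarity comparison reported above is an independent-samples t-test with unequal variances (Welch's test, as the fractional degrees of freedom suggest). A minimal pure-Python sketch follows, using made-up SRT values for illustration; the study's raw Digit-SRT data are not reproduced here:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and (fractional) degrees of freedom
    for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical SRTs (dB HL): listeners acquainted with the talker vs not.
acquainted = [8, 10, 7, 9, 11, 8]
unacquainted = [12, 14, 11, 13, 12, 15]
t, df = welch_t(acquainted, unacquainted)
print(round(t, 2), round(df, 1))
```

Note that `statistics.variance` is the sample variance, which is what Welch's formula requires; with real data, `scipy.stats.ttest_ind(a, b, equal_var=False)` computes the same statistic.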
Standardized recorded stimuli are recommended for future research to reduce confounds due to stimulus or speaker variations, to enable accurate replications of previous studies, and to more accurately reflect real-life auditory functioning.

As all data were collected by the primary researcher, experimenter bias in scoring could not be ruled out. Suggestions for eliminating this bias in future studies are 1) having participants write down their responses instead of repeating them aloud, and 2) audiotaping responses and having multiple judges score them.

An interesting observation was noted during data collection: among all the errors made by the Cantonese speakers on the Cantonese stimuli, none were in tones. Tonal perception seems to involve a different processing mechanism than the speech sound discrimination normally involved in speech perception; it would thus be interesting to investigate tonal perception in hearing-impaired speakers of tonal languages. Much research is still needed on native speech perception in bilinguals compared to monolinguals. There is also a major and urgent need for the development and validation of non-English speech audiometric test materials. It also seems worthwhile to examine the use of children's English word lists as a viable alternative for standard speech audiometry with non-native English speakers. Finally, a similar replication of this study with speakers of other dominant languages in our society, such as Tagalog and Punjabi, would be highly warranted to enable clinicians to more adequately serve the diverse mosaic of their clientele.

5 References

American Academy of Audiology. (1991). Audiogram of Familiar Sounds. Williams and Williams.

American National Standards Institute. (1979). American National Standard Specification for Audiometers (ANSI S3.6-1979). New York: American National Standards Institute, Inc.

American Speech and Hearing Association. (2005).
Summary Counts by Ethnicity and Race. [Online] Retrieved March 28, 2005.

American Speech-Language-Hearing Association. (1989, March). Bilingual speech-language pathologists and audiologists: Definition. Asha, 31, 93.

American Speech and Hearing Association. (1979). Guidelines for determining the threshold level for speech. Asha, 21, 353-356.

American Speech and Hearing Association. (1978). Guidelines for manual pure tone speech audiometry. Asha, 20, 297-301.

Assmann, P. & Summerfield, Q. (2004). The perception of speech under adverse conditions. In S. Greenberg, W. Ainsworth, A. Popper, & R. Fay (Eds.), Speech Processing in the Auditory System. New York: Springer-Verlag Inc.

Beverly-Ducker, K. (2003). Multicultural issues in audiology. ASHA Division 9 Newsletter, 13(1), 12-15.

Bradlow, A. R. & Pisoni, D. B. (1999). Recognition of spoken words by native and non-native listeners: Talker-, listener-, and item-related factors. Journal of the Acoustical Society of America, 106(4), 2074-2085.

Cheng, L. (2002). Asian and Pacific American cultures. In Battle, D. E. (Ed.), Communication Disorders in Multicultural Populations (3rd ed.). Woburn, MA: Butterworth-Heinemann.

Cheung, L. Y. & Cheung, C. Y. (1994). Mandarin/Cantonese Phonetic Dictionary. Hong Kong: Chung Wah Publishing Ltd. [Chinese]

Cutler, A. & Chen, H. C. (1997). Lexical tone in Cantonese spoken word processing. Perception and Psychophysics, 59(2), 165-179.

Dayle, J. & Wong, L. L. (1996). Mismatch between aspects of hearing impairment and hearing disability/handicap in adult/elderly Cantonese speakers: Some hypotheses concerning cultural and linguistic influences. Journal of the American Academy of Audiology, 7, 442-446.

DeBow, A. & Green, W. B. (2000). A survey of Canadian audiological practices: Pure tone and speech audiometry. Journal of Speech-Language Pathology and Audiology, 24(4), 153-161.

Garstecki, D. C. (1980). Measuring discrimination efficiency: Alternative approaches.
In Rupp, R. & Stockdell, K. (Eds.), Speech Protocols in Audiology. New York: Grune & Stratton, Inc.

Heckendorf, A. L., Wiley, T. L., & Wilson, R. H. (1997). Performance norms for the VA compact disc versions of CID W-22 (Hirsch) and PB-50 (Rush Hughes) word lists. Journal of the American Academy of Audiology, 8(3), 163-172.

Lau, C. C. & So, K. W. (1988). Material for Cantonese speech audiometry constructed by appropriate phonetic principles. British Journal of Audiology, 22, 297-304.

Leung, M. T., Law, S. P., & Fung, S. Y. (2004). Type and token frequencies of phonological units in Hong Kong Cantonese. Behavior Research Methods, Instruments and Computers, 36(3), 500-505.

Liu, H., Bates, E., & Li, P. (1992). Sentence interpretation in bilingual speakers of English and Chinese. Applied Psycholinguistics, 13, 451-484.

Lutman, M. (1997). Speech tests in quiet and noise as a measure of auditory processing. In M. Martin (Ed.), Speech Audiometry (2nd ed.). San Diego, CA: Singular Publishing Group, Inc.

Lyregaard, P. (1997). Towards a theory of speech audiometry tests. In M. Martin (Ed.), Speech Audiometry (2nd ed.). San Diego, CA: Singular Publishing Group, Inc.

Mayo, L. H., Florentine, M., & Buus, S. (1997). Age of second-language acquisition and perception of speech in noise. Journal of Speech, Language, and Hearing Research, 40, 686-693.

Meador, D., Flege, J. E., & Mackay, I. R. (2000). Factors affecting the recognition of words in a second language. Bilingualism: Language and Cognition, 3(1), 55-67.

Mendel, L. L. & Danhauer, J. L. (1997). Characteristics of sensitive speech perception tests. In L. Mendel & J. Danhauer (Eds.), Audiologic Evaluation and Management and Speech Perception Assessment. San Diego, CA: Singular Publishing Group, Inc.

Mildner, V., Stankovic, D., & Petkovic, M. (2004). The relationship between active hand and ear advantage in the native and foreign language.
Brain and Cognition, 57, 158-161.

Palmer, A. & Shamma, S. (2004). Physiological representations of speech. In S. Greenberg, W. Ainsworth, A. Popper, & R. Fay (Eds.), Speech Processing in the Auditory System. New York: Springer-Verlag Inc.

Pichora-Fuller, M. K., Schneider, B. A., & Daneman, M. (1995). How young and old adults listen to and remember speech in noise. Journal of the Acoustical Society of America, 97(1), 593-608.

Ramkissoon, I. & Khan, F. (2002). Serving multilingual clients with hearing loss: How linguistic diversity affects audiologic management. [Online] The ASHA Leader, 8(3). Retrieved December 15, 2004.

Ramkissoon, I., Proctor, A., Lansing, C. R., & Bilger, R. C. (2002). Digit speech recognition thresholds for non-native speakers of English. American Journal of Audiology, 11, 23-28.

Rupp, R. R. (1980). Classical approaches to the determination of the spondee threshold. In Rupp, R. & Stockdell, K. (Eds.), Speech Protocols in Audiology. New York: Grune & Stratton, Inc.

Silverberg, S. & Samuel, A. G. (2004). The effect of age of second language acquisition on the representation and processing of second language words. Journal of Memory and Language, 51, 381-398.

Statistics Canada. (2001). Population by mother tongue: 2001 Census. [Online] Retrieved January 18, 2005.

Stockdell, K. G. (1980). Classical approaches to measuring discrimination efficiency via word lists. In Rupp, R. & Stockdell, K. (Eds.), Speech Protocols in Audiology. New York: Grune & Stratton, Inc.

Stoppenbach, D. T., Craig, J. M., Wiley, T. L., & Wilson, R. H. (1999). Word recognition performance for Northwestern University Auditory Test No. 6 word lists in quiet and in competing message. Journal of the American Academy of Audiology, 10(8), 429-435.

Thornton, A. R. & Raffin, M. J. (1978). Speech-discrimination scores modeled as a binomial variable. Journal of Speech and Hearing Research, 21(3), 507-518.
UCLA Cantonese Language Profile. (n.d.). Retrieved April 10, 2005 from http ://www. Imp 1 .htm

Van Wijngaarden, S. J., Steeneken, H. J., & Houtgast, T. (2002). Quantifying the intelligibility of speech in noise for non-native listeners. Journal of the Acoustical Society of America, 111(4), 1906-1916.

Von Hapsburg, D. & Pena, E. D. (2002). Understanding bilingualism and its impact on speech audiometry. Journal of Speech, Language, and Hearing Research, 45(1), 202-213.

Wong, L. & Soli, S. (in press). Development of the Cantonese Hearing-In-Noise Test (CHINT). Ear and Hearing.

APPENDIX 1

CID W-1 Spondees

NORTHWEST HOTDOG ICECREAM DRAWBRIDGE BASEBALL PANCAKE COWBOY PLAYGROUND SUNSET BLACKBOARD EARDRUM DUCKPOND OATMEAL AIRPLANE HORSESHOE MOUSETRAP RAILROAD WHITEWASH

APPENDIX 2

Digit Pairs

1-4 (pronounced "one four") 3-9 4-8 2-3 3-5 2-1 1-3 6-3 1-8 5-2 2-8 9-6 4-2 1-8 5-3 8-4 9-4 8-6

APPENDIX 3

NU-6 List 3A

1. base 2. mess 3. cause 4. mop 5. good 6. luck 7. walk 8. youth 9. pain 10. date 11. pearl 12. search 13. ditch 14. talk 15. ring 16. germ 17. life 18. team 19. lid 20. pole 21. road 22. shall 23. late 24. cheek 25. beg 26. gun 27. jug 28. sheep 29. five 30. rush 31. rat 32. void 33. wire 34. half 35. note 36. when 37. name 38. thin 39. tell 40. bar 41. mouse 42. hire 43. cab 44. hit 45. chat 46. phone 47. soup 48. dodge 49. seize 50. cool

APPENDIX 4

Cantonese Word Lists

Spondee words and phonetically balanced word lists WS-A1, WS-A2, WS-B1, and WS-B2. [The Chinese characters in this appendix are not reproducible in this text version.]

APPENDIX 5

INTERVIEW SCRIPT

Language Status
What is your preferred language for communication?
What is your first language? Second? Third/fourth?
Where were you born?
How long have you been in Canada?
What language do you use most in your everyday life?
Language History
How old were you when you were first exposed to English (encountered daily)?
Have you ever received formal instruction in English? For how long?

Language Stability
Do you travel back to your home country? How often?
How much time per year do you spend in a non-English-speaking country?
What language do you use at home?
What language do you use at work?
What language do you use at school?

Demand for Use
Do you read English materials regularly (e.g., newspapers, magazines, books)? How often?
Do you use English to communicate regularly? How often?
Do you write English regularly (e.g., fill out applications, letters, Christmas cards)? How often?
Do you watch TV or movies in English? How often? Do you usually watch with subtitles (closed captions)?
Do you use the internet and email? In what languages? How often?

APPENDIX 6

Case History Sample Questions

Do you have any history of ear or hearing disease? If yes, describe it. When did this occur? Did you see a doctor about it?
Do you have any concerns about your hearing? If yes, describe the situations in which you have problems hearing. Does this happen on both sides? One side only? Which side?
Do you have any chronic illnesses?
Are you taking any medications?
Do you have a history of noise exposure?
Have you ever had surgery on your ears?

APPENDIX 7

Sources

Berger, K. W. (1977). Speech Audiometry Materials. Kent, Ohio: Herald Publishing House.
• Contains foreign-language tests that assist in hearing aid selection.
• Languages available include Finnish, Japanese, Spanish, Arabic, Norwegian, Dutch, German, Italian, Tagalog, Russian, and French.

Speech audiometry available in other languages

Arabic
Ashoor, A. A. & Prochazka, T. (1985). Saudi Arabic speech audiometry for children. British Journal of Audiology, 19(3), 229-238.
Ashoor, A. A. & Prochazka, T. (1982). Saudi Arabic speech audiometry.
Audiology, 21(6), 493-508.

Cantonese
Lau, C. C. & So, K. W. (1988). Material for Cantonese speech audiometry constructed by appropriate phonetic principles. British Journal of Audiology, 22, 297-304.
Wong, L. & Soli, S. (accepted). Development of the Cantonese Hearing-In-Noise Test (CHINT). Ear and Hearing.

Danish
Elberling, C., Ludvigsen, C. & Lyregaard, P. (1989). DANTALE: A new Danish speech material. Scandinavian Audiology, 18(3), 169-175.
Olsen, S. O. (1996). Evaluation of the list of numerals in the Danish speech audiometry material. Scandinavian Audiology, 25(2), 103-107.

French
Picard, M. (1997). Speech audiometry in French-speaking Quebec. Journal of Speech-Language Pathology and Audiology, 21(4), 301-312.
Picard, M. (1984). Speech audiometry for French-Canadians. Audiology, 23(4), 337-365.

German
Plath, P., Stuhlen, H. W., Graf, H. & Pelzer, H. (1973). Investigations of the intelligibility of a new recording of the Freiburg speech test. Zeitschrift für Laryngologie, 52(6), 457-469. [German]

Luganda
Nsamba, C. (1979). Luganda speech audiometry. Audiology, 18(6), 513-521.

Spanish
Zubick, H. H., Irizarry, L. M., Rosen, L., Feudo, P., Kelly, J. H. & Strome, M. (1983). Development of speech audiometric materials for native Spanish-speaking adults. Audiology, 22(1), 88-102.

Working with Multicultural Clients
ASHA (1997-2004). Tips for working with an interpreter. [Online]
Ramkissoon, I. & Khan, F. (2002). Serving multilingual clients with hearing loss: How linguistic diversity affects audiologic management. [Online] The ASHA Leader, 8(3).
Battle, D. E. (2002). Communication Disorders in Multicultural Populations (3rd ed.). Boston, MA: Butterworth-Heinemann.
Beverly-Ducker, K. (2003). Multicultural issues in audiology.
ASHA Division 9 Newsletter, 13(1), 12-15.

APPENDIX 8

Assessing Speech Recognition and Discrimination in Non-Native English Speakers: Suggestions

• CASE HISTORY
  • Get a general sense of the client's English level
  • How long have they been in Canada?
  • Do they use English in daily life?
  • Use an interpreter if necessary
• INSTRUCTIONS
  • Face-to-face
  • Speak slowly, clearly, and free of colloquialisms
  • Verify the client's understanding
  • Strongly encourage guessing
• TEST MATERIALS
  • Digit pairs for SRT
  • SAT, picture-pointing (WIPI), children's word lists (e.g., PBK, NU-CHIPS), word lists in other languages¹
  • Use both subjective and objective measures, e.g., hearing handicap scales
• TEST PROCEDURES
  • Familiarize before SRT testing
  • Allow longer response time
  • Re-instruct if necessary
• TEST INTERPRETATION AND RECOMMENDATIONS
  • Client-oriented approach: listen to the client's subjective comments on listening needs and problems
  • Be aware of limitations in applying test results to real-life functioning

¹ ASHA (1989) established guidelines on who may provide assessment services in languages other than English. If the bilingual criteria cannot be satisfied, trained speakers of the client's language should be utilized.
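A note on test interpretation: Thornton and Raffin (1978), cited in the references, model word-recognition scores as a binomial variable, which is why two WRS values from a short word list can differ substantially without reflecting a true change in performance. The sketch below is a simplified one-sample illustration of that idea (it is not Thornton and Raffin's exact two-score comparison tables, and the function names are this sketch's own): treating an observed score as the true recognition probability, it finds the range of retest scores that would not be statistically surprising at the 95% level.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed by direct summation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def critical_range(score, n, alpha=0.05):
    """Given an observed score (words correct out of n), return the range
    (lower, upper) of retest scores that would NOT be significantly
    different at level alpha, under a one-sample binomial model that
    treats score/n as the true recognition probability."""
    p = score / n
    # Smallest score whose lower-tail probability exceeds alpha/2.
    lower = 0
    while binom_cdf(lower, n, p) <= alpha / 2:
        lower += 1
    # Largest score whose upper-tail probability exceeds alpha/2.
    upper = n
    while 1 - binom_cdf(upper - 1, n, p) <= alpha / 2:
        upper -= 1
    return lower, upper

# Example: 40/50 words correct (80%) on a standard 50-word list.
lo, hi = critical_range(40, 50)
print(lo, hi)
```

The width of this range for a 50-word list illustrates why the appendix cautions against over-interpreting small score differences; the published Thornton and Raffin tables, which account for sampling error in both scores being compared, give even wider critical ranges.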

