UBC Theses and Dissertations

Perceptual learning of dysarthric speech: effects of familiarization and feedback. Buchholz, Leah Kee, 2009.

PERCEPTUAL LEARNING OF DYSARTHRIC SPEECH: EFFECTS OF FAMILIARIZATION AND FEEDBACK

by

LEAH KEE BUCHHOLZ
B.A., Simon Fraser University, 2006

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (Audiology and Speech Sciences)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)
February, 2009
© Leah Kee Buchholz, 2009

Abstract

The current study investigated the presence of perceptual learning following familiarization with spastic dysarthric speech secondary to cerebral palsy. The phonemic level of speech perception was examined using the word-initial stop voicing contrast. Stimuli were produced by a male speaker with severe spastic dysarthria. Speech samples were selected from the speaker's utterances based on negative voice-onset time (prevoicing duration). Stimuli were systematically selected to create the voicing contrast using tokens with either short prevoicing or abnormally long prevoicing durations. Thirty naïve listeners were randomly assigned to one of three familiarization groups: one group was provided written feedback during familiarization, the second group listened to the same stimuli but was not provided with feedback, and the third group listened to different stimuli, which did not contain the voicing contrast. A forced-choice testing format was used to measure listeners' responses preceding and following familiarization. Results showed changes in listeners' response patterns following familiarization across the three groups, indicating that perceptual learning occurred. Theoretical, clinical, and design implications are explored.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
1. INTRODUCTION
    1.1 Overview of thesis
    1.2 Cerebral palsy
        1.2.1 General description of cerebral palsy
        1.2.2 The speech pattern for people with dysarthria
    1.3 The acoustic review of stop phonemes in non-neurologically compromised speakers
    1.4 Acoustic characteristics at the level of the larynx for speakers with dysarthria
    1.5 Speech perception of speakers with dysarthria
    1.6 Perceptual learning research
        1.6.1 Overview of perceptual learning
        1.6.2 Special listener and special speaker populations
        1.6.3 Perceptual learning using synthetic speech as stimuli
        1.6.4 Perceptual learning using dysarthric speech as stimuli
2. METHODS
    2.1 Stimuli construction
        2.1.1 Recording
        2.1.2 Stimuli selection
    2.2 Listeners
    2.3 Procedure
3. RESULTS
    3.1 Overview of the results
    3.2 Typical token analysis
    3.3 Atypical token analysis
    3.4 Comparison between typical vs. atypical tokens
    3.5 POA analysis for atypical items
4. DISCUSSION
    4.1 Summary of the findings
    4.2 Perceptual learning for typical tokens
    4.3 Perceptual learning for atypical tokens
    4.4 Generalization of perceptual learning to novel words
    4.5 Performance as a function of place of articulation for atypical tokens
    4.6 The effect of familiarization on perceptual learning
    4.7 Theoretical and clinical implications
    4.8 Limitations
5. CONCLUSION
REFERENCES
APPENDICES
    Appendix A: Speaker Consent
    Appendix B: Stimuli Words for All Wordlists
    Appendix C: Listener Without Feedback Consent Form
    Appendix D: Listener With Feedback Consent Form
    Appendix E: Questionnaire
    Appendix F: Forced-Choice Presentation During Participant Testing
    Appendix G: Table 2. POA Mean Accuracy Scores (%) and Standard Deviations for All Wordlists
    Appendix H: Ethics Certificate of Approval

List of Tables

Table 1. Prevoicing Durations (ms) of Typical and Atypical Tokens for Wordlists
Table 2. POA Mean Accuracy Scores (%) and Standard Deviations for All Wordlists

List of Figures

Figure 1. Voicing Accuracy Across Wordlists for the Atypical Bilabial Voicing Contrast
Figure 2. Comparison Between Typical versus Atypical Tokens for the Bilabial Voicing Contrast
Figure 3. Voicing Accuracy for all Atypical Places of Articulation
Figure 4. Voicing Accuracy for all Atypical Places of Articulation Across all Wordlists

Acknowledgements

First, a thank you to my research supervisor, Dr. Valter Ciocca. His insightful comments and commitment to my thesis have been invaluable. I would also like to thank my committee members, Dr. Barbara Purves and Dr. Linda Rammage. I have appreciated their specialized knowledge and suggestions throughout this process. To Dr. Judith Johnston and friends, a special thank you for your support, guidance, and words of encouragement. I have appreciated it all immensely. Last, but not least, a thank you to my aunt, Christine Schink. Without her, my interest in language and communication may never have been born.

1. INTRODUCTION

1.1 Overview of thesis

Familiarization is previous exposure to an acoustic speech signal (Spitzer, Liss, Caviness, & Adler, 2000; D'Innocenzo, Tjaden, & Greenman, 2006). Familiarization may result in higher speech recognition accuracy. For example, listeners who have prior experience with unique speech patterns show better comprehension of spoken words than unfamiliar listeners (McGarr, 1983; Flipsen, 1995). This superior performance following exposure has been termed the "accommodation effect" (Weiss, Gordon, & Lillywhite, 1987) and "perceptual learning" (Norris, McQueen, & Cutler, 2003). Presently, it is unclear exactly how prior exposure to unique speech stimuli affects listeners' speech perception (Tjaden & Liss, 1995; D'Innocenzo, Tjaden, & Greenman, 2006; Spitzer, Liss, Caviness, & Adler, 2000; Garcia & Cannito, 1996; Eisner & McQueen, 2005; Kraljic & Samuel, 2006; Nygaard & Pisoni, 1998; Lass, 1996). Researchers have shown that information other than the acoustic characteristics of the speech signal, such as information about the conversation topic or semantic context cues, can improve intelligibility scores (Weismer & Martin, 1992).
Previous experience with specific talkers has also been shown to improve speech recognition (Nygaard & Pisoni, 1998), presumably because listeners are able to adjust their perceptual representations of speech categories to reflect the individual characteristics of speech produced by specific talkers (D'Innocenzo, Tjaden, & Greenman, 2006). Speech perception studies have examined the effects of familiarization with the unique speech patterns of dysarthric speakers (Yorkston & Beukelman, 1983; Garcia & Cannito, 1996; Hustad & Cahill, 2003; D'Innocenzo, Tjaden, & Greenman, 2006; Spitzer, Liss, Caviness, & Adler, 2000; Tjaden & Liss, 1995). The current study examined the effects of the following variables during the familiarization process: the presence or absence of feedback, the use of different types of stimuli, and the amount of prevoicing (PV) associated with the perception of the English voicing contrast for word-initial stops. The speaker was an adult male diagnosed with spastic dysarthria secondary to cerebral palsy (CP).

Section one reviews CP, dysarthria, the acoustic characteristics of stops produced by speakers with and without dysarthria, and research involving perceptual learning. Section two describes the design and procedures of the present study. Section three presents the results. Section four discusses the results, clinical implications, and the limitations of the study. The final section draws conclusions based on the study.

1.2 Cerebral palsy

1.2.1 General description of cerebral palsy

Cerebral palsy is an umbrella term used to describe neuromotor disorders due to lesions in the cerebrum occurring around the time of birth (Bhatnagar, 2002; Rammage, Morrison, & Nichol, 2001). Across Canada, approximately 50,000 people are diagnosed with CP, with an estimated one in 500 locally in British Columbia (Colledge, 2006). Many of these individuals reach adulthood and have a regular lifespan (Colledge, 2006).
People with CP may or may not have an accompanying cognitive deficit (Bhatnagar, 2002). There are four major types of CP: spastic, ataxic, athetoid, and mixed. Spastic CP, the focus of this study, is the most prevalent of the four types: approximately 75% of individuals with CP are diagnosed with this variant (Colledge, 2006). This subtype is a consequence of bilateral damage to the motor cortex or direct motor pathways (Bhatnagar, 2002). The damage manifests itself in speech production by affecting the neural transmission of messages from the motor cortex to the muscles needed for speaking (Bhatnagar, 2002). The consequence of this damage is a unique pattern of speech called spastic dysarthria.

1.2.2 The speech pattern for people with dysarthria

Dysarthria is a general term used to identify neurologic speech disorders that are due to deviations in the strength, tone, speed, steadiness, or accuracy of movement needed to control speech, phonation, respiration, prosody, and articulation (Duffy, 2005). Perceptually, spastic dysarthria is characterized by a strained, harsh voice quality with a low pitch, reduced variability in loudness, and imprecise articulation (Darley, Aronson, & Brown, 1969). Though these perceptual characteristics are fairly well described within the literature, there is relatively little research on the acoustic characteristics of speech produced by speakers with dysarthria (Bunton & Weismer, 2001). Among the documented acoustic abnormalities are atypical durations for vowels and consonants as well as aberrant formant frequencies (Duffy, 2005). These abnormalities manifest themselves perceptually as distorted vowels and imprecise consonants (Duffy, 2005). Coordination of laryngeal and supralaryngeal structures in speakers with CP is of particular interest in the current study. Fine articulatory control is needed for accurate production of the voiced-voiceless contrast in English.
The following section provides a brief review of the voicing contrast in non-neurologically-impaired speakers.

1.3 The acoustic review of stop phonemes in non-neurologically-compromised speakers

The English stop inventory is based on a two-category distinction, a binary division between voiced and voiceless phonemes (Lisker & Abramson, 1964). Lisker (1977) documented 16 different cues that are associated with the voiced-voiceless contrast in stops. One such cue is the delay in the onset of first formant (F1) energy relative to the higher formants in voiceless, but not in voiced, stops (Lisker, 1977; Kent & Read, 1992). The presence of aspiration noise following the release burst is another acoustic property that indicates voiceless but not voiced stops (Lisker, 1977; Kent & Read, 1992). Vowel duration also provides an acoustic cue for the listener to categorize the voicing contrast in stops: vowels are longer when adjacent to voiced stops than when adjacent to their voiceless counterparts (House & Fairbanks, 1953). Of the many documented cues, voice-onset time (VOT) is the acoustic cue most commonly used to differentiate voiced from voiceless stops (Ansel & Kent, 1992). Voice-onset time is typically measured in milliseconds (ms). VOT values for voiced stops are typically short, ranging from approximately -20 ms to +20 ms (Lisker & Abramson, 1964; Kent & Read, 1992). By contrast, VOT durations for voiceless stops range from approximately +25 ms to +100 ms (Kent & Read, 2002). For a stop to be perceived as voiceless, voicing must commence more than 25 ms after the release burst; a voiceless stop may be perceived as voiced if the VOT value is less than 20 ms (Klatt, 1975). Voice-onset time can be either a negative value (measured when voicing commences prior to the release burst) or a positive value (measured from the release burst to the commencement of vocal fold vibration) (Lisker & Abramson, 1964).
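The VOT ranges reviewed above amount to a simple decision rule. The following sketch is illustrative only (the threshold values are taken from the ranges cited in the text, and the function name is hypothetical); it labels a word-initial stop token from its measured VOT:

```python
def classify_stop_voicing(vot_ms: float) -> str:
    """Label a word-initial English stop from its voice-onset time (ms).

    Thresholds follow the ranges cited in the text: voiced stops
    roughly -20 ms to +20 ms, voiceless stops roughly +25 ms to +100 ms.
    Prevoicing longer than 20 ms (i.e., VOT below -20 ms) is flagged as
    atypical, as are long-lag values beyond +100 ms.
    """
    if vot_ms < -20:
        return "atypical (abnormally long prevoicing)"
    if -20 <= vot_ms <= 20:
        return "voiced"
    if 25 <= vot_ms <= 100:
        return "voiceless"
    if 20 < vot_ms < 25:
        return "ambiguous"
    return "atypical (abnormally long voicing lag)"

print(classify_stop_voicing(-10))   # short prevoicing: voiced
print(classify_stop_voicing(60))    # long lag: voiceless
print(classify_stop_voicing(-150))  # dysarthric-like prevoicing: atypical
```

Tokens falling in the 20-25 ms gap between the two category ranges are labelled ambiguous, mirroring the perceptual boundary region described by Klatt (1975).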
Negative VOT is also known as prevoicing (PV) or voicing lead (Ryalls, Zipprer, & Baldauff, 1997; Kent & Read, 1992). Voiced stops may be characterized by a period of PV; prevoicing for voiced stops typically extends no further than -20 ms (Lisker & Abramson, 1964). Prevoicing is typically absent in voiceless stops (Lisker & Abramson, 1964; Ryalls, Zipprer, & Baldauff, 1997).

1.4 Acoustic characteristics at the level of the larynx for speakers with dysarthria

The assumption when examining acoustic-articulatory relationships is that acoustic measurements are related to physiological movements (Kent, Weismer, Kent, Vorperian, & Duffy, 1999). Pitch is a perceptual attribute determined by fundamental frequency; physiologically, this acoustic cue corresponds to the number of vocal fold pulses per second (Pickett, 1999). Pitch ranges perceptually from low to high (Kent & Read, 1992), and it is a characteristic that is challenging for speakers with dysarthria to control (Hess, 1992; Patel, 2002). One study examined the ability of speakers with dysarthria to produce low, medium, and high levels of pitch (Patel, 2002). Eight speakers with severe dysarthria secondary to CP were trained, with visual feedback from a Visi-Pitch program, to produce these different pitch levels for sustained vowels. For each speaker, 20 recordings were obtained at each pitch level following training; no visual feedback was provided during testing. Results showed that most speakers were unable to produce three distinct pitches: of the eight speakers, two successfully produced three distinct pitches, five produced two distinct pitches, and one was unable to produce any distinct levels. Overall, the speakers were inconsistent in their repeated productions of the same pitch level. The speakers' difficulty in producing distinct pitches and their inconsistency across repeated phonations reflect poor control of the finer movements of the larynx.
This study indicates that control of phonation at the laryngeal level is difficult for speakers with dysarthria, even when no coordination with the articulators is required. Though the study had a small sample size, owing to the logistical constraints of finding participants, and lacked inter-rater judgments (judgments made by different listeners), the results revealed that speakers with dysarthria had noticeable difficulties with speech production at the level of the larynx. The following section details the additional difficulty this population experiences when attempting to coordinate the laryngeal level with the supralaryngeal level (Farmer, 1980; Ansel & Kent, 1992).

Farmer (1980) investigated the ability of people with CP to produce distinctive VOT values for the voiced-voiceless contrast. Voice-onset times were measured in stops produced by five speakers with spastic dysarthria secondary to CP and five speakers with athetoid dysarthria secondary to CP; this review focuses on the results for the speakers with spastic dysarthria. Words with word-initial stops were elicited and phonetically transcribed, and VOT measurements were obtained. Results indicated that, as in non-neurologically impaired speakers, VOT durations were shorter for voiced stops and longer for voiceless stops. Overall, however, VOT durations were longer (means: /p/ = +97 ms, /t/ = +91 ms, /k/ = +101 ms, /b/ = +11 ms, /d/ = +9 ms, /g/ = +13 ms). A study conducted by Ansel and Kent (1992) also found prevoicing for voiced stops but not for voiceless stops; these researchers noted that the PV durations were unusually long (-328 ms to -32 ms). Longer durations are expected for speech patterns where a slowed rate is the norm, as in spastic dysarthric speech (Forrest & Weismer, 1997).
Farmer (1980) completed a detailed phonological analysis of the speakers' recordings and found that the substitution error of voiced stops replacing voiceless stops occurred twice as often as the substitution error of voiceless stops replacing voiced stops. That is, the speakers' intended voiceless stops were perceived as voiced twice as often as their intended voiced stops were perceived as voiceless. Farmer concluded that the unusually long VOT durations led to poor intelligibility. The results from both Farmer's and Ansel and Kent's research reveal that speakers with dysarthria produce atypical acoustic characteristics for the stop voicing contrast; specifically, prevoicing durations are markedly longer for voiced stops. Farmer suggested that speakers with dysarthria may require more time to coordinate the vocal tract, which results in longer VOT durations in this population in comparison to non-dysarthric speakers. The results suggest that speakers with dysarthria lack coordination between laryngeal and supralaryngeal movements.

1.5 Speech perception of speakers with dysarthria

A consequence of these deviant acoustic characteristics in speakers with dysarthria is poor speech perception by listeners (Whitehill & Ciocca, 2000). Specifically, laryngeal incoordination has been documented as one of the primary acoustic-phonetic factors that negatively affects listeners' speech perception. One study of Cantonese speakers with dysarthria secondary to CP identified the phonetic contrasts that most negatively affected single-word intelligibility scores (Whitehill & Ciocca, 2000). Twenty speakers with CP participated in the study: 13 with spastic CP, 5 with athetoid CP, and 2 with mixed CP. These speakers were recorded to examine 17 phonetic contrasts capturing segmental errors representative of Cantonese speakers with dysarthria. The researchers focused on the laryngeal-level contrast specific to Cantonese.
In Cantonese, this distinction is found in aspirated versus unaspirated stops rather than in the English voiced-voiceless phonetic contrast. Twelve native Cantonese listeners with normal hearing identified each recorded word in a four-word, multiple-choice format: listeners were presented with the target word and three foils that differed by one phonetic contrast. The results indicated that four of the six contrasts most difficult for listeners to identify involved control at the level of the larynx: tone level, glottal versus null, long versus short, and aspirated versus unaspirated. For all three types of dysarthria, the aspirated versus unaspirated contrast was the third most difficult contrast for listeners to identify. This error pattern reflects the challenge speakers with dysarthria face in coordinating movements between the laryngeal and articulatory levels. The difficulty with the aspirated versus unaspirated distinction that Cantonese speakers with dysarthria exhibit parallels the voicing difficulty that English speakers with dysarthria experience (Farmer, 1980; Ansel & Kent, 1992). Whitehill and Ciocca (2000) suggested that synchronizing fine laryngeal movements with the articulators may
The listeners’ eight most poorly identified phoneme contrasts were categorized according to the error rate. The word-initial voicedvoiceless distinction was within the top four listener errors for male speakers and within the top five errors for female speakers. This study not only indicates that English speakers with dysarthria have difficulty creating the voicing contrast, but also that this difficulty is reflected in poor phoneme identification accuracy. Together these studies highlight that the production difficulties present in the acoustic properties of the speech signal are manifested as poor phoneme identification in the perceptual dimension. The implications of the acoustic-phonetic dimension on speech perception will be explored. The reviewed articles suggest that speakers with dysarthria have difficulty controlling the larynx and synchronizing the movements of the laryngeal and supralaryngeal levels. Coordination of these two levels is needed for typical production of the voiced-voiceless stop contrast. As speakers with dysarthria are poor at sequencing these movements, their acoustic productions for stops are abnormal and subsequently, listeners are poor at accurately identifying them. The present paper explored the possibility that familiarization with dysarthric speech may lead to perceptual learning. The focus was on PV durations to examine if familiarization led to  9 improved listener accuracy scores for correctly identifying the voicing contrast in word-initial stops. It was hypothesized that listeners’ accuracy scores following familiarization would be higher than before familiarization. 1.6.0 Perceptual learning research 1.6.1 Overview of perceptual learning literature A review of the perceptual learning literature reveals consistent findings. Following previous exposure to abnormal speech signals, listeners are able to identify parts of an utterance that were previously unintelligible (Nygaard & Pisoni, 1998). 
A literature review provides convincing evidence for the importance of speaker-specific information. The procedure for familiarization differs slightly in the literature. One common approach is for listeners to hear words or phrases while reading along with a written transcript. Listeners are later tested while transcribing a new or the same set of stimuli from the same or different speaker. 1.6.2 Special listener and special speaker populations Research on perceptual learning of speech has identified “special talker” populations. “Special talkers” are those who have a unique speech pattern that differs significantly from the local native speech style (Bradlow & Bent, 2007). Examples of these populations include English as second language speakers. These speakers have consistent articulation errors that differ from native English speakers. For example, in Japanese /l/ and /r/ are not distinctive phonemes. Subsequently, when native Japanese speakers speak English, their native language phonology interferes with clear articulation of the /r/ phoneme and the result is difficulty producing /r/ (Flege, Takagi, & Mann, 1995). Speakers with spastic dysarthria are another group of “special talkers” who have unique acoustic characteristics in their speech pattern (Duffy, 2005). The present paper proposes that  10 just as there are “special talker” populations, there are “special listener” populations. These groups of listeners have unique signal-independent knowledge, such as previous experience with a speaker or “special talker” population that makes them different from other, inexperienced listeners. Research indicates that prior experience with unique speech patterns leads to improved speech perception abilities that inexperienced listeners do not have (Allen & Miller, 2004; Norris, McQueen, & Cutler, 2003). The difference in speech perception indicates that these listeners possess different in signal-independent knowledge. 
The present paper proposes that this difference in knowledge could be recognized by identifying such individuals as "special listeners". "Special listeners" parallel "special talkers" in the sense that they are linguistically distinct from typical, inexperienced listeners. The current study explored listeners' abilities to learn the speech patterns of a speaker with spastic dysarthria secondary to CP. "Special listeners" may have extensive previous experience with unique speech signals, such as parents of children who have phonological disorders (Flipsen, 1995) or individuals who work with people who are deaf (McGarr, 1980). Both groups of listeners have been documented to obtain higher intelligibility test scores when identifying words produced by the special speaker populations with which they have previous experience (Flipsen, 1995; McGarr, 1980). These results support the anecdotal claims of clinicians who self-report higher intelligibility in their speech perception of "special speaker" populations (Yorkston, Dowden, & Beukelman, 1992).

1.6.3 Perceptual learning using synthetic speech as stimuli

Perceptual learning in listeners exposed to synthesized speech has been found consistently (Kraljic & Samuel, 2006, 2007; Norris, McQueen, & Cutler, 2003; Allen & Miller, 2004). In these studies, two phonemes that differ primarily in one acoustic property are selected, and an ambiguous phoneme is created by manipulating the key acoustic cue to a point halfway between the two contrasting phonemes. For example, in a study conducted in Dutch, frication noise was manipulated to create a continuum between the phonemes [f] and [s] (Norris, McQueen, & Cutler, 2003). The amount of frication was systematically manipulated to create a 21-step continuum, whose midpoint was identified as the ambiguous sound [?]. Listeners were presented with the 21-step continuum and identified whether they perceived an [s] or an [f]. Then, during familiarization, the listeners heard [?]
embedded in words. The [?] sound had to be perceived as either [f] or [s] in order for a real word to be perceived. Familiarization was limited to 20 ambiguous phonemes randomized within a 100-word list containing clear [f] and [s] productions. Following familiarization, listeners were again presented with the [f]-to-[s] continuum. Initially, listeners were not predisposed to identify items on the continuum more often as either [f] or [s]. Following familiarization, a perceptual shift was evident: listeners identified more of the continuum sounds as the sound they had been biased towards during familiarization. Specifically, listeners who were lexically biased to perceive the [?] sound as [f] identified more segments on the continuum as [f], while those biased to perceive [?] as [s] identified more segments as [s] on the same continuum. This systematic response pattern following familiarization suggests that these listeners experienced perceptual learning during the course of the study.

Evidence of perceptual learning has also been found for listeners exposed to computer-modified English stops. A series of studies conducted by Kraljic and Samuel (2006, 2007) examined listeners' accuracy patterns following limited exposure to a synthesized sound created from the phonemes /t/ and /d/. The researchers used a stimulus construction paradigm similar to that of Norris, McQueen, and Cutler (2003), but rather than focusing on one acoustic dimension, three were manipulated: amplitude, VOT, and the duration of silence preceding the burst. The three acoustic criteria were adjusted incrementally to create a 21-step continuum, with the ambiguous phoneme [?] as its midpoint. Listeners were presented with the continuum. Then, using a forced-choice lexical decision task, they were biased to perceive the ambiguous phoneme as /t/ or /d/ in order to perceive a real word.
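The continuum construction used in these studies amounts to linear interpolation of one or more acoustic parameters between two endpoint phonemes. The sketch below is an illustrative reconstruction, not the stimulus software the cited studies used; the function name, parameter names, and endpoint values are hypothetical:

```python
def build_continuum(endpoint_a: dict, endpoint_b: dict, steps: int = 21) -> list:
    """Interpolate each acoustic parameter linearly between two endpoint
    phonemes, yielding `steps` stimulus specifications. For 21 steps,
    index 10 is the midpoint, i.e. the maximally ambiguous token."""
    continuum = []
    for i in range(steps):
        t = i / (steps - 1)  # 0.0 at endpoint A, 1.0 at endpoint B
        spec = {k: endpoint_a[k] + t * (endpoint_b[k] - endpoint_a[k])
                for k in endpoint_a}
        continuum.append(spec)
    return continuum

# Hypothetical /d/ and /t/ endpoints on the three manipulated dimensions.
d_spec = {"vot_ms": 10.0, "amplitude_db": 60.0, "closure_silence_ms": 20.0}
t_spec = {"vot_ms": 70.0, "amplitude_db": 66.0, "closure_silence_ms": 80.0}

steps = build_continuum(d_spec, t_spec)
print(len(steps))           # 21
print(steps[10]["vot_ms"])  # midpoint VOT: 40.0
```

The same routine covers the single-dimension [f]-to-[s] case (one key in the dictionary) and the three-dimension /d/-to-/t/ case, since every parameter is interpolated with the same step fraction.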
Listeners heard the ambiguous phoneme in 20 words randomized among 20 words with typical /t/ or 20 words with typical /d/, and 100 filler words that excluded /t/ and /d/. Initially, listeners did not show a preference for identifying the 21-step continuum as /t/ or /d/. Following familiarization, listeners showed a bias to perceive [?] as either /t/ or /d/, consistent with their familiarization period. Following familiarization, listeners were also tested on a continuum constructed for two new phonemes, /p/ and /b/. Interestingly, the learning exhibited for the modified acoustic properties on the /t/-to-/d/ continuum generalized to this new place of articulation. These results suggest that the listeners adjusted their categorization of voiceless and voiced stops according to the acoustic properties they heard during familiarization (Kraljic & Samuel, 2006; 2007). In addition, these listeners extended their new perceptual learning to different, non-trained phonemes with the same manner of articulation (Kraljic & Samuel, 2006; 2007). Unlike the previous studies, which used a single speaker to explore perceptual learning, Allen and Miller (2004) explored perceptual learning using multiple talkers. Specifically, these researchers examined how perceptual learning can support speaker identification. They selected VOT as the speaker-specific cue because it systematically differs across speakers even when speech rate is constant (Allen & Miller, 2003). Twenty native English speakers completed two listening sessions; each consisted of a familiarization phase and a testing phase. The listeners heard two different speakers during testing. Both speakers’ word-initial stop in the word “town” was acoustically modified to create a nine-step continuum. The /t/ VOT durations for each speaker were either shorter or longer than the speaker’s typical productions. The VOT durations for /d/ in the contrasting minimal pair “down” were not manipulated.
In the familiarization phase, subjects identified the speaker and the initial phoneme for each word. Listeners were exposed to either long or short VOT durations for the voiceless word-initial stop in “town”. During testing, listeners heard the short or long VOT durations for the speakers’ productions of the word “town”. Listeners had to identify the “town” productions that sounded similar to the ones heard during familiarization; namely, those that had the same VOT. In the second testing session the novel word “time” was presented in addition to “town”. The VOT properties for this word were also modified to create a nine-step continuum. Participants again selected the token that sounded most like the VOT duration they had heard during training. Listeners were informed of their accuracy on speaker identification with the word “yes” or “no”, in addition to the talker’s name, presented on the computer screen. No feedback was provided for phoneme discrimination. Results indicated that listeners were not only able to learn talker-specific VOT characteristics following familiarization but were also able to generalize this knowledge to new words. The researchers recommended examining perceptual learning of VOT durations in non-synthetically modified speech. Speakers with motor speech disorders meet this unique requirement, as VOT durations in their speech are naturally different from the norm.

1.6.4 Perceptual learning using dysarthric speech as stimuli

Over the last 20 years researchers have continued to examine the effects of familiarization using dysarthric speech. Yorkston and Beukelman (1983) conducted a seminal study that examined the effects of different familiarization conditions. Two speech samples were obtained from nine dysarthric speakers with speech intelligibility ranging from mild to severe. Nine listeners transcribed both speech samples from all speakers.
Listeners were randomly assigned to one of three listening conditions: no familiarization with the speech; familiarization in the form of listening to sentences and then immediately transcribing new sentences; and familiarization with feedback in the form of written transcripts for listeners to read along with during the familiarization period. Results showed no group differences in performance following familiarization. However, the authors noted that they failed to control for listeners’ prior experience with dysarthric speech, which resulted in the experienced listeners (i.e., speech-language pathologists) being placed in the no-familiarization group. Consequently, two of the three listeners in the no-familiarization group were speech-language pathologists. The result of not controlling for this variable was evident in the pre-familiarization accuracy scores, which were higher for experienced than for naive judges. Thus, the failure to find differences between the familiarization groups may be attributable to differing prior experience with dysarthric speech. The present study attempted to control for this variable by excluding all listeners who had experience with speakers with spastic dysarthria. A more recent study conducted by Garcia and Cannito (1996) also failed to find familiarization effects. This study focused on the effects of using nonverbal and verbal stimuli during three different familiarization conditions: AudioOnly, AudioVisual, and VisualOnly. All three familiarization conditions were compared with a non-familiarized listener group. A single speaker with flaccid dysarthria was audio- and video-recorded saying 96 sentences. In the familiarization condition, inexperienced listeners heard a short passage, but feedback was not provided. Following familiarization, listeners transcribed 16 sentences. The number of correctly identified words was calculated.
Results showed no differences in test scores between the familiarization and non-familiarization conditions. However, it is possible that these results were due to the absence of feedback during the familiarization sessions. Norris, McQueen, and Cutler (2003) posited that for previous exposure to unique speech stimuli to be effective, listeners must know the identity of the intended utterances in order to change their mental representations of the correct phonetic categories. Therefore, listeners in the Garcia and Cannito (1996) study were possibly unable to alter their phonetic representations according to the speech they heard because they were not provided with information about the speaker’s intended productions during the familiarization phase. For this reason, the present study examined the effects of providing written feedback during the familiarization period. The scores of listeners who were provided written feedback were compared to the scores of listeners who were not. It was hypothesized that the group of listeners who were provided with written feedback would have higher accuracy scores following familiarization than the group of listeners who were not. The findings reviewed above contrast with those of several studies that found improved accuracy scores following familiarization (Hustad & Cahill, 2003; D’Innocenzo, Tjaden & Greenman, 2006; Spitzer, Liss, Caviness, & Alder, 2000; Tjaden & Liss, 1995). In two studies (D’Innocenzo, Tjaden & Greenman, 2006; Tjaden & Liss, 1995a) researchers compared the effects of perceptual learning with dysarthric speech at the word and sentence levels. In these studies the listeners were familiarized with either a paragraph or randomly presented words from the same paragraph.
In the first study (D’Innocenzo, Tjaden & Greenman, 2006), the speaker was diagnosed with mixed spastic-flaccid dysarthria secondary to a traumatic brain injury, and in the second study the speaker had mixed spastic-ataxic dysarthria secondary to CP. Familiarization groups heard either the word or paragraph recordings while reading the written transcripts during familiarization. The non-familiarization group of listeners did not complete this phase. During testing, listeners transcribed the same audio recording heard during familiarization. These transcriptions were later analyzed for accurate word identification. Results indicated that listeners who were familiarized performed better than those who were not. Neither study found a statistically significant difference in accuracy when listeners were familiarized with sentences versus single words. The researchers speculated that participants in the familiarization condition were able to disambiguate the speech signal by relying on their previous experience to compensate for the aberrant acoustic characteristics in the speech signal. The Tjaden and Liss (1995a) study did not include a pre-familiarization comparison test for the three groups of listeners. Thus, the differences found following familiarization may have been due to group differences present prior to familiarization. Despite this limitation, the results were consistent with and supported by previous research demonstrating that familiarization improves listeners’ accuracy scores. Spitzer, Liss, Caviness, and Alder (2000) also investigated the effects of familiarization on the perception of dysarthric speech. Twenty naïve, inexperienced native English listeners participated. During familiarization these listeners read a written transcript while listening to 18 phrases spoken by a speaker. Then, they immediately transcribed 60 novel sentences spoken by the same speaker.
The control group consisted of 14 additional listeners who did not receive the familiarization session. Significantly higher accuracy scores were recorded for those who were familiarized with the speech pattern than for those who were not. The familiarized group had better overall accuracy at the lexical level. A detailed comparison of all the lexical errors revealed that within these words the familiarized listeners also demonstrated greater accuracy at the segmental level. These results support the view that gains in speech perception through familiarization are not restricted to the lexical level, but also occur at the phoneme level. Spitzer and colleagues proposed that experience enabled listeners to adjust to the unique acoustic-phonetic characteristics of the speaker, and that this speaker-specific knowledge is particularly important when the speech signal is challenging, as in the perception of dysarthric speech. These studies highlight the significance of individual differences in facilitating speech perception. Traditionally, variability in speech has been treated as a problem that must be eliminated through a talker normalization process. Normalization is a form of averaging or abstraction that removes individual variation. Such individual variation was considered unhelpful to speech perception. The growing evidence in the literature, as demonstrated by the reviewed studies, suggests that individual variation is important in speech perception (Johnson & Mullennix, 1997; Nygaard & Pisoni, 1995; Luce & Pisoni, 1986). Such speaker-specific properties are learned and subsequently used by listeners for speech perception (Johnson & Mullennix, 1997; Nygaard & Pisoni, 1995; Luce & Pisoni, 1986; Lindblom, 1985). Signal-independent information, such as listeners’ strategies and additional knowledge about the message, helps to decipher the speech signal (Lindblom, 1985; Trofimovich, 2008; Goldinger, 1996).
With previous experience, the memory of talker-specific acoustic-phonetic information allows listeners to customize perceptual strategies to a specific talker’s speech pattern. This, in turn, leads to improved speech perception of natural speech produced by individual talkers. This is particularly important for speakers with motor speech disorders, who produce a degraded acoustic signal (Weismer & Martin, 1992; Spitzer, Liss, Caviness, & Alder, 2000). Initially, when listeners encounter novel and unique speech stimuli, their existing perceptual knowledge is used to perceive the atypical speech patterns (Ansel & Kent, 1992). The result is a great number of errors (Ansel & Kent, 1992). With experience, listeners gain signal-independent, speaker-specific knowledge that can provide valuable information to compensate for the degraded speech signal (Ansel & Kent, 1992; Lindblom, 1985). The results of the reviewed studies establish that a familiarization procedure consisting of a pre-familiarization testing phase, followed by familiarization and then immediate retesting, is an effective method for documenting changes in talker-specific speech perception (Clarke-Davison, Luce, & Sawusch, 2008). Spitzer and colleagues suggested the need to examine perceptual learning at the phoneme level in speakers with dysarthria. It was speculated that this level of analysis may highlight the specific acoustic benefits listeners gain following familiarization with dysarthric speech. To meet this need, the current study examined the voicing distinction in English word-initial stop consonants. Until now, a detailed acoustic-phonemic investigation of perceptual learning of dysarthric speech secondary to CP has not been conducted. Spastic dysarthria provides a unique opportunity by naturally providing speech stimuli that deviate markedly from the norm.
The current study investigated perceptual learning as evidenced by improved accuracy scores for the word-initial voicing contrast. Research with synthesized speech provides convincing evidence for perceptual learning at this phonemic level. However, the review of studies examining familiarization with dysarthric speech reveals mixed findings. The contrasting findings regarding perceptual learning may be due to methodological issues in the studies. The current study addressed several of these methodological issues by attempting to control for the presence or absence of feedback during familiarization and for prior experience with the unique speech stimuli. To summarize, the following hypotheses were made based on the findings of the literature review above.

Hypothesis 1: Listeners’ identification of the word-initial stop voicing contrast will be different following familiarization.

Hypothesis 2: The listener group provided with written feedback during familiarization will have higher accuracy scores than the groups not provided written feedback.

Hypothesis 3: If listeners’ accuracy scores show perceptual learning following familiarization, perceptual learning will generalize to novel words containing the same acoustic voicing properties.

2.0 METHODS

2.1 Stimuli construction

2.1.1 Recording

Two speakers diagnosed with severe spastic dysarthria secondary to spastic CP were recorded in a soundproof booth. Speakers were recruited by requesting referrals from local speech-language pathologists. Both speakers were informed that their participation was voluntary and that they were free to withdraw from the study at any time (see Appendix A). The first speaker was a 52-year-old female and the second speaker was a 43-year-old male. A cardioid condenser microphone (Audio-Technica 3035) was used to record the speakers. The microphone was mounted onto a stand in order to maintain a constant 10 cm distance from the speaker’s mouth to the microphone.
All recordings were completed on an Apple iMac computer using a MOTU Ultralite audio interface. The utterances were sampled at a rate of 44.1 kHz. The speakers completed the recording in one two-hour session. The stimuli consisted of monosyllabic CVC words (see Appendix B). Words were written on cue cards and presented individually to the speakers at a rate of approximately one word per two seconds. The speakers were presented with all stimulus words five times. Some tokens were presented twice if the speaker did not produce a word-initial consonant or the word-initial consonant was highly distorted. To ensure random presentation, the cue cards were shuffled eight times before each recording. The speakers were given a break between each presentation of the stimuli.

2.1.2 Stimuli selection

Both speakers’ recordings were informally screened for intelligibility by an undergraduate student. The student identified the word-initial consonant of each word. Ninety-five percent of the word-initial consonants were correctly identified for the female speaker. The experimenter judged these productions to be too intelligible, as ceiling effects would limit the amount of potential perceptual learning during familiarization. A different undergraduate student identified the word-initial consonants in the male speaker’s utterances. Identification of the word-initial consonants for this speaker was near chance. The experimenter concluded this recording was suitable, as the possibility of obtaining ceiling or floor effects was less likely. The study was then conducted using only the male speaker’s recordings. The speech stimuli were divided into wordlists for five conditions (pre-familiarization, familiarization, pseudo-familiarization, post-familiarization-old, and post-familiarization-new).
The PRE-F and POST-O conditions used the same wordlist: the same stimuli were identified as the PRE-F wordlist when presented to listeners prior to familiarization and as the POST-O wordlist when presented following familiarization. The pre-familiarization (PRE-F), familiarization, post-familiarization-old (POST-O), and post-familiarization-new (POST-N) wordlists consisted of word-initial voiced-voiceless minimal-pair stop contrasts (e.g., tot vs. dot). Each wordlist had 11 minimal pairs, for a total of 22 tokens. The pseudo-familiarization (PSEUDO-F) wordlist consisted of 22 different CVC words that excluded all consonants with a voiced-voiceless contrast (see Appendix B). All sound files were edited using Praat 5.1 for Mac OS X. All words were stored as separate audio files and labeled for use during participant testing. All word-initial stops were examined to determine the speaker’s voicing accuracy (perceptually and acoustically accurate productions of word-initial stops). Voicing accuracy for each place of articulation (POA) was determined by the experimenter as follows: /p/ 57%, /b/ 69%, /t/ 35%, /d/ 57%, /k/ 11%, /g/ 26%. The experimenter then attempted to reflect this percentage correct for each POA in the sound files selected for each wordlist. This was done to approximate the voicing accuracy listeners hear when speaking with an individual with dysarthria secondary to CP. Each repetition of the speaker’s recording of every stimulus was acoustically examined. Abnormally long PV durations were noted for many of the voiced and voiceless word-initial stops. For speakers without dysarthria, PV may be present for voiced stops (-20 ms to -12 ms) but is absent for voiceless stops (Lisker & Abramson, 1964; Ryalls, Zipprer, & Baldauff, 1997). The PV durations of speakers with dysarthria are typically longer than those of non-dysarthric speakers (-328 ms to -32 ms) (Ansel & Kent, 1992). The speaker in the current study also had long PV durations.
Negative VOT values were present for both voiced and voiceless stops. They provided a relatively distinct acoustic characteristic for differentiating between his voiced and voiceless tokens. Spectral analysis was completed for all stimuli, and tokens were chosen based on the presence or absence of PV. The primary measurement was taken from the spectrogram, from the beginning of the prevoicing bar to the release burst. The secondary criterion was the waveform, measured from the beginning of aperiodicity to the release burst. Finally, the selected portion was perceptually evaluated. Overall, the PV durations for voiced stops were noticeably longer than those documented in the Farmer (1980) and Ansel and Kent (1992) studies (see Table 1). In addition, the current speaker also had pronounced PV durations for voiceless stops, an acoustic cue not present in the previous studies on dysarthric speech. These differences may in part be due to the severity of the speaker in the current study versus those in the literature. The speaker recorded was diagnosed with severe spastic dysarthria secondary to CP, whereas the speakers in the former studies were predominantly diagnosed with moderate to severe dysarthria.

Table 1. Prevoicing Durations (ms) of Typical and Atypical Tokens for Wordlists

                       Pre-Familiarization      Familiarization         Post-New
                       & Post-Old
           POA         Voiced    Voiceless      Voiced    Voiceless     Voiced    Voiceless
Typical    Bilabial    -211      0              -205      0             -287      0
           Alveolar    -284      N/A            -270      0             -326      0
           Velar       -8        N/A            -110      0             -37       0
Atypical   Bilabial    -155      -464           -134      -399          -131      -533
           Alveolar    -190      -219           -168      -374          -176      -426
           Velar       /h/       /h/            /h/       /h/           /h/       /h/

Accurate voiceless stops were identified by an absence of PV, and accurate voiced stops were identified by the presence of relatively short PV.
In this way, the PV durations of the typical items approximated the PV durations of speakers without dysarthria; specifically, zero prevoicing for voiceless stops and prevoicing present for voiced stops (Lisker & Abramson, 1964). The experimenter judged these tokens to sound perceptually more similar to the voicing pattern of non-dysarthric speech. The voiced and voiceless tokens with the typical PV characteristics will subsequently be referred to as the “typical” tokens. Inaccurate stops intended as voiced or voiceless phonemes were selected from the speaker’s utterances when their PV durations acoustically and perceptually marked the opposite voicing contrast. These tokens were selected on the basis of PV durations, as measured by the presence or absence of the prevoicing bar on the spectrogram. Unlike the typical tokens, this selection was carried out to create a unique voicing distinction opposite to the typical PV distinction, with long PV marking voiceless tokens and short PV marking voiced tokens. Clear contrasting voicing distinctions based on PV durations were obtained for the bilabial and alveolar POA (see Table 1). The PV duration for the voiceless tokens was minimally twice as long as for the voiced tokens. A distinct voiced-voiceless PV contrast for velar tokens could not be obtained. Instead, both voicing contrasts for the atypical velar tokens were characterized by a word-initial glottal fricative, /h/. All tokens with the described pattern of PV will subsequently be referred to as “atypical.” It was expected that if perceptual learning took place, following familiarization listeners would begin to correctly identify the long-PV tokens as voiceless and the short-PV tokens as voiced for the atypical items.
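The "minimally twice as long" selection rule for the atypical bilabial and alveolar pairs can be expressed as a simple check. The function below is a sketch of that rule, not the exact procedure used in the study, and its name is invented for illustration.

```python
def is_valid_atypical_pair(pv_voiced_ms, pv_voiceless_ms):
    """Check the atypical selection rule sketched from the text: both
    members of the minimal pair carry prevoicing, and the token intended
    as voiceless has a PV duration at least twice that of the voiced
    token. Durations are positive magnitudes of negative VOT, in ms."""
    return pv_voiceless_ms >= 2 * pv_voiced_ms > 0

# Atypical bilabial values from Table 1 (PRE-F wordlist):
# voiced -155 ms, voiceless -464 ms.
is_valid_atypical_pair(155, 464)  # True: 464 ms is more than twice 155 ms
```

Applying such a check during token selection makes the inverted PV cue systematic rather than incidental, which is what gives listeners a learnable pattern.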
Given that PV was similar to the norm for the typical items, it was hypothesized that, following familiarization, listeners would obtain higher accuracy scores for these items overall than for the atypical tokens. During the second hour of the recording session the speaker became visibly fatigued. At this time his utterances became progressively less intelligible. The result was a more limited data set with fewer tokens that met the specified acoustic criteria. Consequently, an even distribution of tokens across POA and typicality was not possible. In the final stimuli set no typical voiced velar or alveolar tokens were present. Following spectral and perceptual analysis of each individual recording of all stimuli, the tokens were chosen and the wordlists were created (PRE-F, familiarization, POST-O, and POST-N). The PRE-F wordlist consisted of one token per word (22 tokens in total). The familiarization and PSEUDO-F wordlists consisted of two tokens per word (44 different tokens in total). The POST-N wordlist consisted of one token per word (22 tokens in total). The PV durations of the typical items had more normal voicing distinctions. These values contrasted with the atypical items, which inverted the typical PV durations. In this way, a unique voicing contrast was systematically created using the atypical tokens. This voicing contrast provided listeners with a novel voicing pattern to learn using natural, non-synthesized speech. The unique PV pattern was specifically created because it is unlikely that listeners inexperienced with dysarthric speech would have previous experience with this voicing pattern. Given listeners’ lack of experience, it was speculated that changes in accuracy following familiarization would provide evidence for perceptual learning.

2.2 Listeners

Listeners were recruited by placing advertisements around the university campus.
Listeners were informed that their participation in the study was voluntary and that they could withdraw at any point (see Appendix C and D). Testing lasted 30 minutes and listeners were provided a $10 honorarium for their time. In total, 44 monolingual undergraduate students between the ages of 18 and 40 (M = 23) participated in the study. This age limit was selected in order to avoid the possibility of the diminished hearing that often co-occurs with age (DeBonis & Donohue, 2004). Speech perception of the English stops /p, b, t, d, k, g/ requires normal hearing between 500 Hz and 4000 Hz (DeBonis & Donohue, 2004). To ensure that hearing was within normal limits on the day of testing, all listeners completed a hearing screening at 20 dB HL from 250 Hz to 8000 Hz at octave intervals. Only the listeners who passed the hearing screening proceeded to the next phase. These listeners completed a questionnaire (see Appendix E). Listeners with previous experience interacting with or listening to speakers with dysarthria were excluded. In addition, listeners with previous linguistics or phonetics training, non-native English speakers, multilingual speakers, and those with a history of speech or language impairments were also excluded. The experimenter sought to reduce the possibility that any of these experiential differences might influence listeners’ speech perception. This was of particular concern given the focused analysis of perceptual learning of PV durations for the voicing contrast. Four listeners were excluded based on these criteria: two failed the hearing screening and two had previous experience with speakers with dysarthria. Thirty listeners were randomly assigned to one of three familiarization conditions: familiarization with feedback (FDBK), familiarization without feedback (N-FDBK), and pseudo-familiarization (PSEUDO-F). An additional 10 listeners were later tested to explore some anomalous findings in the data.
These participants met the same criteria as the 34 listeners initially recruited for the study.

2.3 Procedure

A two-alternative forced-choice format was used. Schiavetti (1992) reviewed speech intelligibility measures and identified this testing format as a valid and reliable measure for documenting reduced intelligibility. All listeners were tested in a sound-proof room. The stimuli were randomly presented to listeners using custom experimental software on an iMac computer. As in the Tjaden and Liss (1995) study, listeners were informed that the speaker was born with damage to his nervous system, which affected the muscles used to talk. The participants were told they might have difficulty understanding the speaker, but that they should listen carefully and guess if they were unsure of their answer. Listeners were told they might hear the same word more than once when in fact each word was presented only once per wordlist presentation. This was said to reduce the possibility of the listeners guessing by elimination due to the limited forced-choice format. Listeners were seated in front of the computer and fitted with supra-aural Sennheiser HD250 Professional headphones. During testing the listeners had an unlimited amount of time to indicate their answers. All listeners completed the PRE-F testing. Included in this testing were 7 foils (tokens from the PSEUDO-F wordlist), which excluded the voicing contrast. These foils were included to reduce the likelihood of listeners guessing the purpose of the study. The written words for each minimal pair were presented to listeners on the computer screen (see Appendix F). The location of the correct word was randomized between the left and right sides of the computer display. One second after the presentation of the forced choice, the audio recording of the dysarthric speaker was played over the headphones. The participants then selected their answers by using the mouse to click on their choice.
The next token was presented automatically after each response. Repetitions of the audio stimuli were not permitted. Tokens in all wordlists were randomly presented for each listener. Listeners’ answers that matched the speaker’s intended utterance were scored as correct and those that did not were scored as incorrect. No feedback was provided to the listeners at this time. Participants were then randomly assigned to complete one of the three familiarization conditions: familiarization with feedback (FDBK), familiarization without feedback (N-FDBK), and pseudo-familiarization (PSEUDO-F). During these conditions all listeners heard either the familiarization or the pseudo-familiarization wordlist. Each wordlist was presented twice, for a total of 88 presentations (44 different tokens, each presented twice). Participants in the familiarization-with-feedback condition were presented with the written word for the audio stimulus one second before hearing the recording. These listeners were asked to read the word presented and then listen to the speaker saying that word. Participants in the N-FDBK condition were not provided with the written information. Instead, they only heard the familiarization wordlist. The PSEUDO-F group listened to the PSEUDO-F wordlist without receiving any visual feedback. The listeners in the N-FDBK and PSEUDO-F groups were asked to simply listen to the speaker’s audio recordings. Following the familiarization sessions, all listeners completed the post-familiarization testing. All three listener groups heard the original PRE-F wordlist and the POST-N wordlist. The POST-O wordlist contained the same stimuli as the PRE-F wordlist, but the POST-O wordlist was presented to listeners after familiarization. The POST-N wordlist consisted of new words. Listeners heard 22 POST-O words and 22 POST-N words, for a total of 44 words.
As in the PRE-F condition, listeners were presented with the two-option forced choice on the computer screen one second prior to the audio stimulus. Listeners indicated their answer by selecting it with the mouse. Listeners were not provided feedback regarding their response accuracy at any point during testing.

3.0 RESULTS

3.1 Overview

The following reports the results for accuracy of word-initial stop identification. Listeners’ answers that matched the speaker’s intended utterance were scored as “correct” and those that did not were scored as “incorrect”. Accurate responses for each listener and each condition were totaled. These totals were then converted to proportions of correct responses, which were transformed using the arcsine transformation in order to satisfy the normal distribution assumption for the use of parametric statistics. In principle, the statistical analysis of these data would include typical and atypical items analyzed together. However, such an analysis would have distorted the pattern in the data because of the missing typical voiced velar and alveolar tokens (as discussed in the Methods section). The consequence was that, unlike all other POA, the voiced velar and alveolar tokens contained only atypical tokens when the data were analyzed as a whole. Consequently, typicality was unevenly distributed across POA. In addition, the contrasting PV properties for the voicing distinction between the typical and atypical tokens led the researcher to predict that perceptual learning for the typical tokens might differ from that for the atypical tokens. For these reasons, the data analysis examined the typical and atypical tokens separately. Due to the problematic nature of the velar and alveolar stops, the initial analysis examined only the voiced-voiceless bilabial contrast, first for the typical items and then for the atypical items. Following this, performance on the atypical and typical bilabial items was contrasted.
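The arcsine transformation of proportions is commonly implemented as 2·arcsin(√p). The thesis does not state which variant was applied, so the sketch below assumes this standard form.

```python
import math

def arcsine_transform(p):
    """Variance-stabilizing arcsine transformation of a proportion p
    (0 <= p <= 1), in the standard form 2 * arcsin(sqrt(p))."""
    return 2.0 * math.asin(math.sqrt(p))

# Proportions near 0 and 1 are stretched, equalizing variance across
# the range: 0 maps to 0, 0.5 maps to pi/2, and 1 maps to pi.
transformed = [arcsine_transform(p) for p in (0.0, 0.5, 1.0)]
```

Because binomial proportions have variance that depends on p, this transformation makes the variance approximately constant, which is why it is applied before parametric tests such as ANOVA.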
Lastly, all of the atypical tokens were analyzed to investigate accuracy across POA. To examine listeners' accuracy scores, mixed-effects analyses of variance (ANOVAs) were conducted.

3.2 Typical token analysis

The PV durations of the typical items made them perceptually more similar to the voiced and voiceless stops of non-dysarthric speakers. The typical bilabial voiced-voiceless contrast was analyzed with a 3-way ANOVA (voicing ✕ wordlist ✕ group); group (FDBK, N-FDBK, PSEUDO-F) was a between-subjects factor, and voicing (voiced versus voiceless) and wordlist (PRE-F, POST-O, and POST-N) were within-subjects factors. The results revealed a statistically significant main effect for group [F(2, 27) = 3.51, p < 0.05]. Initially it was predicted that if learning occurred, listeners in the FDBK group would outperform listeners in the N-FDBK and PSEUDO-F groups. Contrary to these predictions, the N-FDBK group (68%) performed significantly better than the FDBK group (53%) and descriptively better than the PSEUDO-F group (61%) (Tukey HSD test, p < 0.05). An ANOVA was conducted to compare each group's PRE-F scores to examine whether these group differences were present at the beginning of the study. Results revealed no statistically significant differences among the three groups [F(2, 27) = 0.61, p > 0.05]. Thus, the group differences may be attributable to the type of familiarization received.

Performance on the different wordlists (PRE-F, POST-O, and POST-N) was compared to examine whether familiarization had an overall effect on accuracy. A main effect for wordlist [F(2, 54) = 3.27, p < 0.05] was found. If learning had occurred for the POST-O wordlist, one would have expected performance on the POST-O wordlist to be the most accurate. In fact, the POST-N wordlist (68%) was most accurately identified, followed by the PRE-F wordlist (60%) and lastly the POST-O wordlist (55%).
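The baseline check described above, a one-way between-subjects ANOVA comparing the three groups' PRE-F scores, can be sketched with SciPy. The scores below are randomly generated placeholders, not the study's data; only the shape of the analysis (three independent groups of ten listeners, matching the reported df of 2 and 27) follows the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical arcsine-transformed PRE-F accuracy scores, 10 listeners per
# group, drawn from the same distribution (i.e., no true group difference)
fdbk = rng.normal(loc=1.6, scale=0.2, size=10)
n_fdbk = rng.normal(loc=1.6, scale=0.2, size=10)
pseudo_f = rng.normal(loc=1.6, scale=0.2, size=10)

# One-way between-subjects ANOVA across the three familiarization groups;
# a large p-value would indicate no baseline group differences
f_stat, p_value = stats.f_oneway(fdbk, n_fdbk, pseudo_f)
```

With three groups of ten, the F statistic carries (2, 27) degrees of freedom, matching the F(2, 27) reported for the baseline comparison.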
A post-hoc Tukey HSD test (p < 0.05) revealed that only the POST-N wordlist was recognized significantly better than the POST-O wordlist. Neither the main effect of voicing nor any of the interaction effects was statistically significant.

The unexpectedly superior performance on the POST-N wordlist relative to the POST-O wordlist was explored by testing 10 additional participants. The experimenter attempted to determine whether the difference in accuracy scores was due to factors other than prior exposure to the wordlist (such as inherently higher intelligibility of the tokens on the POST-N wordlist than on the POST-O wordlist). These participants met the same criteria for testing as the 30 participants who completed the main study, but they did not have any familiarization prior to testing. An ANOVA was completed to compare the POST-O tokens to the POST-N tokens. Their scores did not reveal a statistically significant difference between the POST-O and POST-N items [F(1, 9) = 0.0293, p > 0.05]. Thus, the higher accuracy score for the POST-N wordlist cannot be attributed to intelligibility differences in the stimuli.

3.3 Atypical token analysis

To analyze listeners' accuracy on the atypical bilabial voicing contrast, a 3-way repeated measures ANOVA was completed (voicing ✕ wordlist ✕ group); group (FDBK, N-FDBK, PSEUDO-F) was a between-subjects factor, and voicing (voiced versus voiceless) and wordlist (PRE-F, POST-O, and POST-N) were within-subjects factors. Wordlists were compared to examine whether familiarization had an effect on performance. A main effect for wordlist was present [F(2, 54) = 5.35, p < 0.05]. Contrary to the initial predictions, a Tukey HSD test (p < 0.05) showed a significant decrease in accuracy from PRE-F (67%) to POST-O (56%), as well as a significant decrease from PRE-F to POST-N (55%). A main effect for voicing [F(1, 27) = 71.30, p < 0.05] was also present.
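The Tukey HSD follow-up comparisons used throughout these analyses can be sketched with SciPy's implementation (`scipy.stats.tukey_hsd`, available in SciPy 1.8+). The three samples below are hypothetical per-listener scores for the three wordlists, not the study's data.

```python
import numpy as np
from scipy.stats import tukey_hsd

rng = np.random.default_rng(7)

# Hypothetical per-listener accuracy scores (arcsine-transformed) for the
# three wordlists; POST-N is given a slightly higher mean for illustration
pre_f = rng.normal(loc=1.5, scale=0.2, size=30)
post_o = rng.normal(loc=1.4, scale=0.2, size=30)
post_n = rng.normal(loc=1.7, scale=0.2, size=30)

# Tukey's honestly-significant-difference test performs all pairwise
# comparisons while controlling the family-wise error rate
res = tukey_hsd(pre_f, post_o, post_n)
```

`res.pvalue` holds a 3 × 3 matrix of pairwise p-values; entry (i, j) compares sample i with sample j.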
Overall, the voiced items were recognized significantly better than the voiceless tokens (74% and 44%, respectively). In addition, the interaction of voicing ✕ wordlist was statistically significant [F(2, 54) = 61.29, p < 0.05]. Closer analysis of this interaction reveals an interesting pattern. The voiced items on the PRE-F (97%) and POST-N (90%) wordlists had high accuracy scores, with a significantly lower score on the POST-O wordlist (34%). The inverse pattern occurred for the voiceless items, which had low scores on the PRE-F (37%) and POST-N (20%) wordlists but a significantly higher score on the POST-O wordlist (77%) (see Figure 1). These findings suggest different perceptual learning patterns for voiced and voiceless stops following familiarization.

Figure 1. Voicing Accuracy Across Wordlists for the Atypical Bilabial Voicing Contrast

For the typical items, no significant differences were present among the familiarization groups. In addition, no significant differences were present for the type of feedback provided. However, descriptively, marked group differences existed (FDBK 53%, N-FDBK 68%, and PSEUDO-F 61%). These will be further explored in the Discussion. None of the remaining effects was statistically significant.

3.4 Comparison between typical versus atypical tokens

Given the missing data for typical voiced alveolar and velar stops, the comparison of perceptual learning between the typical and atypical tokens was limited to voiced and voiceless bilabial stops. The contrasting acoustic properties of typical and atypical tokens led the experimenter to anticipate a different pattern of learning for the typical versus atypical items. A 4-way repeated measures ANOVA was completed (voicing ✕ wordlist ✕ group ✕ typicality); group (FDBK, N-FDBK, PSEUDO-F) was a between-subjects factor, and voicing, wordlist (PRE-F, POST-O, and POST-N), and typicality (typical versus atypical) were within-subjects factors.
As predicted, a significant main effect was found for typicality [F(1, 27) = 37.32, p < 0.05]. The typical items (78%) were identified significantly better than the atypical items (43%). This pattern was consistent across wordlists, as revealed by the results for the typicality ✕ wordlist interaction [F(1, 27) = 64.69, p < 0.05]. The statistically significant interaction of voicing ✕ typicality [F(1, 27) = 64.69, p < 0.05] suggests that perceptual learning differed according to the contrasting voicing properties of the typical and atypical stops. A Tukey HSD test (p < 0.05) confirmed that the typical tokens were recognized significantly better than the atypical tokens (see Figure 2). Of the atypical items, the voiced stops were recognized significantly better than the voiceless stops; no significant difference between voiced and voiceless stops was present for the typical stops. Thus, typicality appeared to markedly influence accuracy scores for the voicing contrast for the atypical items but not the typical items.

Figure 2. Comparison Between Typical versus Atypical Tokens for the Bilabial Voicing Contrast

The type of familiarization each group received was also examined, but no significant differences were found. Possible reasons for the failure to find an effect of type of familiarization will be explored in the Discussion. Neither the main effects for voicing and wordlist nor the additional interaction effects were statistically significant.

3.5 POA analysis for atypical items

All atypical tokens were examined to investigate whether accuracy scores differed according to POA. This analysis was limited to the atypical items, as the typical tokens were missing the voiced alveolar and velar tokens. A 4-way repeated measures ANOVA was completed (POA ✕ voicing ✕ wordlist ✕ group); group (FDBK, N-FDBK, PSEUDO-F) was a between-subjects factor, and voicing, wordlist (PRE-F, POST-O, and POST-N), and POA (bilabial, alveolar, and velar) were within-subjects factors.
A main effect for POA was present [F(2, 54) = 11.60, p < 0.05]. Overall, the bilabials (58%) were identified significantly more accurately than the alveolars (48%). The velars (54%) were also identified significantly better than the alveolars, but the bilabials and velars did not differ significantly. A significant interaction of voicing ✕ POA was also found [F(2, 54) = 64.94, p < 0.05] (see Figure 3). Voiced bilabials and velars were perceived significantly more accurately than their voiceless counterparts (75% vs. 42% for bilabials; 70% vs. 37% for velars; Tukey HSD test, p < 0.05). The alveolars showed the opposite pattern: the voiced items were recognized significantly worse than the voiceless items (31% and 62%, respectively; Tukey HSD test, p < 0.05).

Figure 3. Voicing Accuracy for all Atypical Places of Articulation

Performance across wordlists was also compared to examine whether familiarization had an effect on POA and voicing. Results revealed a significant voicing ✕ POA ✕ wordlist interaction [F(4, 108) = 44.37, p < 0.05] (see Appendix G). Voiced bilabials and velars were recognized significantly more accurately than voiced alveolars on the PRE-F wordlist (Tukey HSD test, p < 0.05); the voiceless stops showed the inverse pattern: alveolars were identified significantly more accurately than bilabials and velars (see Figure 4). On the POST-O wordlist, accuracy scores converged toward the middle of the range, and the large differences in accuracy present on the PRE-F wordlist were absent; no significant differences in mean accuracy scores for any POA or voicing were present. Accuracy across POA and voicing on the POST-N wordlist showed the same pattern as on the PRE-F wordlist, although the fluctuations in accuracy were not as large. Significant differences on the POST-N wordlist existed only for the voiced items across POA and for the alveolar voicing contrast (Tukey HSD test, p < 0.05).

Figure 4.
Voicing Accuracy for all Atypical Places of Articulation Across all Wordlists

The main effect of wordlist [F(2, 54) = 51.17, p < 0.05] suggests the presence of perceptual learning. Unexpectedly, accuracy on the PRE-F wordlist (50%) was significantly higher than on the POST-O wordlist (34%). However, the POST-N wordlist (75%) was identified significantly more accurately than both the POST-O and the PRE-F wordlists. This main effect is better understood by examining the voicing ✕ wordlist interaction, which was also significant [F(2, 54) = 0.00, p < 0.05]. Every POA in the PRE-F condition had overall mean accuracy scores that were almost twice as high for the voiced items (64%) as for the voiceless items (36%). In the subsequent testing conditions these significant voicing differences disappeared (voiced 35% vs. voiceless 32% on the POST-O wordlist, and 77% vs. 73% on the POST-N wordlist). These patterns highlight the trend of reduced variability in accuracy scores following familiarization. This interaction also reveals that the significant increase on the POST-N wordlist relative to the POST-O wordlist is due to the higher accuracy scores for the voiceless items. This trend was consistently found for the voiced and voiceless alveolar and velar contrasts across all wordlists (see Figure 4). Contrary to the initial predictions, no significant difference in accuracy according to the type of familiarization received was found. No other interaction effects were statistically significant.

4.0 DISCUSSION

4.1 Summary of findings

The purpose of the current study was to examine whether previous experience with spastic dysarthric speech improved listeners' accuracy scores for the voiced-voiceless word-initial stop contrast. First, the results of the study suggest that listeners experienced perceptual learning following familiarization. This learning resulted in different accuracy patterns for typical and atypical stops.
For the typical items, listeners had higher accuracy scores on the POST-N wordlist, whereas for the atypical items, listeners had higher accuracy scores on the PRE-F wordlist. Second, the feedback provided during familiarization did not result in better accuracy scores in comparison to the other familiarization conditions. Instead, the N-FDBK group scored the highest for the typical items, and the groups did not differ for the atypical items. Third, the pattern of perceptual learning differed for the voiced and voiceless items: following familiarization, listeners' accuracy scores for the voiceless items increased while accuracy for the voiced items decreased. These results will be discussed further in light of the initial predictions. The following will also explore some of the trends that nearly reached statistical significance, as their pronounced patterns provide interesting insights.

4.2 Perceptual learning for typical tokens

The significant wordlist effect, with higher accuracy scores on the POST-N wordlist than on the PRE-F and POST-O wordlists, is better understood by examining the interaction effects, specifically the voicing ✕ wordlist interaction. Although no statistically significant differences were present between the voiced and voiceless items in overall accuracy, the voicing ✕ wordlist interaction revealed an interesting pattern of change following training. It was expected that, since the PV properties of the typical voiceless stimuli approximated the normal PV pattern, accuracy scores would be high overall. In addition, these tokens were selected specifically because the experimenter perceived them to be clearly voiceless stops. However, listeners scored near chance (52%) on the PRE-F wordlist, but their accuracy increased above chance following familiarization (POST-O: 63%). This trend suggests that perceptual learning occurred for the typical voiceless tokens.
By contrast, accuracy for the voiced stimuli decreased from the PRE-F wordlist (67%) to the POST-O wordlist (47%). The decrease in accuracy for the voiced tokens was surprising, as during familiarization listeners were exposed to 12 typical voiced tokens and merely five typical voiceless tokens. Perhaps familiarization with the typical items led the listeners to associate the presence of PV with voicelessness, which resulted in the misidentification of the voiced tokens as voiceless. The short familiarization period may not have enabled the listeners to associate the short versus long PV distinction with the perception of voicing in a consistent fashion.

The changes in accuracy scores for the voiced and voiceless items on the POST-O wordlist suggest that perceptual changes occurred following training. The specific pattern of improved accuracy for the voiceless tokens following training is consistent with previous findings of perceptual learning following familiarization with dysarthric speech (Hustad & Cahill, 2003; D'Innocenzo, Tjaden, & Greenman, 2006; Spitzer, Liss, Caviness, & Adler, 2000; Tjaden & Liss, 1995). The present findings are also consistent with Spitzer, Liss, Caviness, and Adler's (2000) conclusion that perceptual learning of disordered speech can alter speech perception at the segmental level.

4.3 Perceptual learning for atypical tokens

For the atypical stops, the voiced tokens on the PRE-F, POST-O, and POST-N wordlists were overall recognized significantly more accurately than the voiceless tokens. This outcome was expected, as the voiceless stops had abnormally long PV durations, giving them the perceptual quality of being voiced, while the voiced items had shorter PV durations that approximated the typical, non-dysarthric PV duration. Unexpectedly, overall accuracy scores for the atypical items were significantly higher on the PRE-F wordlist (67%) than on the POST-O wordlist (56%).
This difference was further explored by examining the wordlist ✕ voicing interaction. The interaction revealed that the voiceless tokens were correctly perceived significantly more often following familiarization (37% on the PRE-F versus 77% on the POST-O wordlist). This pattern suggests that perceptual learning occurred for the voiceless stops. Surprisingly, a comparison of the responses for the voiced items showed a significant decrease in accuracy from the PRE-F (97%) to the POST-O (34%) wordlist. Together, these findings suggest that the listeners' categorical boundary along the PV dimension had changed as a result of familiarization.

A detailed token analysis was completed in order to better understand this pattern of accuracy scores following training. The PV durations for all word-initial stops were measured, and means for each POA were calculated (see Table 1). During the familiarization period, listeners heard tokens with the following PV durations: voiced tokens ranged from -130 ms to -196 ms (M = -151 ms) and voiceless tokens ranged from -217 ms to -492 ms (M = -390 ms). During testing the durations were as follows: voiced from -155 ms to -190 ms (M = -173 ms) and voiceless from -218 ms to -615 ms (M = -366 ms). The voiceless tokens thus had PV durations nearly twice as long as those of the voiced tokens. Given this marked difference in duration, it was surprising that the voiced atypical items were identified as voiceless on the POST-O wordlist. One reason for the decreased accuracy of the voiced items may be the limited number of exposures to these tokens relative to the voiceless tokens during the training periods (4 voiced and 11 voiceless tokens). In effect, listeners were provided more than twice the opportunities to learn the voiceless atypical PV duration than the voiced. This may have resulted in better learning of the voiceless PV parameter than the voiced, which in turn may have biased the listeners to perceive any long PV duration as voiceless.
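The PV-based token categorization underlying these measurements can be sketched as a simple threshold rule. The cut-off value and the token durations below are illustrative assumptions, not the study's actual selection criteria.

```python
# Hypothetical word-initial stop tokens with measured prevoicing (PV)
# durations in ms (negative voice-onset times)
tokens = {
    "bay": -160,
    "pay": -390,
    "den": -175,
    "ten": -450,
}

# Illustrative cut-off separating short PV (voiced-like) from abnormally
# long PV (this speaker's voiceless-like) durations; not the study's value
SHORT_PV_CUTOFF_MS = -250

def pv_category(duration_ms):
    """Short PV (closer to zero) -> 'voiced'; long PV -> 'voiceless'."""
    return "voiced" if duration_ms > SHORT_PV_CUTOFF_MS else "voiceless"

labels = {word: pv_category(d) for word, d in tokens.items()}
```

Under such a rule, the speaker's reversed contrast (long PV signaling voicelessness) is made explicit: any token whose prevoicing exceeds the cut-off in magnitude is treated as an intended voiceless stop.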
The result was an increase in incorrectly identifying the voiced items as voiceless. Had the listeners been provided with more opportunities to learn the atypical voiced PV duration during training, accuracy scores for these tokens in the POST-O condition might have increased, as they did for the voiceless tokens.

4.4 Generalization of perceptual learning to novel words

It was initially hypothesized that, if perceptual learning took place, listeners would be able to generalize the long-PV versus short-PV voicing distinction to the POST-N wordlist. This hypothesis was based on reports that listeners generalized newly learned VOT properties to new tokens (Allen & Miller, 2004). In addition, Kraljic and Samuel (2006; 2007) found that listeners generalized the new acoustic properties of the voicing contrast to a novel POA. The results of the current study showed that listeners did not generalize the PV durations heard for the typical or atypical tokens to the new POST-N tokens. Instead, responses for the POST-N tokens were significantly different from the POST-O items and showed a pattern of accuracy similar to that of the PRE-F wordlist (see Figure 4).

A primary difference between the studies that found generalization (Allen & Miller, 2004; Kraljic & Samuel, 2006; 2007) and the current study is that the generalization studies used speech stimuli from a normal speaker without neurological impairments. Moreover, those studies systematically manipulated the stimuli along an acoustic continuum, instead of selecting naturally produced stimuli that differed along a single dimension, as was done in the present study. It is possible that perceptual learning for non-dysarthric stops is less challenging than for disordered speech. Perhaps the speech patterns of individuals with dysarthria require more perceptual learning before generalization to novel words and POA can occur.
Alternatively, perceptual learning may have generalized if listeners in the present study had been provided with more training during the familiarization phase. Comparison of studies using synthetically modified normal speech and dysarthric speech shows a pronounced difference in the amount of time and the quantity of stimuli provided to listeners during the familiarization period. In studies using normal, non-dysarthric speech, listeners have exhibited perceptual changes with as few as 10 critically modified tokens embedded in 100 normal words and 100 non-words during familiarization (Kraljic & Samuel, 2007). This contrasts with familiarization procedures with dysarthric speech stimuli, which typically had listeners familiarized for upwards of half an hour with a minimum of 18 five-word sentences, or the equivalent if words were presented in isolation (Liss, Spitzer, Caviness, & Adler, 2002). For the present study, the familiarization period was 15 minutes long and consisted of 88 tokens in total (44 unique tokens heard twice). Even with this limited familiarization, some significant and consistent trends in listeners' accuracy scores emerged following familiarization. Therefore, these listeners might have benefited even more from receiving more familiarization with the dysarthric speech stimuli than was provided. The need for substantial familiarization may be particularly important given that the stimuli in the current study were aimed at reorganizing listeners' perceptual categories in the opposite direction of the typical voicing contrast found in normal, non-dysarthric speech (i.e., altering the typical longer PV for voiced stops and shorter PV for voiceless stops such that long PV signals voiceless stops and short PV signals voiced stops).

In addition to requiring more training, listeners may need more focused training during familiarization.
For the present study, both typical and atypical productions were presented in an attempt to represent the voicing error patterns of the speaker with dysarthria. Limiting exposure to either the typical or the atypical tokens in separate sessions may lead to better learning and generalization. These possibilities could be examined in future research.

4.5 Performance as a function of place of articulation for atypical tokens

The subsequent analysis was limited to the atypical items because the typical items lacked voiced alveolar and velar tokens, as discussed in the Methods section. During the recording session the experimenter noted that the speaker varied in the amount of effort used to produce the voicing contrast for different places of articulation. Specifically, he was most successful and appeared to expend the least amount of effort in producing bilabials, less successful and more effortful with alveolars, and least successful and most effortful with velars. This difference in production accuracy and effort is reflected in the speaker's production accuracy across POA, as documented in the Methods section. Clear PV distinctions to differentiate the voicing contrast were present for both the atypical bilabials and the atypical alveolars. In contrast, some velar tokens failed to meet the PV selection criteria. All of the velar voiced and voiceless atypical tokens had a word-initial frication sound. Of the velars in the selected token set, only four of the intended 12 voiced tokens had the short PV duration to mark voicing, and only three of the intended 12 voiceless items had the requisite long PV duration. This contrasted with the bilabial and alveolar tokens, for which all 12 of 12 intended voiced and voiceless tokens clearly met the PV criteria. Given this production pattern, it was initially expected that no learning would be present for the velars, as a reliable PV contrast was absent in the majority of the items.
As expected, the bilabials were identified with the best accuracy (59%); however, it was surprising that the velars (54%) were identified more accurately than the alveolars (47%). In addition, the bilabials and velars showed a similar pattern of accuracy across the wordlists, whereas the pattern for the alveolars was different (see Figure 4). One possible reason the bilabials and velars patterned similarly is that the listeners used a cue other than PV to identify the velar voicing contrast. Research shows a variety of well-documented acoustic cues that signal the voicing contrast, including formant transitions, aspiration noise following the release burst, and the length of the following vowel (Lisker, 1977; Kent & Read, 2002). Given that PV durations were unreliable and often absent for the velars, it is possible that the listeners attuned to alternative cues. According to cue trading, one cue may be used when another cue is weak or reduced in effectiveness (Pisoni & Luce, 1987; Davis & Johnsrude, 2007). The principle of cue trading, applying different cues according to their availability and reliability, has also been suggested for listening to atypical speech patterns (Ansel & Kent, 1992; Francis, Ciocca, & Yu, 2003).

The patterning of the alveolars may indicate that the PV property was not the central acoustic cue for the listeners. These tokens had clear PV durations according to the criteria of long PV duration for the voiceless tokens and short PV for the voiced tokens (see Table 1). If listeners had relied on PV as the primary cue, it would be expected that the accuracy scores for the alveolars would pattern similarly to the bilabials following familiarization. Given the accuracy patterns, listeners may have used additional or alternative acoustic cues to identify the voicing distinction when listening to this speaker with dysarthria.
4.6 The effect of familiarization on perceptual learning

The groups provided with the wordlist containing the voiced-voiceless stop contrast were expected to perform better than the group presented with the PSEUDO-F wordlist, which excluded all phonemes with the voicing contrast across all classes of sounds. Contrary to these initial predictions, for the typical tokens the PSEUDO-F group did not have the lowest accuracy scores (FDBK 53%, N-FDBK 68%, PSEUDO-F 61%). In fact, the only significant group difference observed was between the FDBK group and the N-FDBK group. Possibly, the listeners in the FDBK group experienced the greatest amount of perceptual learning, and rather than increasing accuracy following familiarization as initially predicted, learning may have adversely affected their performance, resulting in this group scoring the lowest. In contrast, for the atypical tokens the FDBK group obtained the highest accuracy score (FDBK 64%, N-FDBK 57%, PSEUDO-F 55%). However, these group differences did not reach statistical significance. Despite the non-significant results for the atypical tokens, the general trend in accuracy scores is as initially predicted: the FDBK group obtained the highest accuracy score, followed by the N-FDBK group and then the PSEUDO-F group. To obtain a clearer understanding of the effects of stimulus type, future studies may need a control condition in which listeners are exposed to speech from a speaker without dysarthria, in order to control for the possibility that listening to any of the speaker's utterances, with or without feedback, is sufficient to produce some perceptual learning effects.

It was originally hypothesized that the listeners who were not given feedback (i.e., the PSEUDO-F group and the N-FDBK group) would not show improvements in accuracy after familiarization, and that the feedback group (FDBK) would show the largest improvements.
This prediction was based on the finding that the listeners in Garcia and Cannito (1996), who were not provided with feedback, failed to show perceptual learning following familiarization. By contrast, later studies found evidence of perceptual learning following familiarization when listeners were provided with written feedback (Norris, McQueen, & Cutler, 2003; Hustad & Cahill, 2003; D'Innocenzo, Tjaden, & Greenman, 2006; Spitzer, Liss, Caviness, & Adler, 2000; Tjaden & Liss, 1995). The contrasting findings were resolved by suggesting that, for familiarization to be effective in changing speech perception, listeners must know the target phonemes in order to accurately adjust their mental representations to the heard speech sounds (Norris, McQueen, & Cutler, 2003; D'Innocenzo, Tjaden, & Greenman, 2006). Although the results of the statistical analysis do not entirely support the idea that written feedback is required for perceptual learning, the pattern of perceptual changes that occurred for the atypical items in the different familiarization conditions is consistent with this proposal. A statistically significant effect might have been observed had the sample size been larger and the training more effective.

4.7 Theoretical and clinical implications

Although the conclusions from the present study are necessarily tentative given the small sample size and other limitations, the findings are consistent with the view that the inherent variability in the speech signal is not eliminated but is stored in listeners' minds and used for perceptual judgments (Johnson & Mullennix, 1997; Nygaard & Pisoni, 1995; Goldinger, Kleider, & Shelley, 1999; Dahan, Drucker, & Scarborough, 2008; Tjaden & Liss, 1995). Information about variability provides listeners with important knowledge about speaker-specific speech patterns, which is crucial in order to maximize the recognition of the unique speech patterns of dysarthric speakers.
The results of the current study suggest that speaker variation is indeed important. Specifically, the pattern of listeners' accuracy scores suggests that the unique acoustic properties of the speaker were learned, resulting in systematic differences in accuracy scores following familiarization. The listeners' changing accuracy patterns support the importance of talker variability over speaker normalization. The results also provide additional support for Lindblom's (1985) interactionist view, which highlights the relationship between the speech representations stored in listeners' minds and the acoustic properties present in the speech signal. The signal-independent information (the learned perceptual representations of the dysarthric speech stimuli) and the signal-dependent information (the unique speech signal) influenced each other and resulted in a difference in speech perception following familiarization. Since listeners were carefully screened to eliminate several other possible sources of signal-independent information (such as previous experience with a speaker with dysarthria, hearing loss, and multilingualism), the results suggest that the learning may be attributable to the familiarization period.

In light of the communication model, these listeners' accuracy patterns across wordlists provide support for the existence of a "special listener" category. These "special listeners" gained unique acoustic knowledge about the divergent properties of spastic dysarthric speech secondary to CP while being familiarized in the study. If a short training experience such as the one provided in the present study results in perceptual learning, then health care professionals who provide services to people with spastic dysarthria may benefit from being familiarized with speech stimuli prior to commencing treatment with these individuals.
With an increased ability to understand patients' speech, clinicians and others in allied health may in turn be able to provide better care to these patients. Given the lack of generalization to novel words in the current study, training stimuli may focus on those words that are used most often by a speaker with dysarthria.

The study also has more direct implications for speech-language pathologists. Typically, the same clinician perceptually evaluates intelligibility preceding and following treatment to determine whether a change in intelligibility has occurred. However, if the same clinician is re-administering tests, it is possible that previous experience may inflate intelligibility scores. Therefore, clinicians may wish to be cautious when interpreting their perceptual evaluations of the same speakers at different time points, particularly if the patient has been seen over a long period of time.

4.8 Limitations

Although the limitations due to sample size and length of training have already been discussed, another noteworthy limitation was the uneven distribution of tokens across POA and typicality. Although five to seven tokens were recorded for each word, a limited token set was available for selection due to speaker fatigue and the specific error patterns in the speaker's speech. As a result, tokens were not available for the typical voiced alveolars and velars. A preferable stimulus set would have had an even and comprehensive distribution of items across typicality for all POA.

This study was unique in that it did not acoustically modify the speech signal; instead, tokens were carefully selected based on the duration of PV present in naturally produced stimuli. It is well documented that speakers with spastic dysarthria show significant variation in speech production, even in repeated productions of the same word (Farmer, 1980; Patel, 2002).
As expected, the speaker who participated in the current study produced variable tokens, as was noted during the recording session. Of the five recordings of each word, only those that met the strict PV criteria were selected as tokens for the listeners to hear during testing. Thus, internal validity was increased by limiting the acoustic diversity that the listeners were exposed to during the study. This approach also ensured that the speech tokens were naturalistic and contained multiple cues to signal the voicing contrasts, unlike more tightly controlled studies that used unnatural, synthetically modified stimuli. The increase in internal validity, however, came at the cost of external validity: an artificial listening environment was created, one that reduced the acoustic variability in the speech signal that typifies speakers with spastic dysarthria. Although the present findings help us to understand the mechanisms of perceptual learning, they may not be readily applicable to real-life situations, in which speakers with dysarthria show a high amount of variability in individual speech patterns.

5.0 CONCLUSION

The central finding of the current study is that perceptual learning following familiarization occurs at the phonemic level with spastic dysarthric speech. This experiment comes on the heels of previous studies that have laid the foundation for a new wave of literature highlighting the importance of individual speaker variation. This study provides additional evidence in support of perceptual learning and the importance of individual talker variability. Tentative support was also found for the idea that perceptual learning of dysarthric speech with feedback results in greater gains than learning without feedback or simple familiarization with a speaker's speech.

References

Allen, J. S., & Miller, J. L. (2003). Individual talker differences in voice-onset-time. Journal of the Acoustical Society of America, 113, 544-552.

Allen, J. S., & Miller, J. L. (2004). Listener sensitivity to individual talker differences in voice-onset-time. Journal of the Acoustical Society of America, 115(6), 3171-3183.

Ansel, A. M., & Kent, R. D. (1992). Acoustic-phonetic contrasts and intelligibility in the dysarthria associated with mixed cerebral palsy. Journal of Speech and Hearing Research, 35, 296-308.

Beukelman, D. R., & Yorkston, K. M. (1980). Influence of passage familiarity on intelligibility estimates of dysarthric speech. Journal of Communication Disorders, 13(1), 33-41.

Bhatnagar, S. C. (2002). Neuroscience for the study of communicative disorders. Philadelphia: Lippincott Williams & Wilkins.

Bunton, K., & Weismer, G. (2001). The relationship between perception and acoustics for a high-low vowel contrast produced by speakers with dysarthria. Journal of Speech, Language, and Hearing Research, 44, 1215-1228.

Colledge, N. (2006). A guide to cerebral palsy. Retrieved June 23, 2008, from http://www.bccerebralpalsy.com/pdfs/guidetocp.pdf

Darley, F. L., Aronson, A. E., & Brown, J. R. (1969). Differential diagnostic patterns of dysarthria. Journal of Speech and Hearing Research, 12, 246-269.

Davis, M. H., & Johnsrude, I. S. (2007). Hearing speech sounds: Top-down influences on the interface between audition and speech perception. Hearing Research, 229, 132-147.

D'Innocenzo, J., Tjaden, K., & Greenman, G. (2006). Intelligibility in dysarthria: Effects of listener familiarity and speaking condition. Clinical Linguistics and Phonetics, 20(9), 659-675.

Dahan, D., Drucker, S. J., & Scarborough, R. A. (2008). Talker adaptation in speech perception: Adjusting the signal or the representations? Cognition, 108, 710-718.

DeBonis, D. A., & Donohue, C. L. (2004). Survey of audiology: Fundamentals for audiologists and health professionals (1st ed.). Boston: Pearson/Allyn and Bacon.

Duffy, J. R. (2005). Motor speech disorders: Substrates, differential diagnosis, and management. St. Louis, MO: Elsevier Mosby.

Farmer, A. (1980). Voice onset time production in cerebral palsied speakers. Folia Phoniatrica, 32, 267-273.

Flege, J. E., Takagi, N., & Mann, V. (1995). Japanese adults can learn to produce English /r/ and /l/ accurately. Language and Speech, 38, 25-55.

Flipsen, P. (1995). Speaker-listener familiarity: Parents as judges of delayed speech intelligibility. Journal of Communication Disorders, 28(1), 3-19.

Francis, A. L., Ciocca, V., & Yu, J. M. C. (2003). Accuracy and variability of acoustic measures of voicing onset. Journal of the Acoustical Society of America, 113(2), 1025-1032.

Garcia, J. M., & Cannito, M. P. (1996). Influence of verbal and nonverbal contexts on the sentence intelligibility of a speaker with dysarthria. Journal of Speech and Hearing Research, 39(4), 750-760.

Goldinger, S. D. (1996). Words and voices: Episodic traces in spoken word identification and recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1166-1183.

Goldinger, S. D., Kleider, H. M., & Shelley, E. (1999). The marriage of perception and memory: Creating two-way illusions with words and voices. Memory and Cognition, 27, 328-338.

Haggard, M. P., Ambler, S., & Callow, M. (1970). Pitch as a voicing cue. Journal of the Acoustical Society of America, 47, 613-617.

Hess, W. (1992). Pitch and voicing determination. In S. Furui & M. M. Sondhi (Eds.), Advances in speech signal processing (pp. 3-47). New York: Marcel Dekker.

House, A. S., & Fairbanks, G. (1953). The influence of consonant environment upon the secondary acoustical characteristics of vowels. Journal of the Acoustical Society of America, 25, 105-113.

Hustad, K. C., & Cahill, M. A. (2003). Effects of presentation mode and repeated familiarization on intelligibility of dysarthric speech. American Journal of Speech-Language Pathology, 12, 198-208.

Kent, R. D., & Read, C. (1992). The acoustic analysis of speech. San Diego, CA: Singular Publishing Group.

Kent, R. D., Weismer, G., Kent, J. F., Vorperian, H. K., & Duffy, J. R. (1999). Acoustic studies of dysarthric speech: Methods, progress, and potential. Journal of Communication Disorders, 32(3), 141-186.

Kraljic, T., & Samuel, A. G. (2005). Perceptual learning for speech: Is there a return to normal? Cognitive Psychology, 51(2), 141-178.

Kraljic, T., & Samuel, A. G. (2006). Generalization in perceptual learning for speech. Psychonomic Bulletin and Review, 13(2), 262-268.

Lass, N. J. (Ed.). (1996). Principles of experimental phonetics. St. Louis: Mosby-Year Book.

Lindblom, B. (1985). On the communicative process: Speaker-listener interaction and the development of speech. Augmentative and Alternative Communication, 220-230.

Lisker, L., & Abramson, A. (1964). A cross-language study of voicing in initial stops: Acoustical measurements. Word, 20, 384-422.

Lisker, L. (1977). Rapid versus rabid: A catalogue of acoustic features that may cue the distinction. Journal of the Acoustical Society of America, 62, 77-78.

McGarr, N. S. (1983). The intelligibility of deaf speech to experienced and inexperienced listeners. Journal of Speech and Hearing Research, 26(3), 451-458.

Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238.

Nygaard, L. C., & Pisoni, D. B. (1995). Talker-specific learning in speech perception. Perception and Psychophysics, 60(3), 355-376.

Patel, R. (2002). Phonatory control in adults with cerebral palsy and severe dysarthria. Augmentative and Alternative Communication, 18, 2-10.

Pickett, J. M. (1999). The acoustics of speech communication: Fundamentals, speech perception theory, and technology. Boston: Allyn and Bacon.

Rammage, L., Morrison, M. D., & Nichol, H. (2001). Management of the voice and its disorders (2nd ed.). San Diego, CA: Singular/Thomson Learning.
Ryalls, J., Zipprer, A., & Baldauff, P. (1997). A preliminary investigation of the effects of gender and race on voice onset time. Journal of Speech, Language, and Hearing Research, 40, 642-645.

Schiavetti, N. (1992). Scaling procedures for the measurement of speech intelligibility. In R. D. Kent (Ed.), Intelligibility in speech disorders (pp. 11-34). Philadelphia: John Benjamins Publishing Company.

Spitzer, S. M., Liss, J. M., Caviness, J. N., & Adler, C. (2000). An exploration of familiarization effects in the perception of hypokinetic and ataxic dysarthric speech. Journal of Medical Speech-Language Pathology, 4, 285-293.

Stecker, G. C., Bowman, G. A., Yund, E. W., Herron, T. J., Roup, C. M., & Woods, D. L. (2006). Perceptual training improves syllable identification in new and experienced hearing aid users. Journal of Rehabilitation Research and Development, 43(4), 537-552.

Tjaden, K. K., & Liss, J. M. (1995). The role of listener familiarity in the perception of dysarthric speech. Clinical Linguistics and Phonetics, 9(2), 139-154.

Tjaden, K., & Liss, J. M. (1995). The influence of familiarity on judgments of treated speech. American Journal of Speech-Language Pathology, 4, 39-48.

Trofimovich, P. (2008). What do second language listeners know about spoken words? Effects of experience and attention in spoken word processing. Journal of Psycholinguistic Research, 37, 309-329.

Weismer, G., & Martin, R. E. (1992). Acoustic and perceptual approaches to the study of intelligibility. In R. D. Kent (Ed.), Intelligibility in speech disorders (pp. 67-118). Philadelphia: John Benjamins Publishing Company.

Weiss, C. E., Gordon, M. E., & Lillywhite, H. S. (1987). Clinical management of articulatory and phonologic disorders (2nd ed.). Baltimore: Williams & Wilkins.

Whitehill, T., & Ciocca, V. (2000). Perceptual-phonetic predictors of single-word intelligibility: A study of Cantonese dysarthria. Journal of Speech, Language, and Hearing Research, 43, 1451-1465.

Yorkston, K. M., Dowden, P. A., & Beukelman, D. R. (1992). Intelligibility measurement as a tool in the clinical management of dysarthric speakers. In R. D. Kent (Ed.), Intelligibility in speech disorders (pp. 265-286). Philadelphia: John Benjamins Publishing Company.

Appendix A
Speaker Consent Form

SCHOOL OF AUDIOLOGY AND SPEECH SCIENCES
Faculty of Medicine
University of British Columbia
5804 Fairview Ave., Vancouver, B.C. V6T 1Z3
Tel: (604) 822-5795 Fax: (604) 822-6569

CONSENT FORM

Project title: "Perceptual Learning of Dysarthria: Effects of Familiarization and Feedback"
Principal Investigator: Dr. Valter Ciocca, Professor, School of Audiology and Speech Sciences, phone: (604) 822-2266
Co-Investigator: Leah Buchholz, Master's student, School of Audiology and Speech Sciences, phone: (778) 895-0519

This experiment is being conducted in partial fulfillment of a Master's thesis in the School of Audiology and Speech Sciences at the University of British Columbia.

Purpose: The researchers are investigating people's perceptions of speech and how they might differ following training with specific speech stimuli. Participants will be asked to listen to your speech and to identify which word they hear from two alternative words displayed on a computer screen. The results of this study will provide information about how listeners are able to recognize unique speech patterns.

Study Procedures: As a volunteer for this study, you will attend one session lasting about one hour. An appointment will be scheduled at your convenience. All testing will be carried out in the School of Audiology and Speech Sciences located on the UBC Vancouver campus. During this session, you will be recorded saying four lists of words five different times. Words will be presented to you one at a time on cue cards at a rate of 5 seconds per word. Your audio recordings will be stored on a password-protected computer. The audio recordings will be reviewed by the principal and co-investigators.
The two most typical (clearest) productions of the five repetitions of each word will be selected and individually saved on the same password-protected computer. Following this, a group of 30 listeners will hear only these words. They will complete three different training procedures, differing according to the presence or absence of feedback for the word heard. The listeners will then be tested to examine whether they have learned to identify words with your unique speech pattern.

Confidentiality: Your anonymity will be safeguarded as follows. The 30 listeners will not be provided with any of your personal identifying information beyond a general description. This description will state that you were born with damage to the nervous system which resulted in a unique speech pattern. Only the co-investigator, Leah Buchholz, will know your identity in order to arrange and complete the audio recordings. All audio recordings will be stored on a password-protected computer located within a secured and locked laboratory. Only the principal investigator and co-investigator will have access to these recordings.

Compensation: In order to defray the costs of transportation and the inconvenience, you understand that you will be reimbursed $20 for transportation costs as well as an honorarium of $30 for your productions of the speech stimuli that will be used in this study.

Contact: This consent form will be made available to you at least twenty-four hours before you arrange an appointment for participating in the study. If you have any questions or desire further information with respect to this study, you may contact Leah Buchholz via email: LeahBuchholz@gmail.com or telephone: (778) 895-0519. If you have any concerns about your treatment or rights as a research subject, you may contact the Office of Research Services at the University of British Columbia at (604) 822-8598.
Consent: You understand that your participation in this study is entirely voluntary, and that you may refuse to participate or may withdraw from the study at any time without consequence. You understand that, should you withdraw before completion, you will still receive an honorarium of $50.00.

______________________________________   _______________
Subject signature                        Date

______________________________________
Subject name (PRINT)

______________________________________   _______________
Signature of a Witness                   Date

______________________________________
Name of Witness (PRINT)

Appendix B
Stimuli Words for All Wordlists

Pre-Familiarization wordlist:
  Bilabial (voiced/voiceless): Bit/Pit, Big/Pig, Bad/Pad, Beep/Peep, Beg/Peg, Butt/Putt
  Alveolar (voiced/voiceless): Dad/Tad, Duck/Tuck
  Velar (voiced/voiceless): Gab/Cab, Gut/Cut, Goad/Code
  Foils: Low, Row, Law, Raw, Liar, Wire, Ring

Familiarization wordlist (feedback and no-feedback groups; two tokens of each word):
  Bilabial (voiced/voiceless): Bet/Pet, Back/Pack, Bat/Pat, Bought/Pot, Beat/Peat
  Alveolar (voiced/voiceless): Dote/Tote, Dug/Tug, Dead/Ted
  Velar (voiced/voiceless): Good/Could, Got/Cot, Gob/Cob

Pseudo-Familiarization wordlist (two tokens of each word): Hill, Hail, Hole, Hire, Joy, Jail, Law, Lure, Low, Liar, Ring, Raw, Real, Row, Will, War, Wail, Wire, Way, Year, Your, Yell

Post-Familiarization wordlist:
  Bilabial (voiced/voiceless): Bod/Pod, Bop/Pop, Bick/Pick, Bug/Pug, Beck/Peck
  Alveolar (voiced/voiceless): Dab/Tab, Dot/Tot, Dip/Tip
  Velar (voiced/voiceless): God/Cod, Goat/Coat, Gap/Cap

Appendix C
Listener Without Feedback Consent Form

SCHOOL OF AUDIOLOGY AND SPEECH SCIENCES
Faculty of Medicine
University of British Columbia
5804 Fairview Ave.
Vancouver, B.C.
V6T 1Z3
Tel: (604) 822-5795 Fax: (604) 822-6569

CONSENT FORM

Project title: "Perceptual Learning of Dysarthria: Effects of Familiarization and Feedback"
Principal Investigator: Dr. Valter Ciocca, Professor, School of Audiology and Speech Sciences, phone: (604) 822-2266
Co-Investigator: Leah Buchholz, Master's student, School of Audiology and Speech Sciences, phone: (778) 895-0519

This experiment is being conducted in partial fulfillment of a Master's thesis in the School of Audiology and Speech Sciences at the University of British Columbia.

Purpose: The researchers are investigating people's perceptions of speech and how they might differ following training with specific speech stimuli. Participants will be asked to listen to speech produced by an individual with cerebral palsy (dysarthric speech) and to identify which word they hear from two alternative words displayed on a computer screen. The results of this study will provide information about how listeners are able to recognize disordered speech.

Study Procedures: As a volunteer for this study, you will attend one session lasting about 30 minutes. An appointment will be scheduled at your convenience. All testing will be carried out in the School of Audiology and Speech Sciences located on the UBC Vancouver campus. During this session, you will take part in four phases:

First phase (Hearing test): You will be asked to wear headphones and to raise your hand when you hear soft "beeps". If your hearing is found to be within the normal range, you will proceed to the next task.

Second phase: You will be asked to listen to short words presented one at a time. These words will be spoken by a person who was born with damage to the nervous system. This damage has affected the way he/she speaks. You will then be asked to identify the word you heard by clicking on one of two words presented on the computer screen.
Some of the words may be difficult for you to understand, but you are encouraged to guess even if you are unsure of the intended word.

Third phase: You will listen passively to words produced by the same speaker. At this time you will not have to identify any of the words you hear.

Fourth phase: During this session you will perform the same task as in the second phase.

Potential Benefits: You will complete a hearing screening to determine whether your hearing ability on the day of testing is within normal limits. If you pass the hearing screening, you will know your hearing is within the normal range. If you do not pass this screening, you will learn that you may have a hearing impairment and will be provided with the information needed to arrange a detailed follow-up assessment with an audiologist.

Confidentiality: You understand that your identity will be protected by assigning you a participant code. This code will be used to identify all of your forms and computer files. Only group results (no individual results) will be given in any reports about the study. Coded results only (no personal information) will be kept in computer files on a password-protected computer. Hard copies of documents will be stored in a locked filing cabinet inside a secured and locked laboratory.

Compensation: In order to defray the costs of transportation and the inconvenience, you understand that you will receive an honorarium of $10.00 for participating in this study.

Contact: This consent form will be made available to you at least twenty-four hours before you arrange an appointment for participating in the study. If you have any questions or desire further information with respect to this study, you may contact Leah Buchholz via email: LeahBuchholz@gmail.com or telephone: (778) 895-0519.
If you have any concerns about your treatment or rights as a research subject, you may contact the Office of Research Services at the University of British Columbia at (604) 822-8598.

Consent: You understand that your participation in this study is entirely voluntary, and that you may refuse to participate or may withdraw from the study at any time without consequence. You understand that, should you withdraw before completion, you will still receive an honorarium of $10.00 as reimbursement.

______________________________________   _______________
Subject signature                        Date

______________________________________
Subject name (PRINT)

______________________________________   _______________
Signature of a Witness                   Date

______________________________________
Name of Witness (PRINT)

Appendix D
Listener with Feedback Consent Form

SCHOOL OF AUDIOLOGY AND SPEECH SCIENCES
Faculty of Medicine
University of British Columbia
5804 Fairview Ave., Vancouver, B.C. V6T 1Z3
Tel: (604) 822-5795 Fax: (604) 822-6569

CONSENT FORM

Project title: "Perceptual Learning of Dysarthria: Effects of Familiarization and Feedback"
Principal Investigator: Dr. Valter Ciocca, Professor, School of Audiology and Speech Sciences, phone: (604) 822-2266
Co-Investigator: Leah Buchholz, Master's student, School of Audiology and Speech Sciences, phone: (778) 895-0519

This experiment is being conducted in partial fulfillment of a Master's thesis in the School of Audiology and Speech Sciences at the University of British Columbia.

Purpose: The researchers are investigating people's perceptions of speech and how they might differ following training with specific speech stimuli. Participants will be asked to listen to speech produced by an individual with cerebral palsy (dysarthric speech) and to identify which word they hear from two alternative words displayed on a computer screen. The results of this study will provide information about how listeners are able to recognize disordered speech.
Study Procedures: As a volunteer for this study, you will attend one session lasting about 30 minutes. An appointment will be scheduled at your convenience. All testing will be carried out in the School of Audiology and Speech Sciences located on the UBC Vancouver campus. During this session, you will take part in four phases:

First phase (Hearing test): You will be asked to wear headphones and to raise your hand when you hear soft "beeps". If your hearing is found to be within the normal range, you will proceed to the next task.

Second phase: You will be asked to listen to short words presented one at a time. These words will be spoken by a person who was born with damage to the nervous system. This damage has affected the way he/she speaks. You will then be asked to identify the word you heard by clicking on one of two words presented on the computer screen. Some of the words may be difficult for you to understand, but you are encouraged to guess even if you are unsure of the intended word.

Third phase: You will listen to words produced by the same speaker while reading these words presented to you on the computer screen. You will not have to identify any of the words you hear.

Fourth phase: During this session you will perform the same task as in the second phase.

Potential Benefits: You will complete a hearing screening to determine whether your hearing ability on the day of testing is within normal limits. If you pass the hearing screening, you will know your hearing is within the normal range. If you do not pass this screening, you will learn that you may have a hearing impairment and will be provided with the information needed to arrange a detailed follow-up assessment with an audiologist.

Confidentiality: You understand that your identity will be protected by assigning you a participant code. This code will be used to identify all of your forms and computer files.
Only group results (no individual results) will be given in any reports about the study. Coded results only (no personal information) will be kept in computer files on a password-protected computer. Hard copies of documents will be stored in a locked filing cabinet inside a secured and locked laboratory.

Compensation: In order to defray the costs of transportation and the inconvenience, you understand that you will receive an honorarium of $10.00 for participating in this study.

Contact: This consent form will be made available to you at least twenty-four hours before you arrange an appointment for participating in the study. If you have any questions or desire further information with respect to this study, you may contact Leah Buchholz via email: LeahBuchholz@gmail.com or telephone: (778) 895-0519. If you have any concerns about your treatment or rights as a research subject, you may contact the Office of Research Services at the University of British Columbia at (604) 822-8598.

Consent: You understand that your participation in this study is entirely voluntary, and that you may refuse to participate or may withdraw from the study at any time without consequence. You understand that, should you withdraw before completion, you will still receive an honorarium of $10.00 as reimbursement.

______________________________________   _______________
Subject signature                        Date

______________________________________
Subject name (PRINT)

______________________________________   _______________
Signature of a Witness                   Date

______________________________________
Name of Witness (PRINT)

Appendix E
Questionnaire

Participant #: _______________

Questionnaire

This study is being conducted in partial fulfillment of a master's thesis. Your participation is voluntary and you are free to withdraw your participation at any time. Please complete this questionnaire by filling in the blanks and circling your responses.
To ensure anonymity, please do not put your name on this questionnaire.

Hearing Screening: Pass / Did not pass

Age: _____ Sex: _____

Is English your first language? Yes / No

Do you have any training in phonetics or linguistics (either through coursework at university or professionally)? Yes / No

Do you have previous experience speaking with people with Cerebral Palsy, Parkinson's Disease, or Lou Gehrig's Disease (ALS)? Yes / No

If the answer to the previous question was yes, can you describe in what capacity your interactions with this individual took place? If no, please proceed to the next question.

Do you have a history of speech or communication impairments? Yes / No

Appendix F
Forced-Choice Presentation During Participant Testing

Appendix G
Table 2. POA mean accuracy scores (%), with standard errors in parentheses, for all wordlists

                     Pre-Familiarization     Post-Old               Post-New
POA                  Voiced    Voiceless     Voiced    Voiceless    Voiced    Voiceless
Typical
  Bilabial           67 (5)    52 (7)        47 (6)    63 (5)       67 (5)    68 (8)
  Alveolar           N/A       N/A           N/A       N/A          N/A       N/A
  Velar              N/A       N/A           N/A       N/A          N/A       N/A
Atypical
  Bilabial           97 (3)    17 (6)        34 (5)    40 (6)       93 (5)    70 (5)
  Alveolar           50 (4)    77 (5)        33 (6)    20 (5)       56 (5)    90 (6)
  Velar              90 (6)    15 (4)        37 (5)    37 (8)       83 (7)    60 (6)

Appendix H
Ethics Certificate of Approval

The University of British Columbia
Office of Research Services
Behavioural Research Ethics Board
Suite 102, 6190 Agronomy Road, Vancouver, B.C.
V6T 1Z3

CERTIFICATE OF APPROVAL - MINIMAL RISK

PRINCIPAL INVESTIGATOR: Valter Ciocca
INSTITUTION / DEPARTMENT: UBC/Medicine, Faculty of/Audiology & Speech Sciences
UBC BREB NUMBER: H08-01346

INSTITUTION(S) WHERE RESEARCH WILL BE CARRIED OUT:
Institution: UBC; Site: Vancouver (excludes UBC Hospital)
Other locations where the research will be conducted: N/A

CO-INVESTIGATOR(S): Leah Buchholz
SPONSORING AGENCIES: UBC Faculty of Medicine
PROJECT TITLE: Perceptual Learning of Dysarthric Speech: Effects of Familiarization and Feedback
CERTIFICATE EXPIRY DATE: July 8, 2009
DATE APPROVED: July 8, 2008

DOCUMENTS INCLUDED IN THIS APPROVAL:

Document Name                                              Version    Date
Consent Forms: Speaker                                     N/A        June 30, 2008
Consent Forms: Listeners without feedback                  N/A        June 30, 2008
Consent Forms: Listeners with feedback                     N/A        June 30, 2008
Advertisements: Listener recruitment Ad                    N/A        June 30, 2008
Questionnaire, Questionnaire Cover Letter, Tests:
  Listeners Questionnaire                                  N/A        June 30, 2008
Letter of Initial Contact: contact with
  speech-language pathologists                             N/A        June 30, 2008

The application for ethical review and the document(s) listed above have been reviewed and the procedures were found to be acceptable on ethical grounds for research involving human subjects.

Approval is issued on behalf of the Behavioural Research Ethics Board and signed electronically by one of the following:

Dr. M. Judith Lynam, Chair
Dr. Ken Craig, Chair
Dr. Jim Rupert, Associate Chair
Dr. Laurie Ford, Associate Chair
Dr. Daniel Salhani, Associate Chair
Dr. Anita Ho, Associate Chair
