Dichotic perception of automatic speech in normal subjects Ben-Dat, Evelyn Judith 1974

DICHOTIC PERCEPTION OF AUTOMATIC SPEECH IN NORMAL SUBJECTS

by

EVELYN BEN-DAT
B.A., University of Toronto, 1971

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in the Department of Paediatrics, Division of Audiology and Speech Sciences

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
June, 1974

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my Department or by his representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of [illegible]
The University of British Columbia
Vancouver 8, Canada
Date [illegible]

ABSTRACT

In the present study, the dichotic perception of "automatic" speech in normal subjects was investigated. Four dichotic tapes were presented, under both single-pair and double-paired conditions. The first tape consisted of "automatic" word pairs, and the second contained "propositional" word pairs. The third and fourth tapes, which were identical but presented on opposite channels, consisted of "automatic-propositional" word pairs. Subjects responded orally to the stimuli. Differences in order of report and distribution of errors were evaluated by means of Wilcoxon's Matched-Pairs Signed-Ranks Test.
A significant right ear advantage was found for both automatic and propositional stimuli, indicating lateralization of processing to the dominant hemisphere. Responses to the third and fourth tapes varied significantly, suggesting that automatic and propositional words constitute different modes of language, and therefore undergo different sub-cortical processing. The results of the present investigation are examined in relation to the existing model of automatic speech representation. Certain contradictions are noted. The limitations of the experiment, as well as suggestions for further research, are discussed.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS

CHAPTER
1. INTRODUCTION
2. LITERATURE REVIEW
   2.1 Model of Quantitative Hemispheric Involvement
   2.2 Physiological Abilities of the Minor Hemisphere
   2.3 Linguistic Capabilities of the Minor Hemisphere
   2.4 Automatic Speech
   2.5 Properties of Automatic Speech
       2.5.1 Expressive Aspects
       2.5.2 Receptive Aspects
   2.6 Dichotic Listening
       2.6.1 Automaticity and Dichotic Listening
3. STATEMENT OF PROBLEM
4. METHOD
   4.1 Stimuli Selection
   4.2 Preparation of Dichotic Tapes
       4.2.1 Recording Stimuli
       4.2.2 Preparation of Tape Loops
       4.2.3 Stimuli Arrangement
       4.2.4 Preparation of Stimulus Tapes
   4.3 Subjects
   4.4 Procedure
5. RESULTS
   5.1 Scoring
       5.1.1 Series #1
       5.1.2 Series #2
   5.2 Subjects
6. DISCUSSION
REFERENCES
APPENDIX 1 - Duration (in msec.) of Stimulus Words
APPENDIX 2 - Lists of Dichotic Word Pairs
APPENDIX 3 - Mingogram Tracings of Word Pairs

LIST OF TABLES
5.1 Results of the First Series of Dichotic Tests
5.2 Results of the Second Series of Dichotic Tests

LIST OF FIGURES
2.1 Neuroanatomical Schema for the Auditory Asymmetries
4.1 Preparation of Stimulus Tapes

ACKNOWLEDGEMENTS

I want to thank all those who had a part in this thesis: the members of my thesis committee, Dr. Andre-Pierre Benguerel, and especially, Dr. John Gilbert; all the subjects for their time and cooperation; Betty, Sharon, Pat, Ingrid and Meralin for their companionship and encouragement; and, particularly, Mordechai.

CHAPTER 1

INTRODUCTION

Hemispheric lateralization of the language function was recognized as far back as the 19th century, through the study of aphasia. We have a report that Dax (1836) had observed aphasia resulting from left hemisphere lesions more often than from right lesions. This observation was confirmed by Paul Broca's findings. After performing a post mortem operation on the brain of one aphasic patient, Broca formulated the theory (1861) that the loss of the motor aspects of language was the result of damage to the third frontal convolution of the left hemisphere. To describe his concept of cerebral dominance, Broca stated: "On parle avec l'hemisphere gauche." ["One speaks with the left hemisphere."]

Since his time, research has corroborated the importance of the left hemisphere for language. Other equally important language-associated areas have been localized and demarcated in the left hemisphere. Discrete lesions of these areas have been found to disrupt the comprehension, memory, ordering, integration and execution of language. In the majority of cases, damage to the left hemisphere (often accompanied by right hemiplegia) causes language difficulties, while similar damage to the right hemisphere has little or no effect. Using a more direct method of investigation, Penfield and Roberts (1959) found that electrical stimulation generally interfered with speech more consistently and over larger areas in the left hemisphere than in the right hemisphere. (p.
120) Dichotic listening tests, based on the principle of strong contralateral connections between ear and cerebral hemisphere, have demonstrated a significant right ear superiority for many types of linguistic material (Kimura, 1967; Studdert-Kennedy & Shankweiler, 1970; Krashen, 1972). These types of investigations have resulted in modifications being made to Broca's strict localizationist position.

While it has been established that, for the majority of people, the left hemisphere is more involved in the function of language than the right, certain qualifications to this general statement have to be made. Using the Wada test (which involves injecting sodium amytal into the carotid artery to effect the selective paralysis of one cerebral hemisphere), Milner, Branch and Rasmussen (1964) tried to determine a relationship between handedness and cerebral dominance. Their figures show that, of their sample, 90% of the right-handed subjects had language lateralized to the left hemisphere, while only 64% of the left-handed subjects (without former pathologies) had similar lateralization to the left hemisphere. Of the remaining group of normal left-handed subjects, 20% were found to have language functions concentrated in the right hemisphere; the remainder, about 16%, were believed to have essentially bi-lateral representation of language in the cortex. Rossi and Rosadini (1967), after testing 84 subjects, arrived at different percentages. Yet, despite variance in the actual figures, they reached the same conclusions, i.e. the relative independence of handedness and hemispheric dominance, and concurred that bi-lateral representation of language does occur in some adults. There are also suggestions that the lateralization of language to one hemisphere, when present, is not always complete; in other words, the degree of lateralization may vary from individual to individual.
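The right ear superiority described above is typically quantified by comparing, subject by subject, the number of correct reports from each ear, and then applying a nonparametric paired test such as the Wilcoxon matched-pairs signed-ranks test (the statistic named in the abstract of this thesis). A minimal sketch of that computation, using hypothetical per-subject scores and SciPy (neither the numbers nor the software are from the original 1974 analysis):

```python
# Hypothetical dichotic-listening scores: number of correct reports per
# ear for each of ten subjects (illustrative values, not the thesis data).
from scipy.stats import wilcoxon

right_ear = [28, 31, 25, 30, 27, 29, 33, 26, 30, 28]
left_ear  = [24, 27, 26, 25, 22, 24, 30, 21, 27, 23]

# Per-subject right-ear advantage (positive = right ear reported better)
rea = [r - l for r, l in zip(right_ear, left_ear)]

# Wilcoxon matched-pairs signed-ranks test on the paired ear scores
stat, p = wilcoxon(right_ear, left_ear)
print(f"mean REA = {sum(rea) / len(rea):.1f} items, W = {stat}, p = {p:.4f}")
```

A consistently positive right-ear advantage with a small p-value is the pattern the dichotic literature cited above reports for verbal material.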
Luria (1970) reports evidence from a large group of patients who had received damage to the primary speech areas of the left hemisphere. As compared with "pure" right-handers, those who were left-handed, those who displayed slight signs of left-handedness,* and even those from families with left-handers, were slightly less likely to display severe symptoms of aphasia, and were more likely to have rapid recovery from their linguistic disturbances. Luria concludes that "only in cases of absolute dominance of the left hemisphere do gross lesions of the primary speech areas produce severe lasting aphasic disorders." Therefore, in phenotypic right-handers, when there is some latent left-handedness, the asymmetrical relationship between the left and right hemispheres may be less than total. As Luria states:

This does not imply that lateralized functions cease altogether to depend on the left, or 'leading' hemisphere, but rather that a whole series of intermediate stages, ranging from absolute dominance by the left hemisphere, through equivalence of the two hemispheres, to dominance by the right hemisphere, may be expected to occur. (Luria, 1970, p. 62)

* Luria applied a variety of tests to determine whether the subject exhibited morphological, functional and physiological characteristics of sinistrality.

Zangwill (1960), in reviewing several hundred cases of aphasia, reaches conclusions similar to those of the abovementioned studies. He also indicates that left-handers are less completely lateralized for language at the cerebral level, and concurs with Luria's conclusion that not only left-handers, but also a certain proportion of right-handed persons, show incomplete lateralization of language. He restates the conclusion: "cerebral dominance is in all probability itself a graded characteristic, varying in scope and completeness from individual to individual." (Zangwill, 1960, p.
27)

After an extensive study of patients with verified unilateral lesions, Hecaen and Sauguet (1971) also concluded:

A relative bi-laterality of representation for language and cognitive functions is a typical feature of the cerebral organization found in certain left-handed subjects (belonging to the familial group essentially) . . . . In spite of these differences, the functional asymmetry found in right-handed subjects is also found in left-handed subjects, though to a less strong and consistent degree. (p. 45)

From the original thesis, that language is centred in the left hemisphere, we have come to accept that there are variations between individuals in both the degree and locus of hemispheric dominance. However, the basic premise, that for the majority of people the left hemisphere is more significantly involved in the function of language, still holds true.

The basis of this inequality of hemispheric function, beyond the fact of its existence, is little understood. Geschwind and Levitsky (1968) have found some evidence of anatomical asymmetry. They showed that the area behind Heschl's gyrus on the temporal lobe, the planum temporale, was larger on the left side in 65 of the 100 human brains measured. The planum temporale is the site of part of Wernicke's area, the auditory association area. The importance of this region for the processing of language may be revealed by the results of a study by Luria (1970): damage to Wernicke's area in the left hemisphere was found to cause severe and persistent aphasia in 82-90% of cases. Geschwind argues for a relationship between these anatomical differences and the functional asymmetries between hemispheres. According to a review by von Bonin (1962), there are further differences: generally there is more cortex on the left side, the insula is larger, and the Sylvian fissure is longer.
However, he points out that these are small disparities, and cannot explain the far greater functional variation between hemispheres. Di Chiro (1962) found that the vein of Labbe in the hemisphere dominant for speech was noticeably larger than the vein of Trolard; in the non-dominant hemisphere the reverse was true. His results did not show strict right-left differences; they were related to whichever hemisphere was dominant for language.

Thus far, the available evidence fails to lead to a firm conclusion. There is, as yet, little understanding of the relationship between brain structure and function. While some writers accept these asymmetries in anatomy as basic to the left hemisphere's importance for language, others believe that the functional differences are far too large to be explained simply by the known anatomical asymmetries.

The basis of lateralization of function is uncertain; it may be due to the anatomical differences, to genetic prepotence, to an increased efficiency through use, or to some combination of these. Nor is the extent of the functional lateralization of linguistic processes fully understood. The degree of asymmetrical hemispheric dominance has been shown to vary from individual to individual, indicating variation in the degree of right hemispheric participation in linguistic processing. Even in strict cases of left hemispheric lateralization, are all linguistic abilities dominated by this hemisphere? Can, and does, the right (minor) hemisphere play any role in language discrimination in this instance? What are the limits of this participation?

The answers to these questions are of considerable importance. In a search for better understanding of "language," scientists hypothesize about the universal structure and grammatical rules which underlie man's linguistic competence. Their models will be incomplete as long as they cannot be related to actual physiological processes in the brain.
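The anatomical counts cited above, such as Geschwind and Levitsky's finding of a larger left planum temporale in 65 of 100 brains, can be checked against a chance 50/50 split with an exact binomial (sign) test. The sketch below is an editorial illustration added here, not a computation reported by Geschwind and Levitsky or by the thesis, and it simplifies by treating all 100 brains as left-or-right classified:

```python
# Exact two-sided binomial test of "65 of 100 brains larger on the left"
# against a 50/50 chance expectation (illustrative simplification).
from math import comb

n, k = 100, 65
# Probability of a split at least this extreme in one direction...
tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
# ...doubled for a two-sided test (symmetric under p = 0.5)
p = 2 * tail
print(f"two-sided p = {p:.4f}")
```

Even on this crude reading, the asymmetry is very unlikely under chance, which is consistent with the text's point that the anatomical difference is real, whatever its functional significance.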
If the minor hemisphere plays a part in these processes, it should not be ignored. In fact, perhaps a clear elucidation of the limited mechanisms available in the right hemisphere would shed some light, if only by contrast, on the more complicated mechanisms concentrated in the left hemisphere. Furthermore, a better understanding of the functional linguistic limits of the minor hemisphere might add further detail to our concept of language in its many facets.

Clinically, awareness of right hemisphere involvement in language might help to explain certain pathological phenomena. It would also allow clinicians to consciously direct therapy in order to maximize linguistic output within possible limits, and perhaps to indicate when therapy should be terminated. Normative data is a truly essential basis for positive, constructive therapy with the pathological patient. Clearly, further exploration in this area is warranted.

The present research does not attempt to answer the larger theoretical questions. The investigation will concentrate only on a small element of the whole: the capability of the right hemisphere to process a sub-set of language, automatic speech.

CHAPTER 2

LITERATURE REVIEW

2.1 Model of Quantitative Hemispheric Involvement

In all of the foregoing material, we have been considering language as a constant, undiversified entity. Evidence strongly suggests, however, that language is heterogeneous, containing several levels or sub-sets. And while accepting the functional asymmetry of language, there are indications that these linguistic levels are not all equally lateralized to the dominant hemisphere.

Hughlings Jackson, the English neurologist whose writings date back to the 1860's, recognized that one hemisphere "leads" the other in the processing of language.
However, he was one of the first in a long line to reject the simple view of relating "language," as a homogeneous unit, to certain circumscribed physical locations in the brain. Instead he saw language as a complex organization of psychological processes which could be better reflected in dynamic cortical systems, which were not necessarily lateralized. He believed that language is represented in both cerebral hemispheres, but at very different levels in its functional hierarchy. At the lowest level of this hierarchy of language, he hypothesized, there are emotional utterances, and other primitive and highly coded responses which are relatively involuntary; these are organized bi-laterally and, as a result, usually escape destruction in aphasia. At the intermediate level, there is comprehension, which is a process more "automatic" than expressive speech, and is therefore less subject to strong unilateral specialization. At the highest level, there is propositional, ideational, newly-coded speech which is "uniquely related to the evolution of a 'leading' hemisphere and wholly dependent upon its integrity." Dominance is then, in Jackson's view, "very much a matter of degree, and affects above all the most complex, least automatic, and most highly evolved aspects of psychological activity." (cited in Zangwill, 1967, p. 1)

The model of cerebral functioning suggested by this hypothesis is one of bi-lateral involvement in language processing, with the degree of lateralization to the dominant side varying directly with the level of language. Involvement of the two hemispheres differs quantitatively, as opposed to qualitatively.

2.2 Physiological Abilities of the Minor Hemisphere

The first point we must investigate in this model is whether the minor hemisphere has the ability to participate in the functioning of language.
Evidence indicates that it does; the minor hemisphere has an initial capacity to completely replace the other, and, in the mature brain, seems capable of initiating, controlling and comprehending speech, although functionally limited in these activities.

Cases of early childhood hemispherectomy and brain damage provide the basis of our knowledge of the development of lateralization (Basser, 1962; Lenneberg, 1967; Krashen, 1972). It is suggested that, during the early period in which three processes take place (the growth and maturation of the brain, the acquisition and development of language, and the gradual lateralization of language to the dominant hemisphere), there is relative hemispheric "plasticity" with respect to the language function. Until cerebral dominance has been established, damage to the major hemisphere may result in the minor hemisphere assuming these functions. Past the critical age, when lateralization has been firmly established, damage to the major hemisphere would result in language disorders comparable to adult aphasia. The approximate age at which the brain reaches its mature state of functional dominance is a matter of discussion: Lenneberg postulates that this critical age is about 12-13 years, or puberty, while Krashen supports a "lateralization by five" hypothesis. Despite this disparity, we can assume that during the initial few years of life, at least, the minor hemisphere has the potential to become dominantly involved with language.

After maturity, under normal circumstances,* the minor hemisphere is no longer able to totally subserve the language function. There are indications, however, that the right hemisphere is still capable of playing a limited role in the processing of language.
Recent evidence tends to confirm that the minor hemisphere does retain the physiological ability to initiate vocalization, to control articulation, and to comprehend speech. Work by Penfield and Roberts (1959), in particular, lends support to the thesis that the right hemisphere is capable of initiating and directing speech. They used electrical stimulation in order to extensively map areas of the cortex. While application of the electrode interrupted speech in certain areas of the left hemisphere, they discovered that stimulation of the Rolandic (motor) area and the supplementary motor area of either cerebral hemisphere would cause the vocalization of a "sustained or interrupted vowel cry, which at times may have a consonant component." (p. 120) The obvious problem with their technique, however, is the dissimilarity between artificially introduced electrical stimuli and the electrical charges of the brain's own neural networks. Nonetheless, these results seem to be corroborated by patients following selective cortical excision. After removal of either the left or right pre-central gyrus (sensory-motor control), Penfield and Roberts found that "there is no more than a minor interference with the movement of face and mouth." (p. 18) They believe the motor control of vocalization and articulatory movements to be bi-laterally represented, with either hemisphere able to assume unilateral control with little difficulty.

* An exceptional case is that of "Genie" (Fromkin et al., 1974), a right-handed adolescent girl who suffered severe deprivation for most of her life. Though past puberty when discovered, she is now beginning to acquire language. Results of dichotic tests indicate that her language is strongly lateralized to the right hemisphere.
Geschwind, in discussion (1967), confirms that there is "evidence to show that there are separate pathways descending in both the left and right internal capsule for bi-lateral innervation of the speech musculature. The left is normally used, but the right can substitute." (p. 120) Descriptions of patients after dominant hemispherectomy further substantiate these findings. Gott (1973) and Smith and Burklund (1966), in descriptions of past and current cases, report that speech was consistently available to patients immediately after surgery.

2.3 Linguistic Capabilities of the Minor Hemisphere

One must always hesitate in presuming capabilities of the normal brain from evidence based on the sub-normal. Nevertheless, it is study of the pathological that tends to provide the bulk of data to support our growing knowledge of cortical functioning. It is with this understanding that we explore material based on the sub-normal brain.

Investigations of commissurotomized patients have provided insights into the capabilities of the minor hemisphere in relative isolation from an intact dominant hemisphere (Gazzaniga, 1970; Gazzaniga & Hilliard, 1971; Sperry & Gazzaniga, 1967). These authors have based most of their results on two particular epileptic patients who made rapid recovery after the operation, and who had not suffered any major brain damage before surgery. Their experiments took advantage of the fact that pathways conducting both visual field and fine stereognostic information project directly to the contralateral hemisphere.

When objects were presented visually and tactually to the left hemisphere, subjects had no difficulty naming or describing them, either verbally or in writing. When this procedure was repeated with stimuli presented to the right hemisphere, verbal expression was impossible, or irrelevant confabulatory responses were made. Written responses to these stimuli were mere guesswork.
These, and other related tasks, support the conclusion that the production of voluntary speech and writing in these patients, who were right-handed, was confined to the left hemisphere. On the subject of automatic speech, however, Gazzaniga notes that "the possibility that a few simple emotional, tonal, or extremely familiar words might be expressed through the minor hemisphere . . . cannot be completely ruled out." (Gazzaniga, 1970, p. 125)

Other tests were conducted, these requiring non-verbal responses. The performances of the subjects on these tasks indicated that the minor hemisphere was able to discriminate stimuli visually and tactually, and able also to make inter-modal transfers. Furthermore, they showed the minor hemisphere capable of making conceptual associations between object and function. The right hemisphere could also react emotionally to provocative or amusing stimuli.

It was also demonstrated that, when prior brain damage had been minimal, the minor hemisphere had some comprehension of verbal material. Two subjects were capable of reading letters, numbers and short words presented in the left visual field. The best performance was noted when the material was nominal; adjectives were comprehended less well, and verbs not at all. Spoken words were understood, and associated with pictures or a printed word. The subjects were also able to discriminate affirmative from negative sentences. One subject could even spell very simple words with his left hand, but slowly and with considerable effort.

These experiments have succeeded in demonstrating that the minor hemisphere possesses definite, if limited, linguistic skills. In tasks requiring word and object association, sorting and retrieval, the right hemisphere showed evidence of ideation and mental concentration.
In contrast to the highly lateralized organization of propositional verbal expression, the comprehension of language was found to be represented in the minor as well as the major hemisphere, although evidence indicates that the minor hemisphere is clearly inferior in this respect. Gazzaniga (1967) concluded that "the results of these studies favor the view that lateral specialization exists, but that it lies more in the motor executive or expressive sphere than in the sensory perceptual component of any performance." (p. 162)

Hemispheric anaesthetization by sodium amytal (Wada & Rasmussen, 1960) produces, temporarily, the effects of hemispherectomy. Patients whose dominant hemisphere was anaesthetized have usually been unable to speak. The patients could, however, comprehend spoken commands and instructions.

Reports on patients after surgical removal of the dominant hemisphere indicate that the minor hemisphere has even greater abilities, and define its functional linguistic capabilities when totally isolated. In the case of a 10-year-old girl, Gott (1973) reports that R.S., immediately on regaining consciousness, could articulate a few isolated words and, more remarkably, could sing easily and fluently, a task requiring precise movement and integration of the articulators. Although suffering a severe expressive deficiency, she was able to repeat, and could rapidly count and recite the alphabet in series, but had difficulty with the naming or retrieval of elements of these series when presented out of sequence. Gott follows the development of R.S. over the two-year period, up to the age of 12 years. At this point she remarks: "The dissociation of automatic verbal habits and consciously directed speech is so great that R.S. is unable to name or write a given letter or number except by reciting the entire sequence." (Gott, 1973, p.
1087) Also consistently contrasted since the hemispherectomy was the "superiority for auditory comprehension as compared with expressive speech." (loc. cit.) Auditory comprehension was present for numbers, letters, syntax and the semantic use of words. Expressive speech was limited to a very few words.

The case for these abilities being representative of an isolated, truly minor hemisphere may be challenged in this instance, however, due to the young age at which neuroanatomical operations had been performed; a first tumour was removed at the age of 8 years, though without resultant language disturbance. It is possible that as early as 7-8 years, the right hemisphere had begun to acquire control of linguistic functions. However, Gott notes a "functional disconnection between language and other systems within the same hemisphere," a clinical picture which has not been reported in cases of left hemispherectomy for infantile hemiplegia.

Another case, in which this ambiguity was absent, is presented by Smith and Burklund (1966). They describe the progress of a right-handed, 47-year-old male, E.C., after left hemispherectomy for glioma; up to one year previously, there had been no indication of brain damage. Pre-operative tests, a family history of consistent right-handedness, and the immediate results of the operation indicate that, for this man, the left hemisphere had been dominant.

Following surgery, E.C. had severe expressive aphasia; he could not repeat words on command, reply to questions, or communicate in propositional speech. However, he was able to utter expletives and short emotional phrases with little difficulty. He could follow simple verbal commands, indicating normal hearing and some speech comprehension. Again there is the same picture of verbal comprehension and the lack of voluntary expression.
Several months after the operation, E.C.'s occasional propositional speech was improving, although Smith still described it as severely impaired. While this was so, the patient had sudden recall of familiar songs, and within two months could sing with little hesitation and few articulation errors. Speech comprehension was improving constantly. Smith notes that these findings are similar to the few other such cases which have been reported (Zollinger, 1935; Crockett and Estridge, 1951):

Although severely impaired, . . . speech and verbal communication was available immediately after left hemispherectomy in all three cases. . . . Since these functions are not abolished, and since speaking, reading, writing and understanding language show continuing improvement in E.C. after left hemispherectomy, the right hemisphere apparently contributes to all these functions, although in varying proportions. . . . Thus hemispheric functions would seem to differ quantitatively rather than qualitatively. (Smith, 1966, p. 1281)

On the basis of the consistency of this clinical syndrome, one could speculate that the hierarchy of automatic speech, speech comprehension, and propositional language is a reflection of the right hemisphere's involvement in linguistic activity in the formerly intact brain.

So far, we have considered two clear-cut types of pathologies in order to investigate the capabilities of the minor hemisphere with regard to language processing. In both instances, in cases after hemispherectomy and after commissurotomy, Jackson's hypothesis with respect to the different degrees of lateralization of at least three levels of language has found support. Voluntary, ideational language has been demonstrated to be most strongly lateralized to the major hemisphere.
Language comprehension has been found to be less lateralized. The ability of the minor hemisphere, as well as the major hemisphere, to produce automatic speech, while neither specifically proved nor disproved in split-brain studies or under sodium amytal, is strikingly apparent after dominant hemispherectomy.

2.4 Automatic Speech

If this model is correct, then "automatic," that is, non-ideational, uses of language are not lateralized to a dominant hemisphere; instead, they are bi-laterally processed. Besides the striking evidence which comes from hemispherectomized patients, further support comes from other fields of research for the existence of automatic speech as a separate linguistic phenomenon, one which can be controlled by a different processing centre than that which controls voluntary speech.

Since Hughlings Jackson's work, clinicians investigating the various syndromes of aphasia have often observed the presence of automatic speech while volitional speech is severely impaired. Weisenberg and McBride (1964) noted: "there is Jackson's demonstration, corroborated again and again by other investigators, that the voluntary act suffers when the automatic persists with little or no change.
He is able to utter spontaneous exclamations and tell what his name is; he may even be able to run through certain speech sequences which have become automatic to him. Any speech pattern which might be used for the expression of a thought is impossible, but speech processes which are not part of this system, i.e. those which involve simple verbo-motor habits, or which express affective states, may be retained. (Luria, 1970, p. 281)

Furthermore, the linguistic deficits associated with propositional language do not usually extend to automatic language. Alajouanine (1956) observed that in agrammatical aphasia, speech under volitional control lacked syntactic constraints, whereas highly coded exclamations or phrases were grammatical. Weisenberg and McBride remarked that an aphasic's automatic speech "is usually superior to his other speech in the accuracy of (its) articulation." (Weisenberg & McBride, 1964, p. 416) This suggests that different processing is involved in the production of this level of language.

Determining whether this separate processing takes place in the damaged dominant hemisphere, or in the intact minor hemisphere, is a more difficult problem. Kinsbourne (1971) tested three right-handed men, aphasic due to left (dominant) hemispheric lesions, in order to investigate this question. Intracarotid amobarbital injections were given bi-laterally in two cases, and on the left side in the third. No speech arrest occurred with the three left-side injections; the aphasic speech continued unaltered. With anaesthetization of the right hemisphere, there was total speech arrest, and in fact, both patients were unable to phonate or move the tongue and lips to command. Kinsbourne infers from this that the aphasic speech was generated from the minor hemisphere. He concludes that "some, at any rate, of the varieties of aphasic speech are programmed in the minor hemisphere." (p.
305) The different characteristics and limitations of aphasic speech would vary, then, with the capabilities of the individual's minor hemisphere. These results support the notion that the clear, undamaged, lower-level speech often interspersed with pathological utterances may arise, not from adjacent portions of the damaged left hemisphere, but from the right hemisphere.

Murray Falconer (1967) reported on speech disturbances which commonly occur in psychomotor seizures of temporal lobe origin. Ictal dysphasia is an inability on the part of the patient to express himself correctly while he is still conscious and without impairment of articulation or hearing. Ictal speech automatisms are defined as "utterances occurring at the beginning or during an epileptic seizure of identifiable words or phrases which are linguistically correct, but for which the patient is subsequently amnesic." (p. 186)

Research was conducted on 100 patients treated surgically for epilepsy. Speech disturbances occurring in relation with the attacks were noted pre-operatively. No sodium amytal testing was done. After a post-operative follow-up period of 2-10 years, 53 patients were almost seizure-free; it was considered that the correct site of the epileptogenic lesion had been identified and treated. A comparison of pre-operative EEG data with the type of speech disturbance exhibited by these 53 patients was carried out. Of the 17 cases with ictal dysphasia, only one had had a right-side resection (operated for a focal spike on the right temporal lobe), and he happened to be left-handed. In contrast, of the 23 patients with ictal speech automatisms, the seizure originated from the right side in 13 cases, and from the left side in 10 cases.
These results show that ictal dysphasic utterances were usually associated with seizures originating in the (assumed) dominant cerebral hemisphere, while ictal speech automatisms arose from seizures originating in either hemisphere, and slightly more frequently from seizures originating in the minor hemisphere. Falconer speculates on the origin of this ictal speech.

First, the fact that in temporal lobe epilepsy, ictal dysphasia is almost invariably associated with seizures arising in the dominant temporal lobe suggests that the ictal discharges somehow or other interfere with the functioning of the traditionally accepted speech centers in the same hemisphere. They could not, to the best of our knowledge, influence the other, the minor, hemisphere. In contrast, the neuronal discharges that occurred during ictal speech automatisms . . . seem to leave the traditional speech centers in the major hemisphere undisturbed. It is tempting, therefore, to think that these ictal speech automatisms may perhaps sometimes arise in . . . areas of the minor hemisphere. (Falconer, 1967, p. 190)

Falconer gives two examples of the ictal speech automatisms: "I don't care what you do!" (repeated, in distress) and "I beg your pardon." He comments that ictal speech automatisms are usually emotional.

Our knowledge of automatic speech is based mainly on investigations of the pathological. Studies of brain-damaged patients have indicated that automatic speech production and comprehension are capabilities of the right hemisphere. However, the next point to be pursued is the participation of the minor hemisphere of the normal intact brain in these processes. This is more difficult to determine. Data from Goldman-Eisler (1964) suggest, however, that even in normals, there is a significant difference in the performance of highly propositional language, as opposed to automatic language.
She investigated the fluency of expressive speech by measuring its reciprocal, hesitation, in relation to verbal planning and the generation of information. In one experiment (Goldman-Eisler, 1961a), conditions were created for the use of different cognitive levels of verbal behaviour by assigning tasks that varied in the degree of abstract and concrete planning required for the response. Subjects were asked to describe the events of a cartoon series presented without captions, and then to express the moral of the cartoons. In order to provide for a "scaling from spontaneity to automatic speech action," the subjects were then asked to repeat their descriptions of the cartoon events, and their moral explanation, six times after the first version.

The results showed that pausing was more than twice as long in speech explaining the meaning of the cartoon sequence than in the speech describing the events. Pausing also varied with the different degrees of spontaneity; there was a sudden decline after the first trial, and pausing gradually decreased on each trial thereafter. From this, Goldman-Eisler concluded that definite differences exist between spontaneity and reiteration, and that pause length is related to the "creative activity" of the speakers and not to the information content of the utterance itself. She further concluded that pausing indicates that

information is being generated at the time of speech and signalizes that verbal planning of some sort is actually at present taking place in the sense of Hughlings Jackson's . . . 'now organizing' speech, i.e.
speech organized at the time of utterance; whereas when (the) utterance is fluent even though the information content may be high we shall conclude that the generating processes happened in the past. (Goldman-Eisler, 1964, p. 103)

In an earlier experiment, Goldman-Eisler used a different parameter for classification: the measurement of breathing activity during speech. In 1955 she showed that breath rate does reflect the degree of excitation in speakers; a high breath rate indicates excited states while a low breath rate indicates control. Measuring this rate during interviews, she found that "highly ventilated" speech went along with emotionally charged topics, while with "topics of intellectual import and content rated as cautious," there was restrained breathing activity.

With regard to the effect of affect on levels of speech, Hughlings Jackson's view was that in a state of emotional excitement, speech becomes more a matter of automatic verbalization. He believed that "strong emotion leads to inferior speech, to more automatic, more organized utterance. . . ." (cited in Goldman-Eisler, 1964, p. 103) Furthermore, he proposed a reciprocal relationship between affective and cognitive processes ("with the fall in the intellectual element, there is a rise in the emotional one"). According to this theory, we might expect that emotional speech, which would be more automatic, should be indicated not only by a high breath rate, but also by a smaller degree of pausing. The opposite would then also be true; if more controlled speech indicates higher cognitive activity, there should be a greater proportion of pausing, along with the lower breath rate.
To support the theory, breath rate should be inversely related to pausing in speech; this proved to be the case. The correlation between amount of pausing and frequency of breathing was negative.

On the basis of Goldman-Eisler's work, we cannot speculate on the origin of the more automatic level of language. However, the data do point toward the existence of a significant, measurable difference between these two levels of language, and suggest that, since the difference does not reside in the utterances themselves, it must reside in the different processes that underlie them.

2.5 Properties of Automatic Speech

2.5.1 Expressive Aspects

From the studies we have considered, a composite definition of automatic speech may be made. Automatic speech is seen as over-learned, highly context-bound utterances, including emotional expressions (swearing, expressions of joy or grief), stock phrases, serials (e.g. counting, days of the week), familiar songs, greetings and reactive forms (e.g. How are you? -- Fine, thanks). The speech is non-novel, well-organized and highly coded. It is believed that automatic speech is produced in "wholes" or "gestalts."

In comparison, propositional speech requires a greater degree of linguistic coding of information (semantic and syntactic). It is more voluntary, and less reactive to context. Higher intellectual processes, necessary for the generation of the newly-organized material, are believed to be involved.

Within the larger linguistic environment, the borders of the two levels of language are not clear-cut and absolute; rather, a continuum exists with automatic and propositional speech as its extreme limits. The dimension along which the continuum lies is functional familiarity. This means that the distinction is a function of familiarity, not with the particular utterances in isolation, but with their linguistic use.
A phrase which has become automatic in a linguistic repertoire, by being used often on frequently presented occasions, and therefore becoming highly organized as a unit in that context, may, in an entirely new situation, become more propositional. If the distinction does not lie in the utterance itself, there must be some underlying cause of differentiation. Our model suggests that the difference between the two modes of speech is a reflection of differences in the underlying mechanisms of speech processing.

2.5.2 Receptive Aspects

Less is known about automatic speech in its receptive aspects. If separate mechanisms underlie automatic speech, however, and these are bi-laterally represented in the hemispheres, then we might assume that the ability to perceive and comprehend automatic speech would be similarly represented in the cortex. Should this principle be true, then clues to the presence of mechanisms to process automatic speech in the minor hemisphere of normal subjects might be gleaned from certain dichotic listening studies.

2.6 Dichotic Listening

Dichotic listening tests, originated by Broadbent (1954), involve the presentation of different stimuli simultaneously to each ear. Under this condition of competition, asymmetrical accuracy in performance has been found, dependent upon hemispheric specialization for specific types of stimuli. As a result, the dichotic listening technique is considered one of the safest methods for determining the relative ability of each hemisphere to discriminate a variety of stimuli.

Kimura (1961) presented taped digits to patients with unilateral temporal lobe lesions. When presented monaurally, both ears performed equally well. However, when presented dichotically, more digits were accurately reported from the right ear, regardless of site and side of lesion. Kimura speculated that the score was higher from the ear opposite the language-dominant hemisphere.
She repeated the experiment on three groups of subjects: epileptic patients whose left hemisphere was dominant for speech, epileptic patients whose right hemisphere was dominant for speech (as determined by the Wada test), and normal right-handed subjects presumed to be left hemisphere dominant. Kimura reported that for patients with speech represented in the left hemisphere, and for the normal subjects, there was a significant right ear advantage (REA). For the right hemisphere dominant patients, the left ear was more efficient.

[Figure 2.1: Neuroanatomical schema for the auditory asymmetries, showing the pathways from the left and right ears to the left and right hemispheres.]

Later experimenters have confirmed and expanded these findings. A consistent REA has been found in samples of right-handed subjects for many types of verbal material, such as monosyllabic words, digits (Kimura, 1961, 1967), spondaic words (Dirks, 1964), backwards speech sounds (Kimura & Folb, 1968) and nonsense CVC syllables (Studdert-Kennedy & Shankweiler, 1970). These studies of nonsense words and backward speech have shown that the critical features of speech recognition are not related to meaningfulness or familiarity. Studdert-Kennedy and Shankweiler, having found a REA for CVC syllables, but no REA for steady-state vowels, concluded that the specialization of the dominant hemisphere in speech perception is sensitive primarily to the phonetic content, and principally the consonantal features, of an utterance.

One explanation for this perceptual asymmetry is the stronger or more numerous connections between the contralateral ear and cerebral hemisphere. It has been shown that in cats (Rosenzweig, 1951), while both ears project to each cerebral hemisphere, 60% of the nerve fibres from each ear travel along the contralateral pathway, and 40% along the ipsilateral pathway.
Animal studies conducted by Tunturi (1946) and Rosenzweig (1951) demonstrated that the contralateral auditory pathway is functionally stronger, in terms of the amplitude of evoked cortical response. Bocca et al. (1954) showed that the recognition of distorted speech presented to the ear contralateral to unilateral temporal lobe lesions was significantly impaired. Similarly, Kimura (1961) found that temporal lobe excision of either side impaired the perception of digits arriving at the contralateral ear. Kimura postulated that this contralateral advantage, in combination with the left hemisphere specialization for speech, accounted for the increased efficiency of the right ear.

When stimuli are presented monaurally, both ears perform equally well; it is only under conditions of competition that an ear advantage is found. It is suggested that the performance of the contralateral ear is enhanced due to sub-cortical suppression of the ipsilateral pathways by the contralateral pathways (Kimura, 1961). Evidence to support this comes from dichotic studies of right hemispherectomized and commissurotomized patients. In both instances, the only available pathways to the language-controlling (left) hemisphere are the contralateral connections from the right ear, and the ipsilateral connections from the left ear. In these cases (Curry, 1968; Milner, Taylor & Sperry, 1968; Sparks & Geschwind, 1968), under dichotic conditions, the scores from the left ear are completely extinguished or severely depressed. When stimuli are presented only to the left ear, without competing input to the right ear, scores are normal.

One final factor must be taken into account. It is presumed that the callosal pathway from the right to the left hemisphere is used for stimuli presented dichotically to the left ear (Sparks & Geschwind, 1968; Sparks, Goodglass & Nickel, 1970).
While the right ear signal has direct access to the left hemisphere, the left ear stimulus has to travel an extra synapse; increased transmission loss, or the later arrival of this signal at the dominant hemisphere, may account as well for further enhancement of the right ear scores.

The results found in dichotic listening situations cannot be attributed solely to asymmetrical division of attention, short-term memory storage, or a tendency to report the right ear first, as others have suggested. Bryden (1967) has shown that recall of right ear stimuli, when compared with that of left ear stimuli, was superior on both the immediate and delayed channels of report. Kimura (1967) reported that deliberate manipulation of the order of report does not affect the REA. Finally, the ear advantage is material-specific; a left ear advantage has been found for musical stimuli (Milner, 1962; Kimura, 1964) and for environmental sounds (Curry, 1967). Asymmetrical performance on dichotic listening tests can therefore be safely considered a reflection of the perceptual differences in the cerebral hemispheres.

For asymmetric ear performance, however, it is not enough that the two ears receive different acoustic signals. Sparks & Geschwind (1968) presented digits to the left ears of their commissurotomized patients (ipsilateral to the dominant hemisphere) and, alternatively, white noise or "unintelligible cocktail party babble" to the right ear (contralateral to the left hemisphere). No ipsilateral suppression took place; the patient achieved 100% and 95% scores respectively in the digit reporting. It seems that both ears must be fed similar stimuli for the proper condition of competition to be reached. In an additional test, the authors simultaneously presented digits to the left ear, and digits with varying degrees of distortion by low-pass filter to the right ear.
When the right ear signal was most distorted, little suppression of the left ear stimuli took place. As distortion to the right ear decreased, the correct responses from the left ear decreased, and correct responses from the right ear increased, indicating increasing extinction of the left ear stimuli. This suggests that as the stimuli become more similar, more suppression takes place.

Berlin, Lowe-Bell, Porter, Berlin & Thompson (1973), after testing a sample of temporal lobectomees, hemispherectomees and normals, showed that suppression of the ipsilateral CV stimuli took place to different degrees when varied language-like acoustical signals were presented in the contralateral ear. Vowels generally produced the least suppression, while "bleats" (isolated second and third formants of CV syllables) and CV syllables showed greater suppressive effects. The authors theorize that while vowels do not actually compete for decoding at the speech "processor" in the left hemisphere, "they appear to the nervous system to be likely 'candidates' for the special processing." (p. 11) CV syllables and bleats, which are more speech-like, cause stronger suppression of the ipsilateral signals.

2.6.1 Automaticity and Dichotic Listening

Two particular studies using dichotic listening have relevance to our model of automatic speech. Stephen Krashen (1972) presented Morse code dichotically to three groups of subjects. His object was to determine whether Morse code was lateralized as speech by those with varying degrees of familiarity with it, and to investigate the degree of this lateralization. One group consisted of experienced Morse code operators, one group contained recent graduates of a Morse code training course, and the third was a group of naive subjects.

Comparing the results of the two groups of Morse code (m.c.)
operators gives us three results significant to our hypothesis: (1) Morse code letters, presented dichotically in pairs, were lateralized to the right ear (left hemisphere) by both groups of m.c. operators. (2) Greater lateralization on this task was found than in comparable experiments using verbal material. (3) The more experienced m.c. operators were less left-hemisphere lateralized than the recent graduates of the Morse code course. Krashen attempted to explain these results:

If it is assumed that a given degree of lateralization corresponds to the degree to which functions involved in the perception of a stimulus are concentrated in one hemisphere, then, if stimulus A is more lateralized to the left hemisphere than stimulus B, stimulus B uses a relatively greater right hemisphere contribution for its perception than stimulus A. In other words, the result that operators lateralize code to a greater degree than they would language is explainable if what is considered 'ordinary language' contains a non-linguistic sub-set that requires right hemisphere processing. (Krashen, 1972, p. 10)

If operators do not store code in wholes, but must process dot/dash . . . perception of code may involve less 'automatic' or non-segmental processing than ordinary language; it may thus be the case that less right hemisphere processing takes place in code than in language. . . . It is reasonable to assume that the experienced operators would tend to store letters in wholes more than recent graduates. (Ibid., p. 11)

This is what would be reflected in the smaller degree of lateralization found in the former group with respect to the latter. If his conclusions are correct, then automatic speech may be processed more by right hemisphere mechanisms, and, on dichotic tests, be found to be less lateralized to the left hemisphere than propositional speech.
Diana Van Lancker (unpublished) tested this hypothesis directly, by determining the degree of lateralization found on dichotic listening tests in response to high-frequency propositional word pairs and to automatic word pairs. She found no ear superiority for either type of stimulus when single pairs were presented dichotically. This suggests that for highly familiar speech signals, the language processing mechanisms are bi-laterally represented in the brain; both hemispheres could apparently deal equally well with familiar speech stimuli. Further suggesting a different mode of speech processing in the minor hemisphere under the automatic dichotic condition, Van Lancker reported that substitution errors at the left ear were more automatic in nature, while substitution errors at the right ear were more propositional.

When stimulus words were presented in doubled dichotic pairs, however, the results were different. Again no ear advantage was found for the high-frequency propositional words. This is consistent with psycholinguistic research and observations in aphasia (high-frequency words are often retained). But a right ear superiority was found in response to automatic word pairs.

This discrepancy in results is difficult to explain, since on other dichotic listening tasks, the shift from single to double pairs had not resulted in a difference in ear advantage. Van Lancker suggested that perhaps her corpus of automatic words needed revision, since their selection had been largely based on intuition and observations of aphasia, instead of on the automatic speech repertoire of normal subjects. Also, her pairs of words were not balanced for number of syllables, a factor which might influence the ear effect in dichotic listening tasks.
Alternatively, she speculated that, although automatic speech may be bi-laterally represented, the automatic speech mode may be over-represented in the left hemisphere, which would account for the right ear advantage in the more difficult dichotic listening test. It is also possible that subjects produced a right ear effect because the left hemisphere is more efficient at processing linguistic stimuli, including those speech signals which can be processed in both hemispheres.

CHAPTER 3

STATEMENT OF PROBLEM

Consistent with Jackson's hypothesis, studies of certain brain-damaged patients have shown automatic speech to be represented in the minor hemisphere. Since a degree of comprehension has also been available to these patients, it has been assumed that automatic speech can also be comprehended in the minor hemisphere. Evidence suggests that in normal subjects also, automatic speech exists as a different level of language. Little research has, however, been directed at the investigation of the minor hemisphere's involvement in automatic language processing in the normal brain.

Dichotic listening has been shown to be a safe and reliable technique for determining cerebral dominance. Studies by Krashen (1972) and Van Lancker (unpublished) have indicated that, in contrast to hemispheric dominance for low-frequency linguistic signals, less lateralization to a dominant hemisphere exists for more familiar linguistic stimuli. If the dichotic listening technique is truly sensitive to varying degrees of minor hemisphere involvement in linguistic processing, then this method could be appropriately applied to normal subjects in order to examine the minor hemisphere's relative participation in the processing of different levels of language. In particular, the aim of the present research was to answer the following questions: To what extent is the processing of automatic language lateralized to the dominant hemisphere?
How does the degree of lateralization for automatic speech processing compare with lateralization for propositional speech processing? What are the effects of competition between the automatic and propositional modes of language when alternately presented along the auditory pathways contralateral and ipsilateral to the dominant hemisphere?

CHAPTER 4

METHOD

4.1 Stimuli Selection

A word list was developed, comprising 105 words or phrases; single words were di-syllabic, and phrases consisted of two monosyllabic words. These were chosen, according to the writer's intuition, as including "automatic" and "non-automatic" utterances. This list was then presented individually to 20 people. Each was given the introduction:

Some phrases come to our lips more quickly than others. Often, almost without thinking, one will respond to a question, a statement, or a situation with a well-used word or words.

The list of words was then presented orally to each person, who was asked to classify (according to his/her personal repertoire) each word as: (1) definitely falling under this definition, (2) not falling under this definition, or (3) being neither extreme. Seventeen words were responded to positively (i.e. as "automatic") by 95-100% of the people polled.

4.2 Preparation of Dichotic Tapes

4.2.1 Recording Stimuli

These 17 words and phrases were therefore considered to be "automatic." Along with 25 other words and phrases, they were recorded three times on a Scully 280 two-track stereo tape recorder, on 1/4-inch tapes. A male speaker, seated in an IAC 1204 audiometric booth, recorded through an Altec 681 microphone, at approximately an 8-inch mouth-to-microphone distance. The list was read at normal conversational level; the Scully record level was adjusted so that words never peaked above 0 VU. An attempt was made by the speaker to read the phrases without intonational variation from one utterance to the next.
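The 95% selection criterion used in the poll above can be illustrated with a short sketch. This is a hypothetical reconstruction in Python; the example words, ratings, and function name are invented for illustration and are not taken from the thesis.

```python
# Hypothetical illustration of the stimulus-selection criterion:
# a word counts as "automatic" if at least 95% of the 20
# respondents rated it in category (1).
def select_automatic(ratings, threshold=0.95):
    """ratings: dict mapping word -> list of responses (1, 2, or 3)."""
    selected = []
    for word, responses in ratings.items():
        positive = sum(1 for r in responses if r == 1)
        if positive / len(responses) >= threshold:
            selected.append(word)
    return selected

# Invented example: a word rated (1) by 19 of 20 people (95%) passes;
# one rated (1) by only 12 of 20 (60%) does not.
poll = {
    "thank you": [1] * 19 + [3],
    "blackboard": [1] * 12 + [2] * 8,
}
print(select_automatic(poll))  # -> ['thank you']
```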
Mingograms were then made of this recording, and each word or phrase was measured for duration. Spectrograms were also made of each utterance to substantiate the durational measurements. The sixteen utterances finally chosen were clear, without recording artifacts, and fell within a durational range of 560-600 msec. The range of variation was limited to 40 msec. in order to minimize offset discrepancies. Eight of the sixteen words fell into the "automatic" category described above; eight were considered "non-automatic," or "propositional." A complete list of these words and their durations is found in Appendix 1.

4.2.2 Preparation of Tape Loops

The selected recording of each phrase was found on the master tape; its beginning and end were noted by deflections of the VU meter needle and then carefully marked. This demarcated length of tape was cut from the master tape, and spliced to an appropriate length of leader tape to form a tape loop. The phrase and correct direction were marked on each loop.

4.2.3 Stimuli Arrangement

Three lists of word pairs were composed. In list #1, there were 8 "automatic" utterances, randomly arranged into pairs, under the conditions that each word appear an equal number of times on both channels and that it be paired, once, with each other word. There were therefore 28 final pairs in this "automatic-automatic" list. List #2 consisted of 8 "non-automatic" words and phrases, similarly paired to form 28 "propositional-propositional" pairs. List #3 was composed of 32 "automatic-propositional" pairs; each "automatic" word and each "propositional" word appeared four times in randomly assigned pairs. See Appendix 2 for complete lists of dichotic word pairs.

4.2.4 Preparation of Stimulus Tapes

The purpose in making the dichotic tapes was to record phrases from two tape loops at a time, simultaneously onto the two tracks of a new tape.
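The pairing scheme of Section 4.2.3 can be sketched as follows. This is an assumed reconstruction in Python, not the author's actual procedure; the word labels are placeholders. Eight words yield C(8,2) = 28 unordered pairs, and channel assignment is balanced greedily so that each word appears on each channel as equally as possible (with 7 appearances per word, the closest possible split is 3 and 4).

```python
import itertools
import random

def make_dichotic_list(words, seed=0):
    """Pair each word once with every other word; return a shuffled
    list of (channel_1_word, channel_2_word) tuples with roughly
    balanced channel assignment per word."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(words, 2))  # C(8,2) = 28 pairs
    rng.shuffle(pairs)
    ch1_count = {w: 0 for w in words}  # times each word went to channel 1
    out = []
    for a, b in pairs:
        # assign to channel 1 whichever member has been there less often
        if ch1_count[a] <= ch1_count[b]:
            out.append((a, b))
            ch1_count[a] += 1
        else:
            out.append((b, a))
            ch1_count[b] += 1
    return out

words = [f"word{i}" for i in range(8)]  # placeholder stimuli
dichotic_pairs = make_dichotic_list(words)
print(len(dichotic_pairs))  # -> 28
```

Each word appears in exactly 7 of the 28 pairs, matching the count of the "automatic-automatic" and "propositional-propositional" lists described above.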
Three tape recorders, a Scully 280, an Ampex AG 440B, and a Revox A77, were used to achieve this purpose. These recorders were arranged such that one output channel from the Scully, and one output channel from the Ampex, were connected to the inputs of channel 1 and channel 2 of the Revox, respectively. Tape loops were played on the Scully and Ampex, and the playback levels on both these machines were kept constant. The record levels of both channels of the Revox were adjusted for each stimulus until it showed a peak intensity of 0 VU. A dual-channel memory oscilloscope was connected to the Revox, so that the experimenter could visually monitor each pair of stimuli and determine the degree of synchrony during recording.

Tape loops were placed on the Scully and the Ampex according to their designated order on the lists, and according also to channel. Since the tape loops varied slightly in length, and since the Ampex recorder had a slightly faster running speed, one first had to determine which loop was leading the other. Once this was established, the "faster" loop was placed slightly back from the playback head of the machine; the second loop was placed directly on the playback head. These two machines were then set in motion. By monitoring the oscilloscope, the degree of alignment of the pair, and the rate at which the two approached synchrony, could be determined. When the pair appeared to be approaching simultaneity, the Revox was started. After the pair had been recorded, it was played back to determine whether synchrony had, in fact, been achieved. If the pair was not sufficiently aligned, it was immediately erased, and the entire procedure repeated. If the pair was well aligned, the experimenter advanced to the next pair on the list, leaving a 4-5 second gap between stimulus pairs.

After the dichotic tape was completed, mingograms were made of the two channels, and synchrony of onset was measured.
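The acceptance check applied to each recorded pair — onsets within 20 msec. of each other, which, with durations confined to 560-600 msec., bounds the offset difference at 60 msec. — can be sketched as follows. This is a hypothetical Python illustration of the arithmetic, not the thesis's actual measurement tooling; function names are invented.

```python
# Hypothetical sketch of the onset-synchrony criterion: a recorded
# pair is kept only if the two channel onsets (measured from the
# mingogram, in milliseconds) differ by 20 ms or less.
def pair_accepted(onset_a, onset_b, max_onset_diff=20):
    return abs(onset_a - onset_b) <= max_onset_diff

def offset_diff(onset_a, dur_a, onset_b, dur_b):
    """Difference between the two words' offsets, in ms."""
    return abs((onset_a + dur_a) - (onset_b + dur_b))

# Worst case under the stated constraints: onsets 20 ms apart, with
# the later-starting word also the longer one (600 vs 560 ms),
# gives an offset difference of 20 + 40 = 60 ms.
print(pair_accepted(0, 20))          # -> True  (pair kept)
print(pair_accepted(0, 25))          # -> False (pair erased and re-recorded)
print(offset_diff(0, 560, 20, 600))  # -> 60
```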
Those pairs whose onset varied by more than 20 msec.¹ were erased and then re-recorded until they fit this criterion. Since the variation in word duration was 40 msec. (560-600 msec.), and the allowed difference in onset was 20 msec., the offset differences were limited to 60 msec.² Differences in onset and offset varied randomly between the two channels. (See Appendix 3 for samples of mingogram tracings.)

[Figure 4.1  Preparation of stimulus tapes: the Scully 280 and the Ampex AG 440B, in playback mode, feed channel 1 and channel 2 of the Revox A77 in record mode; a dual channel oscilloscope monitors both channels.]

¹ Berlin, Lowe-Bell, Cullen, Thompson and Loovis (1973) indicate that when the left ear stimulus lags 30 msec. behind the right, the REA disappears. Beyond this 30 msec. lag time, the left ear stimulus increases in intelligibility. The experimenter therefore set a criterion of a difference in onset time of 20 msec. or less. The direction of this difference was random.

A copy of this master tape was made on a second Revox A77. The inter-stimulus sections of the tape were carefully erased, in order to eliminate the repetition of stimuli by print-through. When the print-through could not be eliminated, inter-stimulus sections of the tape were cut out and leader tape spliced in their place. Each list on this copy tape was then separated and placed on a different reel, so that the order of presentation could be varied. The peak intensity of each word on the final tapes was measured with a Brüel and Kjaer precision sound level meter type 2203, through Sharpe HA 10 earphones and a Brüel and Kjaer artificial ear, type 4152. The difference in intensity between units of each pair was within 6 dB.³ The playback level controls were set at a marked position on the Scully, in order to maintain the signal at 72 dB SPL ± 3 dB. A calibration tone of 1000 Hz.
corresponding to a playback level of 72 dB SPL was placed on both tracks at the beginning of each tape. Whenever the tapes were played, these tones were checked on the VU meter to ensure that both channels were at an equal playback intensity and at a repeatable level from day to day.

² Spreen and Boucher (1970) have shown that offset differences of 60 msec. produce no significant effects on ear asymmetry; 60 msec. was therefore believed to be a safe cut-off point.

³ Thompson et al. (1972) have shown that the right ear, when receiving a speech signal 10 dB less intense than the left ear signal, can still obtain a superior score under dichotic conditions.

Due to some ambiguity in the results after testing had been completed on 13 subjects, the tapes were later modified for presentation in another series of tests. The purpose of these modifications was to make the tests more difficult, and therefore to increase errors. To do this, specific pairs and gaps were spliced from the tapes, changing the format of each test from pair-gap-pair-gap to pair-pair-gap. None of the four words within each group of double pairs was repeated. List #1 and list #2, in the second set of tests, each consisted of 22 word pairs. List #3 remained 32 pairs in length.

4.3 Subjects

The 17 subjects were all normal, right-handed North American adults between the ages of 20 and 35 years. For all subjects, English was the native language. Hearing screening was run on all subjects; the criterion for acceptance was bilateral symmetry of hearing within a range of 5 dB. No subject was rejected after hearing screening. Many of these subjects had been in the original sample for stimulus selection; however, this was not a prerequisite for acceptance into the actual testing situation. Thirteen subjects were involved in each of the two series of tests conducted.

4.4 Procedure

In both series of tests, the same experimental procedure was followed.
Each subject sat in a comfortable chair beside the tape recorder, facing the experimenter. Instructions were kept to a minimum; each subject was told that he/she would be required to listen to 4 tapes, and was asked to repeat, during the pauses on the tape, everything he/she had heard and could remember. Stimuli were presented over circumaural headphones (Sharpe HA 10), adjusted to fit snugly and comfortably. During testing, the experimenter copied down the subjects' responses in the order they were spoken.

The four tapes played were tape #1 (automatic-automatic word pairs), tape #2 (propositional-propositional word pairs), tape #3 (automatic-propositional word pairs), and tape #4 (propositional-automatic word pairs). Tape #3 and tape #4 were actually the same tape, but played on opposite channels. Order of presentation of the tapes was varied systematically from subject to subject, in order to randomize error in the results due to learning. Channel-by-ear presentation was also alternated, to minimize the effects of differences between channels. During the interval between tapes, while the reels were being changed, the subjects were given a short break. The testing session lasted from 20 to 30 minutes.

CHAPTER 5
RESULTS

5.1 Scoring

Four tests were run in each of two series. The results were examined in two different ways: according to accuracy, and according to order of report. When only one correct response was given to a dichotic pair, one point was given to the ear to which that word had been presented; an error was recorded for the other ear. Absence of response and combinations were included under the heading of errors; there were too few of the latter to consider them a separate phenomenon. When both words were repeated correctly after presentation of a dichotic pair, one point was given to the ear whose word was reported first; there were no errors in this case.
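The single-pair scoring rule just described can be sketched as follows; the data structures are hypothetical, and the example words are drawn from the stimulus lists in Appendix 2.

```python
# A minimal sketch of the single-pair scoring rule described above
# (illustrative structures; not code from the thesis).
def score_pair(right_word, left_word, responses):
    """Return (ear_credited, errors) for one dichotic pair, given the
    subject's responses in the order they were spoken."""
    correct = [w for w in responses if w in (right_word, left_word)]
    if not correct:                 # no correct response: both ears err
        return None, 2
    first = correct[0]
    ear = "right" if first == right_word else "left"
    if len(set(correct)) == 2:      # both words reported: no errors
        return ear, 0
    return ear, 1                   # one word missed: one error
```

For example, `score_pair("oh no", "redo", ["redo"])` credits the left ear and records one error for the right.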
When two pairs were presented together, the same scoring technique was followed. One point was given for the first correct response to each pair. Errors were recorded when utterances were incorrect or absent.

5.1.1 Series #1

In this set, it will be recalled, each dichotic pair was presented singly, followed by a 4-5 second pause during which the subject repeated what had been heard. As it turned out, this was a relatively easy task, and accuracy was high. In the four tests, overall accuracy ranged from 98.49% to 98.9%. Of the 3120 total words presented, there were only 41 errors. Despite this small number, however, percentage of error (P.O.E.) scores were tabulated. The P.O.E. score is defined as the left ear's percentage of the total errors (Krashen 1972). Therefore, when the right ear superiority increases, P.O.E. scores will rise; increasing deviation from 50% indicates increasing degrees of lateralization. P.O.E. scores were felt to be an appropriate measure of the dichotic listening effects here since, unlike other methods of scoring, which are negatively correlated with accuracy and would therefore indicate little on results such as these, P.O.E. scores have a weak positive correlation with accuracy, r = .2077.

In Test #1 (automatic-automatic), the P.O.E. score was 54.5%; in Test #2 (propositional-propositional), P.O.E. was 62.5%; in Test #3 (automatic to R.E., propositional to L.E.), and also in Test #4 (propositional to R.E., automatic to L.E.), P.O.E. was 50%.

A non-parametric statistical test, Wilcoxon's Matched-Pairs Signed-Ranks Test, was applied to the results of each subject within each test, according to first ear of report scores. Analysis of the scores of the 13 subjects in Test #1 (automatic-automatic pairs) shows that the right ear was reported first significantly more often than the left ear (T = 12, p < .01, one-tailed).
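Both measures used in this chapter are easy to restate in code. The sketch below gives the P.O.E. score and a plain implementation of the Wilcoxon matched-pairs signed-ranks T statistic (the smaller of the two signed-rank sums over the non-zero paired differences); the function names and example data are illustrative, not taken from the study.

```python
def poe(left_errors, right_errors):
    """Percentage of error: the left ear's share of all errors."""
    return 100.0 * left_errors / (left_errors + right_errors)

def wilcoxon_T(xs, ys):
    """Wilcoxon matched-pairs signed-ranks T for paired samples."""
    diffs = [x - y for x, y in zip(xs, ys) if x != y]   # drop zero diffs
    ranked = sorted(diffs, key=abs)
    # Assign ranks by |difference|, averaging ranks over ties.
    ranks = [0.0] * len(ranked)
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and abs(ranked[j]) == abs(ranked[i]):
            j += 1
        avg = (i + 1 + j) / 2.0          # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    pos = sum(r for d, r in zip(ranked, ranks) if d > 0)
    neg = sum(r for d, r in zip(ranked, ranks) if d < 0)
    return min(pos, neg)
```

With the Series #1 Test #1 error counts (6 left, 5 right), `poe(6, 5)` rounds to 54.5%, matching the P.O.E. reported above.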
In Test #2 (propositional-propositional pairs) there was also a significantly stronger right ear, as first ear, report (T = 15, p < .025, one-tailed). There was no significant difference between the results of Test #1 and Test #2 (T = 23.5). In Test #3 (automatic to R.E., propositional to L.E.), the difference between the right and left ear was not significant (T = 24.5). Similarly, in Test #4, the right ear report was not significantly greater than the left ear report (T = 25). There was also no significant difference between the results of Test #3 and Test #4 (T = 40.5).

                                  SERIES #1
                    Test #1      Test #2      Test #3      Test #4
                    R.E.  L.E.   R.E.  L.E.   R.E.  L.E.   R.E.  L.E.
                    auto. auto.  prop. prop.  auto. prop.  prop. auto.
First ear scores    236   128    222   141    250   159    246   168
Errors              5     6      3     5      6     6      5     5
Overall accuracy    98.49%       98.9%        98.56%       98.8%
P.O.E.              54.5%        62.5%        50%          50%
REA (order          significant  significant  non-         non-
of report)          p < .01      p < .025     significant  significant

Table 5.1  Results of the first series of dichotic tests.

5.1.2 Series #2

In this series of tests, two dichotic pairs were presented one immediately after the other, followed by a pause during which the subject repeated the stimuli heard. This proved to be a more difficult task than that presented in the first series; accuracy was lower, even though the actual dichotic pairs were not changed. Overall accuracy on these four tests ranged from 85.49% to 94.59%. Of the 2808 words presented during the tests, there were 226 errors. Although this number of errors is not large in itself, it was nevertheless sufficient to allow statistical analyses.

Percentage of error scores were as follows: Test #1, P.O.E. = 79.2%; Test #2, P.O.E. = 59.04%; Test #3, P.O.E. = 55.5%; Test #4, P.O.E. = 82%. Note the larger deviation from 50%, particularly in Test #1 (automatic-automatic) and in Test #4 (propositional to R.E., automatic to L.E.).

The Wilcoxon Matched-Pairs Signed-Ranks Test was applied both to (a) the results according to order of report and to (b) the subjects' errors. Analysis of the results from these tests adds support to the results from Series #1. In Test #1 (automatic-automatic), according to order of report, there was a significant right ear advantage (T = 13.5, p < .025). According to the distribution of errors, this effect was even more significant (T = 0, p < .005, one-tailed).

Test #2 (propositional-propositional) showed a barely significant REA according to order of report (T = 19, p < .05, one-tailed). According to errors, however, the right ear was significantly more efficient than the left ear (T = 10.5, p < .01, one-tailed). There was no statistical significance to the differences between Test #1 and Test #2 (T = 28.5).

In Test #3 (automatic to R.E., propositional to L.E.), the right ear, as first ear, report was not significantly greater than the left ear report; in fact, statistically, the scores were very close (T = 42.5). In the first set of tests, the results according to order of report were also insignificant. Examination of errors, however, indicates a significantly larger number at the left ear (T = 15.0, p < .025, one-tailed).

Test #4 (propositional to R.E., automatic to L.E.) demonstrated strong laterality. According to the order of report, a strong right ear effect was found (T = 1, p < .005, one-tailed). According to the distribution of errors, there was a strong and significant right ear advantage (T = 2, p < .005, one-tailed). This is strikingly different from the results of this test in the first series.

5.2 Subjects

Subjects' responses varied within and among themselves in the degree of laterality displayed on individual tests. While many subjects consistently reported "right ear words" first and produced more errors on "left ear words," for others the mode of response alternated from test to test.
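The accuracy figures in both series follow directly from the pair counts (28 pairs per list in Series #1; 22 and 32 in Series #2), the 13 subjects, the two words per pair, and the per-test error totals reported in Tables 5.1 and 5.2. A quick arithmetic check (the helper function is illustrative):

```python
# Overall accuracy per test: each test presents n_pairs * 2 words to
# each of 13 subjects; accuracy = (words presented - errors) / total.
def accuracy(n_pairs, errors, n_subjects=13):
    total = n_pairs * 2 * n_subjects
    return round(100.0 * (total - errors) / total, 2)

print(accuracy(28, 11))   # Series #1, Test #1: 98.49
print(accuracy(22, 83))   # Series #2, Test #2: 85.49
print(accuracy(32, 45))   # Series #2, Test #3: 94.59
```

These reproduce the percentages reported above, and the same totals (728 words per single-pair test, 572 and 832 per doubled-pairs test) sum to 2808 words for Series #2.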
One subject in each series gave consistently superior responses for the left ear stimuli, across tests. Their results were not excluded from analysis, since they had fit the criteria for subject selection; exclusion would have introduced deliberate sample bias.

                                  SERIES #2
                    Test #1      Test #2      Test #3      Test #4
                    R.E.  L.E.   R.E.  L.E.   R.E.  L.E.   R.E.  L.E.
                    auto. auto.  prop. prop.  auto. prop.  prop. auto.
First ear scores    185   99     174   106    211   203    310   105
Errors              10    38     34    49     20    25     9     41
Overall accuracy    91.61%       85.49%       94.59%       93.99%
P.O.E.              79.2%        59.04%       55.5%        82%
REA (order          significant  significant  non-         significant
of report)          p < .025     p < .05      significant  p < .005
REA (errors)        significant  significant  significant  significant
                    p < .005     p < .01      p < .025     p < .005

Table 5.2  Results of the second series of dichotic tests.

CHAPTER 6
DISCUSSION

Four dichotic tapes were presented to normal subjects. The first tape consisted of "automatic" word pairs, and the second contained "propositional" word pairs. The purpose of these two tests was to determine the extent of lateralization to the dominant hemisphere for automatic speech processing, and to compare this with lateralization for the processing of propositional speech. The third and fourth tapes, which were identical but presented on opposite channels, consisted of "automatic-propositional" word pairs. These tapes were designed to investigate the effects of competition between "automatic" and "propositional" words when alternately presented along the auditory pathways contralateral and ipsilateral to the dominant hemisphere.

The results of this experiment indicate that, in normal subjects, the left hemisphere is relatively more involved in the comprehension of automatic speech than is the right hemisphere.
Test #1, involving the dichotic presentation of automatic word pairs, yielded a strong right ear advantage under both the single-pair and doubled-pairs conditions. This right ear effect was not significantly different from that found in Test #2, which represents the degree of lateralization, in this group of subjects, for propositional linguistic stimuli.

It will be recalled that Van Lancker (unpublished) found no ear effect when automatic speech was presented dichotically in single pairs; Van Lancker's results are contrary to the results of this experiment. Her automatic stimulus words, however, were recorded "with natural intonation appropriate to each item." In the present experiment, automatic words were presented with regular falling intonation. It is possible that Van Lancker's subjects responded not only to the particular item, but also to the characteristic intonational contours. Milner (1962), in a study comparing left and right temporal lesions and lobectomies, found that "tonal memory" was significantly more affected by right temporal lobectomy than left. The results of Van Lancker's test may have been confounded by the introduction of a paralinguistic quality whose processing is lateralized to the minor hemisphere. Results from our tests, however, are consistent with those found by Van Lancker in her doubled-pairs condition.

Contrary to our expectations, it would seem that the comprehension of automatic words, in normal subjects, is lateralized to at least the same degree as propositional words. There are, however, indications that despite equal lateralization, the automatic words are processed differently, at least at the sub-cortical level. The limited scope of this experiment does not allow comment on differences in processing within the same hemisphere.

Test #3 and Test #4 were used to investigate the effects of competition between automatic and propositional words.
It is assumed that in Test #3, automatic words travelled along the contralateral pathways to the dominant hemisphere, and propositional words along the ipsilateral pathways. According to the order of report, in both series, there was no significant right ear effect. If automatic words were processed exactly like propositional words, we would expect a REA comparable to that found in Test #1 and Test #2. This was not found. According to Kimura (1967), the critical measurement of an ear effect is the distribution of errors. A statistically significant right ear advantage was found according to errors, but this ear effect was small when compared with the results of the other tests. There would seem to be some factor which differentiates this condition from the others.

We have seen that dissimilar stimuli travelling along the contralateral pathways (e.g. distorted words, steady-state vowels, noise) cause less suppression of linguistic stimuli travelling along ipsilateral pathways than do stimuli of the identical mode. The results of Test #3 could be explained if automatic speech constitutes a mode of speech in some measure dissimilar to propositional speech. The automatic words travelling along the contralateral pathways would cause less suppression of the propositional words travelling along the ipsilateral pathways. These latter words would thus be stronger than under normal dichotic conditions on their arrival at the processing centre in the left hemisphere, and would be augmented by signals coming across the corpus callosum from the right hemisphere. There would therefore be more competition from these stimuli at the cortical level. This could account for the smaller asymmetry in ear effect.

In Test #4, i.e. the reverse condition, propositional words travelled along the pathways contralateral to the dominant hemisphere, and automatic words travelled along the ipsilateral pathways.
Although under the single-pair condition, according to order of report, the right ear effect was not statistically significant, under the more difficult condition, which tends to enhance an ear effect (Bryden, 1967), there was an extremely strong REA, both according to order of report and according to the distribution of errors. These asymmetric results can be simply explained by the theorizing behind dichotic listening, i.e. the stronger contralateral connections to the dominant hemisphere, along with the suppression of stimuli along the ipsilateral pathways. The extreme strength of the asymmetry may be due to an easier suppression of automatic words, or to greater competition at the cortical level by the robust propositional stimuli.

The ear effect found in Test #4 was significantly greater than that of Test #3, despite the fact that the identical word pairs were used. The only difference between these two conditions was the ear of input for automatic and propositional words. This implies that the cause of the differentiation does lie in the effects on interaction between these modes of language and the different pathways. Automatic speech seems to constitute a different mode of language, as our model has suggested. Yet the corollary to this model, that automatic speech is bilaterally represented in the two hemispheres, is not supported by our data. Therefore, we cannot reject the possibility that the minor hemisphere of the normal brain simply does not have the ability to comprehend automatic speech. There are some explanations, however, which help to interpret our results without contradicting our assumptions of the underlying capabilities of the minor hemisphere to process this type of language.

It may be, as Van Lancker suggested, that subjects become "set for language and process the stimuli in the leading (language) hemisphere, including such sounds as can be processed in both hemispheres."
(Van Lancker, Chapter 4, page 51). Kinsbourne (1970) hypothesized that there are "attentional asymmetries" which strengthen laterality to the hemisphere most actively involved in executing a certain task. According to our model, automatic speech is as strongly represented in the left hemisphere as in the right hemisphere. It is possible that the degree of automatic language processing in the left hemisphere is enhanced by the "set" for linguistic material. This could perhaps account for the relative superiority of the right ear stimuli in the automatic speech dichotic condition. It must be borne in mind that the theory of bilateral representation of automatic language is based primarily on evidence from studies of the sub-normal brain.

Another possible explanation for the strong laterality of automatic speech comprehension to the dominant hemisphere might be that, although in the normal brain the minor hemisphere has the capacity to become involved in automatic speech processing, it is inhibited from doing so in the presence of the intact dominant hemisphere. Moscovitch (1973), after conducting studies of reaction time to verbal stimuli, reported that in normal subjects the results reflected only the competence of the dominant hemisphere. Aware of the evidence for linguistic capabilities in the minor hemisphere of the pathological brain, he suggested a model of "functional localization" of language to the dominant hemisphere. This model states that, although

. . . the minor hemisphere may have the competence to perform adequately on tasks requiring verbal skills, its competence will be masked by the control the dominant hemisphere can exert via interhemispheric and other pathways. (Moscovitch, 1973, p. 114)

This model could account for the minor hemisphere's improved performance on verbal tasks when it has become dissociated from the dominant hemisphere's suppressive influence (e.g. after dominant hemispherectomy).
It can also explain our finding of strong lateralization of automatic speech to the dominant hemisphere in our sample of normal subjects.

What both of these explanations suggest is that in the normal brain, the left hemisphere, due to its "set" for linguistic stimuli or its suppressive influence, will functionally dominate the minor hemisphere in the processing of any level of language. Therefore the results of our test could be understood as a reflection of the actual participation of the normal minor hemisphere in the comprehension of automatic speech, but not a reflection of its underlying capability to become involved in this function.

Following this line of reasoning, however, leads to the assumption that in dichotic listening tests using linguistic stimuli, there would consistently be a REA. Yet Van Lancker has found no ear effect for dichotically presented familiar propositional words. This result would suggest that the minor hemisphere can and does become involved to an almost equal extent in the comprehension of certain linguistic stimuli. As yet, there is insufficient data to reconcile these different positions.

One final factor which could have affected results should be mentioned: there are certain limitations inherent in the dichotic listening technique for the examination of the automatic level of language. Automatic speech is understood to be highly bound to situation, while the method of dichotic listening requires artificial extraction of words and phrases from context. Also, automatic speech is a concept at the level of language use; the same propositional-automatic distinction may not exist in the comprehension mode, which dichotic listening tests investigate.

Thus, two areas of research have yet to be examined before any definite conclusions can be reached.
First, the differences between the production and comprehension of automatic speech must be studied in order to better understand the results of our dichotic listening tests. Secondly, the complex relationship between normal and brain-damaged processing of language is still unknown. Since automatic speech is a phenomenon common to normal and pathological language, further study may provide important data for this investigation. The technique of electroencephalography (EEG), which can be safely applied to both normal subjects and brain-damaged patients, and which can be used to examine the different degrees of hemispheric involvement in both the production and comprehension modes of automatic language, may prove the most fruitful method of investigation for providing additional information on this topic.

Within the limitations of this present research, then, several points have been made. First, there appears to be, as clinical evidence strongly suggests, a meaningful difference between automatic and propositional modes of expression. This was reflected in the effects of competition of these two types of stimuli along the contralateral and ipsilateral auditory pathways.

Second, comprehension of automatic language, in normal subjects, was shown to be lateralized to the dominant hemisphere to the same extent as propositional language. This may be because comprehension of automatic language is lateralized more to the dominant hemisphere than is its production.

Third, it may be that in the normal brain, the dominant hemisphere's ability to comprehend automatic speech is enhanced by a "set" for language processing, or by its suppression of the minor hemisphere's ability to become involved in linguistic functioning.
Finally, it is possible that by extracting our words and phrases from intonation and context, our experimental stimuli, while representing a different mode of language, may not fall under the strict definition of automatic language.

It would thus appear that present evidence for the hypothesis that the minor hemisphere, in the normal as well as in the sub-normal brain, has the capability to become involved with language processing at the level of automatic speech is still inconclusive.

REFERENCES

ALAJOUANINE, T. (1956). "Verbal realization in aphasia," Brain 79: 1-28.

BASSER, L. (1962). "Hemiplegia of Early Onset and the Faculty of Speech with Special Reference to the Effects of Hemispherectomy," Brain 85: 427-460.

BERLIN, C., LOWE-BELL, S., CULLEN, J., THOMPSON, C., and LOOVIS, C. (1973). "Dichotic Speech Perception: An Interpretation of Right Ear Advantage and Temporal Offset Effects," J. Acoust. Soc. Amer. 53: 699-709.

BERLIN, C., LOWE-BELL, S., PORTER, R., BERLIN, H., and THOMPSON, C. (1973). "Dichotic Signs of the Recognition of Speech Elements in Normals, Temporal Lobectomees, and Hemispherectomees," IEEE Group on Audio and Electroacoustics Transactions.

BOCCA, E., CALEARO, C., and CASSINARI, V. (1954). "A New Method for Testing Hearing in Temporal Lobe Tumours," Acta Oto-Laryngol. 44: 219-221.

BONIN, G., von (1962). "Anatomical Asymmetries of the Cerebral Hemispheres," in Interhemispheric Relations and Cerebral Dominance, Mountcastle, V.B., ed. (Johns Hopkins Press, Baltimore), 1-6.

BROADBENT, D. (1954). "The Role of Auditory Localization in Attention and Memory Span," J. Exp. Psychol. 47: 191-196.

BROCA, P. (1865). "Sur la faculté du langage articulé," Bull. Soc. Anthropol. Paris 6: 493-494, as cited in Zangwill, O.L. (1960), "Cerebral Dominance and its Relation to Psychological Function."

BRYDEN, M.P. (1967). "An Evaluation of Some Models of Laterality Effects in Dichotic Listening," Acta Oto-Laryngologica 63: 595-604.
CROCKETT, H.G., and ESTRIDGE, N.M. (1951). "Cerebral Hemispherectomy: A Clinical, Surgical and Pathologic Study of Four Cases," Bull. Los Angeles Neurol. Soc. 16: 71-87.

CURRY, F. (1968). "A Comparison of the Performance for a Right Hemispherectomized Subject and Twenty-five Normals on Four Dichotic Listening Tasks," Cortex 4: 144-153.

DI CHIRO, G. (1962). "Angiographic Patterns of Cerebral Convexity Veins and Superficial Dural Sinuses," Amer. J. Roentgenol. 87: 308-321, as quoted in Brain Mechanisms Underlying Speech and Language, Millikan, C. and Darley, F., eds.

DIRKS, D. (1964). "Perception of Dichotic and Monaural Verbal Material and Cerebral Dominance in Speech," Acta Oto-Laryngol. 58: 73-80.

FALCONER, M.S. (1967). "Brain Mechanisms Suggested by Neurophysiologic Studies," in Brain Mechanisms Underlying Speech and Language, Millikan, C. and Darley, F., eds. (Grune and Stratton, New York), 185-190.

FROMKIN, V., KRASHEN, S., CURTISS, S., RIGLER, D., and RIGLER, M. (1974). "The Development of Language in Genie: A Case of Language Acquisition Beyond the 'Critical Period,'" Brain and Language 1: 81-107.

GAZZANIGA, M.S. (1970). The Bisected Brain, (Appleton, New York).

GAZZANIGA, M.S., and HILLYARD, S.A. (1971). "Language and Speech Capacity of the Right Hemisphere," Neuropsychologia 87: 415-422.

GESCHWIND, N. (1967). In Brain Mechanisms Underlying Speech and Language, Millikan, C. and Darley, F., eds. (Grune and Stratton, New York), 103-107.

GESCHWIND, N., and LEVITSKY, W. (1968). "Human Brain: Left-Right Asymmetries in Temporal Speech Region," Science 161: 186-187.

GOLDMAN-EISLER, F. (1964). "Hesitation and Information in Speech," in Disorders of Language, De Reuck, A.V.S. and O'Connor, M., eds. (Little, Brown and Co., Boston), 96-111.

GOTT, P. (1973). "Language after Dominant Hemispherectomy," J. Neurol. Neurosurg. Psychiat. 36: 1082-1088.

HÉCAEN, H., and SAUGUET, J. (1971). "Cerebral Dominance in Left-Handed Subjects," Cortex 7: 19-48.
JACKSON, J.H. (1958). Selected Writings of John Hughlings Jackson, 2 vols., Taylor, J., ed. (Staples Press, London).

KIMURA, D. (1961a). "Cerebral Dominance and the Perception of Verbal Stimuli," Can. J. of Psychol. 15: 166-171.

KIMURA, D. (1961b). "Some Effects of Temporal Lobe Damage on Auditory Perception," Can. J. of Psychol. 15: 156-165.

KIMURA, D. (1964). "Left-Right Differences in the Perception of Melodies," Quart. J. Exp. Psychol. 16: 355-358.

KIMURA, D. (1967). "Functional Asymmetry of the Brain in Dichotic Listening," Cortex 3: 163-178.

KIMURA, D., and FOLB, S. (1968). "Neural Processing of Backwards Speech Sounds," Science 161: 395-396.

KINSBOURNE, M. (1970). "The Cerebral Basis of Lateral Asymmetries in Attention," Acta Psychologica 33: 193-201.

KINSBOURNE, M. (1971). "The Minor Cerebral Hemisphere as a Source of Aphasic Speech," Arch. Neurol. 25: 302-306.

KRASHEN, S. (1972). "Language and the Left Hemisphere," UCLA Working Papers in Phonetics 24.

LENNEBERG, E. (1967). Biological Foundations of Language, (Wiley, New York).

LURIA, A.R. (1966). Higher Cortical Functions in Man, (Basic Books Inc., New York).

LURIA, A.R. (1970). Traumatic Aphasia, Its Syndromes, Psychology and Treatment, (Mouton, The Hague).

MILNER, B. (1962). "Laterality Effects in Audition," in Interhemispheric Relations and Cerebral Dominance, Mountcastle, V.B., ed. (Johns Hopkins Press, Baltimore), 177-195.

MILNER, B., BRANCH, C., and RASMUSSEN, T. (1964). "Observations on Cerebral Dominance," in Language, Oldfield, R. and Marshall, J., eds. (1968), (Penguin), 366-378.

MILNER, B., TAYLOR, L., and SPERRY, R. (1968). "Lateralized Suppression of Dichotically Presented Digits after Commissural Section in Man," Science 161: 184-186.

MOSCOVITCH, M. (1973). "Language and the Cerebral Hemispheres: Reaction Time Studies and Their Implications For Models of Cerebral Dominance," in Communication and Affect: Language and Thought, Pliner, P., Krames, L. and Alloway, T., eds.
(Academic Press, New York), 89-126.

PENFIELD, W., and ROBERTS, L. (1959). Speech and Brain Mechanisms, (Princeton University Press, Princeton).

ROSENZWEIG, M. (1951). "Representations of the Two Ears at the Auditory Cortex," Amer. J. Physiol. 167: 147-158.

ROSSI, G.F., and ROSADINI, G. (1967). "Experimental Analysis of Cerebral Dominance in Man," in Brain Mechanisms Underlying Speech and Language, Millikan, C. and Darley, F., eds. (Grune and Stratton, New York), 167-175.

SMITH, A. (1966). "Speech and Other Functions after Left (Dominant) Hemispherectomy," J. Neurol. Neurosurg. Psychiat. 29: 467-471.

SMITH, A., and BURKLUND, C.W. (1966). "Dominant Hemispherectomy: Preliminary Report on Neuropsychological Sequelae," Science 153: 1280-1282.

SPARKS, R., and GESCHWIND, N. (1968). "Dichotic Listening in Man after Section of Neocortical Commissures," Cortex 4: 3-16.

SPARKS, R., GOODGLASS, H., and NICKEL, B. (1970). "Dichotic Listening after Hemisphere Lesions," Cortex 6: 249-260.

SPERRY, R.W., and GAZZANIGA, M.S. (1967). "Language following Surgical Disconnection of the Hemispheres," in Brain Mechanisms Underlying Speech and Language, Millikan, C. and Darley, F., eds. (Grune and Stratton, New York), 108-115.

SPREEN, O., and BOUCHER, A.R. (1970). "Effects of Low Pass Filtering on Ear Asymmetry in Dichotic Listening and Some Uncontrolled Error Sources," J. Aud. Res. 10: 45-51.

STUDDERT-KENNEDY, M., and SHANKWEILER, D. (1970). "Hemispheric Specialization for Speech Perception," J. Acoust. Soc. Amer. 48: 579-594.

THOMPSON, C., STAFFORD, M., CULLEN, J., HUGHES, L., LOWE-BELL, S., and BERLIN, C. (1972). "Interaural Intensity Differences in Dichotic Speech Perception," 83rd Meeting Acoust. Soc. Amer., Buffalo, New York.

TUNTURI, A.R. (1946). "A Study of the Pathway from the Medial Geniculate Body to the Acoustic Cortex in the Dog," Amer. J. Physiol. 147: 311-319.

VAN LANCKER, D. (unpublished). "Heterogeneity in Language and Speech: Neurolinguistic Studies."
WADA, J., and RASMUSSEN, T. (1960). "Intracarotid Injection of Sodium Amytal for the Lateralization of Cerebral Dominance: Experimental and Clinical Observations," J. Neurosurg. 17: 266-282.
WEISENBERG, T.H., and McBRIDE, K.E. (1964). Aphasia, a Clinical and Psychological Study, (Hafner Publishing Co., New York).
ZANGWILL, O.L. (1960). Cerebral Dominance and its Relation to Psychological Function, (Oliver and Boyd, Edinburgh).
ZANGWILL, O.L. (1967). "Speech and the Minor Hemisphere," Acta Neurol. Belg. 67: 1013-1020.
ZOLLINGER, R. (1935). "Removal of Left Cerebral Hemisphere: Report of a Case," Arch. Neurol. Psychiat. 34: 1055-1062.

APPENDIX 1

Duration (in msecs.) of Stimulus Words

"Automatic" Words
1. oh yeah      600
2. oh shit      580
3. what for     570
4. thank you    570
5. O.K.         560
6. hello        560
7. oh no        560
8. all right    560

"Propositional" Words
1. redo         600
2. amount       600
3. remove       600
4. birthday     600
5. airplane     585
6. lively       580
7. upset        565
8. review       565

APPENDIX 2

Lists of Dichotic Word Pairs

List of "Automatic - Automatic" Word Pairs, Test #1, Series #1
1. what for - oh no
2. O.K. - all right
3. thank you - oh no
4. all right - hello
5. thank you - O.K.
6. oh no - O.K.
7. hello - oh no
8. thank you - oh yeah
9. O.K. - oh shit
10. oh shit - hello
11. oh no - oh shit
12. what for - oh yeah
13. oh shit - what for
14. oh yeah - O.K.
15. all right - thank you
16. hello - oh yeah
17. hello - what for
18. what for - O.K.
19. all right - oh shit
20. oh no - all right
21. O.K. - hello
22. what for - all right
23. hello - thank you
24. oh yeah - all right
25. oh shit - thank you
26. oh no - oh yeah
27. thank you - what for
28. oh yeah - oh shit

List of "Automatic - Automatic" Word Pairs, Test #1, Series #2
1. what for - oh no
   O.K. - all right
2. thank you - oh no
   all right - hello
3. hello - oh no
   thank you - oh yeah
4. oh no - oh shit
   what for - oh yeah
5. oh shit - what for
   oh yeah - O.K.
6. all right - thank you
   hello - oh yeah
7. what for - O.K.
   all right - oh shit
8. oh no - all right
   O.K. - hello
9. what for - all right
   hello - thank you
10. oh yeah - all right
    oh shit - thank you
11. oh no - oh yeah
    thank you - what for

List of "Propositional - Propositional" Word Pairs, Test #2, Series #1
1. review - amount
2. remove - birthday
3. airplane - amount
4. birthday - redo
5. airplane - remove
6. amount - remove
7. redo - amount
8. airplane - upset
9. remove - lively
10. lively - redo
11. amount - lively
12. review - upset
13. lively - review
14. upset - remove
15. birthday - airplane
16. redo - upset
17. redo - review
18. review - remove
19. birthday - lively
20. amount - birthday
21. remove - redo
22. review - birthday
23. redo - airplane
24. upset - birthday
25. lively - airplane
26. amount - upset
27. airplane - review
28. upset - lively

List of "Propositional - Propositional" Word Pairs, Test #2, Series #2
1. review - amount
   remove - birthday
2. airplane - amount
   birthday - redo
3. redo - amount
   airplane - upset
4. amount - lively
   review - upset
5. lively - remove
   upset - remove
6. birthday - airplane
   redo - upset
7. review - remove
   birthday - lively
8. amount - birthday
   remove - redo
9. review - birthday
   redo - airplane
10. upset - birthday
    lively - airplane
11. amount - upset
    airplane - review

List of "Automatic - Propositional" Word Pairs, Test #3/4, Series #1/2
1. oh no - lively
2. oh shit - amount
3. what for - birthday
4. all right - review
5. thank you - remove
6. O.K. - airplane
7. hello - upset
8. oh yeah - redo
9. oh no - airplane
10. oh shit - remove
11. what for - redo
12. all right - upset
13. thank you - amount
14. O.K. - lively
15. hello - review
16. oh yeah - birthday
17. oh no - redo
18. oh shit - upset
19. what for - airplane
20. all right - remove
21. thank you - review
22. O.K. - birthday
23. hello - amount
24. oh yeah - lively
25. oh no - upset
26. oh shit - redo
27. what for - remove
28. all right - airplane
29. thank you - birthday
30. O.K. - review
31. hello - lively
32. oh yeah - amount

APPENDIX 3

Samples of Mingogram Tracings of Word Pairs (SCALE: 20 msec/small division)

[Tracings not reproducible in this text version. Word pairs illustrated: lively - redo; hello - review; thank you - what for; what for - birthday; oh yeah - amount; hello - what for; all right - thank you; O.K. - review; review - birthday; lively - airplane; redo - amount; all right - review.]
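A note on the structure of the Appendix 2 lists: Series #1 of Tests #1 and #2 each contain 28 items, which is exactly the number of unordered pairings of the eight stimulus words in the corresponding Appendix 1 list (8 choose 2 = 28). The following sketch (Python; not part of the original thesis, and the enumeration order is illustrative rather than the randomized order actually used) shows that correspondence for the "automatic" words:

```python
# Sketch: the 28 "automatic - automatic" pairs of Test #1, Series #1
# are the unordered pairings of the eight "automatic" words (C(8,2) = 28).
from itertools import combinations

automatic = ["oh yeah", "oh shit", "what for", "thank you",
             "O.K.", "hello", "oh no", "all right"]

pairs = list(combinations(automatic, 2))
print(len(pairs))  # 28
```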

