Unimodal and Multimodal Communication with Hearing Impaired Students. Pudlas, Kenneth Arthur (1983).

UNIMODAL AND MULTIMODAL COMMUNICATION WITH HEARING IMPAIRED STUDENTS

by

KENNETH ARTHUR PUDLAS

M.A., The University of British Columbia, 1981

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF EDUCATION in THE FACULTY OF GRADUATE STUDIES, Faculty of Education (Educational Psychology and Special Education)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA, December 1983

© Kenneth Arthur Pudlas, 1983

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Educational Psychology and Special Education
The University of British Columbia, 1956 Main Mall, Vancouver, Canada V6T 1Y3
DE-6 (3/81)

ABSTRACT

The purpose of this study was to gather data on hearing impaired students' reception of language presented through each of five communication modes: oral (speechreading), aural (audition), manual (signs), oral-aural (speechreading plus audition), and simultaneous (speechreading plus audition plus signs). The 106 subjects (53 females, 53 males) had a mean age of 175.4 months and a mean hearing threshold level (HTL) of 97.7 dB. Other personal and demographic data are reported.
The procedure utilized a within-subjects design and four lists of stimulus sentences which were constructed so as to control for vocabulary level and visemic content of lexical items, phrase and syntactic structure, and length. To ensure consistency across trials, sentence lists were videotaped. Each subject received twelve sentences through each of the five modes. After each sentence, subjects were allotted 30 seconds in which to write it in the appropriate blanks in an answer booklet. One mark was awarded for each correct word in the appropriate blank, for a possible maximum score of 57 for each mode. The highest scores were obtained under simultaneous (X = 33.2) and manual (X = 31.5), which were greater (p < .01) than other modes. The score for oral-aural (X = 7.3) was higher (p < .05) than oral (X = 3.8) or aural (X = 3.1). None of the other differences were significant. Separate analyses were performed to determine the effect of a number of personal and demographic variables on subjects' performance under each mode. Results of the multiple stepwise regression procedure indicated that subjects' syntactic ability accounted for a large proportion of the variance in all but the aural mode. Effects of independent variables varied between modes, emphasizing the difficulties inherent in matched-sample best-method studies. Results are discussed in light of various theories of cognitive processing and selective attention. Implications and suggestions for further study are presented.
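The scoring rule described above (one mark per correct word written in the correct blank, twelve sentences per mode, maximum 57) can be sketched as follows. This is an illustrative sketch only: the function names and the representation of responses as word lists are assumptions, not taken from the study's materials.

```python
def score_sentence(target_words, answer_words):
    """One mark for each correct word written in the correct
    (position-matched) blank; blanks left empty earn nothing."""
    return sum(1 for target, answer in zip(target_words, answer_words)
               if answer is not None and target.lower() == answer.lower())

def score_mode(sentence_targets, sentence_answers):
    """Total score for one communication mode; with the study's lists
    (twelve sentences, 57 scorable words) the maximum is 57."""
    return sum(score_sentence(t, a)
               for t, a in zip(sentence_targets, sentence_answers))

# Illustrative example: two of three blanks filled correctly.
print(score_sentence(["the", "dog", "ran"], ["the", None, "ran"]))  # 2
```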
TABLE OF CONTENTS

ABSTRACT
LIST OF TABLES
LIST OF FIGURES
Acknowledgements

CHAPTER ONE - INTRODUCTION AND DELINEATION OF THE PROBLEM
    Introduction to the Problem
    Background of the Problem
        Language reception
    Research Problem
    Research Questions
    Chapter Summary

CHAPTER TWO - TERMINOLOGY AND REVIEW OF PERTINENT RESEARCH
    Terminology
        Hearing Impairment
        Communication, Language, Speech
            Communication
            Language
            Speech
    Language Development
    Language Reception
        Speech Reception
            Speech sounds
            Acoustic cues
            Visual cues
            Kinesthesis
            Tactile sense
        Combined Modalities
    Communication Modes/Methods
        The oral method
        The auditory method
        The Rochester method
        The simultaneous method
    Non-Oral Communication
        Formal manual communication
    Methodology Comparisons
    Perception/Processing Models
    Hypotheses
        Background and Rationale for Hypotheses
        Operational Statement of Hypotheses
    Chapter Summary

CHAPTER THREE - METHODOLOGY
    Subjects
    Dependent Variable
    Independent Variables
        Measurement of independent variables
    Development of Experimental Materials
    Pilot Study
        Pilot Test Subjects
        Pilot Test Materials
        Apparatus
        Pilot Procedure
            Experimental task
            Scoring
        Pilot Test Data Analysis
        Pilot Test Results
    Main Study
        Procedure
            Experimental design
            Apparatus
            Scoring
        Data Analysis Procedures
            Differences between modes
            Differences between subjects
    Chapter Summary

CHAPTER FOUR - RESULTS
    Subjects
        Category I variables
        Category II variables
        Category III variables
        Category IV variables
    Item Level Analysis
        Internal consistency
        Order of presentation
    Differences Between Modes
    Individual Differences
        Regression for Oral mode
        Regression for Oral-Aural mode
        Regression for Manual mode
        Regression for Simultaneous mode
        Regression for Aural mode
    Chapter Summary

CHAPTER FIVE - DISCUSSION, IMPLICATIONS, LIMITATIONS
    Inter-Mode Comparison
    Discussion of Individual Differences
        Oral mode
        Oral-Aural mode
        Manual mode
        Simultaneous mode
        Aural mode
    Limitations of Study
    Suggestions for Further Research
    Summary

REFERENCES
APPENDIX A - Questionnaire
APPENDIX B - Lexical Item Pool
APPENDIX C - Sentence Lists
APPENDIX D - Test Booklet
APPENDIX E - Partial Correlations of Variables

LIST OF TABLES

Table 1  Age and HTL of Pilot Test Subjects
Table 2  Results of Pilot Test
Table 3  ANOVA for Pilot Data
Table 4  Randomization Plan for (Modified) Latin Square Design
Table 5  Oneway ANOVAs for Effect of Order
Table 6  Comparison Among Oral Mode Means
Table 7  Rank by Order of Presentation
Table 8  Summary of Means and Standard Deviations
Table 9  Analysis of Variance for Mode Means
Table 10 Comparison Among Mode Means
Table 11 Summary of Stepwise Regression for Oral Mode Data
Table 12 Summary of Stepwise Regression for Oral-Aural Mode Data
Table 13 Summary of Stepwise Regression for Manual Mode Data
Table 14 Summary of Stepwise Regression for Simultaneous Mode Data
Table 15 Summary of Stepwise Regression for Aural Mode Data
Table 16 Partial Correlations for Oral Mode Variables Prior to Regression
Table 17 Oral Mode Partial Correlations After Forced Entry of Significant Variables
Table 18 Partial Correlations for Oral-Aural Mode Variables Prior to Regression
Table 19 Oral-Aural Mode Partial Correlations After Forced Entry of Significant Variables
Table 20 Partial Correlations for Manual Mode Variables Prior to Regression
Table 21 Manual Mode Partial Correlations After Forced Entry of Significant Variables
Table 22 Partial Correlations for Simultaneous Mode Variables Prior to Regression
Table 23 Simultaneous Mode Partial Correlations After Forced Entry of Significant Variables
Table 24 Partial Correlations for Aural
Mode Prior to Regression
Table 25 Aural Mode Partial Correlations After Forced Entry of Significant Variables
Table 26 Demographic Characteristics and Mode Scores of Subjects Retained in Regression Analyses

LIST OF FIGURES

Figure 1 Adaptation of Broadbent (1958) filter theory
Figure 2 Variables (dependent and independent) considered in present study

Acknowledgements

Credit for completion of this dissertation must be shared with a number of individuals. Thanks to Dr. Bryan R. Clarke, the Chairman of my committee, who has also been my patient mentor and wise counsellor through much of my academic career. Thanks also to Dr. David Kendall who provided thoughtful input during the conceptualization of the study and to Dr. Perry Leslie who taught me much of what I know about teaching hearing impaired students. Dr. Todd Rogers is responsible for much of any knowledge I may have regarding statistics. Thanks also to Betty, Marie Anne and Margaret for their help and encouragement and to my fellow graduate students for their friendship. I am grateful to the members of my family. They provided much needed encouragement and support. Also, Matthew and Tamara are to be commended. They were patient with their dad even though at the outset they were too young to comprehend why he wanted to be a doctor-daddy and not just a daddy. The final thanks is to my wife, Mary-Lynn. Thank you for typing the many drafts and revisions of this dissertation. Even more important, thank you for your unwavering faith and support. Without you this project would never have been completed.

CHAPTER ONE

INTRODUCTION AND DELINEATION OF THE PROBLEM

Introduction to the Problem

It has been suggested that one of the crucial problems in the education of the deaf has been the inability of professional educators to resolve the oral-manual controversy (White & Stevenson, 1975, p. 48).
Furth (1966) suggested the reason deaf children do not acquire proficiency in language is because it is taught too late, in an unreasonable medium, in an unnatural way, and by the wrong person. Oyer (1976), in expressing a similar concern, stated "Perhaps the greatest single challenge is in the further development of a scientific data base upon which a prescriptive approach to communication habilitation-rehabilitation can be designed" (p. 5). These statements serve to point out that the critical issue of how to develop communication skills most effectively with hearing impaired persons, especially children, has yet to be resolved on the basis of any conclusive empirical data. The purpose of this research was to add in some measure to that scientific data base by obtaining empirical data on hearing impaired students' reception of language through single and combined sensory modalities and presented via various communication modes such as speech reading, sign, and audition, both alone and in combinations.

Background of the Problem

Language reception. Regardless of whether language is learned or is inherently acquired, the fact remains that in order to develop communicative competence children must be able to receive and perceive the language of their environment. Communication is an interactive dynamic process and therefore a child must be able to both decode the linguistic intent or meaning of received language and also encode his or her own meaningful messages and express them to others. This process and the related terms will be more fully described in chapter two. For most children with normal hearing the primary mode of receiving linguistic information is audition. For many hearing impaired children, and for most deaf children, audition alone simply is not an efficient means of receiving language. Other senses may need to be used in order to perceive and receive language.
In order to supplement, and in some instances supplant, the incomplete messages received through audition, vision can be an important receptor. Visual forms would include manual modes, speech reading, and graphics. In fact it has been suggested (Vernon, 1972) "that sign language is unequivocally one of the most useful coping mechanisms of deafness" (p. 51). A different view is represented by the statement of Goetzinger and Proud (1975) that "Obviously the language of signs is circumscribed. It is not well suited either for abstract thought or for expressing shades of meaning" (p. 19). Inherent in these two points of view is the oral-manual conflict which has existed for many years. In an attempt to circumvent this conflict educators have for some time advocated a combined or simultaneous mode of communication. In recent years the term "total communication" has been used to describe the use of residual audition, speech reading, sign, fingerspelling, and graphics in communicating with hearing impaired persons. The Conference of Executives of American Schools for the Deaf (1976) stated:

Total communication is a philosophy requiring the incorporation of appropriate aural, manual, and oral modes of communication in order to ensure effective communication with and among hearing impaired persons. (p. 358)

Although the term may be relatively new, the concept of using a combined method has existed for many years (Clarke, 1972) since it is reported to have had its introduction in the sixteenth century (Garretson, 1976). The discussion regarding whether total communication is a philosophy or a methodology is beyond the purpose of this research. Pioneering studies of the use of audition and vision combined have been reported by Ewing (1944), Hudgins (1953), and Clarke (1954). More recent studies have been conducted by Gaeth (1967), Gates (1970), Carson and Goetzinger (1975), White and Stevenson (1975), Beckmeyer (1976), and Titus (1978).
These researchers used various combinations of communication modes in a variety of research designs and subjects of various ages to compare the effectiveness of those modes in communication. In a number of the studies nonsense syllables and signs were used; in others, pedagogical conditions and regular teaching methods were used. Not all the studies, therefore, compared modes under conditions which had ecological validity or pedagogical relevance. In order to interpret the implications of the results of the comparative studies it is useful to know something of the models and theories of how unimodal or multimodal information is processed. Ling (1976) has described a theoretical model of serial and parallel processing of linguistic cues. In discussing the potential problem of competing cues, that is, audition and vision, Ling stated, "Whatever the case, one still has the problem of two widely different modes competing for simultaneous attention and memory capacity. In the absence of research we speculate that if a total communication approach were adopted, serial - rather than parallel - presentation of speech and sign would be advisable at least in the early stages of acquisition" (p. 61). The paucity of empirical data in this area is acknowledged by Clarke (in progress) who stated that "Further research should also concern the congruencies of sign and English to what is known about the processing capacities of the developing deaf child (Menyuk, 1974) and the amount and structure of visual-motor information that can be stored for retrieval by very young deaf children (Menyuk, 1976)." The problem as seen by Carson and Goetzinger (1975) is associated primarily with the processing qualities of the human organism and involves the issue of the extent to which bisensory stimulation may or may not be more efficient for learning than either modality alone.
No one methodology is likely to be appropriate for all hearing impaired individuals at all stages of their development. Klopping (1972) suggested that a substantial proportion of children fail to acquire any effective communication. It has been succinctly stated by Moores (1972) that, "If future research concerns itself with searching for indicators of the best match for a particular child at a particular stage of development then our children will be better off" (p. 10). It was with this background and within this context that the present research problem was developed.

Research Problem

The purpose of this study was twofold: first, to determine the relative efficiency of five unisensory and multisensory modes of communication used to present linguistic information to persons with impaired hearing and second, to identify the subject variables which affect language reception under the five modes. The study differed in several aspects from some of the more recent studies in that it assessed reception of meaningful connected speech rather than nonsense material or individual words. It also differed in that pedagogically relevant communication modes were replicated. A further difference is that the investigation went beyond a search for the most efficient mode and examined individual subject characteristics as possible sources of variance in performance on the reception tasks for each mode. As part of the overall purpose, the study was designed so that the researcher could determine whether the addition of sensory information improves or impedes language reception. The results are discussed in light of several theoretical models of cognitive processing.

Research Questions

The research addressed the following questions:

1.
Is there a within-subject difference in the amount of language received (as measured by the number of words correctly recorded) when sentences are presented via each of five modes: a) speech reading; b) speech reading plus audition; c) audition; d) sign; and e) audition plus speech reading plus sign?

2. Are there subject variables which account for differences in the amount of information received and recalled via each of the five modes?

Chapter Summary

An attempt has been made in this chapter to provide a brief introduction and background information to the present study. The chapter contains a statement of the purpose and the questions addressed by this investigation. Terminology, detailed background, and a theoretical framework are contained in chapter two, along with the hypotheses and a rationale for those hypotheses.

CHAPTER TWO

TERMINOLOGY AND REVIEW OF PERTINENT RESEARCH

The present study concerned communication with and by hearing impaired individuals. More specifically, it was an attempt to determine how well hearing impaired persons receive language when it is presented through various communication modes and then to determine the relationship between language reception and various personal and demographic characteristics of hearing impaired students. As background information and to enhance understanding of the present study and its relevance, this chapter contains an explanation of terminology and a review of pertinent previous research.

Terminology

Hearing Impairment

Hearing impaired. Historically, the term deaf has been indiscriminately applied to that group of individuals who have less than normal hearing. More recently the term hearing impaired has come into use as a descriptor used in a generic sense. Depending on the purpose of the description, other terms may also be used.
Hearing impairment can, as suggested by Myklebust (1964), be defined according to the degree of impairment, the age at onset, causal factors, site of dysfunction within the auditory system, or in terms of social impairment. It should be recognized that though the definite article is sometimes used in referring to the deaf or the hearing impaired, the population of persons with some degree of hearing loss is made up of individuals with a wide range of personal characteristics. The existence of individual differences within the hearing impaired population is, in fact, essential to the thesis of this research. If hearing impaired individuals could be considered as a homogeneous group, quantitative data on language reception gathered in this study could be widely generalized to the hearing impaired population. Clearly this is not valid and thus one of the purposes of the research was to determine which individual characteristics affect performance on the experimental task. The term "deaf" is still to be found in the literature, and it is not always intended to imply a total absence of hearing. To clarify the terminology, the Conference of Executives of American Schools for the Deaf recently adopted the following definitions:

A deaf person is one whose hearing is disabled to an extent (usually 70 dB ISO or greater) that precludes the understanding of speech through the ear alone, with or without the use of a hearing aid. A hard-of-hearing person is one whose hearing is disabled to an extent (usually 35 to 69 dB ISO) that makes difficult, but does not preclude, the understanding of speech through the ear alone, without or with a hearing aid. (Moores, 1982, p. 6)

The present research is concerned primarily with those persons who, according to the definition above, would be considered deaf. Other terms related to hearing impairment which are commonly encountered in the literature are prelingual and postlingual deafness.
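The dB ISO boundaries in the definitions quoted above amount to a simple classification rule, which can be sketched as follows. The 35 and 70 dB thresholds come from the quoted definitions; the function name and the label for losses below 35 dB are illustrative assumptions.

```python
def hearing_category(htl_db_iso):
    """Classify a hearing threshold level (dB ISO) according to the
    Conference of Executives definitions quoted in the text."""
    if htl_db_iso >= 70:
        # Precludes understanding speech through the ear alone.
        return "deaf"
    if htl_db_iso >= 35:
        # Makes understanding difficult but does not preclude it.
        return "hard of hearing"
    # Below the range covered by the quoted definitions.
    return "outside the quoted definitions"

# The study's subjects had a mean HTL of 97.7 dB:
print(hearing_category(97.7))  # deaf
```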
Children who are deaf at birth are congenitally deaf and they, along with those who lose their hearing (that is, are adventitiously deafened) prior to the development of language (about age two), are considered as prelingually deaf. Postlingually deafened individuals are those who lose their hearing after language and speech have been acquired. Perhaps the most fundamental terminology is that which is related to the measurement of hearing itself. Hearing is measured in terms of intensity (or loudness) reported in decibels (dB) and frequency (pitch) reported in terms of cycles per second or Hertz (Hz). Davis and Hardick (1981, p. 9) suggest the average intensity level of conversational speech is approximately 50 dB above audiometric zero, and the average level of speech measured 18 inches from the lips of a male speaker is approximately 74 dB (p. 16). At a distance of one metre, the average SPL (sound pressure level) of conversational speech is approximately 60 to 65 dB (Davis & Hardick, 1981, p. 35). In terms of frequencies, Ling (1976, p. 30) suggests that hearing in the range 270 to 3,000 Hz is necessary for the auditory recognition of vowels and 250 Hz for nasal consonants to over 4,000 Hz for reception of fricatives such as /s/. Thus auditory speech reception is dependent on a number of factors. The basic audiogram reports the approximate frequency range over which an individual responds to sound and gives some indication of the minimal level at which the sounds of speech must be presented in order for them to be audible. Although speech is comprised of complex and rapidly changing acoustic events, and the audiogram only reports an individual's thresholds to steady-state pure tones, it is still the most useful single predictor of auditory speech reception ability (Ling, 1976).
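The frequency figures above (roughly 270 to 3,000 Hz for vowel recognition, about 250 Hz for nasal consonants, and over 4,000 Hz for fricatives such as /s/, per Ling, 1976) suggest one rough way of reading an audiogram for speech-reception potential. The sketch below is illustrative only: the band boundaries beyond those cited, the 90 dB usability cutoff, and the dictionary representation of an audiogram are assumptions, not taken from the text.

```python
# Approximate frequency bands (Hz) associated with classes of speech cues,
# based on the Ling (1976) figures cited in the text; the upper limits of
# the voicing, nasal, and fricative bands are illustrative assumptions.
SPEECH_CUE_BANDS = {
    "suprasegmentals (voicing)": (100, 300),
    "vowel formants (F1-F2)": (270, 3000),
    "nasal consonants": (250, 500),
    "fricatives such as /s/": (4000, 8000),
}

def potentially_audible_cues(audiogram, cutoff_db=90):
    """Return the cue classes for which every audiometric test frequency
    inside the band shows a threshold better than cutoff_db.
    `audiogram` maps test frequencies (Hz) to thresholds (dB HTL)."""
    audible = []
    for cue, (low, high) in SPEECH_CUE_BANDS.items():
        in_band = [t for f, t in audiogram.items() if low <= f <= high]
        if in_band and all(t < cutoff_db for t in in_band):
            audible.append(cue)
    return audible

# A sloping severe-to-profound loss: only low frequencies partly usable.
example = {250: 70, 500: 80, 1000: 95, 2000: 100, 4000: 110}
print(potentially_audible_cues(example))
# → ['suprasegmentals (voicing)', 'nasal consonants']
```

On this reading, a listener with the example audiogram might detect voicing and nasality but not vowel formants or /s/, consistent with the text's point that degree of loss shifts the burden to other modalities.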
Communication, Language, Speech

Communication is one of the most important, if not the most important, aspects of behavior both human and non-human according to Moores (1982), who goes on to define communication as "any interaction between living organisms" (p. 181). Kretschmer and Kretschmer (1978) suggest "Communication is any attempt by the speaker to exchange information with another person in his speech community" (p. 1).

Staats (1968) examined communication in terms of learning theory and suggested:

Communication may be considered to involve various diverse circumstances in which stimuli confront the individual which affect his behavior - the effects depending upon his past learning. Thus communication will be considered as the functioning of previously acquired Stimulus-Responses. (p. 4)

For the most part communication does involve auditory-verbal stimuli and oral-verbal responses. Thus it is not surprising that an impaired auditory channel is likely to affect an individual's ability to communicate and influence the mode of communication. Communication need not be verbal, however. Non-verbal communication is possible through such means as body posture, facial expression and gesture. Persons with impaired hearing make use of these non-verbal forms of communication and thus it is important to distinguish between non-verbal as opposed to non-oral forms of communication. Sign language would be classified as non-oral communication, since it involves language, albeit not in the form of speech. It has been suggested (Freeman, Carbin, & Boese, 1981) that a possible reason for the misclassification of sign languages as non-verbal is that their form is visual-gestural, which is also the most common form of non-verbal communication used by hearing persons. The distinctions are between oral (spoken) and non-oral, and between verbal (employing language) and non-verbal.
Language has been defined as an organized set of symbolic relationships, mutually agreed upon by a speech community to represent experience and facilitate communication (Kretschmer & Kretschmer, 1978, p. 1). Davis (Davis & Silverman, 1978) has suggested that one of man's greatest biological endowments is "the capacity and organization of a brain suitable for the development of language. Language gave man not only the ability to share experience but a tool for abstract thinking" (p. 18). Speech is the audible production of language, the result of manipulation of the vocal tract and oral musculature (Kretschmer & Kretschmer, 1978, p. 1). Thus speech is distinct from language and should be considered rather as the most commonly used (but not the exclusive) means of communicating language between individuals. In summarizing the important distinction between communication, language, and speech, Eisenson and Ogilvie (1971) stated:

Newman (1962) makes clear the relationships among speech, language, and communication. He defines communication as a social manifestation that includes all the phenomena and activities associated with interaction, whether linguistic or non-linguistic. Language is a group phenomenon that is generated and maintained in community living; a system of signs and symbols that is transmitted from generation to generation; a code or a tool or an instrument of communication. Speech, an individual physical activity, constitutes the manner of communication as distinguished from the means - language. (p. 3)

Irrespective of what language is learned or how that language is communicated, be it through speech, written symbols, formal or informal manual gestures or some form of real-time graphic display, the fact remains that for communication to take place individuals must be able to receive and perceive the language of their environment or speech community.
It is paradoxical that language acquisition is apparently effortless and simple for most normally hearing children; yet the acquisition process and result are so complex that they are not fully definable by psycholinguists, and deaf children are rarely "taught" a high level of language competence. In order to elucidate the effect of an impaired auditory channel on the communication process, the next section of this chapter contains a description of the normal language development process and the importance of reception in that process.

Language Development

Communication is an interactive process involving reception and expression (usually) of language, which is often in the form of audible speech. Davis and Hardick (1981) state, "The auditory speech signal is the main vehicle by which information about language is conveyed to developing children" (p. 14). The communication process begins at birth when the infant's cry brings a response from the parent, and this interaction is continually refined as the infant matures and develops communicative competence. Normally, children go through a series of identifiable stages of language development, beginning with undifferentiated crying at birth and progressing to differentiated vocalization in the second month of life. From three to six months the vocalization normally increases and includes identifiable sounds which will later be used in speech. This babbling stage is considered to be very important (Eisenson & Ogilvie, 1971) and is a stage during which the innate propensity toward vocalization may be reinforced or discouraged. For normally hearing children the auditory feedback may be self-reinforcing, but for hearing impaired children there may be little or no self-reinforcement from auditory feedback.
It is during this stage of the speech and language development process that the alert observer may first notice differences in vocalization between deaf children and those children who have normal hearing (Lenneberg, 1964). The next stage in normal language development occurs usually by the eighth month. During this stage normally hearing children engage in a considerable amount of self-imitating vocalization. This lalling stage is closely followed by the echolalic stage, during which the child imitates the vocal utterances and the gestures of others. During these stages, auditory reception of speech and language is essential for the normal development of vocal production of speech and language. Logically, a child does not orally imitate what he does not aurally receive (tactile sense and kinesthesis notwithstanding). By the end of their first year many children may be able to obey simple verbal commands. By the middle of their second year (18 months) most normally hearing children can use language expressively in order to affect their surroundings. That is, they begin to learn that words have a power and are a way of getting people to do their bidding, of satisfying their needs both physically and psychosocially (Eisenson & Ogilvie, 1971, p. 125). Between ages 18 to 24 months most normally hearing children have expressive vocabularies of up to 50 words and are able to comprehend a much larger number of words. By age two, then, most normally hearing children are quite adept at using language in the form of speech for the purpose of communicating with others in their environment. Their language continues to become more refined and complex, and by age four the basic elements of language have been mastered by most children. Thus the ability to receive language is clearly essential in the developmental process.
Language may be in the form of speech or represented non-orally, but the developmental process is such that children must be able to receive the language patterns on which to build their own grammar. In addition, the brain develops rapidly during the first five years of life and needs rich language input during that time (Freeman, Carbin, & Boese, 1981). The early years are generally considered as the optimal time for language development and, according to Furth (1966), thinking, emotional ties to parents, and language should all develop together. Given that ninety percent of all deaf children are born to normally hearing parents (Freeman, Carbin, & Boese, 1981), the primary language environment of most deaf children will be vocal speech. For children whose auditory channel is severely impaired, however, speech may not be a very efficient means of receiving language.

Language Reception

Hearing people and non-hearing people alike use both verbal and non-verbal communication. In the sense that both signed language and spoken language are systematic, specific, and symbolic, they will both be considered as modes of verbal expression of language. Non-verbal communication (such as a particular facial expression or a shrug of the shoulders) is of course received through vision. The present discussion of language reception will be limited to verbal language, and will concern first speech reception, then reception of manual (sign) language, and finally reception of language via combined spoken and manual modes.

Speech Reception

Spoken language may be received through one or a combination of four sense modalities: audition, vision, touch, and kinesthesis. Of the four, residual audition must be regarded as potentially the most important because it is the only one directly capable of appreciating the primary characteristics of communicative speech, which are acoustic.
Both other exteroceptive senses, vision and touch, may be regarded as surrogates capable of responding only to secondary characteristics of speech (Ling, 1976, p. 22). O'Connor and Hermelin (1978) have suggested that language developed as a biological auditory-articulatory system, and therefore one would expect the auditory channel to be especially well equipped to deal with linguistic material. The acoustic properties of speech and the amount of information which can be received through an impaired auditory channel are the next topics to be addressed.

Speech sounds can be classified as either vowels or consonants. Vowels are produced by the vibration of the vocal cords and modified by the positioning of the tongue in the oral cavity. Thus vowel sounds can be further identified as high back, low back, central, low front, or high front. Vowel sounds which glide from one position to another are referred to as diphthongs. Consonants are partially identifiable by the presence or absence of voicing. In addition, the manner and place of production give them their individual acoustic properties.

Acoustic cues provide information on three aspects of speech: suprasegmental aspects, vowel sounds, and consonant sounds. The suprasegmental aspects of speech include features of intonation, stress, rhythm, and disjuncture which are carried mainly by the voiced components of speech. Ling (1976) suggests that since voicing information is normally present below 300 Hz, the suprasegmental features should be audible to most hearing impaired children if they are able to utilize their residual hearing within that frequency range. Vowel sounds, as stated earlier, are produced by vibrations of the vocal cords which produce both a fundamental frequency and harmonics at multiples of this fundamental frequency. These sounds are affected by the vocal tract so as to produce peaks of energy, or formants, which characterize each vowel (Ling, 1976, p. 28).
In order to identify vowels through audition, persons must be able to perceive the first and second formants of vowels, which requires usable hearing in the frequency range of 270 Hz to 3,000 Hz (Ling, 1976, p. 30). Consonant sounds, the third aspect of speech, also possess acoustic properties which aid in auditory speech reception. The range of frequency required for auditory reception of consonants is about 250 Hz for nasal consonants and well over 4,000 Hz for a voiceless fricative such as /s/ (Ling, 1976, p. 30).

Thus while some information may be derived auditorily even by persons with severely impaired hearing, Erber (1975) has suggested that by the time the loss reaches 95 dB we can be fairly sure that no usable speech will be heard. Conrad (1979) suggested that comprehension of speech is disturbed long before levels of profound deafness are reached (p. 177), and Hood and Poole (1971) reported that discrimination of speech sounds is impeded with a hearing loss as little as 31 dB. Logically, the greater the degree of hearing loss, the greater will be the dependence on alternate sense modalities to provide cues to help in speech reception.

Visual cues are also available to aid in speech reception. The extent to which vision is used in speech reception by normally hearing children is unknown (Ling & Ling, 1978, p. 135). For hearing impaired persons, however, the use of visual cues through speechreading is very important. How much information is available through speechreading? Davis and Hardick (1981) have suggested that "the visual parameters of speech do not lend themselves to accurate perception by lipreading alone" (p. 63). They described five factors which influence the visual perception of speech: visibility of speech sounds, rapidity of speech, speaker differences, environmental factors, and characteristics of the lipreader.
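The frequency figures cited above can be restated as a small illustrative sketch of which speech features fall within a listener's usable hearing range. The band edges below simply paraphrase the approximate values quoted from Ling (1976); the function name, and any edges not stated in the text (such as the lower bound for voicing or the upper bound for /s/), are hypothetical choices made for illustration only.

```python
# Approximate frequency bands for the speech features discussed above,
# paraphrasing the values cited from Ling (1976). Edges not given in
# the text are illustrative guesses, not measured values.
SPEECH_FEATURE_BANDS = {
    "suprasegmental (voicing) features": (80, 300),    # "below 300 Hz"
    "first and second vowel formants":   (270, 3000),  # "270 Hz to 3,000 Hz"
    "nasal consonant cues":              (250, 300),   # "about 250 Hz"
    "voiceless fricative /s/":           (4000, 8000), # "well over 4,000 Hz"
}

def audible_features(low_hz, high_hz):
    """Features whose whole band lies inside the usable range [low_hz, high_hz]."""
    return [name for name, (lo, hi) in SPEECH_FEATURE_BANDS.items()
            if low_hz <= lo and hi <= high_hz]
```

For a listener with usable hearing only below 500 Hz, for instance, the sketch would report the voicing and nasal cues but neither the vowel formants nor /s/, mirroring the passage's point that low-frequency residual hearing preserves suprasegmental information while high-frequency fricatives are lost first.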
In discussing the visibility of speech, Jeffers and Barley (1975) estimated that sixty percent of speech sounds are either obscure or invisible and that these visibly indistinguishable sounds occur more frequently in normal conversation than do those which are more readily identifiable.

Vowel sound lip patterns are the least ambiguous cues available in speechreading (Ling, 1976). The production of vowel sounds involves degrees of lip rounding and variations in tongue position. Jeffers and Barley (1975) concluded that "there are only two, or at most three, speech movements that are sufficiently stable to be of any great help in identifying the vowels and diphthongs" (p. 62). Essentially the visibly distinguishable vowel groups are high front (e.g., /i/) and high back (e.g., /u/).

The suprasegmental aspects of speech are virtually invisible. Intensity, intonation, and rhythm cannot be consistently judged simply by observing the speaker's mouth. There has not, however, been a great deal of research into the visual reception of suprasegmental features (Ling, 1976).

Consonant sounds range in visibility from being completely obscure to being visible but somewhat ambiguous. Erber (1971) has demonstrated that the visibility of a consonant is related to the place of articulation. Jeffers and Barley (1975) suggested that the person attempting to speechread has available only three or four visible movements and two obscure ones to aid in the identification of the consonant sounds. The groups they suggested as being visible for almost all speakers are: 1. /f, v/; 2. /w, hw, r/; 3. /p, b, m/; and, for some speakers, 4. /θ, ð/. Vision is a poor substitute for audition in identifying consonants. Although the four groups of consonant sounds identified above are distinguishable from each other, the sounds within the groups are not. Speech sounds which are visibly indistinguishable are referred to as homophenous.
Some 40 percent of speech sounds are said to be homophenous (Nitchie, 1930; Erber, 1972).

In addition to the dearth of visibly distinguishable cues available to identify speech sounds, the rapidity of connected speech is a second factor militating against accurate speech perception by vision alone. Davis and Hardick (1981) reported that the briefest consonants are about 30 milliseconds in duration while the longest are about 300 milliseconds. In connected conversational speech, the speaker averages 5 to 5.5 syllables per second, or about 270 words per minute (Calvert & Silverman, 1975). Jeffers and Barley (1975) reported that ordinary speech averages about thirteen speech sounds per second, whereas the eye is capable of perceiving only eight to ten movements per second; clearly, the rapidity of speech makes vision a poor substitute for audition in receiving spoken language.

The three other factors which affect visual speech reception are speaker differences, environmental factors, and characteristics of the speechreader. The first two factors, speaker differences and environmental factors, are related to the rapidity of speech and the visibility of speech. The environmental and speaker factors include the rate of speaking, the amount of random movement, distance from the speaker, and lighting. Even when all the aforementioned variables are optimal, the characteristics of the speechreader still may affect the speechreading task. Davis and Hardick (1981) reported that "in general there do not appear to be personality traits that are associated with the ability to lipread well" (p. 65), and further that "attempts to isolate personal characteristics that contribute to lipreading performance have been largely unsuccessful and probably do not warrant further serious effort on the part of researchers" (p. 66).
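The rate figures quoted above can be checked with simple arithmetic. This sketch merely restates the cited numbers (Calvert & Silverman, 1975; Jeffers & Barley, 1975, as quoted in the passage); the variable names are mine, not the sources'.

```python
# Figures as cited in the passage above.
syllables_per_second = 5.5       # upper end of conversational rate
words_per_minute = 270
speech_sounds_per_second = 13    # Jeffers & Barley (1975)
eye_movements_per_second = 10    # upper estimate of visually resolvable movements

# Implied syllables per word at the quoted rates: 5.5 * 60 / 270.
syllables_per_word = syllables_per_second * 60 / words_per_minute

# Fraction of speech movements the eye could resolve, at best.
visible_fraction = eye_movements_per_second / speech_sounds_per_second

print(round(syllables_per_word, 2))  # 1.22
print(round(visible_fraction, 2))    # 0.77
```

At best, then, roughly a quarter of the movements of connected speech would escape even an ideal speechreader, before homopheny is considered at all.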
Conversely, Jeffers and Barley (1975) cited examples of research into primary and secondary factors involved in the attainment of speechreading skill. The primary factors discussed by Jeffers and Barley are those which "are believed to determine speechreading ability because taken together they constitute the task" (p. 31), and they include visual proficiency, synthetic ability, and flexibility. The secondary factors include training, knowledge of language, and emotional attitudes.

Speechreader characteristics of visual proficiency are subdivided by Jeffers and Barley (1975) into visual perception, speed of perception, and peripheral perception. They found no research literature which dealt with peripheral perception and so dealt with speed of perception and with visual perception, which they defined as "the ability to perceive and identify individual speech sound movements and to discriminate among them" (p. 143). Visual perception, since it is basic to speechreading, may be considered one of the most important primary factors. In the studies cited, "results indicate that visual perception may be associated with from 14 percent to 46 percent of the total skill" (Jeffers & Barley, 1975, p. 143).

As discussed previously, conversational speech is quite rapid, and thus speed of perception would seem to be an important speechreader characteristic. Jeffers and Barley (1975) found, however, that little attention had been paid to this factor, and the results of the three studies cited were inconclusive. As an explanation they suggest, "It is impossible to measure accurately the contribution of a single primary factor without equating or controlling the remainder of the factors" (p. 148).

Other primary factors or speechreader characteristics discussed by Jeffers and Barley (1975) are synthetic ability, defined as "the tentative or final identification of a message or any part of it" (p. 150), and flexibility, "the aptitude for revising tentative closures, (sounds, words, phrases, and message), and shifting to different sets or closures when a meaningful or appropriate message cannot be synthesized on the bases of the original decisions" (p. 159). Once again it is impossible to give definitive answers as to the relative contribution of each of the primary factors, partly because none of the factors can be measured in total isolation. Jeffers and Barley (1975) did suggest, however, that "'in general' perceptual proficiency will be found to contribute approximately 40 percent to the total task, and synthetic ability and flexibility, the remaining 60 percent" (p. 185).

One of the factors ancillary to the synthetic process and to flexibility is the aptitude for abstract inductive reasoning. Some previous research (Jeffers & Barley, 1975, p. 167) has indicated a correlation between speechreading skill and the Raven Progressive Matrices. This particular test has general acceptance (Conrad, 1979; Goetzinger, Will, & Dekker, 1967) as one which measures abstract or spatial reasoning. While the Raven is frequently used as a non-verbal intelligence measure with hearing impaired subjects, there is some doubt as to whether it is in fact a non-verbal measure, especially in light of the correlation between measures of inner speech and Raven scores (Conrad, 1979).

Since speechreading can be defined as "the visual recognition (aided by partial hearing) of known language" (Jeffers & Barley, 1975, p. 33), it follows that of the secondary factors or speechreader characteristics, language comprehension is the most critical. They stated, "language comprehension, i.e. knowledge of structural meanings, vocabulary, and idiomatic expressions, is considered to be the most important of the secondary or 'backup' skills" (p. 127).
Of the 13 studies cited, Jeffers and Barley (1975) indicated that 12 report statistically significant (p < .05 or less) contributions by language comprehension measures of from 14 to 47 percent of the total variance on speechreading tasks. Other secondary factors include training and emotional attitudes. Jeffers and Barley (1975) cited 13 studies which report that improvement in speechreading skills results from training. They also reported that some studies indicate a correlation between emotional attitudes and speechreading skill, although the association has not been studied extensively.

From the discussion to this point, it is clear that in the absence of audition, vision is a poor substitute for the reception of verbal information. The potential contributions of touch and kinesthesis to speech reception are discussed below.

Kinesthesis is that sense which provides information on the movements of muscles and which gives sensory information as to the spatial position of the body and its parts. This sense is of considerable importance in speech production but has limited, if any, direct usefulness in speech reception (Ling, 1976).

The tactile sense has been used for centuries in attempts by teachers to convey speech information to their students (Ling, 1976). The amount and accuracy of verbal information available through touch is varied. Ling (1976, p. 37) suggested, "Touching the speaker's face or chest can clearly afford some information about voice: its presence or absence, its relative duration, its intensity, and - if the fingers are correctly placed on the face - its pitch." Thus while the vowel sounds cause movements which can be detected by placing hands on the speaker's face, it is nearly impossible to differentiate the vowels by touch alone. Vocalization of consonants also produces characteristic vibrations, as in nasal sounds /m, n, ŋ/; and plosives /p, b, t, d, k, g/ produce a burst of air which can be felt.
Fricatives also have a characteristic airflow. In all, however, attempts at speech reception via touch are possible only when the sender and receiver are in very close proximity, which is not always possible and which may be socially awkward.

This review of the effectiveness of various sense modalities in the reception of speech shows that audition is the only single sense capable of perceiving all aspects of speech. For persons with impaired auditory capabilities several options are available, including combinations of sensory inputs or other non-vocal modes of communication. These options are elaborated in the next section of this chapter.

Combined Modalities

Research into multisensory speech reception has been conducted for some time. Early studies of the combined use of audition and vision have been reported by Ewing (1944), Hudgins (1953), and Clarke (1954). The results of these studies indicate that when speechreading was combined with audition the scores were greater than for either mode alone. In Ewing's (1944) study, the subjects' score reached a high of 90 percent when speechreading and aided hearing were combined to receive sentences. It should be noted, however, that her subjects were postlingually deafened adults. Results of subsequent studies by Hutton (1959) and later by Erber (1972) also indicated that audition and vision combined yielded better results than either modality alone. The results, however, showed that bisensory scores were not equal to the sum of the visual and auditory scores. This would seem to indicate that some of the verbal information is redundant when available through both senses or, perhaps, that subjects were unable to focus attention on two sets of simultaneous stimuli. This question will be addressed in greater detail in a subsequent section dealing with processing theory.

Regarding the question of the combined use of audition and vision, Meadow (1980, p. 67) reported, "Erber (1975) has reviewed almost 50 papers relevant to the question of auditory versus visual presentation of speech, and has concluded that 'numerous clinical and laboratory studies on the auditory-visual performance of normal-hearing and hearing-impaired children and adults demonstrate that combined auditory-visual perception is superior to perception through either audition or vision alone'" (p. 481).

Several caveats concerning the early studies have been proffered by Ling (1976). He cautioned that many of these studies used nonsense syllables and words rather than running speech, and that in most studies the nonsense syllables were constructed using the vowel /a/, which provides much better cues to place of consonant production than do other vowels.

Multisensory speech reception can, of course, involve combinations other than audition and vision. Vision and touch have also been combined in a number of studies. An early study by Gault (1926) involving adult subjects found that, although touch was useful in discriminating between sentences and in identifying vowels, touch alone or combined with vision was not sufficient to differentiate words which varied only by one consonant feature. More recently, a study by Oller (1980) demonstrated that, with a brief training period, deaf adolescents attained a high level of perceptual performance with a tactual speech system in discriminating certain hard-to-speechread word pairs. Once again, effectiveness was limited to words and not connected speech. Devices designed to translate vocal information into tactile form have been developed (Engelmann & Rosov, 1975; Ling & Sofin, 1975; Pickett, 1963). These devices and their output are complex, due to the nature of speech and its various components. Thus the devices may not prove to be a viable alternate mode of speech reception, especially for young children.
Residual audition may also be combined with touch, although research has mainly been concerned with touch as a supplement to speechreading (Ling, 1976, p. 51). Schulte (1972) reported that many schools for the deaf were using amplification systems which incorporate a vibratory output. It is unlikely, however, that the tactile sense would ever be used to the exclusion of vision as an adjunct to residual audition in the reception of speech.

Communication Modes/Methods

Given the various combinations of sensory input, there are essentially four communication modes and methods of instruction commonly used in North America. These include the oral method, the auditory method, the Rochester method, and the simultaneous method (Moores, 1982).

The oral method, also known as the oral-aural method, is a system whereby children receive input through residual hearing (aided) and through speechreading. Expression is through speech. No formal or informal manual communication is allowed.

The auditory method is unimodal in its approach. This method is also known as the acoupedic or aural method (Clarke, Kendall, & Leslie, 1978). The primary focus is on the development of listening skills by making the child rely exclusively on residual audition. Early reading, writing, speechreading, and manual communication are discouraged. Some attempts have been made at using this method with profoundly deaf children (Moores, 1982) even though proponents say it is designed for children with an average aided hearing loss of less than 60 decibels (Pollack, 1964).

The Rochester method is a combination of the oral method and fingerspelling. Reception is through speechreading, amplified speech, and fingerspelling. The expressive component of communication involves speech and fingerspelling. In this system, reading and writing are encouraged and emphasized.

The simultaneous method combines the oral method with signs and fingerspelling.
Reception of language is via speechreading, amplified speech, signs, and fingerspelling. Expression is through speech, signs, and fingerspelling.

Initial exposure to language is through speech for most deaf children, especially those with hearing parents (Meadow, 1980). There are a number of researchers and educators who propose that this initial oral/aural exposure should be continued, even, according to a few, to the exclusion of any visual speechreading cues (Goldstein, 1939; Griffiths, 1967; Guberina, 1964; Pollack, 1964). This unisensory approach, however, is not in widespread use. Most educators are less rigid in their approach to communication, and even proponents of aural-only methods do include, according to Meadow (1980), speechreading in their approaches to language acquisition. Thus, most communication alternatives are at least bisensory or bimodal.

The discussion to this point has shown that speech is best perceived through an intact auditory channel. Since the population of concern in this study has some degree of hearing impairment, an intact auditory channel is not available for perceiving speech. For some hearing impaired persons, however, their residual audition may still be sufficient for speech perception. Vision alone has been demonstrated to be a poor substitute for audition as a receptor of speech. Various combinations of vision, audition, and touch have been briefly mentioned. It becomes clear that for some hearing impaired individuals, speech may simply not be an effective means of receiving language through either unisensory or bisensory methods. Some form of non-oral communication may, for some individuals, prove to be more effective as a means through which language is communicated, and these non-oral forms are discussed in the subsequent section of this chapter.
Non-Oral Communication

Non-oral communication may take several forms, including informal gestures, formal sign language, real-time graphic display, or other less complex uses of the printed word. Several recent works give a comprehensive description of reading (Kretschmer, 1982) and the use of real-time graphic display (Stuckless, 1981). This study concerns language reception, and therefore gestures used in informal non-verbal communication will not be discussed further.

Formal manual communication modes may take several forms. In fingerspelling, using a manual alphabet, words are spelled letter by letter using different hand shapes. An alternate form of presenting language manually involves the use of signs, each of which represents a complete idea. Cued Speech (Cornett, 1967) is a manual aid to the reception of speech. The cues consist of eight hand shapes in four different positions near the head and neck. However, since the manual cues are generally meaningless in the absence of speech, this communication mode will not be included in the description of non-oral forms.

Use of fingerspelling as a non-oral mode of communication is fraught with many of the same difficulties as the use of graphics. That is, in order to interpret the fingerspelled message, the receiver must first be able to assimilate the individual letters into meaningful words. Thus fingerspelling alone is only viable after a child has acquired skill in reading. The Rochester method uses fingerspelling and speech together as a method of teaching. In the Soviet Union, neo-oralism is the term ascribed to the combined use of fingerspelling and speechreading (Moores, 1972a). These methods cannot be considered non-oral since both include speech. Fingerspelling alone is not in common use; most persons tend to use both signs and fingerspelling in their conversations (Moores, 1982).

Signs may be employed in the context of a sign language or a sign system.
Sign languages such as American Sign Language (ASL or Ameslan), French Sign Language, or Swedish Sign Language are independent of spoken language. Recent research into ASL (Wilbur, 1979) suggests that it is a separate language, with its own lexicon and grammar, both independent of spoken English. Similar research has been conducted with Swedish Sign Language (Bergman, 1979).

Sign systems have been developed to incorporate manual representations of English words and structures. Mayberry (1978) has labelled these systems as manual English systems. Davis and Hardick (1981) suggest the impetus for the development of manual English systems "has been the need for a way to stimulate deaf children with English in a nonambiguous, meaningful form" (p. 81). Since most hearing people must learn ASL as a foreign language, and for the pragmatic pedagogic purpose of presenting hearing impaired students with a visual representation of spoken English, one of a number of manual English or sign systems is usually used in schools.

The system which is recommended by the Ministry of Education and most widely adopted for use in British Columbia schools is Signed English, which is sometimes referred to as Gallaudet Signed English (Davis & Hardick, 1981) and was developed by Bornstein (1974, 1975) and others at Gallaudet College. Signed English is the sign system employed in the present study. This particular system was chosen because it is commonly used with and by the population of interest in the study. In addition, it uses manual representations for most common English word endings, pronouns, auxiliary verbs, and articles. Since these are all present in normal verbal language, and since the study attempts to compare reception of language through various modes, it was necessary to choose a sign system which would allow presentation of messages which were parallel in lexicon and grammar in both manual and oral forms.
To summarize the discussion to this point, language is of utmost importance in the human communicative process. Language development depends on reception, most often of speech. It would seem logical, given the demonstrated superiority of the auditory channel as a receptor of speech, that audition should be emphasized as the primary receptor. Given varying degrees of hearing impairment, audition could be supplemented with speechreading and the tactile sense to the extent necessary for each individual. For some, however, speech reception via one or all of the modalities may be too difficult to be an efficient part of the communication process. Again, logically, for such individuals some form of non-oral communication may be necessary and should be considered acceptable.

Logic, unfortunately, has not always had a place in the education of hearing impaired individuals, as evidenced by what has been termed (Meadow, 1980, p. 143) the Hundred Years War. This oral-manual controversy has involved opponents from two camps: those who oppose any form of manual communication and those who argue for its inclusion. In fact the term "oral-manual" is said by Moores (1982) to be a misnomer, since few educators today advocate a purely manual mode of communication. The historical roots of the controversy have been well documented (Bender, 1970; Conrad, 1979; Lane, 1980; Meadow, 1980; Mindel & Vernon, 1971; Moores, 1982; Schmitt, 1966). The debate has often been clouded by extrinsic factors such as European politics and national pride in the early years and by personality conflicts in the later years. Only recently, however, has personal polemic been pushed to the background by a search for empirical data. Even some of this recent research has been criticized (Nix, 1975), sometimes justly. One of the fundamental problems has been the search for what is best for "the deaf".
As was alluded to earlier, research has sometimes ignored individual differences, and this false assumption of homogeneity has led to many overestimations of the external validity of the results.

In an effort to circumvent the oral-manual controversy, and influenced by the recent trend in all areas of special education toward meeting individual needs, total communication has become predominant in education of hearing impaired students. Some educators (Clarke, Kendall, & Leslie, 1978; Moores, 1982) consider total communication to be an extension of the simultaneous method, which is a combination of the oral-aural method plus signs and fingerspelling. Total communication is also referred to as bimodal communication (Davis & Hardick, 1981; Meadow, 1980). The Conference of Executives of American Schools for the Deaf (1976) offered the following definition:

Total communication is a philosophy requiring the incorporation of appropriate aural, manual, and oral modes of communication in order to ensure effective communication with and among hearing impaired persons (p. 358).

The astute reader may note that total communication is referred to as both a methodology and a philosophy. The moot question as to whether total communication is a methodology, a philosophy, or both is beyond the scope of this chapter. The rationale underlying total communication is based on two major premises (Davis & Hardick, 1981): all hearing impaired children are different and hence require different procedures, stimuli, and techniques in order to learn as well as possible; and communication is the basis for most human endeavors, and total communication provides a means for early development of both receptive and expressive communication. Whereas the oral method was predominant in the United States until around 1970 (Moores, 1982), the majority of programs now employ total communication (Jordan, Gustason, & Rosen, 1976, 1979).
Given the above rationale for total communication, this recent trend may on the surface appear to be a positive step. There are, however, several dangers inherent in such a rapid and apparently complete shift in communication methodologies. One such danger is the possibility that individual characteristics, needs, and abilities may not be adequately assessed. Another may be the assumption that when simultaneous communication is used it is always grammatically complete and accurate.

A study reported by Marmor and Petitto (1979) attempted to determine how well English grammar is represented in simultaneous communication. They concluded that "unfortunately across all linguistic constructions we found the teachers' signed utterances to be predominantly ungrammatical. Specifically, declarative sentences and questions were signed incorrectly more than 90% of the time" (p. 126). The results of this study also indicated that teachers' signs and verbal utterances were not, in the majority of cases, actually simultaneous. That is, what the teacher signed and what the teacher said were not necessarily simultaneous presentations of the same linguistic unit. Thus the shift toward the use of simultaneous communication in classes for the hearing impaired may not be providing students with a clearer linguistic message.

As Wilbur (1979) has stated, "On the one hand, the establishment of effective communication with young deaf children is viewed to be essential to their socio-emotional development as well as their academic achievement. On the other hand, there are unanswered questions concerning the potential for processing two modalities at the same time..." (p. 5). This question is repeated by Freeman, Carbin, and Boese (1981), who ask, "Can several types of auditory and visual input, if presented simultaneously, be integrated and used by the brain?" (p. 120).
As has always been the case in the oral-manual debate, the possibility still remains that hearing impaired children may be placed in educational and communicative environments with little or no consideration given to the student's cognitive characteristics. The problem as seen by Carson and Goetzinger (1975) is associated primarily with the processing qualities of the human organism and involves the issue of the extent to which bisensory stimulation may or may not be more efficient for learning than either modality alone. The next section of this chapter contains a review of some of the recent research and literature on the effectiveness of the various communication methodologies and of the literature which deals with the perception and cognitive processing of linguistic messages.

Methodology Comparisons

Comparative studies which matched various combinations of communication methodologies and a wide range of other variables have been reported. This review will address mainly some of the most recent research on language reception via various modes.

A comparative study by Klopping (1972) assessed language understanding of deaf students under three auditory-visual stimulus conditions. The modes compared were speechreading with voice, fingerspelling with speech, and total communication. Klopping concluded that for the particular population sample involved, the most efficient presentation mode was total communication.

A study reported by Carson and Goetzinger (1975) compared the learning of nonsense syllables by eight- to ten-year-old deaf children under seven conditions: lipreading, signs, audition, lipreading plus audition, lipreading plus signs, signs plus audition, and signs plus speechreading plus audition. The highest score was obtained under the lipreading plus auditory condition, which was significantly greater (p < .01) than signs plus audition, audition alone, and signs alone.
The second highest score was for the "total communication" condition, which was significantly (p < .05) greater than signs plus audition and audition alone. None of the other differences were found to be significant, including that between the top two conditions. The authors conclude that "our results would cast doubt on the effectiveness of the so-called 'total communication approach' in teaching the deaf" (p. 79). It is doubtful whether such a generalization is valid based on the results of a study with an N of only 35 in which only five subjects received each treatment, in which no consideration appears to have been given to other variables such as previous communicative experience, and in which the task, by nature of the nonsense syllables, had little pedagogical relevance.

A study which compared the amount of information received via total communication, manual communication, oral communication, and graphics was conducted by White and Stevenson (1975). Their study involved a stratified random sample of deaf students aged 11.0 to 18.7 years with I.Q. ranging from 60 to 140. The results of the 3 x 3 x 4 (age by I.Q. by mode) study showed all sub-groups received more information through reading and that there were no significant differences between total communication and manual communication, although both were more efficient than oral communication. There are, however, several limitations to this study. First, the sample was drawn from public residential schools, albeit two different facilities, and thus the external validity may be somewhat limited. A second drawback is the fact that only the subject variables of age and I.Q. were considered. Other literature has suggested that language facility, age at onset, degree of loss, and other subject variables may affect language reception. Another study, somewhat similar to the present research, was conducted by Beckmeyer (1976).
He gave an association learning task to 22 deaf subjects who had a stated preference for either oral or manual communication. Information to be learned was presented through oral, sign, fingerspelling, oral plus sign, and oral plus fingerspelling modes. The materials used in the study were designed to minimize previous language learning. They consisted of consonant-vowel-consonant (CVC) trigrams forming unknown words but paired with visual representations of real objects. The results suggested that the combined modes did not lead to improved learning, and for the group as a whole there was no difference between the oral mode and sign, although both were superior to fingerspelling. Several cautions must be noted. The total number of subjects in the study was only 22, comprising 11 with oral preference, 8 with manual preference, and 3 with no communication preference. The small N could account for the lack of statistical significance of the results. A further limitation is the fact that nonsense trigrams were used in a task which depended very much on memory. Other studies have shown that context (animate nouns in isolation as opposed to inanimate nouns or nouns within a sentence) affects speechreading (Erber & McMahan, 1976), as does sentence familiarity (Lloyd & Price, 1971). Another problem in the Beckmeyer (1976) study was the fact that there was a significant difference (p < .05) between the mean HTLs of two of his sub-groups (the oral preference group had lower HTLs than the manual preference group). This once again illustrates the difficulties inherent in using a matched sample technique, although in the study cited above the means were later statistically adjusted.

Another often quoted study is that of Moores, Weiss, and Goodwin (1973). They developed a receptive communication test to assess: sound alone, sound plus speechreading, sound and speechreading plus fingerspelling, sound and speechreading plus signs, and print.
Their subjects were 74 preschool children from 7 programs using a variety of modes of instruction. The task was a multiple choice recognition task comprising 20 correct items, each with 3 distractors as foils. The results of the study showed that scores improved as dimensions were added, with the highest group mean for audition plus speechreading plus signs (having excluded those subjects from programs which were officially oral or which used the Rochester method). The scores for this mode were significantly higher (p < .02) than for sound alone, print, or sound plus speechreading. Sound, speechreading, and fingerspelling combined yielded significantly higher scores (p < .01) than sound alone or print. And scores for sound plus speechreading were significantly (p < .01) higher than for sound alone or for print. It is not unexpected that audition alone did not yield high scores, since subjects' mean HTL was 95 dB. The mean chronological age of subjects was just less than 62 months, which might partially account for the low score for print. The use of a multiple choice task raises the possibility that it measures recognition as much as reception. The fact that all tests were administered by a number of trained research fellows raises the question of uniformity across all trials.

A study by Brooke, Hudson, and Riesberg (1981) examined the effectiveness of seven modes of input on the learning rate of hearing impaired students. Each of 42 subjects was randomly assigned for training via one of seven modes: auditory, lipreading, fingerspelling, auditory-lipreading, auditory-fingerspelling, lipreading-fingerspelling, and auditory-lipreading-fingerspelling. Spanish number words and their English counterparts were paired and presented via the assigned mode. After training, subjects were presented a Spanish number and wrote the corresponding numeral in a test booklet.
The results "indicated that bisensory methods of communication were superior to unimodal methods in regard to rate of learning" (p. 839), and analysis of covariance found that neither age nor WISC scores was a significant covariate. The results also indicated that combining two visual modes (lipreading-fingerspelling) neither facilitated nor impeded learning. While the results are interesting, several questions arise. Hearing losses are reported only as "in the severe to profound range," and with only six subjects in each of seven groups it is possible that HTL might have had a significant effect. Also, the task did not require reception of verbal information in connected speech, as is more common in communication. Finally, the study did not include a sign component and, as has been stated earlier, any educational program with a manual component is far more likely to use sign than fingerspelling alone (Moores, 1982).

The equivocal results of these studies have answered only a few questions and in fact have raised others. Levine (1981) cautions that since some of these studies deal with experimental subjects of different ages, instructional backgrounds, and communicative experiences, and with comparisons of different communication modes, they cannot be expected to point to any definitive disclosure of a "best method"; but there seems to be an undercurrent in favor of manual methods (pp. 133-134). Perhaps more important, and certainly more plausible, than a search for a best method is a search for what makes a particular method best for an individual hearing impaired person. Thus the question must first be asked, "Is it cognitively feasible for more than one sensory modality to be employed to receive language?" This question has in part been answered by studies which show at least no decrease in reception when multisensory stimuli are presented. And yet contradictory positions are held by unisensory proponents.
One reason for the aforementioned contradiction may lie in methodological and design weaknesses of the studies, some of which were reviewed here. The present study attempts to answer some of the same questions raised by previous studies while, with the help of hindsight, minimizing some of their weaknesses. It is important that questions as to receptive ability be asked in light of cognitive processing theory or models. The question should not be merely under which mode the receptive score may be highest; rather, given that a particular mode may prove better for an individual or group, research should investigate possible reasons for the difference.

Perception/Processing Models

As noted earlier, Carson and Goetzinger (1975) have stated, "The problem is associated primarily with the processing qualities of the human organism and involves the issue of the extent to which bisensory stimulation (the auditory and visual) may or may not be more efficient for learning than either an auditory or visual input alone" (p. 73). One factor must certainly be that of attention. There are, according to Moray (1969), some six theories of selective attention, of which three allow a detailed examination of their predictions. Since the question of unimodal or multimodal processing of language involves attention to one or more signals, the three theories referred to by Moray (1969) are addressed below, as is the literature dealing with processing. Pollack (1974) has suggested that the distinction between perceptual and cognitive processes has been blurred by a steady stream of brilliant theorists and empiricists (p. 77). He goes on to suggest a continuum between cognition and perception. The review presented here, rather than attempting to distinguish the two, will attempt to shed some light on possible sources of individual differences between hearing impaired persons on perceptual and cognitive processes. Several processing theories have been proposed.
It was postulated by Broadbent (1956), Davis (1959) and Adams (1962) that regardless of the mode all incoming signals reach a central processing area in the brain. This is a kind of unimodal, one-track approach by which only one signal at a time can be processed, and it implies that two or more signals from different sense organs, if they arrive simultaneously, cannot be processed. Broadbent (1956) suggested, however, that attention may shift rapidly between two incoming signals, thereby responding unimodally to bimodal stimuli. Broadbent (1974) stated, "If it were indeed the case that given tasks could be allocated clearly and distinctly to the different hemispheres, then each function should be capable of carrying on unimpaired by the simultaneous performance of the other" (p. 31). In terms of Broadbent's (1958) filter theory as related by Moray (1969), a language reception task might be diagrammed as in Figure 1. That is, the incoming sentence stimuli should present one or a combination of sensory stimuli (auditory, visual-speechreading, visual-sign) which would be stored in short term memory, one of which would be processed along the limited capacity channel. This model allows for retrieval of additional information from the short-term memory store.

Figure 1. Adaptation of Broadbent (1958) filter theory. [Diagram: sentence stimuli (auditory, speechreading, sign) enter short-term memory; a filter passes one signal at a time along the limited capacity channel to the output mechanism and long-term memory.]

Thus a subject might attend to one signal, sign for example, and then verify or enhance his perception by returning to the short-term memory store for auditory or additional visual information. Several other theories have been postulated, one by Treisman (1964) and another by Deutsch and Deutsch (1963). Both of these theories are considered by Moray (1969) to be elaborations of Broadbent in that they attempt to provide an explanation of the contents of the "filter" in Broadbent's model.
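The filter model sketched in Figure 1 can be illustrated with a small toy program. This is only a sketch under loose assumptions: the class name, the store capacity, and the channel labels are invented for illustration and do not come from Broadbent or Moray.

```python
# Toy sketch of Broadbent's (1958) filter model: parallel inputs enter a
# short-term store, and the filter passes one channel at a time through
# the limited-capacity channel. Unattended signals remain in the store
# and may be retrieved later (decay is ignored for simplicity).
from collections import deque

class BroadbentFilter:
    def __init__(self, store_capacity=3):
        # Short-term memory store holding (channel, signal) pairs.
        self.short_term_store = deque(maxlen=store_capacity)

    def receive(self, stimuli):
        # stimuli: dict mapping channel name -> signal,
        # e.g. audition, speechreading, sign arriving simultaneously.
        for channel, signal in stimuli.items():
            self.short_term_store.append((channel, signal))

    def attend(self, preferred_channel):
        # The filter selects only the preferred channel's signal.
        for channel, signal in self.short_term_store:
            if channel == preferred_channel:
                return signal
        return None

model = BroadbentFilter()
model.receive({"sign": "BALL", "speechreading": "ball", "audition": "b-l"})
print(model.attend("sign"))      # the attended channel is processed first
print(model.attend("audition"))  # later retrieval from the short-term store
```

The point of the sketch is that bimodal stimuli are handled successively, one channel at a time, with the unattended channel surviving only as long as the short-term store retains it.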
In Treisman's model, information also comes in through a number of parallel channels. However, the messages are filtered or analyzed for crude physical properties such as pitch, loudness, place of articulation, and so on. It is at this point that the postulated filter operates by attenuating the signal strength of some of the components and extracting others. From this point the one unweakened message, along with the other weakened messages, moves through the nervous system to a kind of "pattern recognizer" which is postulated to consist of a large number of "dictionary units". The incoming messages pass along a logical tree with probabilistic nodes, and upon reaching the end of the tree a single dictionary unit "fires" and the word is recognized. The dictionary units have different and variable thresholds, so that even an attenuated signal may be sufficient to trigger the unit. To make the theory relevant to the present study, it would suggest for example that use of hearing aids or auditory training for hearing impaired persons could have the effect of lowering the thresholds of certain auditory dictionary units, thereby increasing the likelihood of auditory reception of speech. Treisman's model further postulates that the triggering of a particular unit lowers the threshold of units which in the past have been associated with that signal. Thus the importance of context cues such as are available in meaningful connected speech is evident, and nonsense syllables, as are used in some of the previous studies, may not be relevant stimuli.
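The attenuation-plus-threshold idea above can be made concrete with a minimal sketch. The attenuation factor, the word thresholds, and the signal strengths below are all invented numbers for illustration; Treisman's model specifies no such values.

```python
# Toy sketch of Treisman's attenuation model: unattended signals are
# weakened rather than blocked, and "dictionary units" with variable
# thresholds may still fire on an attenuated signal.

ATTENUATION = 0.4  # unattended signals keep 40% of their strength (invented)

# Lower threshold = easier to trigger, e.g. a common word, a word primed
# by context, or (per the text) an auditory unit lowered by training.
thresholds = {"ball": 0.3, "xylophone": 0.9}

def recognize(word, strength, attended):
    effective = strength if attended else strength * ATTENUATION
    return effective >= thresholds[word]

# A low-threshold word fires even from an attenuated channel...
print(recognize("ball", 0.8, attended=False))       # True
# ...while a rarely primed word needs the full attended signal.
print(recognize("xylophone", 0.8, attended=False))  # False
print(recognize("xylophone", 0.95, attended=True))  # True
```

In these terms, auditory training or contextual priming corresponds to lowering an entry in `thresholds`, which is why connected speech (rich in priming) should be easier to receive than nonsense syllables.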
Moray (1969) reported that in one of her papers Treisman (1966) wrote:

Since Nature has not yet guaranteed that all signals that are important to us or relevant to our interests should arrive at our senses with particular clarity and intensity, we have to adapt the alternative strategy of lowering our criteria for perceiving them, accepting them on the basis of less sensory evidence than we would a neutral or uninteresting stimulus.

According to this model attention is a two stage process: filtering occurs first on the basis of the characteristics of the incoming signal; then, within the pattern recognition network, the dictionary units have different threshold settings. It should be noted that since information about the simple characteristics of all messages is postulated to bypass the intervening mechanisms, it presumably would be available for reanalysis, subject to the limitations of short-term memory.

A further modification to the Broadbent filter model was made by Deutsch and Deutsch (1963), who proposed a response selection theory of selective attention. They criticized the Treisman model as being redundant in its two stages. In their model the initial filter present in the Treisman model is deleted. Signals are postulated to progress straight through to the dictionary or recognition unit. At this stage in their model they postulate that the signal encounters a kind of floating criterion stimulus selector, and the most important stimulus captures the attention. This importance weighting is a function of past experience, and only if another unit begins to fire more strongly will the first be replaced. Thus if, for example, a visible speech feature were to be more recognizable than its auditory representation, the attention would be shifted to that visible feature.
As Moray (1969) stated, "in this theory recognition by the pattern recognizing mechanisms of the brain of the precise nature of an incoming signal occurs at an earlier stage than that at which the observer is conscious of the nature of the signal. Pattern recognition precedes conscious perception and is not identical with it" (p. 34). Other theories have been postulated by Reynolds (1964), Egeth (1967) and Neisser (1967). However, Moray (1969), after reviewing them, suggested they are less precisely stated than those already discussed and are also of less importance.

The three theories discussed above can be considered unimodal in that they suggest only one dimension can be attended to at one time. Broadbent's theory would appear to allow switching between channels at a rate of about 0.25 seconds (Moray, 1969, p. 30), but this would still be considered a unimodal response to bimodal stimuli. In fact Broadbent (1956) found there was a decrease in performance of a visual task when an auditory task containing much information was simultaneously presented. Conversely, results of psychophysical experiments lead to the implication that often two dimensions of a stimulus, or two stimuli, may be better than one (Moray, 1969, p. 103). There are a number of these studies, some of which have been reviewed by Garner (1962), which show that more information is transmitted when the dimensionality of the input is increased. A number of the methodology comparisons, such as Moores, Weiss, and Goodwin (1973), seem to show similar results. One reason for this apparent disparity in results may be in the nature of the studies and the type of task and stimuli that were used. For example, Treisman (1964) has shown that linguistic features of the messages affect attention. She noted a very close relation between performance in selective listening tasks and the information content of the stimulus material (Moray, 1969, p. 59).
Memory may also be a crucial factor in the integration of simultaneous cross-modal stimuli, as shown in a study reported by Moray (1969) in which a warning signal was effective even if presented after the target signal. A further factor may be the nature of the response task. As Broadbent (1958) noted, "The more complex the response, the more the interference with the reception of information" (p. 31). Another possible reason for the apparently disparate results was offered in a paper (discussed in greater detail below) by Norman and Bobrow (1975). They suggested that almost all processes will have regions which are resource limited and regions that are data limited. That is, performance on a task may be limited either by the amount of cognitive processing capacity required or by the quality and quantity of input data. They suggested, "A failure to recognize this distinction lies at the apparent discrepancy in many reported experiments: when one experimenter reports interfering effects of one task upon the performance of another and a second experimenter finds no interference, the difference can most simply be traced to the fact that one worked within the region where both functions were resource limited whereas the other did not. Alternatively, as discussed above, measures taken may reflect varying data properties, and not be true performance measures" (p. 54). Thus studies which show no increase in performance under multimodal communication may simply reflect data limitations and not necessarily processing problems.

A good summary of the theories discussed above is offered by Moray (1969, p. 107). The several theories which have been offered to explain the limits on our ability to handle information have essentially been concerned with the generation of responses to one or more incoming messages. Competition is between messages received in parallel.
Thus Broadbent (1958) originally postulated selection by a filter which occurred fairly peripherally; Treisman (1966) placed selection somewhere along the input pathways but before the recognition system; and Deutsch and Deutsch (1963) placed it in the recognition system itself but prior to conscious perception. The common feature of all the theories is that signals can be received in parallel up to a certain point, at which the information channel narrows and after which there is a reduction in the information rate.

Synthesizing these theories in the light of further research, Moray (1969) suggested a version of selective attention, that is, a modified version of Broadbent's (1958) original hypothesis. Moray suggested that a complex signal passes through the receptors and associated structures, which results in its analysis and recoding into a set of signals which occupy different loci in input space. That is, features such as color, brightness, loudness, or pitch are identified, and this recoding produces a set of signals which are now in the internal language of the nervous system. Thus, although the signals may represent different external stimuli, they may be cross-correlated for similarity. Moray did not specify the criteria for similarity but rather suggested, "cross-correlation means those which are similar may be subsequently recoded into a single message, thus reducing the number of loci in input space which are carrying information and the load on the subsequent analysing systems" (p. 182). The cross-correlation is assumed to take place well below any level of conscious perception. Moray (1969, p. 182) further suggested that the postulated functions of reception, encoding and allocation to input space loci, and cross-correlation all seem to be necessary to any model of selective attention.
He further elaborated the theory and noted that there is some kind of pattern recognizer, the components of which have different and variable thresholds which vary as a function of signal probability, emotional value of signals, and other related factors. The connection between the firing of a recognizer and the conscious awareness of the occurrence of the stimulus is not clear. On the whole, however, Moray suggested the evidence seems to favor Deutsch's theory that conscious awareness is a response to the output of the recognizers, and probably conscious awareness is a global property of the interaction of many parts of the brain. The questions as to whether some incoming messages are attenuated or whether certain responses are selected remain unanswered and controversial (Moray, 1969, p. 183). However, he stated in summary, "Overall, it seems that, with minor changes, Broadbent (1958) was right" (Moray, 1969, p. 193).

The theories discussed above have several implications for the current study. In the task of language perception via a combined or simultaneous method, for example, the various stimuli (sign, speechreading, audition) would represent the components of the complex signal passing through the receptors. This signal would then be analyzed for cross-correlation and a single linguistic message would result. Thus for hearing impaired subjects, their "recognizers" could have a lower threshold for audition if their HTL were lower or if they had received auditory training. The same would also hold true for training in speechreading. Thus one might logically expect a positive correlation between a measure of visual-spatial integration (such as the Raven Progressive Matrices) and success in speechreading. Moray's (1969) theory of cross-correlation may also explain why scores for bisensory speech reception were not equal to the sum of the visual and auditory scores as reported by Ling (1976, p. 50).
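The cross-correlation idea above, under which redundant signals from different modalities collapse into a single linguistic message, can be sketched as a toy program. The similarity criterion used here (identical word content) is an invented stand-in, since Moray did not specify the criteria for similarity.

```python
# Toy sketch of Moray's (1969) cross-correlation notion: recoded signals
# from different modalities that carry the same content are merged into a
# single message, reducing the number of loci in "input space" and hence
# the load on subsequent analyzing systems.

def cross_correlate(signals):
    """signals: list of (modality, word) pairs after peripheral recoding."""
    merged = {}
    for modality, word in signals:
        merged.setdefault(word, []).append(modality)
    # Each distinct word now occupies a single locus, tagged with the
    # modalities that supported it.
    return [(word, modalities) for word, modalities in merged.items()]

loci = cross_correlate([("audition", "ball"),
                        ("speechreading", "ball"),
                        ("sign", "ball")])
print(loci)  # one merged message supported by three modalities
```

This also suggests why bisensory scores need not equal the sum of the unisensory scores: redundant signals merge into one message rather than adding independent information.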
As was previously mentioned, the theories cited above are essentially unimodal. A different theoretical position is espoused by proponents of a bisensory approach to education. The bisensory position is based on the acceptance of interactions between data obtained through various modalities (Clarke, Kendall, & Leslie, 1978). As Ettlinger (1967) has suggested, sensory information is thought to be processed at two levels, since neuroanatomically the sensory areas of the brain are interconnected with interconnecting sensory neurons. According to the processing model proposed by Ettlinger (1967), the first level of processing involves reception of sensory information via separate neural systems for each modality, with each responsible for recognizing the sensory attributes relative to its particular function. To this point the theory is not dissimilar to Broadbent's (1958) filter theory. The second level of Ettlinger's (1967) model involves a system which receives the specific sensory perceptions from each subsystem and integrates them to form the total perception. This model is somewhat different from the Broadbent model in that it allows for the simultaneous integration of multisensory signals, whereas the filter theory holds that multisensory information is processed successively, relying on the short-term memory store. Neither position, however, precludes the possibility of multisensory information being processed. In fact Broadbent (1956), in a study with normally hearing subjects, stated, "It is obvious that a considerable degree of success is achieved in the two bisensory conditions; our first conclusion is, therefore, that the eye and ear can to some extent receive different stimuli simultaneously" (p. 147), and further, "it is clear that the two senses together are certainly no worse than either alone" (p. 148). These results were all obtained from normally hearing subjects.
The questions addressed by the present study concern the extent to which these same results may or may not apply to a hearing impaired population for both auditory-visual reception and visual-visual modes.

Norman and Bobrow (1975) have suggested that, like any information processing system, the human processing system is resource-limited. When several processes compete for the same resources, eventually performance deteriorates. For the present study the question would be whether adding additional sources of information or adding modalities leads to an overload which causes a degradation in task performance. Norman and Bobrow (1975) further suggested that because of the forced allocation of processing resources the degradation in performance is usually gradual, unless there is some critical amount of resource required such that when it is unavailable the gradual decline becomes a catastrophic failure in performance. Performance on a task may be resource-limited or data-limited according to Norman and Bobrow (1975). That is, when an increase in the amount of processing resources, such as psychological effort, results in improved performance, the performance is said to be resource limited. On the other hand, if the amount of data is limited, as in the case of reception of speech by a hearing impaired person, such a task is said to be data limited. Two forms of data limitation are suggested. Signal data-limits are those in which performance is limited by the quality of the input data signal, as in an acoustic signal-to-noise ratio. Memory data-limits are those limitations which occur when neither an increase in the quality of the data nor an increased allocation of processing resources will improve performance. An example of memory data-limits might be the recall of a sentence when all other factors have been maximized.
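The resource-limited versus data-limited distinction above can be sketched as a one-line toy function. The numerical scale (0 to 1) and the specific values are invented for illustration; Norman and Bobrow describe performance-resource functions qualitatively, not with these numbers.

```python
# Toy sketch of Norman and Bobrow's (1975) distinction: performance rises
# with allocated processing resources only until the task becomes
# data-limited, after which additional effort buys nothing.

def performance(resources, data_limit):
    # Resource-limited region: performance grows with allocated resources.
    # Data-limited region: capped by signal quality or memory data limits.
    return min(resources, data_limit)

# A degraded acoustic signal imposes a data limit of 0.5 in this sketch:
print(performance(0.3, 0.5))  # still resource-limited; effort helps
print(performance(0.9, 0.5))  # data-limited; extra effort no longer helps
```

On this reading, a study finding no benefit from a multimodal condition may simply be operating in the data-limited region of the curve, which is the interpretation the text draws from Norman and Bobrow.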
Adapting the nomenclature used by Norman and Bobrow (1975), performance on a multisensory speech reception task might be represented as: P = R(p-dl) + R(s-dl), where R(s) = L - R(p) and L is the limit of processing resources available. In other words, performance (P) depends on the resources of the primary channel (R(p)) minus the data limitations (dl), plus the resources available to process the secondary channel (R(s)), again less the data limitations. Thus the picture of information processing presented by Norman and Bobrow (1975) is somewhat different from and perhaps less complex than the theories described earlier by Moray (1969) or Ettlinger (1967). The former do not speak of stages of processing nor of levels of processing, but rather suggest the existence of a pool of processing resources which can actively pursue their analyses at a rate determined in part by the allocation of those resources. They stated, "All that is needed to use these ideas is the ability to distinguish between resource and data-limited operations and to know at any time which one is taking place" (p. 62).

The discussion to this point has been intended to show the importance of language and speech in the fundamental human process of communication. The description of the nature of speech provides some indication of the potential difficulty in auditory speech reception faced by persons with an impaired auditory channel. Multisensory speech reception was shown to be fraught with difficulties, leaving some form of manual communication as the most effective alternative for some hearing impaired persons. Even then, it is an unfortunate fact that, "Not all deaf children successfully acquire language facility by the methods currently used in schools for the deaf" (Lowenbraun, Appelman, & Callahan, 1980, p. 4). Throughout this chapter potential sources of variance between individuals have been indicated.
The examination of processing theories indicates that cognitive function is at once a most fundamental and yet a most complex variable, and one which certainly affects performance on a language reception task. Although sometimes forgotten, the idea of examining individual differences or sources of variance is not new or unique to this study. For example, Menyuk (1974) noted that if, as was demonstrated in research by Conrad (1972), different types of confusions were related to individual differences in the utilization of information, rather than educational experience, this meant that all deaf children did not respond in the same way to the same training methods. If this were so, it was important to search for the reasons. Again, as stated by Lowenbraun, Appelman, and Callahan (1980), "Part of the language teacher's job is to understand and, if possible compensate for these individual differences that interfere with the language-learning process" (p. 4).

Thus it was with this background and within this framework that the present study was undertaken. It had essentially two major components. First, the study measured the reception of verbal language through various modalities in order to determine the relative efficiency of multimodal communication for the population sample. Given that not all subjects performed equally well on the language reception tasks, the second component of the study involved an analysis of independent variables which may account for variance on the reception tasks under each particular mode.

Hypotheses

Background and Rationale for Hypotheses

The discussion in this chapter has indicated some of the difficulties hearing impaired students may encounter in trying to receive spoken language. For some, it was suggested, alternate modes of communication may prove more effective. A variety of communicative alternatives was discussed.
The literature indicates a trend toward the use of total communication in educational programs for hearing impaired students. Yet, as was suggested, there is conflicting evidence as to whether unisensory or multisensory communication modes lead to better language reception by hearing impaired students. For this reason, the present study compared five communication modes, with the first hypothesis stated in non-directional terms. Also, in the present study, baseline data under the unisensory communication modes were established, thereby enabling direct comparison with the multisensory modes. The literature review discussed several theories of selective attention and cognitive processing. These theories give rise to questions regarding whether the addition of sensory information leads to an increase or decrease in performance on language reception tasks. Again, by establishing baseline data under the unisensory modes, a comparison of those data with scores obtained under multisensory modes was possible. Also, the discussion of the theoretical "pattern recognizers" raised the question of whether certain individual characteristics might affect performance on the language reception task. Generally, the discussion in chapter two was intended to outline the wide variety of factors which may affect hearing impaired students' reception of language. These factors were considered in the development and design of the present study, and were the basis for the second hypothesis.

Based on the review of the literature as outlined in chapter two, a list of subject variables was developed. These variables were considered to be potential sources of variance between subjects' performance on the language reception tasks. A number of the variables are biological and unalterable. Others were considered as factors which might affect subjects' communication environment.
A third category was made up of factors relating to educational environment or educational experience of the subjects. The fourth category comprised those factors which might be termed cognitive. Beginning with the second category, each was considered to be a product of, or at least affected by, the factors in the categories preceding it. Thus, for example, subjects' use of hearing aids was considered to be a factor which might affect language ability, whereas the inverse relationship clearly was not logical. The dependent and independent variables of interest in the present study are illustrated in Figure 2.

Dependent Variables
Y1  Oral Mode Score
Y2  Oral-Aural Mode Score
Y3  Manual Mode Score
Y4  Oral-Aural-Manual Mode Score
Y5  Aural Mode Score

Independent Variables
Category I
X1  Age
X2  Hearing Threshold Level
X3  Age at Onset
Category II
X4  Age at First Fitting of Aids
X5  Use of Aids
X6  Language of the Home
X7  Hearing Impaired Family
X8  Place of Residence
X9  Class Communication
X10 Home Communication
X11 Previous Communication
Category III
X12 Total Years in School
X13 Present School
X14 Previous Schools
X15 Present Class
X16 Previous Classes
Category IV
X17 Syntactic Ability
X18 Vocabulary Score
X19 Comprehension Score
X20 Visual-Spatial Score

Figure 2. Variables (dependent and independent) considered in present study.

Operational Statement of Hypotheses

Based on the literature, and with the rationale outlined above, the research questions stated in chapter one were operationalized and the following hypotheses were developed:

1. H0: There are no differences among the five communication modes in terms of the number of words received and correctly recalled; that is, μj - μj' = 0 (where μj is the mean score for mode j and j' is an alternate mode).
H1: There are differences in the number of words received and recalled under the five modes; that is, μj - μj' ≠ 0 for at least one pair of modes.
Given differences among modes, the second set of hypotheses examined factors which may account for the variance in performance. Specifically, the variables in the four categories described earlier were examined for their contribution under each of the five modes. For the sake of clarity, the second hypothesis is stated verbally rather than in mathematical terminology:

2. H0: For each of the five modes: a) category I variables do not account for any of the variance in performance; b) given any effect of category I variables, category II variables do not account for any variance in performance; c) given any effects of category I and II variables, category III variables do not account for any variance in performance, and; d) given any effects of category I, II and III variables, category IV variables do not account for any variance in performance.
H1: For each of the five modes: Variables from categories I, II, III, IV will account for variance in performance on the reception task.

Chapter Summary

This chapter contains definitions of relevant terminology, a review of literature related to the present study, and statements of the hypotheses which were developed as a result of the literature review. Chapter three contains a detailed description of the subjects and the methodology employed in testing the hypotheses.

CHAPTER THREE
METHODOLOGY

Subjects

The subjects involved in the study constituted a sample of convenience drawn from that population of hearing impaired students originally identified in a demographic survey by Clarke, Leslie, Rogers, Booth and Horvath (1977) and from additional students subsequently enrolled in school but meeting the original criteria. Those criteria, with modification for age, were: 1. At least five years of age on December 31, 1981 and not in attendance at a post-secondary institution who are 2.
i) known to have a hearing loss with some sensorineural component, ii) known to have been fitted with a hearing aid, and/or iii) in need of special educational treatment or program because of hearing impairment. In addition, subjects included in the present study met the following criteria: 3. physically able to copy (print or write) at least a four monosyllabic-word sentence, 4. average or better (corrected) visual acuity, 5. no reported additional physical disability, 6. score above chance level on the Screen Test of the Stanford Achievement Test - Special Edition for Hearing Impaired, 7. able to complete the Screen Test of the Test of Syntactic Abilities (Quigley et al., 1978), and 8. enrolled in a program using Signed English. Students from three school districts and the provincial school for the deaf were involved. No attempt was made to involve students from programs with fewer than five students. This decision was in large part based on financial and time constraints.

Dependent Variable

The dependent variable in this study was the amount of language received through each of five communication modes: speechreading (oral); speechreading plus audition (oral-aural); signs (manual); speechreading plus audition plus signs (simultaneous), and; audition (aural). The dependent variable was measured as the number of words received and correctly recorded in the appropriate space in an answer booklet. The stimulus material was connected meaningful language (sentences) presented from video tape via a television monitor. More detail is given in the description of the experimental task.

Independent Variables

The subject variables of interest included: 1. age, 2. HTL, 3. age at onset of hearing loss, 4. age at first fitting with amplification, 5. use of amplification, 6. language of the home, 7. additional hearing impaired family, 8. place of residence, 9. communication in the classroom, 10. communication at home, 11. previous communication, 12.
total years in school, 13. present school, 14. previous school, 15. present class setting, 16. previous class setting, 17. syntactic ability, 18. vocabulary score, 19. language comprehension score, 20. visual-spatial integration score. Variables 1-3 constituted category I as described previously in chapter two. Variables 4-11 were included in category II; 12-16 in category III, and; variables 17-20 constituted category IV. These variables were included in the study as possible factors accounting for between-subject variance on the dependent measures. Measures of independent variables were in part obtained from responses to questionnaires obtained from students' classroom teachers and from examination of students' school records. A copy of the questionnaire is included in Appendix A. Many of the responses are self-explanatory. Age was calculated in months as of September 1, 1982. Hearing threshold levels were calculated as the better ear pure-tone average at 0.5, 1.0, and 2.0 kHz (ANSI). Hearing impaired family was recoded as 1 = yes and 0 = none. The variables dealing with home, class, and previous communication correspond to questionnaire numbers 12(a)(i), 12(b)(i), and 13(a)(i) respectively. The variables dealing with total years in school and type of school/class setting correspond to questionnaire numbers 14 through 14(d). Syntactic ability was measured as subjects' score on the Screen Test of the Test of Syntactic Ability (TSA) (Quigley, Steinkamp, Power, & Jones, 1978). Vocabulary and comprehension scores were those for the subtests of the Stanford Achievement Test for Hearing Impaired Students (SAT-HI) (1972). The visual-spatial integration score was that obtained by subjects on the Raven Progressive Matrices. It should be noted that the study was part of an ongoing series of programmatic studies. The data for this research were gathered shortly after testing for an earlier phase was completed.
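As a concrete illustration of the HTL calculation described above, the better-ear pure-tone average could be sketched as follows. This is a minimal sketch for illustration only; the function names are invented, and the study's actual computation simply followed the ANSI convention cited.

```python
def pure_tone_average(thresholds_db):
    """Mean of one ear's thresholds (dB HL) at 0.5, 1.0, and 2.0 kHz."""
    return sum(thresholds_db) / len(thresholds_db)

def better_ear_pta(left_db, right_db):
    """Better-ear PTA: the lower (better) of the two ears' averages."""
    return min(pure_tone_average(left_db), pure_tone_average(right_db))

# Example: left ear 95/100/105 dB and right ear 90/95/100 dB at
# 0.5/1/2 kHz give a better-ear PTA of 95.0 dB.
```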
It was not feasible, therefore, to retest subjects on all of the standardized tests. For this reason the scores for the Raven Progressive Matrices, SAT-HI, and TSA were those which had most recently been measured, and complete data were not available for all subjects.

Development of Experimental Materials

One of the purposes of this study was to compare the amount of language received when presented to hearing impaired individuals through five different modes. The communication modes were: speechreading (oral), audition (aural), signs (manual), speechreading plus audition (oral-aural), and speechreading plus audition plus signs (simultaneous). The experimental task involved presenting stimulus sentences to subjects via each of the five modes and having them write their responses. The design required four parallel sentence lists, one each for oral-aural, simultaneous, and manual modes, and one list for oral (speechreading) and aural (audition only) modes. The actual procedure used is described in greater detail later in this chapter. Since it has been reported (Lloyd & Price, 1971) that unfamiliar vocabulary items and sentence patterns can bias speechreading scores, the materials used in the task were developed specifically for the present study. The stimulus sentences to be used were controlled for lexical content, viseme content, phrase structure, and syntax. Thus the construction of the lists involved a number of steps. First, all the monosyllabic level one vocabulary items from the Ling and Ling (1977) Basic Vocabulary and Language Thesaurus were listed. In the second step a 3 x 5 grid was developed, consisting of three vowel-viseme categories (front, mid, back) and five consonant-viseme categories (labio-dentals, rounded labials, apico-dentals, bilabials, and obscure consonants). These categories are based on the literature regarding visibility of speech sounds as was reviewed in chapter two.
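The 3 x 5 grid amounts to a mapping from (vowel-viseme, consonant-viseme) category pairs to lists of lexical items. A minimal sketch follows; the example classification of "beet" is hypothetical, not taken from the study's Appendix B.

```python
# Category labels matching the 3 x 5 grid described above.
VOWEL_CATEGORIES = ("front", "mid", "back")
CONSONANT_CATEGORIES = ("labio-dental", "rounded labial",
                        "apico-dental", "bilabial", "obscure")

def place_in_grid(grid, word, vowel_cat, consonant_cat):
    """File one lexical item under its (vowel, consonant) viseme cell."""
    if vowel_cat not in VOWEL_CATEGORIES or consonant_cat not in CONSONANT_CATEGORIES:
        raise ValueError("unknown viseme category")
    grid.setdefault((vowel_cat, consonant_cat), []).append(word)
    return grid

grid = {}
place_in_grid(grid, "beet", "front", "bilabial")  # hypothetical classification
```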
The lexical items were then placed within the appropriate category on the grid, based on their vowel and consonant visemes. The grid is presented as Appendix B. The third step in developing the experimental materials involved constructing the stimulus sentences, using the lexical item pool developed in the previous steps. Each of the four lists contained a total of 12 sentences made up of 6 four-word sentences, 3 five-word sentences, and 3 six-word sentences. In addition to the variable of length, the lists were made up of different syntactic patterns, following those described by Streng (1972) and Blackwell, Engen, Fischgrund and Zarcadoolas (1978) as being the basic patterns of sentences. Each list contained the following:

6 x Pattern II:  NP + Aux + VP + NP     e.g. John drank his juice.
2 x Pattern III: NP + be + NP           e.g. That girl is my friend.
2 x Pattern IV:  NP + be + Adj          e.g. The grass is green.
2 x Pattern V:   NP + be + Adv(place)   e.g. The boy is in the car.

Where NP = Noun Phrase; VP = Verb Phrase; Adv = Adverb; Aux = Auxiliary; t = transitive, and; be = a 'be' verb such as is, are, am, was, were. In addition to length and syntactic pattern, the third sentence-related variable considered was phrase structure. Each list was made up of four statements, four requests, and four questions. The questions were developed by applying the appropriate T/Wh or T/yes-no transformation to the basic sentence pattern. Thus, for example, "The food is hot.", a pattern IV (NP + be + Adj) statement, became "Is the food hot?" in question form. Finally, each sentence list contained an equal number of proper nouns (names). The sentence lists are included in Appendix C. The lexical item pool was not necessarily representative of the normal distribution or frequency of occurrence of words in daily use in the schools. The use of level one vocabulary, however, minimized the possibly confounding factor of subjects' unfamiliarity with the material.
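The stated length constraints (12 sentences per list: 6 of four words, 3 of five, 3 of six) lend themselves to a mechanical check. A sketch, assuming each list is held as a list of sentence strings:

```python
def list_composition_ok(sentences):
    """True if a stimulus list matches the design: 12 sentences,
    six of four words, three of five words, and three of six words."""
    lengths = [len(s.split()) for s in sentences]
    return (len(sentences) == 12
            and lengths.count(4) == 6
            and lengths.count(5) == 3
            and lengths.count(6) == 3)

# A dummy list with the right shape passes; an 11-sentence list fails.
dummy = (["John drank his juice."] * 6
         + ["That girl is my friend."] * 3
         + ["The boy is in the car."] * 3)
```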
It should also be noted that the distribution of visemes was not equal across all lists, nor between the lexical item pool and the lists. Each list did, however, contain an equal number of front, mid and back vowel items and an equal number of words with obscure consonants (those which cannot be identified visibly). In total, each list contained more visible consonants than might be expected in normal classroom language since Jeffers and Barley (1975) reported that more words are indistinguishable visibly than are distinguishable. Thus the stimulus sentences, if biased at all, were biased in favor of speechreading rather than against. As stated earlier and described in greater detail in the Procedure section, the within-subjects nature of the research design required parallel sentence lists. Gulliksen (1950) stated "Parallel tests were defined as tests with equal means, standard deviations, and intercorrelations: M1 = M2 = ... = MK, s1 = s2 = ... = sK, r12 = r13 = ... = rK(K-1)" (p. 36). Ferguson (1981) stated that, "Test content, type of item, instructions for administering, and the like should be similar for different forms" (p. 437). In addition, Winer (1971) suggested, "pilot studies, guided by subject matter knowledge, should, whenever possible, serve as the basis for formulating the model under which the analysis is to be made" (p. 697). Therefore, in order to measure the degree to which the test lists were parallel, to test the internal consistency of the lists, and in order to test the appropriateness of the experimental procedures, a pilot test was conducted as a precursor to the main study.

Pilot Study

Pilot Subjects

The subjects involved in the pilot study were all enrolled at the provincial school for the deaf or one of the off-campus classes of that school. Criteria for inclusion were the same as those described earlier for the main study.
Every effort was made to test as many of those students as possible to ensure a large sample. In total, 81 students were tested.

Pilot Test Material

The development and content of the four sentence lists was described in the preceding section. The sentences from the lists were arranged in random order, except that four of the simpler four-word sentences were placed at the beginning of the first and second half of the list in an attempt to maximize the likelihood of success for the subjects. In an attempt to ensure uniformity across all trials, the sentences were video recorded. Presentation of the list of 48 sentences via sign only, speech (oral-aural), and simultaneous speech plus sign was then video recorded. The experimenter, a trained teacher of the hearing impaired, spoke and/or signed each sentence, allowing a 30 second response interval between each. The video screen faded to blank during this response period. Each sentence was preceded with "Get ready ... number ___", presented via the appropriate communication mode.

Apparatus

The video taping was done using the facilities of the Audio-Visual Services Department of the Faculty of Education at the University of British Columbia. The recording was done in a studio with optimal lighting and sound quality and free from background noise. The camera was positioned at eye level, as close as possible in order to record the experimenter's face and upper torso. Each of the three presentations was recorded on a 3/4 inch VHS master tape. This tape was then copied to a 1/2 inch VHS format for use in testing. The testing apparatus consisted of a Sony color television monitor (CVM 2150, Trinitron 21 inch) coupled with a Panasonic VHS cassette recorder (NV 8310). The tape utilized was a Scotch T120 video cassette. Image size, contrast, brightness, and audio volume were held constant across all trials.
Pilot Procedure

Subjects were categorized according to age (8-10, 11-13, 14-16, and 17+) and hearing loss (≤95dB, >95dB). From this sample subjects were first assigned to the speech (oral-aural) mode in pairs, with one from each of the HTL groups. Subjects were assigned in this manner since it was considered that reception via this mode would be more likely to be affected by differences in hearing levels than either of the modes with a manual component. The remaining subjects were then randomly assigned to the simultaneous and sign only modes. These three modes were pilot tested because they included all five of the modes to be tested in the main study. In other words, if the sentences were found to be parallel in the oral-aural mode it is unlikely they would be found to be not parallel in either oral or aural mode alone. The final configuration for the pilot test is illustrated in Table 1.

Table 1
Age and HTL of Pilot Test Subjects

                          Mode
              Speech         Sign        Simultaneous
Age Group   ≤95dB  >95dB  ≤95dB  >95dB  ≤95dB  >95dB   Total n
8-10           1      0      1      3      0      1       6
11-13          0      1      0     10      5      9      25
14-16          7      6      1      8      3      5      30
17+            6      7      1      2      1      3      20
Subtotals     14     14      3     23      9     18
Totals           28            26             27      n = 81

Experimental task. Students were tested in small groups (maximum size 8) in a dimly-lit and noise-free room. For each trial, subjects were seated at desks placed in an arc at a maximum distance of 12 feet from the television monitor. Gates (1970) reported that a commonly used standard for viewing distance is 12 times the width of the television screen and for viewing angle not more than 45 degrees on either side of the line perpendicular to the face of the screen. For each trial the television monitor was placed at eye level to the seated subjects and the viewing angle was less than 45 degrees. Each subject was given an answer sheet with printed instructions and an example on the cover page.
For each of the test items, a number of blank spaces corresponding to the number of words in the sentence appeared in a line beside the sentence number. For example:

1. ___ ___ ___ ___
2. ___ ___ ___ ___ ___

The experimenter explained:

This experiment will try to measure how well students can remember and write down sentences. You will need to pay close attention because some sentences are signed, some are spoken, and some are signed and spoken. Some have 4 or 5 or 6 words. Some sentences have names of people which may be fingerspelled. Watch and listen to each sentence and then write it on the Answer Sheet. If you do not know what a word was then you should guess. Be careful to write the words in the right spaces. Have fun.

These same instructions were also presented in printed form on the test booklet. At least three examples were given, and any questions were answered. Subjects were assured that ample time would be given between each item for them to write their responses. Appendix D contains an example of the test booklet. Each sentence was presented via the appropriate mode and the video screen faded to blank. Subjects recorded in their answer booklets what they had seen and/or heard. Thirty seconds after the previous sentence had ended, the experimenter appeared on the television screen and stated "Get ready ... number ___" and then gave the sentence via the mode being tested. This process was repeated for all 48 sentences. After the first half of the list had been presented subjects were given a five minute rest period. Scoring of responses was very rigid with only correct words in correct blanks counted. Spelling errors and errors of letter reversal were accepted as correct for all words other than proper nouns, following similar criteria to those of Clarke and Ling (1976). For example, "sopt" was considered acceptable for "stop". However, deletion of any morpheme ("ed" or "s" for example) caused a word to be marked as incorrect.
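The scoring rule just described can be approximated in code. This is a simplified sketch of the Clarke and Ling (1976)-style criteria, not the study's actual scoring procedure: a response counts as correct when it matches the target exactly or, for common words only, when it contains the same letters rearranged, so "sopt" passes for "stop" while a deleted morpheme such as "drink" for "drinks" fails.

```python
def word_correct(response, target, proper_noun=False):
    """Position-matched word scoring with spelling tolerance for common words."""
    if response == target:
        return True
    if proper_noun:
        return False  # proper nouns had to be spelled exactly
    # Letter-reversal tolerance: same letters, any order. A deleted
    # morpheme ("drink" for "drinks") changes the letter set and fails.
    return sorted(response.lower()) == sorted(target.lower())

def sentence_score(responses, targets, proper_flags=None):
    """Number of words correct in the correct blanks."""
    proper_flags = proper_flags or [False] * len(targets)
    return sum(word_correct(r, t, p)
               for r, t, p in zip(responses, targets, proper_flags))
```

Under this sketch, `sentence_score(["sopt", "the", "car"], ["stop", "the", "car"])` yields 3.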
Individuals obtained a score for each sentence based on the number of words correct. Thus the total possible under each list was 57.

Pilot Test Data Analysis

The first analysis of the data gathered in the pilot test utilized the LERTAP (Nelson, 1974) program. Each of the four lists was treated as a 12-item subtest for which means, standard deviations, and correlation matrices were calculated. From the analyses available through the LERTAP program, some indication could be obtained regarding the internal consistency of the four sentence lists (subtests). Subjects' scores for each list were then utilized as dependent variables in an analysis of variance (ANOVA) using the program BMD P2V - Analysis of Variance and Covariance with Repeated Measures. In this analysis, mode was treated as the repeated measure or trial factor, and the number of words correctly recorded served as dependent variable. Results of these analyses are presented in the next section.

Pilot Test Results

The results of the LERTAP (Nelson, 1974) analysis are presented in Table 2.

Table 2
Results of Pilot Test

A. Presentation: Speech (Oral-Aural)
                          Correlation
List    Mean    S.D.     1      2      3      4
1       12.96   13.28  (.98)
2       12.32   10.79  0.920  (.96)
3       11.50   10.25  0.949  0.926  (.97)
4       12.29   10.22  0.952  0.872  0.891  (.96)
Number of subjects = 28        Possible score = 57

B. Presentation: Sign
                          Correlation
List    Mean    S.D.     1      2      3      4
1       24.42   12.21  (.96)
2       28.58   12.68  0.922  (.97)
3       26.38   11.81  0.897  0.912  (.96)
4       26.00   12.75  0.900  0.922  0.902  (.97)
Number of subjects = 26        Possible score = 57

C. Presentation: Simultaneous
                          Correlation
List    Mean    S.D.     1      2      3      4
1       25.48   16.32  (.98)
2       27.96   16.63  0.933  (.97)
3       25.33   16.86  0.949  0.927  (.97)
4       24.11   16.23  0.954  0.939  0.928  (.98)
Number of subjects = 27        Possible score = 57

(Note: Internal consistencies reported on main diagonal.)
The results summarized in Table 2 indicated a very high positive correlation between the lists under each of the presentation modes. In discussing test reliability, Nelson (1974) stated "A 'reliability coefficient' reflects the accuracy of the measuring process: the higher the value of the coefficient, the greater the accuracy of the process" (p. 257). He also stated that "Internal consistency is an estimate of the extent to which each test item taps whatever the test is measuring" (p. 260). As shown along the principal diagonal of the matrices presented in Table 2, the internal consistencies computed using Hoyt's ANOVA (Hoyt, 1941) exceeded .90 in every case. These were well within the suggested limits of .85 to .90 (Nelson, 1974, p. 261). Thus there was some indication that under each of the three pilot test modes each of the lists was measuring the same thing. To further test the appropriateness of the four lists for use in the main study, scores obtained in the pilot and summarized in Table 2 were analyzed. The analysis involved a 4 (list) x 3 (mode) repeated measures analysis of variance. Results of the ANOVA are presented in Table 3.

Table 3
ANOVA for Pilot Data

Source          D.F.   Sum of Squares   Mean Square      F
Group              2        13907.07       6953.53    38.59
List               3          285.68         95.22     0.52
Group x List       6          218.14         36.35     0.20
Within Cells     312        56209.90        180.15

As summarized in Table 3, the F ratio for the effect of lists was found to be not significant. The data summarized in Table 3 also indicated that there were no significant interactions between the lists and the mode through which they were presented. There was, as would be predicted from the first hypothesis of the main study, a significant (p < .01) difference between the three groups (oral-aural, manual, and simultaneous mode). The nature of any between-group differences was not examined during the pilot study, however.
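Both quantities reported in this section can be reproduced from first principles: Hoyt's (1941) ANOVA reliability estimate is algebraically equivalent to Cronbach's alpha, and each F ratio is the effect mean square over the within-cells mean square. A minimal sketch, illustrative only (the study used LERTAP and BMDP):

```python
def cronbach_alpha(item_scores):
    """Internal consistency from per-item score vectors (one vector per
    item, one entry per subject); equivalent to Hoyt's ANOVA estimate."""
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in item_scores)
    totals = [sum(item[s] for item in item_scores) for s in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))

def f_ratio(ss_effect, df_effect, ss_error, df_error):
    """F statistic: effect mean square over error mean square."""
    return (ss_effect / df_effect) / (ss_error / df_error)

# Recovering the Group effect from the sums of squares reported in
# Table 3: f_ratio(13907.07, 2, 56209.90, 312) is about 38.6.
```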
Based on the results of the pilot study the four sentence lists designed specifically for the present research were accepted as parallel. The pilot also indicated that the instructions needed to be modified. For the main study, the instructions included a reference to the fact that proper nouns were used in some of the sentences.

Main Study Procedure

Experimental design. The design for this study was one in which stimulus sentences were presented via each of five communication modes to each of the subjects. Thus the study utilized a within-subjects design. The design chosen was a modified Latin-square (LS-4) design. Campbell and Stanley (1963) described the Latin-square arrangement as one typically employed in the counterbalancing which is used to enhance precision and experimental control by entering all subjects into all treatments (p. 50). It was considered important to minimize any order effects together with any effects associated with a particular sentence list, and the Latin-square was seen as the most appropriate design. Using the procedures described by Kirk (1968) for randomization and construction of Latin-squares, the following randomization plan was developed:

Table 4
Randomization Plan for (Modified) Latin Square Design

          Test Order Groups
List    G1     G2     G3     G4
L1      M2     M1     M4     M3
L2      M1     M4     M3     M2
L3      M4     M3     M2     M1
L4      M3     M2     M1     M4
*       M5     M5     M5     M5

M1 = Oral Mode (Speechreading)
M2 = Oral/Aural Mode (Speechreading & Audition)
M3 = Manual Mode (Signs)
M4 = Simultaneous Mode (Speechreading & Audition & Signs)
M5 = Aural Mode *(sentences used for this mode were those which had previously been presented via the oral-only mode)

The aural-only mode occurred in the final position in all cases. The design was modified in this manner since it was considered that, given the hearing loss of the subjects, this mode was most likely to cause anxiety or frustration. Should it occur in the initial position it might have jeopardized further effort and cooperation.
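The cyclic structure of the randomization plan can be generated programmatically. In this sketch the mode labels follow Table 4, and the aural-only mode (M5) is appended to every group's order, reflecting the modification described above:

```python
def cyclic_latin_square(first_row):
    """n x n Latin square: each row is the previous row rotated left."""
    n = len(first_row)
    return [[first_row[(r + c) % n] for c in range(n)] for r in range(n)]

# Rows are list positions L1-L4; columns are test order groups G1-G4.
square = cyclic_latin_square(["M2", "M1", "M4", "M3"])

# Each group's presentation order, with the aural-only mode fixed last.
group_orders = [[square[r][g] for r in range(4)] + ["M5"]
                for g in range(4)]
```

Every mode appears exactly once in each row and each column of `square`, which is the counterbalancing property the design relies on.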
The list used for the aural mode presentation was that which had previously been used in the oral mode presentation, since there were no overlapping cues from the two modes. The procedure used to develop the sentence lists was described earlier. Each of the lists was randomly assigned to one of the four list positions (L1 - L4) within the design. Subjects were assigned to test order groups (G1 - G4) following the randomization procedure outlined by Lynch and Huntsberger (1976). Rather than assigning individuals, however, it was necessary, because of school administrators' requests, to randomly assign intact class groups to a particular order. While this did not allow total randomization, the classes assigned in this manner contained no more than five subjects. In schools where students were mainstreamed for a large proportion of their classes, they were assigned to test order groups as their schedules allowed.

Apparatus. To ensure uniformity across trials, the stimulus sentences were video recorded as described in the previous section of this chapter. After the pilot test showed the sentence lists were appropriate for use in the study, the tape was edited so that sentences once again occurred in the same order on each of the lists. For each trial the same playback system was used, as described in the pilot test procedure. The speechreading mode was accomplished by turning off the volume, thus showing only the oral movement. The audition only mode was accomplished by placing opaque tape on the screen to mask out the face of the speaker.

Scoring. The criteria for scoring were the same as those described for the pilot study. Individuals obtained a score based on the total number of words correct under each of the presentation modes. The possible total for each mode was 57 (the total number of words in each sentence list).
Individuals' scores for each item were stored in a computer file, and the LERTAP program (Nelson, 1974) was used to calculate scores for each mode.

Data Analysis Procedures

Results of the analyses are presented in chapter four. However, the analyses used are presented here as part of the overall description of procedures used in the study.

Differences between modes. The first analysis was an item level analysis, once again using LERTAP (Nelson, 1974) to compute the reliability of the measures. In the present study each of the four lists was treated as a separate subtest for the item level analyses to determine internal consistency. The second analysis was for the possible effect of treatment order. This was investigated by doing five one-way analyses of variance. The analysis was accomplished using the Statistical Package for the Social Sciences (SPSS) (Nie, Hull, Jenkins, Steinbrenner, & Bent, 1975) ONEWAY procedure. The five separate analyses were for dependent variable, mode score (oral, oral-aural, manual, simultaneous, aural), by independent variable, order (1-4). Multiple comparisons of the resultant means (mode x order) were obtained and analyzed using Tukey's (1953) HSD procedure. The third analysis was for significant differences among the five modes. This analysis utilized a repeated measures analysis of variance (ANOVA) as performed by the computer program BMDP: 2V (Dixon, 1982). In this analysis the scores under each of the five modes were entered as the dependent variable. Given a significant F, the means were compared using Tukey's (1953) HSD procedure.

Differences between subjects. The second question addressed by the research concerned the unique characteristics of subjects who performed at different levels under the various presentation modes. The method of analysis used to investigate variance in the dependent measures (mode scores) was a stepwise regression. The program employed was BMDP: 2R (Dixon, 1982).
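The order-effect screen corresponds to an ordinary one-way ANOVA of each mode score across the four test-order groups. A self-contained sketch of the F statistic such a run produces (the SPSS ONEWAY machinery and the Tukey HSD follow-up are not reproduced here):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over k independent groups,
    e.g. one mode's scores split by test order group (1-4)."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```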
The same procedure was followed for each of the five dependent variables. Only subjects with complete data were retained for this analysis (n=66). At the first step, category I variables were analyzed with an F to enter the regression equation set at 4.0 (the F value required to yield a probability level of .05 for 1/64 degrees of freedom). Thus for each mode, category I variables were examined to see if they contributed to the explanation of variance. In the second step, significant category I variables were forced into the equation, and category II variables were allowed to enter the regression equation if they made a significant contribution after the effect of significant category I variables. At the third step, significant variables from categories I and II were forced into the equation and category III variables were examined for significance. The process was repeated at the fourth step. Thus the final regression equation included only those variables from each category which had proven significant after the contribution made by significant variables from preceding categories. The analysis was conducted in this manner for several reasons. First, if variables were allowed to enter freely, the effect of high correlations may have caused some (such as syntax, for example) to enter first, thereby perhaps not allowing the effects of other variables to be measured. Such free entry may have statistical or methodological validity but, for the purposes of this study, it lacks logical or ecological validity. As was explained in the rationale for the second hypothesis, it is far more logical to expect that scores on a test of syntactic ability would have been influenced by factors such as age, onset, and HTL than vice versa. Thus the regression analyses were done in the stepwise manner described above.
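The category-by-category entry rule amounts to a hierarchical regression with a partial F-to-enter test. The sketch below, using NumPy, is illustrative of the logic rather than a reproduction of BMDP: 2R; `f_to_enter` computes the partial F for one candidate variable after any forced predictors, for comparison against the study's criterion of 4.0.

```python
import numpy as np

def r_squared(y, X):
    """R^2 of an OLS fit of y on X (a constant column is added).
    X is an (n, k) array, or None for the intercept-only model."""
    ones = np.ones((len(y), 1))
    X1 = ones if X is None else np.hstack([ones, X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

def f_to_enter(y, X_forced, x_candidate):
    """Partial F for adding one candidate variable after the forced block."""
    k = 0 if X_forced is None else X_forced.shape[1]
    cand = x_candidate.reshape(-1, 1)
    X_full = cand if X_forced is None else np.hstack([X_forced, cand])
    r2_base = 0.0 if X_forced is None else r_squared(y, X_forced)
    r2_full = r_squared(y, X_full)
    df_resid = len(y) - k - 2  # intercept + forced block + candidate
    return (r2_full - r2_base) * df_resid / (1.0 - r2_full)
```

A candidate whose partial F exceeds 4.0 would enter; significant variables are then stacked into the forced block before the next category is screened, mirroring the four-step procedure described above.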
A further advantage of such a parsimonious procedure was that it reduced the subject/variable ratio by not entering all 20 independent variables into the regression at one time.

Chapter Summary

This chapter contains a description of both the independent and dependent variables of interest in the present study. The development of the experimental materials is described, as are the procedures used in the pilot test undertaken to assess the appropriateness of both the materials and field procedures. The results of the pilot test are presented and the experimental design and procedures used in the main study are described. Finally, the procedures used in the data analysis for the main study are outlined. Results of the data analyses are presented in chapter four. Chapter five contains possible explanations for and implications of the results.

CHAPTER FOUR
RESULTS

Subjects

In all, 106 subjects from three school districts and the provincial school for the deaf met the criteria for inclusion and completed the experimental task. Additional personal and demographic data were gathered by means of a questionnaire which was completed by students' classroom teachers. A copy of the questionnaire is included in Appendix A. Four categories of subject variables were of interest in the present study, and the data for these categories are reported below.

Biological unalterable variables (Category I). The subjects were 53 males and 53 females. Age of subjects ranged from 90 months (7.5 years) to 225 months (18.75 years), with the mean age of subjects 175.4 months (S.D. = 33.725). Hearing threshold levels ranged from 67dB to 113dB (X = 97.726; S.D. = 11.779). Subjects' age at onset of hearing loss was reported as follows: birth, 64 subjects; 0-6 months, 4 subjects; 13-18 months, 2 subjects; 24+ months, 8 subjects; unknown, 28 subjects. Thus 70 of the subjects were known to be prelingually hearing impaired (as defined in chapter two).
Communication environment variables (Category II). Age at first fitting of amplification was as follows: 7-12 months, 3 subjects; 13-18 months, 6 subjects; 19-24 months, 10 subjects; 25-30 months, 7 subjects; 31+ months, 36 subjects; did not respond or unknown, 44 subjects. Subjects' use of amplification was reported as: (0) never, 24 subjects; (1) seldom, 3 subjects; (2) sometimes, 18 subjects; (3) usually, 30 subjects; and (4) constantly, 31 subjects. A third communication factor, language of the home, was reported as: English, 84 subjects; ESL (English as a second language), 14 subjects; ASL, 7 subjects; and English plus ASL, 1 subject. Additional hearing impaired family members (parents or siblings) were reported for 20 subjects, while the remaining 86 subjects had no immediate family with impaired hearing. Communication method used by students in the classroom was reported as: total communication (n=89), manual English (n=6), ASL (n=5), ASL/TC (n=3), and aural-oral (n=3). The communication environment of the subjects' homes was more varied. At home, students used: total communication (n=48), aural-oral (n=33), ASL (n=8), aural-oral plus writing (n=6), writing (n=4), ASL/TC (n=3), manual English (n=2), and writing plus sign (n=2). Thus 63 subjects used some form of manual communication at home. Another subject variable was the subjects' previous classroom communication environment. For 72 cases the previous classroom communication was the same as the present. Aural-oral communication had been used by 21 subjects and an additional 13 had come from Cued Speech classes.

Educational environment (Category III). The number of years (in total) that subjects had been in school ranged from 4 to 16 years (X̄ = 9.56; S.D. = 2.55). Present school placements included: regular elementary (n=13), regular secondary (n=37), and school for the deaf (n=56).
Previous school experience was reported as: preschool (n=5), regular elementary (n=14), regular secondary (n=5), school for the deaf (n=32). In addition some subjects had previous experience in more than one type of setting including: preschool plus regular elementary (n=23), preschool plus regular secondary (n=1), preschool plus school for the deaf (n=3), regular elementary plus regular secondary (n=10), regular elementary/secondary plus school for the deaf (n=12), and other (n=1). Of the 106 subjects, 32 had preschool experience. Additional category III variables concerned present and previous classroom placements. Present classroom placements were: regular class (n=18) and class for hearing impaired (n=88). Previous classroom placements reported included: regular class (n=15), resource room (n=5), special class (n=81), and other placements (n=11).

Cognitive measures (Category IV). Scores on the Test of Syntactic Abilities (TSA) were available for 78 subjects. Scores ranged from 27 to 117 of a possible maximum of 120. The mean was 69.96 with a median of 71.50 and a standard deviation of 26.26. Scores on the Vocabulary subtest and Reading Comprehension subtest of the SAT-HI were available for 76 and 75 subjects respectively. Vocabulary subtest scores ranged from 10.0 to 160.0 with the mean = 124.38, the median = 125.5 and the S.D. = 18.90. The Reading subtest scores ranged from 106.0 to 176.0 with the mean = 134.73, the median = 130.75, and the S.D. = 17.87. Scores on the Raven Progressive Matrices were available for 82 subjects. Scores ranged from 11.0 to 58.0. The mean was 34.134, the median was 36.5 and the S.D. was 12.03. The data reported above were entered as independent variables in the regression equations which were used to analyze the sources of variance on the dependent variables (scores under each receptive mode).

Item-Level Analysis

Internal consistency.
One of the initial analyses undertaken was to determine whether the four sentence lists which had been pilot tested were in fact reliable measures in the actual main study. The resultant Hoyt estimates of reliability were 0.95, 0.96, 0.95 and 0.96 for lists 1 through 4 respectively. These were considered well within the acceptable limits described in chapter three, and the stimulus sentence lists were thus considered to be reliable measures in terms of internal consistency. That is, the four subtests (lists) were considered reliable in that they were each tapping the same measure.

Order of presentation. The second factor considered was the order in which the modes were tested. Since the study utilized a within-subjects design, a modified Latin Square (4 x 4) was utilized to eliminate or at least minimize the effects of order. To determine if there was any significant order effect, five one-way analyses of variance (ANOVAs) were done using the SPSS program ONEWAY, as described in chapter three. The results of this analysis are summarized in Table 5.

Table 5
Oneway ANOVAs for Effect of Order

Oral x Order
Source    D.F.   Sum of Squares   Mean Squares      F      F Prob.
Between     3        231.8230         77.2743     3.144    0.0284
Within    102       2506.7383         24.5759
Total     105       2738.5613

Oral-Aural x Order
Source    D.F.   Sum of Squares   Mean Squares      F      F Prob.
Between     3        782.6559        260.8853     2.002    0.1184
Within    102      13293.4412        130.3279
Total     105      14076.0938

Manual x Order
Source    D.F.   Sum of Squares   Mean Squares      F      F Prob.
Between     3        804.6014        268.2004     1.991    0.1200
Within    102      13737.8535        134.6848
Total     105      14542.4549

Simultaneous x Order
Source    D.F.   Sum of Squares   Mean Squares      F      F Prob.
Between     3        680.1096        226.7032     1.020    0.3872
Within    102      22675.4219        222.3080
Total     105      23355.5313

Aural x Order
Source    D.F.   Sum of Squares   Mean Squares      F      F Prob.
Between     3         59.0315         19.6772     0.265    0.8501
Within    102       7560.8181         74.1257
Total     105       7619.8477

The results summarized in Table 5 indicate that order of presentation had no significant effect except under the oral presentation mode. An analysis of comparison among the means is summarized in Table 6.

Table 6
Comparison Among Oral Mode Means

               X̄4      X̄1      X̄3      X̄2
X̄4 = 5.6552    --     1.3739  2.9184  3.8475*
X̄1 = 4.2813            --     1.5445  2.4736
X̄3 = 2.7368                    --     0.9291
X̄2 = 1.8077                            --
*p < .05

The summary presented in Table 6 shows that the difference between Groups 2 and 4 was significant. Examination of Table 4 shows that subjects in Group 2 were tested under the Oral mode in the first position while the Oral mode appeared in the third position in Group 4. Thus a possible explanation of the significant difference might be the unfamiliarity of the task, especially in light of the literature which suggests familiarity as a factor in speechreading ability (see chapter 2). To further explore the effect of order of presentation on performance, the group rankings are juxtaposed with mode and presentation order in Table 7. For example, under the Oral Mode column, the 4 indicates that subjects in Group 2 (see Table 4) who received their first sentence list (position 1) via the oral mode ranked fourth under the mode, whereas the 1 in that column indicates that subjects in Group 4 who received their third sentence list (position 3) via the oral mode ranked first under that mode.

Table 7
Rank by Order of Presentation

Presentation              Communication Mode
Position       Oral    Oral-Aural   Manual   Simultaneous   X̄ Rank
    1          4 (2)     4 (1)      3 (4)       4 (3)        3.75
    2          2 (1)     1 (4)      2 (3)       3 (2)        2.00
    3          1 (4)     3 (3)      4 (2)       2 (1)        2.50
    4          3 (3)     2 (2)      1 (1)       1 (4)        1.75
NB. Numerals in columns represent rank under that mode and numerals in brackets designate the order group.

The row means in Table 7 above indicate that in three of four modes (oral, aural-oral, simultaneous) the worst performance was by subjects who received that mode in the initial position (rank = 4).
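The order-effect ANOVAs in Table 5 each partition a mode's total sum of squares into a between-group (order) component and a within-group component. A minimal sketch of that partition, using illustrative data rather than the study's scores:

```python
import random

def oneway_anova(groups):
    """One-way ANOVA: split the total sum of squares into between-group
    and within-group parts and return (df_between, df_within, F)."""
    scores = [x for g in groups for x in g]
    n, k = len(scores), len(groups)
    grand = sum(scores) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return df_b, df_w, (ss_between / df_b) / (ss_within / df_w)

# Four hypothetical order groups of sizes 27, 27, 26, 26 (n = 106)
random.seed(2)
groups = [[random.gauss(4.0, 5.0) for _ in range(size)]
          for size in (27, 27, 26, 26)]
df_b, df_w, F = oneway_anova(groups)
print(df_b, df_w, round(F, 3))   # 3 and 102 df, as in each Table 5 panel
```

With four order groups totalling 106 subjects, the degrees of freedom are 3 and 102, matching each panel of Table 5.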
Only in the manual-only mode did the initial position not produce the lowest score. This would appear to be consistent with the literature which shows the importance of familiarity in speechreading performance. This slight effect of order might have been avoided with sufficient practice. However, since the effect is consistent across all modes and is only significant in one instance, it was considered to be accommodated by the design. For further analysis the groups were collapsed and the subsequent ANOVA was performed on the mode scores of all 106 subjects.

Differences Between Modes

The research design and the analyses described above have accounted for possible effects of list or order of presentation. The next step was an analysis of variance between the five dependent variables of presentation mode (oral, aural-oral, manual, simultaneous, aural). In order to determine if the differences between the mean score for each mode were significant, the data were analyzed using the program BMDP:2V, Analysis of Variance and Covariance with Repeated Measures. The dependent variable was mode, with five levels corresponding to the five modes of presentation. A summary of the means and standard deviations is presented in Table 8, and a summary of the results of the ANOVA is given in Table 9.

Table 8
Summary of Means and Standard Deviations

Mode    Oral    Oral-Aural   Manual   Simultaneous   Aural
Mean    3.773      7.254     31.518      33.151      3.104
S.D.    5.107     11.578     11.768      14.914      8.518

Table 9
Analysis of Variance for Mode Means

Source   D.F.   Sum of Squares   Mean Squares      F      F Prob.
Mode       4     98259.97358     24564.99340    361.60     0.000
Error    420     28532.42642        67.93435

The results of the ANOVA as presented in Table 9 indicate that there is in fact a significant difference among the mean scores obtained under the five modes of presentation. In order to determine the sources of significance, Tukey's (1953) HSD procedure was used. Results of the comparison among mode means are summarized in Table 10.
Table 10
Comparison Among Mode Means

                          X̄5     X̄1      X̄2       X̄3       X̄4
X̄5 (Aural)  =  3.10377   --    .67     4.15**   28.42**  30.05**
X̄1 (Oral)   =  3.77358          --     3.48*    27.75**  29.38**
X̄2 (Aur-Or) =  7.25472                  --      24.26**  25.89**
X̄3 (Manual) = 31.51887                           --       1.63
X̄4 (Simult) = 33.15097                                     --
**p < .01 (Tukey HSD)
*p < .05 (Tukey HSD)

The comparisons summarized in Table 10 revealed that there is a significant difference between modes with a manual component and those with no sign. Mean scores for both sign and simultaneous are significantly (p < .01) greater than scores for either oral-aural, oral, or aural modes. In addition, it is noteworthy that the score for the oral-aural mode was found to be significantly (p < .05) higher than the score for either oral alone or aural alone. Also, the score for oral-aural combined is greater than the sum of the individual scores; that is, greater than oral added to aural. The implications of these results will be discussed in chapter five. The next series of analyses examined sources of variance on the dependent measure of mode score.

Individual Differences

Having determined the overall between-mode or intra-individual differences, the next analyses concerned inter-individual differences. The procedure employed was, as described in chapter three, a stepwise regression with the scores for each mode as dependent variables and the subject variables in the four categories as the independent variables. The grouping of variables was illustrated in Figure 2. Results of the analyses for each mode are summarized in the following sections. The partial correlations of the variables are presented in Appendix E along with demographic data for the 66 subjects.

Regression for Oral mode. Results of the regression for oral mode are summarized in Table 11.
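Tukey's HSD compares each pairwise mean difference against q * sqrt(MS_error/n), where q is a studentized-range critical value. The sketch below redoes the Table 10 comparisons from the printed mode means and the Table 9 error mean square; the q values are approximate large-df table entries, so this is a consistency check rather than the study's exact computation.

```python
from itertools import combinations
from math import sqrt

def tukey_pairs(means, ms_error, n):
    """Pairwise absolute mean differences plus the HSD standard error,
    sqrt(MS_error / n); a pair is declared significant when its
    difference exceeds q * se for a studentized-range critical value q."""
    se = sqrt(ms_error / n)
    diffs = {(a, b): abs(means[a] - means[b])
             for a, b in combinations(sorted(means), 2)}
    return diffs, se

# Mode means from Table 10 and the error mean square from Table 9 (n = 106)
means = {"Aural": 3.10377, "Oral": 3.77358, "Aur-Or": 7.25472,
         "Manual": 31.51887, "Simult": 33.15097}
diffs, se = tukey_pairs(means, ms_error=67.93435, n=106)
q_05, q_01 = 3.86, 4.60   # approximate q for 5 means and large error df
for pair, d in sorted(diffs.items(), key=lambda kv: -kv[1]):
    flag = "**" if d > q_01 * se else ("*" if d > q_05 * se else "")
    print(pair, round(d, 2), flag)  # flags reproduce the pattern in Table 10
```

With these approximate critical values, the significance pattern matches Table 10: every manual/non-manual contrast and the oral-aural versus aural contrast exceed the .01 threshold, oral-aural versus oral exceeds only the .05 threshold, and the aural-oral and manual-simultaneous pairs are not significant.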
Table 11
Summary of Stepwise Regression for Oral Mode Data

Step                       Multiple           Increase
No.   Category   Name         R       RSQ     in RSQ
1        I       Age        0.2969   0.0882   0.0882
2        I       HTL        0.4069   0.1656   0.0774
3        II      Useaid     0.4908   0.2409   0.0754
4        IV      Syntax     0.5924   0.3509   0.1100

The results summarized in Table 11 indicated that 35.09 percent of the oral mode variance was accounted for by the four variables which were significant (p < .05) to enter the regression equation. Significant category I variables were subjects' age and hearing threshold levels, which together accounted for 16.56 percent of the variance. After the effect of significant category I variables, the only significant category II variable to enter the equation was subjects' use of hearing aids, which added a further 7.54 percent. After the effect of significant category I and II variables, none of the category III variables contributed significantly. The final variable to enter the equation was syntax, which contributed a further 11.00 percent to the variance of the oral mode scores, even after the effect of significant variables from other categories.

The literature reviewed in chapter two suggested that other studies had found that scores on the Raven Progressive Matrices contributed significantly to the variance on speechreading (oral mode) tasks. In the present study, the Raven scores were not significant. However, to test the contribution made by that particular variable, it was forced into the equation ahead of syntax. Results of this exploratory alternate procedure indicated that the Raven scores added only 1.80 percent to explanation of the variance when forced into the equation after significant category I, II and III variables. The F to enter the equation was not, however, significant and this variable was not included in the final equation reported previously.

Regression for Oral-Aural mode. Results of the stepwise regression procedure for oral-aural mode scores are summarized in Table 12.
Table 12
Summary of Stepwise Regression for Oral-Aural Mode Data

Step                       Multiple           Increase
No.   Category   Name         R       RSQ     in RSQ
1        I       HTL        0.4862   0.2363   0.2363
2        I       Age        0.5398   0.2913   0.0550
3        II      Useaid     0.7050   0.4970   0.2056
4        IV      Syntax     0.7298   0.5326   0.0356

Two variables from category I were significant and entered the equation at step one. These two variables, hearing threshold levels and age of subjects, accounted for a full 29.13 percent of oral-aural mode variance, of which 23.63 percent was attributable to HTL. The only significant category II variable to enter was subjects' frequency of use of aids, which accounted for a further 20.56 percent of the variance. After the effects of significant variables from categories I through III, the only category IV variable with F significant to enter was syntactic ability. This variable added only a further 3.56 percent to the 53.26 percent of the explained oral-aural mode variance. Under the conditions of the present study, variables contributing most to variance in oral-aural language reception were subjects' hearing thresholds and their frequency of use of hearing aids. Subjects' age and syntactic ability contributed lesser amounts to the explanation of oral-aural mode performance.

For the same reasons as in the oral mode regression, Raven scores were forced into the regression equation at the final step in an alternate exploratory procedure. In this mode, the Raven scores displaced syntax from the equation but accounted for only 0.74 percent of the variance. Since the F to enter was not significant, however, this variable was not reported in the summary.

Regression for Manual mode. The stepwise regression results are summarized in Table 13.

Table 13
Summary of Stepwise Regression for Manual Mode Data

Step                          Multiple           Increase
No.   Category   Name            R       RSQ     in RSQ
1        I       Onset         0.3295   0.1086   0.1086
2        I       Age           0.4131   0.1707   0.0621
3        III     Prevclass     0.5007   0.2507   0.0800
4        IV      Syntax        0.7245   0.5249   0.2742

Of the four variables which entered the equation, the best predictor of manual mode performance by subjects in the present study was the measure of their syntactic ability. This variable, which entered after the effects of all significant variables from previous categories had been measured, accounted for a full 27.42 percent of manual mode variance. Significant category I variables were subjects' age at onset of hearing loss and their age, which accounted for 10.86 percent and 6.21 percent respectively of the variance in manual mode performance. In this analysis, the only variable from category III with an F significant to enter the regression was subjects' previous experience in a regular class setting. This variable contributed an additional 8.0 percent to the explanation of variance after level I and II variables. The nature of the relationship between performance and classroom experience is discussed in chapter five.

Regression for Simultaneous mode. The significant independent variables which contributed to the variance in simultaneous mode performance are summarized in Table 14.

Table 14
Summary of Stepwise Regression for Simultaneous Mode Data

Step                          Multiple           Increase
No.   Category   Name            R       RSQ     in RSQ
1        I       Age           0.4212   0.1774   0.1774
2        I       HTL           0.4982   0.2482   0.0708
3        I       Onset         0.5476   0.2998   0.0516
4        II      Useaid        0.5896   0.3476   0.0479
5        III     Prevclass     0.6523   0.4255   0.0778
6        IV      Syntax        0.8250   0.6806   0.2551

The results summarized in Table 14 indicate that at least one variable from each category contributed significantly to performance under the simultaneous mode. A total of 68.06 percent of the variance was accounted for by the six variables. Even after the contribution of all other significant variables, subjects' syntactic ability accounted for an additional 25.51 percent of simultaneous mode variance.
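The three numeric columns of these stepwise summaries are mutually redundant: RSQ is the square of the multiple R, and each increase is the difference between successive RSQ values. A quick consistency check against the Table 14 figures (with the step-3 R taken as sqrt(0.2998), since the value printed in the thesis appears to repeat the step-2 entry):

```python
# (variable, multiple R, cumulative RSQ, increase in RSQ) from Table 14;
# the step-3 R is reconstructed as sqrt(0.2998) because the printed
# value repeats the step-2 entry.
table14 = [
    ("Age",       0.4212, 0.1774, 0.1774),
    ("HTL",       0.4982, 0.2482, 0.0708),
    ("Onset",     0.5476, 0.2998, 0.0516),
    ("Useaid",    0.5896, 0.3476, 0.0479),
    ("Prevclass", 0.6523, 0.4255, 0.0778),
    ("Syntax",    0.8250, 0.6806, 0.2551),
]
prev_rsq = 0.0
for name, r, rsq, inc in table14:
    assert abs(r * r - rsq) < 1e-3, name             # RSQ is the squared multiple R
    assert abs((rsq - prev_rsq) - inc) < 1e-3, name  # increases are successive differences
    prev_rsq = rsq
print("Table 14 is internally consistent")
```

The same two identities can be used to check Tables 11 through 13 and 15.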
Category I variables accounted for 29.98 percent of the variance, with age, hearing threshold level, and age at onset of hearing loss accounting for 17.74 percent, 7.08 percent, and 5.16 percent respectively. The frequency with which students used their hearing aids accounted for 4.79 percent of the variance, even after the large contribution of category I variables. The variable from category III which added significantly to the variance was students' previous experience in regular class. This variable, entered at the fifth step, accounted for 7.78 percent of the variance in performance under the simultaneous mode.

Regression for Aural mode. The same stepwise regression procedure was employed as with the previous modes. The results of the regression are summarized in Table 15.

Table 15
Summary of Stepwise Regression for Aural Mode

Step                          Multiple           Increase
No.   Category   Name            R       RSQ     in RSQ
1        I       HTL           0.3381   0.1143   0.1143
2        II      Useaid        0.4386   0.1923   0.0781
3        III     Prevclass     0.5025   0.2525   0.0602

The results of the regression summarized in Table 15 indicate that only 25.25 percent of the variance in aural mode performance could be accounted for by the three variables which had a significant F to enter the equation. Not surprisingly, subjects' hearing threshold levels was a significant variable and it accounted for 11.43 percent of the variance. The second variable to enter was subjects' use of hearing aids, which accounted for a further 7.81 percent of variance in aural mode performance. The only other variable to have a significant effect was students' previous experience in a regular class setting. It could logically have been expected that aural language reception would most greatly be affected by hearing threshold levels and use of hearing aids. Such an assumption was borne out by the results of the regression.

Chapter Summary

The results of the procedures used in analysis of the data were summarized in chapter four.
The results indicated that there was a significant difference between modes with a manual component and those without. Also, the difference between the combined oral-aural mode and the unisensory oral or aural modes was significant. The results also indicated that variance in performance on the language reception tasks could be partially attributable to a number of independent or subject variables. The significance of the independent variables was shown to vary between modes. These results and possible implications are discussed in chapter five.

CHAPTER FIVE

DISCUSSION, IMPLICATIONS, LIMITATIONS

Inter-Mode Comparison

The first hypothesis of this study concerned differences in reception (measured as number of words correctly recorded) of language presented via five different modes: oral (speechreading), oral-aural (speechreading plus audition), manual (sign), simultaneous (speechreading plus audition plus sign), and aural (audition alone). Examination of the results indicated that there was a between-mode difference in the rate of reception and the null hypothesis was rejected. The analyses next focused on the differences between modes. The results indicated that the lowest score (5.46%) was obtained when the stimulus sentences were presented through audition alone. This result was expected in light of the auditory parameters pertinent to speech reception as discussed in chapter two. Since the mean hearing threshold level for the population sample was 97.7 decibels (ANSI), audition was not expected to be an effective mode for receiving language. The second lowest score was obtained under the speechreading mode. The mean rate of language reception through speechreading, as measured by the present study, was 6.62% which was not significantly greater than the score obtained under the auditory mode.
Once again, the discussion in chapter two regarding the difficulties inherent in receiving speech by vision alone suggested that speechreading alone would not be a very effective means of receiving sentences. When audition and speechreading were combined in the oral-aural mode, the mean score rose to 12.72%. This score was significantly (p < .05) greater than either audition alone or speechreading alone. Thus under the conditions of the present study, the use of audition and vision combined proved to be a significantly more effective mode of receiving connected meaningful speech than did either of the unisensory modes. In the light of the literature reviewed in chapter two regarding visual and acoustic properties of speech, the results were not unexpected. For example, the summary offered by Ling (1976, p. 47) suggests that useable hearing up to 3000 Hz would be required for subjects to be able to perceive all the acoustic features of speech, and speechreading at best offers only partial visual information on certain features of speech. The mean HTL for the subjects in the present study suggested that they would not be able to receive all the acoustic speech signal. However, as Ling (1976) has stated, "For those with the most limited audition, speechreading can complement residual hearing in such a way that at least partial information on each of the eight speech features is available" (p. 48). The relationship between subjects' hearing threshold levels and their scores under the speechreading mode is explored further in the subsequent discussion of subject variables. The results indicated that the bisensory oral-aural score was greater than the sum of its unisensory component parts, although the difference was not significant. The oral-aural mode result would suggest that there are certain linguistic verbal cues which were identifiable when presented through combined visual and auditory senses but unidentifiable through a single sense.
The results may be interpreted within the framework of the filter theory (Broadbent, 1958; Moray, 1969). That theory would suggest that either the auditory input or the visual input of the oral-aural mode was first interpreted by the subjects. Once all possible information had been gleaned from that channel, subjects verified or enhanced their original perception by examining the information from the alternate sense which had been relegated to short-term memory. The result may also be examined within the framework of the theory of cross-correlation of input signals (Moray, 1969). In that postulated model, signals, which may represent different external stimuli, may be cross-correlated for similarity of information and then recoded into a single message. That message would then be analyzed for any recognizable patterns. The ability to recognize two cross-correlated signals, such as the auditory and visual representations of speech as presented in the study, would be expected to be higher than for either of the unisensory signals alone. Thus the cross-correlation theory (Moray, 1969) provides a logical interpretation for the results of the present study to this point. The theories offered above as possible explanations are essentially unimodal in that they suggest only one signal at a time is attended to. That is, material is processed in serial order rather than parallel. The processing model postulated by Ettlinger (1967) would suggest that both the visual and auditory signals were processed and integrated to form a single linguistic message. Neither this bisensory model nor the unisensory model precludes the possibility of processing multisensory information which occurred in the present study under the oral-aural mode. The human processing system is, according to Norman and Bobrow (1975), resource limited. The question raised in chapter two was whether adding modalities would lead to an overload which would cause a degradation in performance.
For the results obtained in the present study under the oral-aural mode, at least, the answer is no. The second highest score of the five modes tested in the present study was for the manual mode. The mean score for sentences presented through sign and fingerspelling was 55.3%, which was significantly (p < .01) greater than that obtained under oral, aural, or oral-aural modes. The suggestion made in chapter two that for some hearing impaired individuals a form of manual communication may be most effective appears to hold true for the subjects of the present study. The highest score in the present study was obtained under the simultaneous mode. When stimulus sentences were presented via combined aural, oral, and manual modes, subjects correctly received and recalled 58.16% of the words in those sentences. This score was significantly (p < .01) greater than for all other modes except manual. The manual mode is both unimodal and unisensory. The simultaneous mode is trimodal in that it involves sign plus oral plus aural modes, but bisensory in that only audition and vision are involved. Comparing the results obtained under the manual mode with those obtained under the simultaneous mode indicates once again that there was no apparent degradation in performance with the addition of more sensory input. Under the conditions of the present study, the subjects were able to receive the linguistic messages even when three modes were simultaneously presented. However, while the addition of modalities did not decrease performance, neither did it significantly increase performance on the language reception task. This would seem to lend further credence to the Norman and Bobrow (1975) model of resource-limited processing. Unlike the aural-oral mode, the score for the simultaneous mode was not greater than the sum of its component part scores.
While the simultaneous-manual difference was not statistically significant, there was some indication that adding speechreading and audition to signs increased the amount of information received. Had this result been statistically significant, it would be consistent with Moray's (1969) cross-correlational processing theory. That is, the amount of recognizable information could be expected to be greater when it was available in three parallel forms. Returning to the nomenclature of Norman and Bobrow (1975) as first discussed in chapter two, performance on the task of language reception might be described as:

P = (Rp - dlp) + (Rs - dls)

(where Rs = L - Rp and L is the limit of processing resources available). For the present study an application might be:

P = performance under simultaneous mode
Rp = resources of the primary channel (manual mode)
dlp = data limitations (eg. visibility of signs)
Rs = resources of secondary channel (aural-oral)
dls = data limitations (eg. audibility of speech).

Performance under the simultaneous mode could, in other words, be considered a function of: subjects' allocation of processing resources to the manual mode, minus limitations such as visually indistinguishable signs, plus allocation of processing resources to the aural-oral mode, minus limitations imposed by some degree of hearing loss or the homophenous nature of speech. This additive function could account for the indicated trend toward higher scores under the simultaneous mode. The fact that performance did not significantly improve when signs were supplemented by audition and speechreading might be interpreted to suggest that performance under this mode is resource-limited. However, as was the case in the aural-oral mode, the inclusion of additional sources of information did not lead to a degradation in performance. The results under the simultaneous mode are similar to those of the psychophysical experiments referred to by Moray (1969).
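The additive expression above can be made concrete with a toy calculation. All numbers below are hypothetical illustrations of the Norman and Bobrow (1975) notation, not estimates derived from the study's data:

```python
def simultaneous_performance(r_p, dl_p, r_s, dl_s):
    """Additive resource model after Norman and Bobrow (1975):
    performance = (primary resources - primary data limits)
                + (secondary resources - secondary data limits)."""
    return (r_p - dl_p) + (r_s - dl_s)

# Illustrative numbers only -- none of these are estimates from the study.
L = 1.0          # total processing resources available
r_p = 0.8        # share allocated to the primary (manual) channel
r_s = L - r_p    # remainder to the secondary (aural-oral) channel
p = simultaneous_performance(r_p, dl_p=0.1, r_s=r_s, dl_s=0.15)
print(p)   # exceeds the primary channel alone whenever r_s > dl_s
```

Under this toy allocation the simultaneous score exceeds the primary (manual) channel alone by exactly the secondary channel's net contribution, which mirrors the small, non-significant simultaneous-manual advantage observed in the study.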
A number of the methodology comparisons, such as that of Moores, Weiss, and Goodwin (1973), also obtained similar results. Similar to the interpretation of the aural-oral mode results, the simultaneous mode results can also be interpreted in the context of Moray's (1969) suggestion of discontinuous sequential sampling as the basis of all selective attention. That is, the incoming stimuli under the simultaneous mode may be cross-correlated for similarity before ultimately being processed. Once again, logic would suggest that if three alternate but parallel forms of a linguistic stimulus are presented, the likelihood that the stimulus will be recognized is increased. The results can also be examined within the framework of Ettlinger's (1967) theory. That theory suggested a second level of processing at which the specific sensory perceptions from each first-level subsystem are integrated to form the total perception. Thus the three sources of verbal information under the simultaneous mode would be integrated in parallel rather than serial form. The main difference between Ettlinger's (1967) theory and that of Moray (1969) is that under the latter, short-term memory is required to allow for serial processing of verbal information. Neither theoretical position precludes the processing of multisensory information as it was presented under the simultaneous mode. Another interpretation of the results under the simultaneous mode is possible. It could be argued that the manual mode proved to be such a strong and complete stimulus that all other modalities were attenuated.

Discussion of Individual Differences

The second hypothesis of this study concerned the possible effect of subjects' individual characteristics on their performance under each communication mode. The results of the regression analyses indicated that, depending on the mode, independent (subject) variables made different contributions to the overall variance.
The null hypothesis of no differences between individuals was rejected.

Oral mode. The literature reviewed in chapter two suggested that factors affecting performance under the oral mode (speechreading) would include: synthetic ability, training, and knowledge of language. Thus it could be expected that the category II and III factors indicating previous oral experience, and category IV variables measuring language ability and visual-spatial integration, would make significant contributions to the explanation of oral mode variance. The results of the regression for oral mode scores showed that after the effects of category I variables (age, HTL, age at onset), language ability, as measured by the TSA, did make a significant contribution. Contrary to some of the literature (Jeffers & Barley, 1975), as discussed in chapter two, scores on the Raven Progressive Matrices did not make as large a contribution as previously reported. Even when the Raven scores were forced into the regression equation before scores on other category IV variables, this measure of visual-spatial integration or synthetic ability was not a significant variable. It is noteworthy that subjects' use of aids added 7.54 percent to the accountable variance on oral mode performance, even though audition was not directly involved. This result suggests that students' ability to speechread might be improved as a result of taking advantage of any residual audition by using their hearing aids. This result could be interpreted in the framework of Moray's (1969) discussion of "pattern recognizers". That is, since the speechreading task did not directly involve audition, it could be postulated that the threshold for visual recognition of linguistic units was lowered as a result of previous bimodal (oral-aural) practice.

Oral-aural mode. When audition was combined with speechreading, subjects' hearing threshold levels accounted for 23.63 percent of the variance.
All of the previous discussion, in both chapter two and earlier in this chapter, indicated that audition was the only sense through which information regarding all eight aspects of the speech signal could be received. It was also not unexpected that subjects' use of hearing aids would make a significant contribution, in this case 20.56 percent, to oral-aural mode variance, since proper amplification can greatly reduce speech reception thresholds. As was discussed in chapter two, it could be expected that the threshold at which an oral-aural speech signal would be recognized would be lowered as a result of auditory training. Although the variables of HTL and use of aids do not in themselves constitute formal auditory training, the effective use of residual audition is cited in the literature as a prerequisite for success in any auditory training program. It is interesting to note that the significant category I variables (age, HTL) and the single significant category II variable (use of aids) together accounted for nearly 50 percent (49.49%) of the variance on the oral-aural mode task. As a result, only one category IV variable (syntactic ability) was significant, and it added only 3.56 percent to the accountable variance when forced to enter at the last step. It would appear that for the subjects and conditions of the present study, the oral-aural language reception task was data limited. That is, as more data, in the form of increased auditory input, became available, performance on the task improved.

Manual mode. The relative contributions of the category I variables were somewhat different under the manual mode than under the non-manual modes. None of the category II variables had an F significant enough to enter the regression. The only significant variable directly connected with subjects' hearing was their age at onset. This category I variable was the first to enter the equation, and it accounted for 10.86 percent of the variance.
The correlation between manual mode scores and subjects' age at onset was negative, suggesting that subjects whose hearing loss occurred later performed less well than those whose hearing loss occurred at or near birth. This result is consistent with the literature, which suggests that postlingually deafened students are less likely to rely solely on manual communication. The correlation between manual mode and the significant category III variable (previous regular class experience) was also negative. That is, students who had previous regular class experience were less likely to do well under the manual mode than students who had no regular class experience. The nature of any cause-effect relationship is unknown. A tentative speculation might be that subjects who need some form of manual communication are least likely to be assigned to regular classes, or that the students in those classes have less manual practice. Conversely, syntactic ability was positively correlated with manual mode score, accounting for 27.42 percent of the variance even when entered at the final step. This result is consistent with the literature reviewed in chapter two, which suggested that language reception and language competence are correlated. Further investigation would be required, however, to determine whether subjects performed well on the manual mode task because they had good syntactic skills or whether they had good syntactic skills as a result of having good manual receptive skills.

Simultaneous mode. This mode yielded the highest mean score. For the subjects of the present study it was a significantly more efficient mode than all others except manual. The regression also included the highest number of variables (6). From category I, subjects' age was positively correlated, accounting for 17.74 percent of the variance in simultaneous mode scores. It would appear that as students get older their performance under simultaneous communication improves.
This was the only mode in which age contributed such a large proportion of the variance. Other variables which affected simultaneous mode scores were HTL, age at onset of the hearing loss, and subjects' use of hearing aids. These variables, relating to subjects' auditory receptive ability, added 17.03 percent to the explanation of variance on the simultaneous mode reception task. They did not add as much to the explanation of variance in performance for this mode as in some other modes. It may be that, as was suggested in the discussion of between-mode differences, subjects were attending primarily to the manual component of the simultaneous mode. Subjects' experience in a regular class setting, a category III variable, added a further 7.78 percent to the explanation of variance in scores on the simultaneous mode reception task. This variable was negatively correlated with the dependent variable, indicating that subjects who had regular class experience performed less well on the task. This result may once again be an indication that subjects who perform best in a communication environment with a manual component are least likely to be candidates for regular class placement. Examination of the partial correlations for the oral-aural mode, for example, reveals that the variable is positively correlated in that mode. That would suggest that subjects with good oral-aural skills were more likely to be placed in a regular class than those with poor skills. Even when entered at the last step, the largest contribution to the variance in simultaneous mode scores was the 25.51 percent added by subjects' syntactic ability (as measured by scores on the TSA).
Based on the results of the regressions for the other modes, together with examination of the relevant literature, it would appear that the nature of the relationship is such that subjects' age affected their language (syntactic) ability, which in turn affected their receptive skills as measured by the tasks in this study. A possible explanation for this relationship may be found in the nature of the experimental task, which required that students fill in the blanks with parts of a sentence. It could be expected that if a subject was able to perceive only part of the stimulus sentence, then knowledge of syntax would influence the accuracy with which the subject was able to predict or fill in the missed portion of the sentence. Stated in terms of the selective attention theory postulated by Moray (1969), as described in chapter two, syntactic ability could be considered to affect the threshold at which a subject was able to perceive a particular pattern; in this case a syntactic pattern rather than merely an acoustic speech signal as previously discussed. Once again the nature of any cause-effect relationship between syntactic ability and receptive ability is not certain. It may be that the two are mutually reinforcing.

Aural mode. In total, only 25.25 percent of the variance in aural mode scores could be attributed to the independent variables which entered the regression. The influence of the significant category I variable (HTL) and category II variable (use of hearing aids) could be anticipated based on the literature and given the nature of the reception task. The fact that together they accounted for only 19.24 percent of the variance in the scores for the aural mode reception task may be a distorted reflection of their actual importance. Rather, this may be an artifact of the relatively low variability in the dependent measure, as indicated by the means and standard deviations reported in Table 8.
From a different perspective, the combined variance of the category I and II variables amounted to 76.19 percent of the explicable variance (25.25%) in aural mode scores. The only other variable which proved to be significant was previous class experience, which added a further 6.02 percent to the variance in aural mode scores. The correlation of this category III variable with the dependent measure was positive. This result is consistent with previous suggestions that students with less dependence on communication with some manual component are more likely to be placed in a regular classroom. It is not clear whether the regular class experience contributed to increased auditory receptive ability and thus higher aural mode scores, or whether the placement in a regular class and the subsequent higher aural mode scores came about because of existing auditory skills. This relationship warrants further study, as do other possible cause-effect relationships. In summary, the results of the separate regression analyses for scores under each of the five modes serve to confirm the folly of some of the previous attempts to recommend a single best method of communication for all hearing impaired students. In the present study the list of significant independent variables and their relative contributions to the variance changed from one mode to the next. Some of the previously discussed literature comparing modes attempted to match subjects on certain variables. The results of this study suggest that even if all the significant subject variables could be identified and matched, the relative importance and contribution of the variables changes from mode to mode. Moreover, the results of the regression for the simultaneous mode suggest that, although six variables were identified which together accounted for 68.06 percent of the variance, the remaining 31.94 percent of the simultaneous mode variance was inexplicable.
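The share-of-explicable-variance figure for the aural mode is straightforward arithmetic on the reported R² percentages. A minimal check in Python, using only figures quoted in the text:

```python
# Variance accounting for the aural-mode regression, using the R^2
# percentages reported in the text (not the raw data, which are not
# reproduced here).
total_r2 = 25.25          # percent of aural-mode variance explained in total
cat_1_and_2 = 19.24       # percent contributed by HTL plus use of hearing aids
class_experience = 6.02   # percent added by previous class experience

# Category I and II variables' share of the *explicable* variance.
share = round(cat_1_and_2 / total_r2 * 100, 2)
print(share)  # 76.2 (the text's 76.19 reflects rounding of the inputs)
```

The two component percentages (19.24 plus 6.02) likewise reconcile with the 25.25 percent total to within rounding.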
Even less of the variance was explicable for the other modes measured in this study. Perhaps one of the most sobering implications of the results concerns the low scores obtained by the subjects. Even under the simultaneous mode, which yielded the highest mean score, subjects' response rate, as measured by the present study, was only 58.16 percent. That is, even when subjects were given sentences which contained simple vocabulary and a higher proportion of visually distinguishable words than might be expected in normal conversation, much of the linguistic message was either not received or could not be recalled by many subjects. These results were obtained under somewhat artificial conditions, in that the stimulus sentences were presented on a video monitor. However, the results of a study reported by Marmor and Petitto (1979) suggest that actual pedagogical conditions may be even less ideal than those of the present study.

Limitations of the Present Study

One of the limitations of this study concerned the manner in which reception was measured. It is recognized that requiring a written response may have involved variables other than reception. However, any confounding effect this may have caused would be consistent across all five modes which were measured. The ecological validity of requiring a written response may also be questioned. This response method does occur in classrooms, however, and it was considered more amenable to objective scoring than having students verbally repeat the teacher's utterance. No attempt was made to involve students from aural-oral educational programs. Thus the results should not be generalized to students from such settings. Each sentence was presented on a video screen without the benefit of situational context.
While live presentation of connected discourse may have produced a different result, the possibility of inconsistency and the effect of memory (for connected discourse) were considered potentially more serious confounding factors. This study compared only relative amounts of language received. Since the experimental task did not require that any material be learned, no conclusions should be drawn regarding the relative efficiency of learning via the five modes.

Suggestions for Further Research

The main focus of this study was the determination of between-mode differences in rate of language reception by hearing impaired students. Future research might concern itself with comparison of the reception of stimuli which, though presented simultaneously, are not parallel. For example, in the present study several input channels may have been involved, but they were required to process the same linguistic unit. Thus no clear indication of the serial or parallel nature of the processing task can be gained. More insight might be gained if responses to competing stimuli were analyzed. Another area which warrants further attention is the effect of various linguistic structures on reception. There is some indication from this study that reception varies as a function of linguistic structure. Other studies (Schwartz & Black, 1967) have obtained similar results. Some form of error analysis may also prove useful in determining the factors which affect language reception. One important focus should be on the determination of factors which predict performance under various modes. This study found that personal demographic variables do affect performance on the reception tasks. More study is needed, however, to determine the cognitive processes involved in the reception of language through the various modes. Perhaps with more insight into their nature, these processes can be refined or trained so as to enable more efficient language reception.
Summary

This study had a twofold purpose: to examine the relative efficiency with which hearing impaired students receive language presented through each of five modes of communication, and to determine the contribution of various personal and demographic subject variables to performance under each of the five modes. The modes studied were: speechreading (oral), audition (aural), signs (manual), speechreading plus audition (oral-aural), and speechreading plus audition plus signs (simultaneous). The task, developed specifically for the present study, involved the presentation of sentences via the five modes. Subjects responded by writing the words they had seen and/or heard. The rate of reception, measured as the number of stimulus words received and written correctly in the answer booklet, was obtained for each subject under each mode. This within-subjects design allowed direct comparison of the relative contribution made by each unique mode. Previous chapters provide background to the problem, a review of pertinent literature, an explanation of terminology, and a detailed description of the subjects and procedures involved in the present research. The results indicate that, under the conditions of the present study, subjects scored significantly higher when sentences were presented through the simultaneous or manual modes than through the oral, aural, or oral-aural modes. Also, scores for the combined oral-aural mode proved to be significantly higher than scores for either oral alone or aural alone. The differences between the highest score (simultaneous mode) and the second highest (manual) were not significant, nor were the differences between oral and aural mode scores. The results of the inter-mode comparisons indicate that subjects' performance on the language reception task did not appear to suffer from the inclusion of additional modalities.
That is, oral-aural mode scores were certainly no worse than those for either mode alone, and simultaneous mode performance did not suffer as a result of adding both an oral and an aural mode to signs. It is not clear from the results obtained under the simultaneous mode whether subjects attenuated part of the input or whether they processed all of the input, either simultaneously or successively. Results of the regression analyses indicate that, for all but the aural mode, subjects' syntactic ability accounted for the largest proportion of the explicable variance, even though this variable was consistently forced to enter in the final step of the regression. It is also noteworthy that for all modes other than manual, subjects' use of hearing aids was a significant factor, even when entered into the regression after the effects of age, hearing threshold level, and age at onset of loss had already been measured. The possible nature of any cause-effect relationships between the independent and dependent variables warrants further investigation. The results briefly summarized above were discussed in greater detail in chapter five. Additional information pertaining to some of the materials and procedures is contained in the Appendices.

REFERENCES

Adams, J.A. Test of the hypothesis of psychological refractory period. Journal of Experimental Psychology, 1962, 64, 280-287.
Beckmeyer, T. Receptive abilities of hearing impaired students in a total communication setting. American Annals of the Deaf, 1976, 121, 569-572.
Bender, R.E. The conquest of deafness. Cleveland: Press of Western Reserve University, 1970.
Bergman, B. Signed Swedish. Stockholm: National Swedish Board of Education, 1979.
Blackwell, P., Engen, E., Fischgrund, J., & Zarcadoolas, C. Sentences and other systems: A language and learning curriculum for hearing-impaired children. Washington, D.C.: The Alexander Graham Bell Association for the Deaf, Inc., 1978.
Bornstein, H.
Signed English: A manual approach to English language development. Journal of Speech and Hearing Disorders, 1974, 39, 330-343.
Bornstein, H., Hamilton, K., Saulnier, K., & Roy, H. The signed English dictionary for preschool and elementary levels. Washington, D.C.: Gallaudet College, 1975.
Broadbent, D.E. Successive responses to simultaneous stimuli. Quarterly Journal of Experimental Psychology, 1956, 8, 145-152.
Broadbent, D.E. Perception and communication. London: Pergamon, 1958.
Broadbent, D.E. Division of function and integration of behavior. In F.O. Schmitt & F.G. Worden (Eds.), The neurosciences: Third study program. Cambridge, Mass.: M.I.T. Press, 1974.
Brooks, R.W., Hudson, F., & Reisberg, L.E. The effectiveness of unimodal versus bimodal presentations of material to be learned by hearing-impaired students. American Annals of the Deaf, 1981, 126, 835-840.
Calvert, D.R., & Silverman, S.R. Speech and deafness. Washington, D.C.: The Alexander Graham Bell Association for the Deaf, Inc., 1975.
Campbell, D.T., & Stanley, J.C. Experimental and quasi-experimental designs for research. Chicago: Rand McNally College Publishing Company, 1963.
Carson, P.A., & Goetzinger, C.P. A study of learning in deaf children. Journal of Auditory Research, 1975, 15, 73-80.
Clarke, B.R. An experiment in auditory training of profoundly deaf children. The Teacher of the Deaf, 1954, 52, 72-76.
Clarke, B.R. Total communication. The Canadian Teacher of the Deaf, 1972, 2, 22-30.
Clarke, B.R. Untitled paper. (In progress).
Clarke, B.R., Kendall, D.C., & Leslie, P.T. Communication systems with the hearing impaired. Audiology & Hearing Education, 1978, February/March, 17-20; April/May, 6-7, 10-12, 14, 16, 40.
Clarke, B.R., Leslie, P.T., Rogers, W.T., Booth, J.A., & Horvath, A. Selected characteristics of hearing impaired school-age students, British Columbia: 1976-77. Vancouver: University of British Columbia, 1977.
Clarke, B.R., & Ling, D.
The effects of using Cued Speech: A follow-up study. Volta Review, 1976, 78, 23-34.
Conference of Executives of American Schools for the Deaf. Definition of Total Communication. American Annals of the Deaf, 1976, 121, 358.
Conrad, R. Short-term memory in the deaf: A test for speech reading. British Journal of Psychology, 1972, 63, 173-180.
Conrad, R. The deaf school child: Language and cognitive function. London: Harper and Row, 1979.
Cornett, O. Cued speech. American Annals of the Deaf, 1967, 112, 3-13.
Cronbach, L.J. Coefficient alpha and the internal structure of tests. Psychometrika, 1951, 16, 297-334.
Davis, R. The role of attention in the psychological refractory period. Quarterly Journal of Experimental Psychology, 1959, 11, 211-220.
Davis, H., & Silverman, S.R. Hearing and deafness (4th ed.). New York: Holt, Rinehart and Winston, 1978.
Davis, J.M., & Hardick, E.J. Rehabilitative audiology for children and adults. New York: John Wiley & Sons, 1981.
Deutsch, J., & Deutsch, D. Attention: Some theoretical considerations. Psychological Review, 1963, 70, 80-90.
Dixon, W.J. (Ed.). BMD: Biomedical computer programs. Berkeley: University of California Press, 1970 (revised, 1982).
Ebel, R.L. Measuring educational achievement. Englewood Cliffs, N.J.: Prentice-Hall, 1965.
Egeth, H. Selective attention. Psychological Bulletin, 1967, 67, 41-57.
Eisenson, J., & Ogilvie, M. Speech correction in the schools (3rd ed.). New York: The Macmillan Company, 1971.
Englemann, S., & Rosov, R. Tactual hearing experiment with deaf and hearing subjects. Journal of Exceptional Children, 1975, 41, 243-253.
Erber, N.P. Effects of distance on the visual reception of speech. Journal of Speech and Hearing Research, 1971, 14, 848-857.
Erber, N.P. Auditory, visual, and auditory-visual recognition of consonants by children with normal and impaired hearing. Journal of Speech and Hearing Research, 1972, 15, 413-422.
Erber, N.P. Auditory-visual perception of speech. Journal of Speech and Hearing Disorders, 1975, 40, 481-492.
Erber, N.P., & McMahan, D. Effects of sentence context on recognition of words through lipreading by deaf children. Journal of Speech and Hearing Research, 1976, 19, 112-119.
Ettlinger, G. Analysis of cross-modal effects and their relationship to language. In F.L. Darley (Ed.), Brain mechanisms underlying speech and language. New York: Grune & Stratton, 1967.
Ewing, I.R. Lipreading and hearing aids. Manchester: Manchester University Press, 1944.
Ferguson, G.A. Statistical analysis in psychology and education. New York: McGraw-Hill Book Company, 1981.
Furth, H.G. Thinking without language: Psychological implications of deafness. New York: The Free Press, 1966.
Gaeth, J.H. Learning with visual and audio-visual presentations. In F. McConnell and P.H. Ward (Eds.), Deafness in childhood. Nashville, Tenn.: University of Vanderbilt Press, 1967, 279-303.
Garner, W. Uncertainty and structure as psychological concepts. London: Wiley, 1962.
Garretson, M.D. Total communication. In R. Frisina (Ed.), A bicentennial monograph on hearing impairment. Washington: Alexander Graham Bell Association for the Deaf, 1976, 88-95.
Gates, R.R. The differential effectiveness of various modes of presenting verbal information to deaf students through modified television formats. Unpublished doctoral dissertation, University of Pittsburgh, 1970.
Gault, R.H. Touch as a substitute for hearing in the interpretation and control of speech. Archives of Otolaryngology, 1926, 3, 121-135.
Goetzinger, C.P., Will, R.C., & Dekker, L.C. Non-language IQ tests used with deaf pupils. Volta Review, 1967, 69, 500-506.
Goetzinger, C., & Proud, G. The impact of hearing impairment upon the psychological development of children. Journal of Auditory Research, 1975, 15, 1-60.
Goldstein, M.A. The acoustic method for the training of the deaf and hard of hearing child. St. Louis: Laryngoscope Press, 1939.
Griffiths, C.
Conquering childhood deafness. New York: Exposition Press, 1967.
Guberina, P. Verbo-tonal method and its application to the deaf. Proceedings of the International Congress of the Deaf. Washington, D.C.: Gallaudet College, 1964.
Gulliksen, H. Theory of mental tests. New York: John Wiley & Sons, Inc., 1950.
Hood, J.D., & Poole, J.P. Speech audiometry in conductive and sensorineural hearing loss. Sound, 1971, 5, 30-38.
Hoyt, C.L. Test reliability estimated by analysis of variance. Psychometrika, 1941, 6, 153-160.
Hudgins, C.V. The response of profoundly deaf children to auditory training. Journal of Speech and Hearing Disorders, 1953, 18, 270-275.
Hutton, C. Combining auditory and visual stimuli in aural rehabilitation. Volta Review, 1959, 61, 110-114.
Jeffers, J., & Barley, M. Speechreading (lipreading). Springfield, Illinois: Charles C. Thomas Publisher, 1975.
Jordan, I.K., Gustason, G., & Rosen, R. Current communication trends at programs for the deaf. American Annals of the Deaf, 1976, 121, 527-532.
Jordan, I.K., Gustason, G., & Rosen, R. An update on communication trends in programs for the deaf. American Annals of the Deaf, 1979, 124, 350-357.
Kirk, R.E. Experimental design: Procedures for the behavioral sciences. Belmont, California: Brooks/Cole Publishing Company, 1968.
Klopping, H.W.E. Language understanding of deaf students under three auditory-visual stimulus conditions. American Annals of the Deaf, 1972, 117, 389-396.
Kretschmer, R.E. (Ed.). Reading and the hearing-impaired individual. Volta Review, 1982, 84, 5.
Kretschmer, R., & Kretschmer, L. Language development and intervention with the hearing impaired. Baltimore: University Park Press, 1978.
Lane, H. A chronology of the oppression of sign language in France and the United States. In H. Lane & F. Grosjean (Eds.), Recent perspectives in American sign language. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1980.
Lenneberg, E.H. Language disorders in childhood.
Harvard Educational Review, 1964, 34, 152-177.
Levine, E.S. The ecology of early deafness. New York: Columbia University Press, 1981.
Ling, D. Speech and the hearing impaired child: Theory and practice. Washington, D.C.: A.G. Bell Association, 1976.
Ling, D., & Ling, A.H. Basic vocabulary and language thesaurus for hearing impaired children. Washington, D.C.: The Alexander Graham Bell Association for the Deaf, Inc., 1977.
Ling, D., & Ling, A. Aural habilitation: The foundations of verbal learning in hearing-impaired children. Washington, D.C.: The Alexander Graham Bell Association, 1978.
Ling, D., & Sofin, B. Discrimination of fricatives by hearing impaired children using a vibrotactile cue. British Journal of Audiology, 1975, 9, 14-18.
Lloyd, L.L., & Price, J.G. Sentence familiarity as a factor in visual speech reception (lipreading) of deaf college students. Journal of Speech and Hearing Research, 1971, 14, 291-294.
Lowenbraun, S., Appelman, K., & Callahan, J. Teaching the hearing impaired through total communication. Columbus, Ohio: Charles E. Merrill Publishing Company, 1980.
Lynch, M.D., & Huntsberger, D.V. Elements of statistical inference for education and psychology. Boston: Allyn and Bacon, Inc., 1976.
Marmor, G.S., & Petitto, L. Simultaneous communication in the classroom: How well is English grammar represented? Sign Language Studies, 1979, 23, 99-135.
Mayberry, R. Manual communication. In H. Davis and S. Silverman (Eds.), Hearing and deafness. New York: Holt, Rinehart and Winston, 1978.
Meadow, K.P. Deafness and child development. Berkeley, Cal.: University of California Press, 1980.
Menyuk, P. Cognition and language. The Volta Review, 1976, 78, 250-257.
Menyuk, P. Early receptive language: From babbling to words. In R. Schiefelbusch & L. Lloyd (Eds.), Language perspectives - acquisition, retardation and intervention. Baltimore: University Park Press, 1974.
Menyuk, P. In R.E.
Stark (Ed.), Sensory capabilities of hearing impaired children. Baltimore: University Park Press, 1974.
Myklebust, H. The psychology of deafness. New York: Grune & Stratton, 1964.
Mindel, E.D., & Vernon, M. They grow in silence: The deaf child and his family. Silver Spring, Md.: National Association of the Deaf, 1971.
Moores, D. Neo-oralism and education of the deaf in the Soviet Union. Exceptional Children, 1972, 38, 377-384. (a)
Moores, D. Communication: Some unanswered questions and some unquestioned answers. In T. O'Rourke (Ed.), Psycholinguistics and total communication. Silver Spring, Md.: American Annals of the Deaf, 1972, 1-10. (b)
Moores, D.F. Educating the deaf: Psychology, principles and practices (2nd ed.). Boston: Houghton Mifflin Company, 1982.
Moores, D., Weiss, K., & Goodwin, M. Receptive abilities of deaf children across five modes of communication. Exceptional Children, 1973, 40, 22-28.
Moray, N. Attention: Selective processes in vision and hearing. New York: Academic Press, 1969.
Neisser, U. Cognitive psychology. New York: Appleton-Century-Crofts, 1967.
Nelson, L.R. Guide to LERTAP use and interpretation. Dunedin, New Zealand: University of Otago, Education Department, 1974.
Newman, J.B. The categorization of disorders of speech, language, and communication. Journal of Speech and Hearing Disorders, 1962, 27, 287-289.
Nie, N., Hull, C., Jenkins, J., Steinbrenner, K., & Bent, D. (Eds.). SPSS: Statistical package for the social sciences. New York: McGraw-Hill Book Company, Inc., 1975.
Nitchie, E.B. Lip-reading principles and practice. New York: F. Stokes & Co., 1930.
Nix, G.W. Total communication: A review of the studies offered in its support. Volta Review, 1975, 77, 470-494.
Norman, D.A., & Bobrow, D.G. On data-limited and resource-limited processes. Cognitive Psychology, 1975, 7, 44-64.
O'Connor, N., & Hermelin, B. Seeing and hearing and space and time. London: Academic Press Inc. (London) Ltd., 1978.
Oller, D. Tactual speech perception by minimally trained deaf subjects. Journal of Speech and Hearing Research, 1980, 23, 769-778.
Oyer, H.J. (Ed.). Communication for the hearing handicapped: An international perspective. Baltimore: University Park Press, 1976.
Pickett, J.M. Tactual communication of speech sounds to the deaf: Comparison with lipreading. Journal of Speech and Hearing Disorders, 1963, 28, 315-330.
Pollack, D. Acoupedics: A uni-sensory approach to auditory training. Volta Review, 1964, 66, 400-409.
Pollack, I. Perceptual and cognitive strategies: State-of-the-art report. In R. Stark (Ed.), Sensory capabilities of hearing-impaired children. Baltimore: University Park Press, 1974.
Quigley, S., Power, D., & Steinkamp, M. The language structure of deaf children. The Volta Review, 1977, 79, 73-84.
Quigley, S.P., Steinkamp, M.W., Power, D.J., & Jones, B.W. Test of syntactic abilities. Beaverton, Oregon: Dormac, Inc., 1978.
Reynolds, D. Effects of double stimulation: Temporary inhibition of response. Psychological Bulletin, 1964, 62, 333-347.
Scheffe, H. A method of judging all contrasts in the analysis of variance. Biometrika, 1953, 40, 87-104.
Schmitt, P. Language instruction for the deaf. Volta Review, 1966, 68, 73-94.
Schulte, K. Fonator system: Speech stimulation and speech feedback by technically amplified one-channel vibrations. In G. Fant (Ed.), Speech communication ability and profound deafness. Washington, D.C.: Alexander Graham Bell Association for the Deaf, 1972.
Schwartz, J., & Black, J. Some effects of sentence structures on speechreading. Central States Speech Journal, 1967, 18, 86-90.
Staats, A.W. Learning, language, and cognition. New York: Holt, Rinehart and Winston, Inc., 1968.
Stanford Achievement Test for Hearing Impaired Students. Washington, D.C.: Gallaudet College, Office of Demographic Studies, 1972.
Stark, R.E. (Ed.). Sensory capabilities of hearing-impaired children. Baltimore: University Park Press, 1974.
Streng, A. Syntax, speech & hearing: Applied linguistics for teachers of children with language and hearing disabilities. New York: Grune & Stratton, 1972.
Stuckless, R.E. Real-time graphic display and language development for the hearing impaired. The Volta Review, 1981, 83, 291-300.
Treisman, A. Verbal cues, language and meaning in attention. American Journal of Psychology, 1964, 77, 206-214.
Tukey, J.W. The problem of multiple comparisons. Ditto, Princeton University, 396 pp., 1953.
Vernon, M. Mind over mouth: A rationale for "total communication." The Volta Review, 1972, 74, 529-540.
White, A.H., & Stevenson, V.M. The effects of total communication, manual communication, oral communication, and reading on the learning of factual information in residential school deaf children. American Annals of the Deaf, 1975, 120, 48-57.
Wilbur, R. American sign language and sign systems. Baltimore: University Park Press, 1979.
Winer, B.J. Statistical principles in experimental design. New York: McGraw-Hill Book Company, Inc., 1971.

Appendix A. Questionnaire Used in Data Collection

ANALYSIS OF UNIMODAL AND MULTIMODAL LANGUAGE RECEPTION

CONFIDENTIAL: All information which would permit identification of any individual will be held strictly confidential.

A. PERSONAL/DEMOGRAPHIC
1. Student code
2. Name: (last) (first)
3. Gender: Male (1); Female (2)
4. Birthdate/Age: (year/month/day); months
5. Hearing Threshold Level (HTL) (p.t.a., better ear, 500, 1k, 2k)
6. Onset of loss: birth (1); 0-6 mos (2); 7-12 mos (3); 13-18 mos (4); 19-24 mos (5); 24+ mos (6); unknown (7)
7. First fitting of aids: 0-6 mos (1); 7-12 mos (2); 13-18 mos (3); 19-24 mos (4); 25-30 mos (5); 31+ mos (6)
8. Use of amplification: Does student wear amplification?
   Yes(1) / No(2)
   If yes, how frequently: seldom(1), sometimes(2), usually(3), constantly(4)

B. COMMUNICATION/EDUCATION

9. Language of the home: English(1), ESL(2), ASL(3)
10. HI family: None(1), Father(2), Mother(3), Both Parents(4), Brother/Sister(5), Sibling & Parent(6)
11. Place of residence during school week: home(1), dorm(2), other(3)
12. Present method (primary):
    a) in classroom:  i) student   ii) teacher
       oral/aural(1), ASL(2), manual English(3), oral/aural/sign (total)(4), writing(5)
    b) in home:  i) student   ii) by parents
13. Previous method: same as above(6), or oral/aural(1), ASL(2), manual English(3), oral/aural/sign(4), writing(5)
    a) in classroom:  i) student   ii) by teacher
    b) Number of years previous method used ___
    c) in home:  i) by student   ii) by parents
    d) Number of years previous method used ___
14. Educational Placement:
    a) present school (82-09-01): total years in school ___; regular elementary(2), regular secondary(3), school for the deaf(4), other(5)
    b) previous schools: preschool(01), reg. elem.(02), reg. sec.(03), school for the deaf(04), other(05), 1 & 2(06), (07), (08), (09), (10), (11)
    c) present class(es): regular class(1) ___ hrs/wk; resource room(2) ___ hrs/wk; special class for h.i.(3) ___ hrs/wk; off campus J.H.S.(4) ___ hrs/wk  (note: should total 25 hrs/week)
    d) previous classes: regular class(1) ___ years; resource room(2) ___ years; special class for h.i.(3) ___ years; off campus J.H.S.(4) ___ years; other(5) ___ years  (note: should total same as 14 above)

C. TEST DATA

15. Language scores: (T.S.A. Screen) (date ___)
16. Achievement scores (S.A.T.-H.I.): a) vocabulary (date ___)  b) comprehension (date ___)
17. Visual-spatial score: (Ravens) (date ___)

D. EXPERIMENTAL DATA

18.
Treatment order group:
    C1 (o/a, oral, TC, manual, aural) (1)
    C2 (oral, TC, manual, o/a, aural) (2)
    C3 (TC, manual, o/a, oral, aural) (3)
    C4 (manual, o/a, oral, TC, aural) (4)
19. Scores (by method): a) oral ___  b) oral/aural ___  c) manual ___  d) combined ___  e) aural ___  Total score ___
20. Scores by sentence type: (1 to 12)

Appendix B. Lexical Item Pool

[The lexical item pool was printed as a table classifying several hundred common monosyllabic test words by initial consonant group (I labiodental /f, v/; II rounded labial /w, wh, r/; III bilabial /p, b, m/; IV apicodental /θ, ð/; V obscure consonants) and by vowel group, each vowel keyed to an example word: high front /i/ beet, /ɪ/ bit, /eɪ/ bait, /ɛ/ bet, /æ/ hat; high mid /ɝ/ pert, /ʌ/ pun, /aɪ/ bite, /ɑ/ pot; high back /u/ boot, /ʊ/ book, /oʊ/ boat, /ɔ/ bought; plus low front, low mid front, and low back groups. Representative items include face, friend, rake, him, pants, bath, dad, leg, and tell; the column alignment of the full word list could not be recovered from this copy.]

Appendix C. Sentences Used in Present Study

List A                                Type  Words  Morphemes  Cheremes*  Visemes**
 1. John drinks his juice.             II     4       5          8          4
 2. The sun is red.                    IV     4       4          4          4
 3. That man was my dad.               III    5       5          5          5
 4. Those ducks are in the pond.       V      6       7          7          6
 5. What does Jim have?                II     4       5          7          4
 6. Who was that thin boy?             III    5       5          5          5
 7. Is the food hot?                   IV     4       4          4          4
 8. Where is my knife and fork?        V      6       6          6          6
 9. Bring that gold purse.             II     4       4          4          4
10. Bring me those boots.              II     4       5          5          4
11. Give that ball to Earl.            II     5       5          8          5
12. Paint the door with this brush.    II     6       6          6          6
    Totals                                   57      61         69         57

List B                                Type  Words  Morphemes  Cheremes   Visemes
 1. Joan makes her bed.                II     4       5          8          4
 2. The sky is blue.                   IV     4       4          4          4
 3. That girl was my friend.           III    5       5          5          5
 4. Those birds are in the tree.       V      6       7          7          6
 5. What does Tim watch?               II     4       5          7          4
 6. Who was that old man?              III    5       5          5          5
 7. Is the book long?                  IV     4       4          4          4
 8. Where is your hat and coat?        V      6       6          6          6
 9. Find that blue coat.               II     4       4          4          4
10. Take him those chairs.             II     4       5          5          4
11. Bring that box for Bert.           II     5       5          8          5
12. Start the car with this key.       II     6       6          6          6
    Totals                                   57      61         69         57

* Cheremes: as used here, refer to meaningful manual units.
** Visemes: as defined in chapters two and three.

List C                                Type  Words  Morphemes  Cheremes   Visemes
 1. Brad combs his hair.               II     4       5          8          4
 2. The night is black.                IV     4       4          4          4
 3. That doll was my toy.              III    5       5          5          5
 4. Those dogs are in the car.         V      6       7          7          6
 5. What does Tom want?                II     4       5          7          4
 6. Who was that young girl?           III    5       5          5          5
 7. Is the ball soft?                  IV     4       4          4          4
 8. Where is your brush and comb?      V      6       6          6          6
 9. Read that new book.                II     4       4          4          4
10. Give him those plants.             II     4       5          5          4
11. Take that bag to Pat.              II     5       5          7          5
12. Cut the cake with this knife.      II     6       6          6          6
    Totals                                   57      61         68         57

List D                                Type  Words  Morphemes  Cheremes   Visemes
 1. June cleans her hands.             II     4       6          9          4
 2. The grass is green.                IV     4       4          4          4
 3. That house was my home.            III    5       5          5          5
 4. Those cows are in the field.       V      6       7          7          6
 5. What does Ted know?                II     4       5          7          4
 6. Who was that new friend?           III    5       5          5          5
 7. Is the clock slow?                 IV     4       4          4          4
 8. Where is my cup and plate?         V      6       6          6          6
 9. Wash that old pot.                 II     4       4          4          4
10. Buy me these shoes.                II     4       5          5          4
11. Get that watch for Fern.           II     5       5          8          5
12. Write your name with this pen.     II     6       6          6          6
    Totals                                   57      62         70         57

Appendix D. Sample Test Booklet

ANSWER BOOKLET

NAME: _______  BIRTHDATE: 19__ (year) (month) (day)

THIS EXPERIMENT WILL TRY TO MEASURE HOW WELL STUDENTS CAN REMEMBER AND WRITE DOWN SENTENCES. YOU WILL NEED TO PAY CLOSE ATTENTION, BECAUSE SOME SENTENCES ARE SIGNED, SOME ARE SPOKEN, AND SOME ARE SIGNED AND SPOKEN. SOME HAVE 4 OR 5 OR 6 WORDS. SOME SENTENCES HAVE NAMES OF PEOPLE (WHICH ARE FINGERSPELLED). WATCH AND LISTEN TO EACH SENTENCE, AND THEN WRITE IT ON THE ANSWER SHEET. IF YOU DO NOT KNOW WHAT A WORD WAS THEN YOU SHOULD GUESS. BE CAREFUL TO WRITE THE WORDS IN THE PROPER BLANK SPACES. HAVE FUN.

EXAMPLES: YOU MAY SEE/HEAR: "MY DOG IS BIG" SO YOU WOULD WRITE (OR PRINT)  1. My dog is big
IF YOU WERE NOT SURE ABOUT ONE OF THE WORDS, YOU COULD PUT  1. My ___ is big
DO YOU HAVE ANY QUESTIONS?

[Answer sheets follow: Lists A through E, each a page of twelve numbered blank lines, with the student's name at the top of each page.]

Appendix E.
Partial Correlations for Dependent and Independent Variables

Table 16
Partial Correlations for Oral Mode Variables Prior to Regression

Category  Variable    Partial Correlation  F to Enter
I         Age              0.29691            6.91
I         HTL             -0.26336            4.77
I         Onset            0.08361            0.45
II        Fitaid           0.15195            1.51
II        Useaid           0.20212            2.73
II        Langhom         -0.07964            0.41
II        Hlfamly         -0.01077            0.01
II        Reside           0.07014            0.32
II        Clascom         -0.10276            0.68
II        Homcom           0.12052            0.04
II        Prevcom          0.29082            5.91
III       Totyrs           0.29334            6.03
III       Schnow          -0.17941            2.13
III       Schbfor          0.23837            3.86
III       Clasnow         -0.05822            0.22
III       Clasbfor        -0.05822            0.22
IV        Syntax           0.49420           20.68
IV        Sathivoc         0.09252            0.55
IV        Sathicmp         0.45392           16.61
IV        Ravens           0.26494            4.83

Table 17
Oral Mode Partial Correlations After Forced Entry of Significant Variables

Variables in Equation         Coefficient   F to Remove
I         Age                   0.03567        2.21
I         HTL                  -0.10738        3.43
II        Useaid               -0.80369        4.36
IV        Syntax                0.07501       10.34
(Y-Intercept 1.86069)

Variables Not in Equation     Partial Corr.  F to Enter
I         Onset                 0.17988        2.01
II        Fitaid                0.06534        0.26
II        Langhom              -0.16732        1.73
II        Hlfamly              -0.29362        5.66
II        Reside                0.21876        3.02
II        Clascom              -0.00823        0.00
II        Homcom                0.11398        0.79
II        Prevcom               0.16766        1.74
III       Totyrs                0.03667        0.08
III       Schnow               -0.0902         0.51
III       Schbfor               0.21119        2.80
III       Clasnow              -0.10622        0.68
III       Clasbfor             -0.14603        1.31
IV        Sathivoc              0.05959        0.21
IV        Sathicmp              0.04032        0.10
IV        Ravens               -0.05643        0.19

Table 18
Partial Correlations for Oral-Aural Mode Variables Prior to Regression

Category  Variable    Partial Correlation  F to Enter
I         Age              0.21047            2.97
I         HTL             -0.48615           19.81
I         Onset           -0.05397            0.19
II        Fitaid           0.09927            0.64
II        Useaid           0.44013           15.38
II        Langhom         -0.03457            0.08
II        Hlfamly          0.03528            0.08
II        Reside          -0.05220            0.17
II        Clascom         -0.05803            0.22
II        Homcom          -0.13204            0.14
II        Prevcom          0.14033            1.29
III       Totyrs           0.13317            1.16
III       Schnow          -0.25472            4.44
III       Schbfor          0.13916            1.26
III       Clasnow          0.11503            0.86
III       Clasbfor         0.20120            2.70
IV        Syntax           0.37416           10.42
IV        Sathivoc         0.06632            0.28
IV        Sathicmp         0.37483           10.46
IV        Ravens           0.16056            1.69

Table 19
Oral-Aural Mode Partial Correlations After Forced Entry of Significant Variables

Variables in Equation         Coefficient   F to Remove
I         Age                   0.09800        6.42
I         HTL                  -0.39997       18.38
II        Useaid                2.94261       22.55
IV        Syntax                0.08101        4.65
(Y-Intercept 17.78104)

Variables Not in Equation     Partial Corr.  F to Enter
I         Onset                -0.07946        0.38
II        Fitaid                0.04246        0.11
II        Langhom              -0.09232        0.52
II        Hlfamly              -0.18929        2.23
II        Reside                0.13834        1.17
II        Clascom               0.16280        1.63
II        Homcom               -0.19220        2.30
II        Prevcom              -0.04381        0.12
III       Totyrs               -0.15568        1.49
III       Schnow               -0.23051        3.37
III       Schbfor               0.14332        1.26
III       Clasnow               0.13843        1.17
III       Clasbfor              0.13044        1.04
IV        Sathivoc              0.01466        0.01
IV        Sathicmp              0.05535        0.18
IV        Ravens               -0.02126        0.03

Table 20
Partial Correlations for Manual Mode Variables Prior to Regression

Category  Variable    Partial Correlation  F to Enter
I         Age              0.29874            6.27
I         HTL             -0.15034            1.48
I         Onset           -0.32954            7.80
II        Fitaid           0.27405            5.20
II        Useaid           0.00651            0.00
II        Langhom          0.03598            0.08
II        Hlfamly          0.11524            0.86
II        Reside          -0.16780            1.85
II        Clascom          0.06057            0.24
II        Homcom           0.13370            1.16
II        Prevcom          0.16455            1.78
III       Totyrs           0.37149           10.90
III       Schnow          -0.30666            6.64
III       Schbfor          0.18014            2.15
III       Clasnow          0.08829            0.50
III       Clasbfor        -0.11500            0.86
IV        Syntax           0.67636           53.96
IV        Sathivoc         0.21774            3.19
IV        Sathicmp         0.61471           38.87
IV        Ravens           0.39525           11.85

Table 21
Manual Mode Partial Correlations After Forced Entry of Significant Variables

Variables in Equation         Coefficient   F to Remove
I         Age                  -0.01379        0.12
I         Onset                -1.05218        8.11
III       Clasbfor             -4.19831        2.07
IV        Syntax                0.24942       35.21
(Y-Intercept 23.38536)

Variables Not in Equation     Partial Corr.  F to Enter
I         HTL                  -0.15330        1.44
II        Fitaid                0.19753        2.44
II        Useaid                0.15773        1.53
II        Langhom              -0.07079        0.30
II        Hlfamly              -0.24323        3.77
II        Reside               -0.11118        0.75
II        Clascom               0.06377        0.24
II        Homcom                0.05519        0.18
II        Prevcom               0.01519        0.01
III       Totyrs                0.15122        1.40
III       Schnow               -0.23252        3.43
III       Schbfor               0.15676        1.51
III       Clasnow               0.12321        0.92
IV        Sathivoc              0.23478        3.50
IV        Sathicmp              0.13530        1.12
IV        Ravens               -0.10235        0.64

Table 22
Partial Correlations for Simultaneous Mode Variables Prior to Regression

Category  Variable    Partial Correlation  F to Enter
I         Age              0.42116           13.80
I         HTL             -0.24521            4.09
I         Onset           -0.30001            6.33
II        Fitaid           0.21373            3.06
II        Useaid           0.06843            0.30
II        Langhom          0.02943            0.06
II        Hlfamly          0.17799            2.09
II        Reside          -0.11851            0.91
II        Clascom          0.01601            0.02
II        Homcom           0.10826            0.76
II        Prevcom          0.13218            1.14
III       Totyrs           0.45441           16.65
III       Schnow          -0.35960            9.51
III       Schbfor          0.24901            4.23
III       Clasnow          0.16381            1.76
III       Clasbfor        -0.01467            0.01
IV        Syntax           0.77142           94.06
IV        Sathivoc         0.18038            2.15
IV        Sathicmp         0.70646           63.77
IV        Ravens           0.47207           18.35

Table 23
Simultaneous Mode Partial Correlations After Forced Entry of Significant Variables

Variables in Equation         Coefficient   F to Remove
I         Age                   0.07056        1.91
I         HTL                  -0.25637        5.24
I         Onset                -1.12973        6.92
II        Useaid                1.43937        3.04
III       Clasbfor             -3.48900        0.99
IV        Syntax                0.33528       47.12
(Y-Intercept 26.93034)

Variables Not in Equation     Partial Corr.  F to Enter
II        Fitaid                0.02353        0.03
II        Langhom              -0.08225        0.40
II        Hlfamly              -0.27489        4.74
II        Reside                0.01967        0.02
II        Clascom               0.16153        1.55
II        Homcom                0.02308        0.03
II        Prevcom              -0.15773        1.48
III       Totyrs                0.10094        0.60
III       Schnow               -0.31397        6.34
III       Schbfor               0.22036        2.96
III       Clasnow               0.24894        3.83
IV        Sathivoc              0.19363        2.26
IV        Sathicmp              0.11934        0.84
IV        Ravens               -0.06134        0.22

Table 24
Partial Correlations for Aural Mode Variables Prior to Regression

Category  Variable    Partial Correlation  F to Enter
I         Age              0.08196            0.43
I         HTL             -0.33806            8.26
I         Onset            0.04961            0.16
II        Fitaid           0.00839            0.00
II        Useaid           0.34636            8.72
II        Langhom         -0.11863            0.91
II        Hlfamly         -0.11081            0.80
II        Reside           0.01410            0.01
II        Clascom         -0.09636            0.60
II        Homcom          -0.06447            0.27
II        Prevcom         -0.08376            0.45
III       Totyrs           0.10365            0.70
III       Schnow           0.01388            0.01
III       Schbfor          0.04539            0.13
III       Clasnow          0.06645            0.28
III       Clasbfor         0.29358            6.04
IV        Syntax           0.12940            1.09
IV        Sathivoc         0.06561            0.28
IV        Sathicmp         0.03394            0.07
IV        Ravens           0.04320            0.12

Table 25
Aural Mode Partial Correlations After Forced Entry of Significant Variables

Variables in Equation         Coefficient   F to Remove
I         HTL                  -0.14371        6.23
II        Useaid                0.76813        4.58
III       Clasbfor              3.35590        4.99
(Y-Intercept 13.89016)

Variables Not in Equation     Partial Corr.  F to Enter
I         Age                   0.16813        1.77
I         Onset                 0.11598        0.83
II        Fitaid               -0.00942        0.01
II        Langhom              -0.14394        1.29
II        Hlfamly              -0.11303        0.79
II        Reside                0.13184        1.08
II        Clascom               0.01200        0.01
II        Homcom               -0.01873        0.02
II        Prevcom              -0.11423        0.81
III       Totyrs                0.16616        1.73
III       Schnow                0.06749        0.28
III       Schbfor               0.02722        0.05
III       Clasnow              -0.05604        0.19
IV        Syntax                0.16435        1.69
IV        Sathivoc              0.0848         0.49
IV        Sathicmp             -0.00174        0.00
IV        Ravens                0.15208        1.26

Table 26
Demographic Characteristics and Mode Scores of Subjects Retained in Regression Analyses

Variable        Mean      S.D.     Min.    Max.
 1. Age        179.98    28.16     90.0   222.0
 2. HTL        100.86     9.52     68.0   113.0
 3. Onset        3.26     2.77      1.0     7.0
 4. Fitaid       2.95     2.75      0.0     6.0
 5. Useaid       2.00     1.55      0.0     4.0
 6. Langhom      1.34     0.77      0.0     4.0
 7. Hlfamly      0.26     0.44      0.0     1.0
 8. Reside       1.30     0.49      1.0     3.0
 9. Clascom      3.81     0.87      1.0     7.0
10. Homcom       3.68     1.85      1.0     8.0
11. Prevcom      0.27     0.45      0.0     1.0
12. Totyrs       9.97     2.28      4.0    15.0
13. Schnow       3.60     0.55      2.0     4.0
14. Schbfor      5.56     2.55      1.0    11.0
15. Clasnow      0.15     0.36      0.0     1.0
16. Clasbfor     0.15     0.36      0.0     1.0
17. Syntax      68.73    27.19     27.0   117.0
18. Sathivoc   124.49    19.89     10.0   160.0
19. Sathicmp   135.99    18.32    106.0   176.0
20. Ravens      35.05    11.15     12.0    52.0
    Oral         4.21     5.21      0.0    23.0
    Oraur        6.53     9.88      0.0    40.0
    Manual      33.98    10.68      9.0    50.0
    Simult      35.48    14.07      5.0    55.0
    Aural        1.44     4.87      0.0    27.0
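Tables 16 through 25 pair each candidate variable's partial correlation with the mode score (controlling for the variables already in the regression equation) with an F-to-enter statistic. The relationship between the two quantities can be sketched as follows. This is a minimal modern illustration of the stepwise-entry computation, not the analysis program used in the study; the function name and the synthetic data are assumptions of this sketch.

```python
import numpy as np

def partial_corr_f_to_enter(y, x, Z):
    """Partial correlation of criterion y with candidate predictor x,
    controlling for predictors already entered (columns of Z), plus the
    F-to-enter statistic used to decide whether x joins the equation.

    Hypothetical helper for illustration only.
    """
    n = len(y)
    Z1 = np.column_stack([np.ones(n), Z])                  # intercept + entered predictors
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]    # residualize y on Z
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]    # residualize x on Z
    r = float(np.corrcoef(ry, rx)[0, 1])                   # partial correlation
    df = n - Z1.shape[1] - 1                               # error df if x were added
    F = r * r * df / (1.0 - r * r)                         # F-to-enter, (1, df) df
    return r, F
```

A candidate would be entered when its F-to-enter exceeds the chosen critical value of F(1, df), which is consistent with how stepwise procedures arrive at the "significant variables" forced into the equations of Tables 17, 19, 21, 23, and 25.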
