UBC Theses and Dissertations
The relative contribution of gesture to communication in aphasia : a case study Hirsch, Fabiane Monique 1992


Full Text

THE RELATIVE CONTRIBUTION OF GESTURE TO COMMUNICATION IN APHASIA: A CASE STUDY

by

FABIANE MONIQUE HIRSCH

B.Sc., The University of British Columbia, 1988

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE SCIENCES (School of Audiology and Speech Sciences)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
September 1992
© Fabiane Monique Hirsch, 1992

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

(Signature)
Department of Audiology and Speech Sciences
The University of British Columbia
Vancouver, Canada
Date: September 25, 1992

ABSTRACT

Communication involves the use of many modalities. Communication modalities used for expressing information include speech, gesture, writing, and others. Although speech is usually considered the primary modality for expression of information, other modalities, such as gesture, are often integral parts of communicative exchanges.

The purpose of this study was to examine the expressive communication ability, through different modalities, in an aphasic subject who demonstrated exceptional use of gesture in spontaneous conversation.

Findings were discussed within the context of two hypotheses: (1) a central organizer exists which coordinates the output modalities; and (2) the modalities are differentially impaired for this subject, but symbolic functioning is relatively intact.

This investigation involved a single subject. Videotapes were made of the subject describing sequences of pictures.
Modality use (speech, gesture, or written) was restricted in different attempts at sequence description, so that the subject described each sequence in each individual modality and in all possible combinations of modalities. Judges observed the videotapes of the subject (and written material produced by the subject, when writing was allowed), and recorded the information they perceived. The information recorded by the judges was the data used for analysis of the subject's communicative ability through different modalities.

Findings supported differential impairment of modalities for this subject. Communication through speech was more severely impaired than communication through gesture or writing. Symbolic functioning appeared to be relatively intact.

Coordination of the output modalities was demonstrated. It was proposed that a central organizer is responsible for this coordination. A brief outline of how such a central organizer might function is presented.

Clinical implications of the findings are discussed.

TABLE OF CONTENTS

ABSTRACT
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
CHAPTER 1. INTRODUCTION
    Literature Review
        Classification of Gestures
        The Relationship between Gesture and Speech
        Aphasia and Gestural Communication
        Clinical Implications
    Statement of Purpose
CHAPTER 2. METHOD
    Subject
    Preliminary Investigation
    Methodology
    The Subject's Task
    Judging
    Analysis
CHAPTER 3. RESULTS
    Inclusive Dataset
        Confidence Ratings for Information Perceived by Judges
        'Correctness' of Information Perceived by Judges
    Condensed Dataset
        Modality Use
        Types of Information
    Gist Dataset
CHAPTER 4. DISCUSSION
    Judges
    Symbolic Functioning and Differential Impairment of the Modalities
    The Central Organizer Hypothesis
    Clinical Implications
    Conclusion
BIBLIOGRAPHY
APPENDIX A. INSTRUCTIONS FOR JUDGES AND SAMPLE SCORE SHEET
APPENDIX B.
SAMPLE OF DATA FROM PRELIMINARY INVESTIGATION
APPENDIX C. CONDENSED DATASET

LIST OF TABLES

Table 2.1   Activities Depicted in the Sequences
Table 2.2   Test Conditions
Table 3.1   Confidence Ratings for Each Individual Modality and Combinations of Modalities
Table 3.2   Examples of Incorrect, Correct, and Meaningless Information Reported for Sequence 1
Table 3.3   Amounts of Incorrect, Correct, and Meaningless Information Perceived by Judges Through Each Individual Modality and Combinations of Modalities
Table 3.4   Modality Use in Each of the Conditions
Table 3.5   Types of Information in Each Individual Modality and Combinations of Modalities

LIST OF FIGURES

Figure 3.1   Percentage of Each of the Five Confidence Ratings Recorded by Judges
Figure 3.2   Confidence Ratings for Each Piece of Information Communicated Through Only One Modality
Figure 3.3   Percentage of Correct, Incorrect, and Meaningless Information Perceived by Judges
Figure 3.4   Percentage of Correct, Incorrect, and Meaningless Information in Each Modality
Figure 3.5   Percentage of Accurate Information Conveyed Through Each Modality
Figure 3.6   Percentage of Information Conveyed Through Each Modality in Condition 1
Figure 3.7   Information Conveyed Through 1 Versus 2 Modalities in Conditions 2, 3, and 4
Figure 3.8   Relative Use of Writing When 1, 2, or 3 Modalities were Allowed
Figure 3.9   Relative Use of Gesture When 1, 2, or 3 Modalities were Allowed
Figure 3.10  Percentage of Each Type of Information Over All Conditions
Figure 3.11  Percentage of Each Type of Information Conveyed When Only Writing was Allowed
Figure 3.12  Percentage of Each Type of Information Conveyed When Only Gesture was Allowed
Figure 3.13  Frequency of Gist Perceived in Each Condition
Figure 4.1   Proposed Central Organizer

ACKNOWLEDGEMENTS

I would like to extend sincere thanks to the subject of this study for his patience and sense of humour.
I would also like to thank my family, my friends, and the faculty and staff at the U.B.C. School of Audiology and Speech Sciences who supported and encouraged me through this endeavour.

CHAPTER 1
INTRODUCTION

Communicative interactions are rarely restricted solely to verbal[1] interchange. How often does one describe the size of 'the fish that got away' by simply saying that it was three feet long, rather than holding up one's hands to describe its size? (And also often to emphasize just how big it really was, or seemed to be!) Using one's hands to aid communication is part of a broad spectrum of behaviours which can be collectively referred to as gesture.

Although speech is usually considered the primary modality for exchange of information, gesture is often an integral part of the communication process. Gesture can reinforce what is communicated verbally, but can also communicate information independently. It also often plays a role in regulating participants' contributions in communicative exchanges. In disorders affecting communication, such as aphasia, in which verbal communication is impaired, attention is often turned to other communication options, such as the use of gesture.

The following is an exploration of the literature dealing with gestural use. First, there is a discussion of gestural classification schemes. This is followed by a review of theories which attempt to interpret how speech and gesture are related to one another in communication. Then, theoretical perspectives are presented which attempt to show how gestural communication is affected in aphasia. This is followed by a review of clinical implications of the relationship between research findings on gestural communication and aphasia.

Following the literature review, the purpose of the present study is outlined.

[1] 'Verbal' communication is defined as communication via speech, to the exclusion of gesture, for the purposes of this discussion.
This terminology, however, is not agreed upon universally. McNeill (1985), for example, proposes that some types of gesture should be classified as 'verbal'.

Literature Review

Classification of Gestures

Numerous schemes for classifying gestures have been documented in the literature. Although the terminology for the different gesture types varies widely between these schemes, consistency between schemes on what constitutes different types of gestures is apparent. Gestures which indicate an aspect of the content of what is spoken (for example, demonstrating the size of 'the fish that got away') have been classified in many of these schemes as a distinct type of gesture. McNeill and Levy (1982) called such gestures 'iconic' gestures. Freedman (1972) labelled what appear to be these same gestures as 'motor primacy movements', and Wiener et al. (1972) referred to them as 'pantomimic gestures'.

Other gestures bear only an abstract relation to the content of the accompanying verbal message. They are typically subordinated to the rhythm of speech. McNeill and Levy (1982) called such gestures 'beats', and Freedman (1972) called them 'speech primacy movements'.

Still other gestures, rather than adding information to speech output, appear to function as complete utterances by themselves. An example of such an utterance would be touching the index finger to the thumb with the other fingers extended, to indicate 'okay'. Some labels given to such gestures have included 'emblems' (Ekman, 1977) and 'autonomous gestures' (Kendon, 1983).

Gestures of interactional management, pointing gestures, facial expressions, and numerous other communicative actions have been included in various gestural classification schemes.

Given all of the proposals for classification systems, it is apparent that the study of gestural communication is very complex.
Unfortunately, interpretation of the many studies of gestural ability of aphasics and other populations is difficult because of the wide variety of classification schemes in use. Yet, the study of gesture appears to promise exciting insights into the functioning of the human brain, especially with regard to how the human brain processes language. To this effect, McNeill (1985) stated, 'gestures offer themselves as a second channel of observation of the psychological activities that take place during speech production - the first channel being overt speech itself.'

The Relationship between Gesture and Speech

How different modalities (speech, gesture, writing) are represented in the human brain and how they are employed in the complex task of communication is of major interest in a wide variety of fields, including neurolinguistics, neuropsychology, anthropology, and aphasiology. Cicone et al. (1979), in a study of gesture and verbal language in aphasia, articulated the importance of studying varied modalities. They stated that information about the relation between verbal and gestural output is important in examining what they call a 'fundamental theoretical issue in communication': Is there a central organizer which controls the modalities of communication, or is the division of labour among the modalities more complex, for example with one modality taking the lead, or with the modalities operating relatively independently of one another? I will refer to these three hypotheses as the Central Organizer Hypothesis, the Leading Modality Hypothesis, and the Relative Independence Hypothesis, respectively, for the following discussion. Because there is extensive support for the Central Organizer Hypothesis, discussion of this hypothesis is saved until the end.

The Leading Modality Hypothesis

Cicone et al. (1979) suggested that results of their study could be used to support the notion of one modality, speech, taking the lead over another, namely gesture.
In this study, informally structured interviews of two Wernicke's aphasics, two Broca's aphasics, and four control subjects were observed in order to evaluate the ways in which aphasic patients use gesture in spontaneous conversation. Numerous analyses of the data were subsequently carried out, including an analysis of the physical parameters of the gesturing, an analysis of the types of gestures used (referential (iconic, non-iconic, other) versus nonreferential), an analysis of the types of propositions communicated, a case analysis depicting the kinds of information communicated, an analysis of speech versus gesture as modes of communication, and an analysis of the clarity of the information conveyed. One important finding was that, overall, speech carried most of the information for both aphasic groups. Another interesting finding was that Broca's aphasics tended to omit nonreferential gestures, which appeared to reflect the omission of nonreferential units in speech. These two important findings can be used to support the notion that speech functions as the major and dominant channel, with gesture appearing as a secondary reflection of the properties of speech.

The Relative Independence Hypothesis

Several lines of evidence lend support to the view that different modalities operate relatively independently of one another. Feyereisen and Seron (1982) proposed the term 'the independence hypothesis' to describe the independence of the verbal and nonverbal communication systems.

One line of evidence for this hypothesis comes from studies in anthropology (Kimura, 1979). It has been suggested that a manual system of communication may have preceded vocal communication in man. Indirect evidence for this comes from the finding that all hominids were upright walkers, hence their upper limbs were freed from locomotor activities millions of years ago, leaving them available for the function of communication.
In addition, evidence of early tool use suggests that the ancestors of modern man had precise control of upper limb movement which would make gesturing possible. These findings do not, of course, prove the use of gestures by the early ancestors of man, but they do support the notion that such communication was possible. Our living primate relatives also provide support for the relative independence of gesture and speech. Gestural communication among primates exists in the absence of speech. In addition, Gardner and Gardner (1971) have found that chimpanzees can be taught to use a manual sign system, yet attempts to teach chimpanzees to use human speech sounds for communication have failed. Results from these various anthropological studies seem to suggest that gesture is a more primitive system than speech, and that these two modalities likely developed independently from one another.

A second line of evidence for the notion of 'relative independence' comes from neurophysiological studies which have suggested that speech is centered in a special structure in the brain which is independent of structures that control gestures and other nonverbal communicative behaviours. Geschwind (1965) reported that, in nonhuman species, much of the communicative behavior is controlled by structures such as the limbic system. These are not homologous to speech areas in humans. He suggested that the emergence of language in man actually depended on the introduction of a new anatomical structure, the human inferior parietal lobule. Geschwind (1965) believed that this new association area was involved in the development of speech because of its importance in enhancing cross-modal associations, namely between vision, hearing, and somesthetic sensibility. He claimed that the inferior parietal lobule freed man, to some extent, from the limbic system.

A third line of possible evidence for the independent nature of speech and gesture comes from studies of child language development.
Gestural communication appears to occur before verbal communication in the developmental progression (Bates, 1979).

Evidence from each of these areas of study suggests that speech and gesture act in relative independence of one another. It must be mentioned, however, that the 'relative independence' hypothesis does not imply a complete lack of interaction between the modalities. The word 'relative' has been included in the title of this hypothesis to emphasize this point. Undoubtedly, verbal and nonverbal communication are influenced by one another. But such reciprocal influence is inherently different from a system in which a superordinate controlling mechanism controls and coordinates the modalities. The discussion will now turn to exploration of such a system.

The Central Organizer Hypothesis

A theory supported by Cicone et al. (1979) and numerous other investigators postulates the existence of a single 'central organizer' which initiates and determines the complexity and clarity of both speech and gesture. Numerous lines of evidence, from a variety of fields, support this view of strict control and coordination of the modalities. The following review of literature supporting the Central Organizer Hypothesis includes studies from the fields of neurology, child development, anthropology, kinesics, and aphasiology.

Neurological support for this hypothesis is provided by Kimura (1973), who found that right-handed individuals tended to gesture mainly with their right hand, while no hand preference was demonstrated for nongestured ('self-touching') movements. From these results, Kimura suggested that 'one can reasonably conclude that there is some system common to control of both free movements and speaking'. She believed that this common system was likely located in the left hemisphere for most people.
These results were further supported by Kimura's study of left-handed individuals, which revealed that left-handers, who are known not to be as fully lateralized for control of speech as right-handers, tended to gesture bilaterally.

Studies in the development of communication abilities in children have shown that the use of gesture increases with age, and that the kinds of gesture and the way that gesture is related to speech change from infancy to adulthood. There appears to be a shift from elaborate pantomimes, which can take the place of speech, to a precise coordination between gesture and speech (Bates, 1979). As speech emerges during development, rather than speech replacing gesture, gestures appear to be expanded and elaborated, and they become more highly coordinated with speech. It appears that this growth in gestural ability correlates closely with growth in the child's capacity for spoken language. Bates (1979) suggested that these findings do not support a view of language replacing preverbal communication during development, but rather that language and gesture "are related via some common base involving both communication and reference", and, as this 'common base' develops, the capacity for both gesture and speech develops.

As noted previously, anthropological studies have been used by some researchers to support the notion of gesture and speech acting relatively independently, and to refute the notion of speech leading gesture. Results of such studies have also been used to provide support for central organization of both speech and gesture. This support comes primarily from the view of hypothesized common origins for these two modalities. Some investigators, such as Hewes (1973), believed that speech developed directly from the early use of gesture, implying a common origin.
As in studies of child development, gesture did not completely disappear during the evolution of communication; rather, it persisted as an accompaniment of speech.

Kinesics refers to the study of body motion, which includes the study of gesture. Kendon (1983) and McNeill (1985) both provide very interesting accounts of the relationship between body motion and speech. Kendon (1983), in studying this relationship, used findings from his earlier studies to support the notion of a central organizer underlying speech and gestural output. He defined a Gesture Unit as a unit of gesture composed of an excursion of the forelimb from a position of rest into free space in front of the speaker and back to the resting position. A Gesture Phrase was identified by a nucleus of movement ('stroke') in which the gesticulating limb performs some definite pattern of movement. Kendon plotted the boundaries of Gesture Units and Gesture Phrases and compared these gestural constituents with the concurrent flow of speech. He found that the hierarchy of gesture fit closely with the hierarchy of phrases of speech: Gesture Phrases corresponded to Tone Units (defined by Crystal and Davy (1979), as cited in Kendon (1983), as phonologically defined syllabic groupings comprising a single intonation tune), and Gesture Units corresponded to Locutions (a grouping approximating a complete sentence) (Kendon, 1983, p. 18). Kendon suggested that at the same time that a speaker is packaging meanings into phrases of speech, he or she is also packaging meanings into phrases of gesture. He emphasized his belief that gesture is not dependent upon speech, but rather that gesture and speech are available as two separate modes of representation, and are coordinated because both are being guided by the same overall 'aim', this aim being the production of a pattern of action that will accomplish the representation of meaning.
In other words, gesture and speech are separate representational systems that are produced together because they are under joint control. Such joint control correlates well with the notion of a central organizer.

McNeill (1985), considering a wide array of information collected about referential and discourse-related gestures, proposed that gesture and speech share a computational stage; that is, these two modalities appear to 'respond to the same forces at the same times', which seems to correspond closely with the notion of a central organizer. He found that, typically, gestures occurred in conjunction with speech. In a sample of six narrations, he found that about 90% of all gestures occurred during active speech output. This would be expected if speech and gesture were parallel products of a common computational stage. McNeill suggested that even the 10% of gestures that took place during silence support the existence of a common computational stage. He reported that these gestures were likely related to the breakdown of speech, and were perhaps used as an attempt to restart the speech process.

Conventional linguistic symbols, such as words and phrases, show common usage among subjects. If gestures showed a similar commonality of usage among different subjects, this would reflect another important parallel between speech and gesture. McNeill used the following examples of referential gestures with concurrent speech to support this proposal. These examples are taken from narrations of the same event of a cartoon story by 5 adult female subjects.
The left-most bracket of the gesture description indicates approximately when the gesture was started.

    he tries going up inside the drainpipe
    [hand rises and points upward]

    he tries climbing up the drainspout of the building
    [hand rises and starts to point upward]

    and he goes up through the pipe this time
    [hand rises quickly and fingers open to form a basket]

    this time he tries to go up inside the raingutter
    [hand steady and pointing] [hand rises quickly (still pointing)]

    he tries climbing up the rain barrel
    [hand flexes back and up]

    (McNeill, 1985, p. 352)

Commonality of gesturing is seen in the fact that each subject demonstrated upward movement in their gesturing, although there was much individual variation in the details of the gestures produced.

In contrast to the notion of commonality, McNeill pointed out that there appears to be at least as much variation among the sentences produced verbally as among the gestures. For example, the use of "pipe", "spout", "gutter", and "barrel" shows at least as much variation as is shown by the observed gestures. This provides additional evidence of similarities between speech and gesture, and also support for the likelihood that they share a common computational stage.

Observed semantic and pragmatic parallels between the two modalities offer further support. Semantic parallels are exemplified by what McNeill called 'iconic gestures' (gestures that in form and manner of execution exhibit a meaning relevant to the linguistic meaning expressed simultaneously) and 'metaphoric gestures' (gestures that exhibit a meaning relevant to simultaneous linguistic expressions with abstract meanings). The examples above readily demonstrate the semantic parallels between gesture and speech. In each instance, the gesture included upward movement and the concurrent sentence also conveyed the meaning of upward movement. Metaphoric gestures, which relate to more abstract concepts, show similar semantic parallels with speech.
A videotaped discussion between two mathematicians provided the following example.

    this gives a direct limit
    [left index finger slides along right index finger and comes to tensed stop just beyond the tip]

    (McNeill, 1985, p. 356)

McNeill suggested that the end marking provides a metaphor for the concept of a limit, and the straight line along which the left finger slid provides a metaphor for directness.

Pragmatic parallels are demonstrated by beats, which are gestures that emphasize off-propositional relations. They have no propositional content of their own. Whereas iconic gestures tend to be large complex movements performed slowly and carefully in the central gestural space, beats tend to be small simple movements that are performed rapidly at or near the rest position of the hands. McNeill provides the following example of a beat.

    he keeps trying to catch the bird
    [left hand at rest position rotates quickly outward and back]

    (McNeill, 1985, p. 360)

McNeill suggested that the important aspect of the sentence was that it was providing background commentary. The gesture, unrelated to catching the bird or anything else, paralleled this off-propositional role.

From these examples of the different types of gestures and their relations with speech, it is suggested that, during a common computational stage, semantic and pragmatic functions are decided on, and both modalities, speech and gesture, perform these functions in parallel. As with Kendon's (1983) results of plotting gestural constituents against concurrent verbal constituents, McNeill found that speakers synchronize their gestures with semantically and pragmatically parallel linguistic items, and these gestures rarely cross clause boundaries.

Using all of the above evidence, as well as evidence from studies of aphasia and child development discussed earlier, McNeill provides a convincing argument in support of speech and gesture sharing a common computational stage.
It is not unreasonable to assume that such a computational stage may be directed by a central organizer.

Some studies of gestural and verbal abilities in aphasia lend further support to the central organizer hypothesis. Recall that some findings in the study by Cicone et al. (1979) were used to support the Leading Modality Hypothesis. But other findings appear to support the notion of a central organizer. As mentioned earlier, Cicone et al. (1979) examined a variety of different aspects of gestural communication by Broca's and Wernicke's aphasics and control subjects in relation to speech. They found that the gestures and spontaneous speech of the aphasics exhibited very similar properties. The Wernicke's aphasics, whose speech output was fluent, grammatically varied, and quite copious, produced considerable amounts of gesture, much of which was elaborate and complex, yet was often vague and unclear. The Broca's aphasics, who demonstrated paucity and simplicity of speech, produced typically 'one-shot' gesturing which, like their speech, was rich in content. Duffy et al. (1984) found similar results when a fluent and a nonfluent aphasic performed a pantomimic referential task. Cicone et al. suggest that such findings, of speech output and gestural output paralleling one another, may lend support to the notion of a central organizer determining the clarity and complexity of both of these modalities. Relative independence of the modalities is not well supported by these findings, but they concluded that their results could not differentiate between the Central Organizer Hypothesis and the Leading Modality Hypothesis.

Several other observations made by Cicone et al. have theoretical value. They found that gestures sometimes preceded spoken referents, that gesture was sometimes clearer than speech, and that gesture sometimes occurred without any accompanying speech at all.
They concluded that gesture cannot be entirely parasitic upon speech, which appears to soften the Leading Modality Hypothesis. And they propose that, if a central organizer exists, it might retain some flexibility about which modality to employ preferentially.

Support for the hypothesis of a central organizer thus far has rested on observed similarities between speech and gesture in a variety of different fields. These studies support the notion of an organizer that controls the two modalities, yet they do not address the notion of an organizer that coordinates the two modalities. McNeill (1985) illustrates coordination of modalities by examples of complex meanings being divided between the speech and gesture channels. For example,

    she chases him out again
    [hand, gripping an object, swings from left to right]

    (McNeill, 1985, p. 353)

The act is described in the spoken sentence and the gesture conveys the idea of the instrument of the act. Thus, speech and gesture are presumed to cooperate in presenting a single cognitive representation. The speaker could have used a more complex sentence to convey the instrument, but this extra complexity was avoided by use of the gesture. It may be that speech is a modality better suited to expression of some types of information, whereas gesture is better suited to others. The important point for this discussion, however, is that the two modalities are coordinated to present a single complex meaning. Such coordination is strong evidence against relative independence of the modalities. Alternatively, it coincides very nicely with a hypothesized central organizer controlling and coordinating speech and gesture. Further evidence of coordination of the modalities in representing meanings would provide additional support for the central organizer hypothesis.

Aphasia and Gestural Communication

Several references have been made to studies of aphasic patients in the previous section to provide support for theories of modality organization and coordination.
But how is functioning of the different communicative modalities affected in aphasia? This question has prompted debates for over a century.

In the aphasia literature, reference to a lecture by D.C. Finkelnburg (1870), as translated by Duffy and Liles (1979), appears again and again. At the time of this lecture, much of the focus of research in aphasia was on cerebral localization, with relatively little emphasis on careful clinical description of the syndrome. Aphasia was widely held to be a disorder affecting only speech. Finkelnburg countered the popular view by proposing that aphasia resulted not only in deficits in speech, but also deficits in reading and writing, as well as extraverbal deficits in gestural comprehension and expression, in musical ability, in understanding of monetary value, in understanding of rank and social class, and in ability to perform religious rituals, such as Sunday mass. He presented five case studies of aphasics that displayed such deficits. He concluded that this disorder affected all of the cerebral processes that convey meaning through symbols; hence, the verbal deficit was only a small portion of the total disturbance of aphasia. He proposed that this disorder would be more aptly named "asymbolia" to reflect that there was a loss of the ability to comprehend and express symbols.

Several more recent studies support the notion of asymbolia. Duffy, Duffy, and Pearson (1975) developed a pantomime recognition test to determine whether pantomime recognition was impaired in aphasics relative to other brain-injured and nonbrain-injured subjects, and to show whether a relationship existed between pantomime recognition and verbal deficits in aphasics. Results of their analyses indicated that aphasics were indeed more impaired in pantomime recognition ability than other groups of subjects, and that there were high correlations between verbal and pantomime recognition impairments, in accordance with Finkelnburg's proposal.
They concluded that aphasia is an impairment in the common symbolic competence underlying gestural and verbal communication. Due to impairment of this central symbolic ability, gestural and verbal deficits show the same general level of severity.

Another study of gestural recognition, by Gainotti and Lemmo (1976), produced similar results. In this study, two-thirds of the aphasic subjects could not understand the meaning of simple symbolic gestures, and there was a significant correlation between verbal semantic impairment and nonverbal symbolic disintegration. These authors concluded that, in some forms of aphasia, the disturbance might be due to a central symbolic disorder, such as that proposed by Finkelnburg.

Studies of gestural expression have also supported this theory. For example, Duffy, Duffy, and Mercaitis (1984) developed the Pantomimic Referential Abilities Test (PRAT) to study the expressive pantomimic abilities of a fluent aphasic, a nonfluent aphasic, and four normal control subjects. They found a strong correlation between the severity of verbal deficits and the expressive and receptive pantomimic performances of the aphasics on the PRAT. A disturbance of the central symbolic system could account for these findings.

Support for the notion of loss of symbolic function in aphasia, however, is not unanimous. Davis and Wilcox (1981), in reviewing a number of studies of gestural comprehension, including those mentioned above (Duffy et al., 1975; Gainotti and Lemmo, 1976), concluded that the results of these studies overemphasized the deficit in pantomime recognition. Davis and Wilcox suggested that the observed pantomime deficit was "statistically significant but not necessarily clinically significant". By looking more closely at Duffy et al.'s (1975) data, they pointed out that the aphasics' mean score on the pantomime recognition task was 86% correct.
Although statistically this was significantly lower than the normals' mean score of 96%, it was well above the chance level of 25%. In addition, the 10% difference between the aphasics' and the normals' scores on the pantomime recognition task was minimal relative to the difference between the scores on the naming test, a verbal test, on which the aphasics' mean was 57% and the normals' mean was 98%. Similarly, in Gainotti and Lemmo's study, 30 out of their 53 aphasic subjects scored at least 90% on their test of gestural comprehension. By evaluating aphasics' ability to recognize pantomimes, rather than their inability to do so, it appears that gestural and verbal modalities are not similarly affected by a central disorder in aphasics. These findings do not, therefore, lend support to the notion of 'asymbolia'.

Other studies revealing discrepancies between verbal and gestural abilities in aphasics similarly do not appear to provide support for 'asymbolia'. Schlanger and Schlanger (1970), reporting on a program for aphasics that incorporates role-playing activities, noted that they observed some aphasics who could not spontaneously say a single identifiable word, yet could pantomime complicated sequences. Similarly, the subject of a case study by Kirshner and Webb (1981) presented with what they described as "an extreme example of modality selectivity in an acquired communication disorder". The subject was unable to perform any of the verbal tasks on the Porch Index of Communicative Ability (Porch, 1967), yet she "gestured the function of objects with ease". Along the same lines, Daniloff et al. (1982) found that aphasic subjects in their study had relatively preserved gestural recognition abilities in comparison to their verbal recognition abilities. Herrmann et al. (1988) found that severe aphasics appear to compensate for their verbal deficits by using nonverbal channels.
They suggested that their aphasic patients switched from a severely impaired communication channel to a less impaired one, and commented that "this finding disagrees to some extent with the 'symbolic disorder' hypothesis".

Thus, more than 100 years since it was first proposed, the notion of asymbolia is still a topic of debate. Depending on the focus of various researchers, for example whether they were looking at gestural deficits versus residual gestural abilities, the conclusions of studies in this area vary with regard to the nature of the aphasic disturbance.

Clarification of this issue has important theoretical and clinical implications. Theoretically, a clearer understanding of the aphasic deficit would shed light on both normal and disturbed language processing in the brain. Clinically, it suggests important considerations for both assessment and treatment. If aphasia is indeed a lack of understanding of symbols, then assessment could focus strictly on one modality, such as the verbal modality, and deficits in other modalities could be assumed to mirror the verbal deficits. Perhaps even more important, a disturbance of the symbolic system would imply that it would be of little value to teach gestural communication to supplement or replace impaired verbal communication, thus making redundant years of clinical research in the teaching of gestures and sign languages to aphasic patients. As more and more evidence is gathered, a better understanding of how aphasia affects the communicative modalities, whether through a central symbolic disorder or otherwise, will hopefully emerge.

Clinical Implications

The primary goal in the treatment of aphasia is to improve the patient's ability to communicate in everyday life. Traditionally, the assessment and treatment of aphasic disorders has focused almost exclusively on the identification and remediation of verbal deficits.
Most traditional assessment batteries look exclusively at the aphasic patient's ability to comprehend and produce verbal stimuli at varying levels of complexity. Deficits observed in such batteries are then targeted for therapeutic intervention. Traditional therapy generally involves several stimulus/response trials aimed at remediating either language production or language comprehension. For example, a comprehension task commonly used clinically involves the presentation of a series of linguistic stimuli to which the patient is required to respond by pointing to an appropriate picture or object. A common production task in clinical use involves presentation of picture stimuli to which the patient is expected to provide verbal responses. The clinician leads all of the activities, and the stimuli are generally linguistic, with minimal extralinguistic context. This focus on the verbal aspects of communication difficulties is understandable, as it is the impairment of verbal communication which is usually the most obvious and crippling aspect of aphasia.

It is questionable, however, whether a narrow focus on linguistic abilities in assessment and treatment is the only mechanism by which to achieve the goal of aphasia therapy, that is, to improve the aphasic's ability to communicate in everyday life. For example, Wilcox et al. (1978) stated that although traditional assessments of auditory comprehension offer a measure of aphasic breakdown in linguistic processing, they do not appear to adequately reflect aphasics' comprehension abilities in natural communicative settings. As for traditional linguistic therapy procedures, Wepman (1972) reported that there was little evidence to support the view that direct response training improves the spontaneous use of language. He stated that although some patients improve in direct stimulus/response ability, it was questionable whether this improvement transferred to improvement in spontaneous reception, integration, and production of language.
Others have also found a gap between clinical performance and performance in natural communicative environments. Davis (1986), for example, has pointed out that traditional therapy procedures are at variance with the natural communication that occurs every day, and suggests that this "clinic-nature gap" may be at least partially responsible for observed difficulties in transferring skills learned in the clinic to everyday interactions.

In addition to the lack of evidence for the efficacy of traditional assessment and treatment approaches for improving spontaneous conversation, there are other pitfalls to the usual verbal approach. Traditional tasks, such as those described above, often lack relevancy for the patient, leading to feelings of frustration and eventual withdrawal from language therapy.

Due to the aforementioned concerns with traditional therapeutic approaches, there appears to be a shift amongst some clinicians from a strictly verbal approach to a more functional or pragmatic approach to aphasia assessment and treatment. There is an expansion from an approach which emphasizes recovery of linguistic skills to an approach that stresses communication as a whole. Hence, more attention is paid to the parameters of natural conversation, rather than employment of clinician-led stimulus-response tasks, and there is enhanced awareness of the various communicative channels, rather than observation solely of the verbal channel. This approach grew partially out of the recognition that most aphasics, even with intensive traditional therapy, do not recover premorbid language skills. Hence, there was a need to assess and fully develop alternative communication skills, such as the use of gesture, in order for these patients to best be able to communicate in everyday situations.

This new orientation is not meant to be a totally different approach to replace traditional assessment and treatment; rather, it is meant to supplement and to add validity to established methodologies.
Davis (1986) notes that such a pragmatic approach expands upon traditional therapeutic methodology in three ways: (1) it identifies some aphasic deficits that had not previously been considered in traditional approaches (and therefore identifies some new goals for therapy); (2) it emphasizes some strengths in the communicative ability of aphasics that had not been emphasized before; and (3) it increases awareness of the function of language in natural communication. According to Davis, "generally, pragmatics has increased attention to severely impaired patients with respect to the pursuit of nonverbal modes of communication" (Davis, 1986, p. 259).

Assessment and treatment activities that have been developed in response to this shift in focus, especially those including a focus on alternative communicative modalities such as gesture, are outlined in the following sections.

Assessment

A pragmatic approach to the study of language is concerned primarily with two areas of investigation: how language is used in its surrounding linguistic and situational context, and the use of nonverbal communicative behaviour (Davis and Wilcox, 1981). Several authors have emphasized the importance of assessing communicative abilities in natural settings (Herrmann et al., 1988; Holland, 1980), but the primary interest here is with the latter aspect, nonverbal communicative behaviour. Hence, the focus in this section will be on the assessment of nonverbal behaviour, specifically gestural abilities.

But why assess nonverbal communicative abilities? Albert Mehrabian (1968), as cited in Egolf and Chester (1973), has estimated that only 7% of a message's affect is transmitted verbally. If nonverbal communication accounts for 93% of a message's affect, the ability to communicate using nonverbal means definitely warrants careful attention. Kirshner and Webb (1981), considering the verbal deficits of aphasic patients, further stressed the importance of looking at nonverbal communication.
They emphasized that recognition of partial sparing of specific modalities, such as the gestural modality, is required in order to facilitate functional communication through the spared modality. One of the aphasic subjects discussed by Daniloff et al. (1982), in a study of gestural recognition in aphasia, reinforces this claim. This subject reportedly performed extremely poorly on tests of auditory recognition, but recognized 9 out of 24 gestures. Gestural recognition was therefore a relative strength for this subject. In discussing this subject, Daniloff et al. suggest that clinicians not overlook the partially spared visuo-gestural system as the basis of an effective means of communication for patients such as this.

Unfortunately, although there is an obvious need, there are few clinical tools available for assessing gestural and other functional communicative abilities. Aten (1986) reported that such assessment tools "are in their formative or even undeveloped stages" (p. 270). Such tools are important not only for initial assessment of an aphasic's communication skills, but also for assessing the effectiveness of therapy. This latter consideration is extremely important, as the shift in focus from linguistic to communicative adequacy results in the introduction of new therapy approaches requiring validation.

Several assessment protocols have been developed, for research purposes, to look closely at aspects of communication such as gestural ability. One example is the Pantomimic Referential Abilities Test (PRAT) designed by Duffy et al. (1984). But few assessment batteries have been developed to look at nonverbal communication skills clinically. One of the rare exceptions is the PICA (Porch, 1967), which contains two subtests aimed at measuring gestural ability. Fortunately, the shift to a more pragmatic or functional approach to aphasia assessment and treatment has resulted in the development of a few tests of aphasia that pay closer attention to residual nonverbal communication skills.
Davis and Wilcox (1981) reported that the first of these more pragmatic tests was the Functional Communication Profile, developed by Taylor in 1965. Holland (1980) described this test as being "the only extant test aiming to measure communication skills". It was not until fifteen years later that another test focusing on functional communication, one that gained widespread clinical usage, was presented. This was the Communicative Abilities in Daily Living (CADL) test, developed by Holland (1980). In addition to its main goal of assessing the functional communication skills of aphasic adults, this tool was intended to remind clinicians of the nature of interpersonal communicative interaction, stressing the role of communicating as opposed to talking. The tasks in this test are intended to represent communicative interactions in which people typically find themselves. The aphasics are involved in role-playing situations with the clinician, such as going to a doctor's office, shopping, and phoning for information. The aphasic's performance is scored according to functional adequacy, rather than linguistic correctness. Holland warns clinicians using the CADL test to be especially alert for information presented by the aphasic nonverbally, since this communicative modality is traditionally neglected in standard tests, but is often important in an assessment of functional communication capabilities. Although, at present, the CADL appears to be the most common test of functional communication used clinically, other tests have also been developed. One example is the Edinburgh Functional Communication Profile (EFCP) (Skinner, Wirz and Thompson, 1984), which was designed for lower-level and elderly patients, since its authors felt that the CADL was only suitable for high-level patients, as its instructions and tasks are quite complex.

Tests of functional communication are not intended to replace traditional assessment batteries.
As the authors of both the CADL and the EFCP state in their manuals, these tests are designed, rather, to add another dimension, the functional dimension, to the assessment of communicative abilities.

Therapy

As with assessment, treatment of aphasic communication disorders has traditionally focused on a patient's residual verbal skills. There has been a tendency to work strictly on a patient's linguistic adequacy, rather than on improving his or her ability to convey a message using all possible modalities. Interest in pragmatics has redirected traditional therapeutic approaches. Although the goals of both traditional linguistic and pragmatic intervention are the same, that is, to increase a patient's ability to communicate in everyday situations, the means by which this goal is achieved differs under the two approaches.

Aten's (1986) Functional Communication Treatment (FCT) provides a good account of the pragmatic approach. Aten defines FCT as "any therapeutic endeavour that seeks to improve the patient's reception, processing, and use of information germane to conducting daily activities, interacting socially, and expressing current physical and psychological needs" (p. 266). The main objective of this approach is improvement of communication, which includes not only language recovery, but recovery of all other aspects of communication, including nonverbal communication. Aten reported that, for some patients, linguistic confrontation, as experienced in traditional language therapy, is, in fact, detrimental. He suggested that "overstressing language elements (eg. a focus on bound morphemes) may constipate communication flow" (p. 268). Among the principles of FCT, as proposed by Aten, there is an emphasis on communication over linguistic adequacy, and a focus on the patient's use of the most efficient and effective communication channel.
He proposed that the clinician explore alternative response channels such as gesture, and encourage patients to use these channels by modelling their use in concrete communicative situations. As with functional assessment, functional treatment is not meant to replace traditional linguistic approaches. Aten reported that functional treatment should be integrated with traditional treatment from its onset, and that it should continue once traditional treatment is no longer effective, in order to assist the patient in coping with his or her residual language impairments. According to Davis and Wilcox (1981), Holland advocated a similar approach to therapy. That is, she stressed the importance of getting the message across using a number of different modes of communication. This could have been predicted considering her development of the functional assessment tool, the CADL.

Davis and Wilcox (1981) have devised an intriguing therapy technique which incorporates the aspects of pragmatic intervention. This technique is known as PACE (Promoting Aphasics' Communicative Effectiveness) therapy. The objective in PACE therapy is the same as that in spontaneous conversation: the transfer of new information from sender to receiver. The task itself varies depending on the skills of the aphasic, but in general, it involves the patient and the clinician alternately drawing cards from a face-down deck. When one picks up a card, s/he becomes the sender and the other becomes the receiver. Without showing the card to the other, the sender attempts to convey to the receiver what is on the card, whether it be a picture of an object, a name of a person, or whatever corresponds to the patient's abilities. The receiver may guess or may ask for more information. When the receiver guesses correctly, s/he picks up the next card and becomes the sender. In addition to the transfer of new information, this format incorporates other aspects of natural communication.
It involves equal participation by the clinician and the patient as senders and receivers. Feedback is natural, depending on whether or not the patient is successful in communicating messages. It allows the patient to use whichever communicative modalities s/he wants in order to convey messages. The authors emphasize that the free choice of communicative channels is essential to this therapy. It is intended to reduce the anxiety a patient might feel due to failure in verbal communication attempts. Although the clinician should not direct the patient to use particular channels, as is often done in traditional therapy, the clinician may indirectly guide the patient to use alternative channels by using them during his or her own communicative attempts.

Other interesting therapeutic strategies which shift the focus away from traditional linguistic intervention can be found in the literature. Schlanger and Schlanger (1970), for example, presented a progressive group program involving role-playing aimed at improving communication. It included four steps: gesture and pantomime activities; role-playing by patients as themselves in a variety of situations; role-playing by patients as others in simulated situations; and psychodrama, in which a patient acted out his or her problems. Patients with severely limited or no verbal communication were initially introduced to gestural and pantomime activities, in which they were required to gesture objects and actions; other patients in the group were required to choose a corresponding picture. Some aphasics with severe verbal disturbances showed excellent gestural abilities. The authors reported that patients undergoing their therapy program experienced some relief of the frustration from deficient communication, a loss of inhibition, and a sense of accomplishment. Schlanger and Schlanger concluded that role-playing helped improve communicative ability.
They suggested that although role-playing likely does not improve the linguistic structure of an aphasic, it somehow acts as a "catalyst", allowing for easier use of linguistic skills.

Although it does not look specifically at the use of gesture, another interesting approach, presented by Wepman (1972), proposes that patients' energies in therapy should be shifted from linguistic accuracy to communicative adequacy. Recognizing the limitations of traditional therapy in improving communication abilities, Wepman suggested that, for some aphasics, therapy should shift its concentration from a focus on language to a focus on ideas. He described his approach as a "non-language, content-centered discussion therapy" in which communication about interests of the patient was encouraged. In this approach, there is no attempt to correct verbal efforts. The emphasis is on content, not linguistic accuracy. Wepman felt that some types of aphasics, which he called "pragmatic aphasics" (and who appear to strongly resemble classical Wernicke's aphasics), are limited in their use of ideas; hence, it is their use of ideas which must be targeted for intervention. Like other intervention techniques mentioned here, this procedure was not meant to replace traditional therapies. Wepman suggested that a combination of traditional direct therapy and his "idea" therapy might be more productive for some patients.

Thus far, I have primarily discussed therapy approaches that encourage the use of gesture as an integral part of successful communication. A discussion of gesture and intervention strategies would not be complete, however, without mention of techniques that have been employed both in research and in the clinic to develop and improve the actual gestures. Because formal sign languages, such as American Sign Language (ASL), are independent languages with their own arbitrary relationships between referent and sign, I will not focus much attention on them here.
My concern is with natural gestural abilities rather than the ability to learn or use a new language. As American-Indian (Amer-Ind) signs are highly iconic and can typically be understood without having to learn a new language, I relate them closely to natural gestures for purposes of this discussion.

Numerous studies have looked at an aphasic's ability to learn to use gesture when verbal abilities have been disrupted. Results have not conclusively shown the effectiveness (or ineffectiveness) of gestural training. Listed here are several studies which have reported at least partial success in training the use of gestures. Helm-Estabrooks et al. (1982) introduced Visual Action Therapy (VAT). This is a hierarchical program intended to teach global aphasics to produce gestures to represent hidden visual stimuli. They found that all eight subjects in their study improved significantly on the pantomime and auditory comprehension subtests of the PICA. Schlanger and Schlanger (1970) demonstrated success in training patients with little or no verbal skills to use pantomimes. Kirshner and Webb (1981) reported that the single subject in their study, who demonstrated very poor auditory comprehension and verbal production, managed to acquire a significant number of signs of Amer-Ind and ASL. However, many researchers and clinicians reporting success in gestural training have noted poor carryover from the clinical setting to spontaneous communication.

In contrast to these incidences of at least partial success in gestural training, other researchers have reported failed attempts at such training. For example, describing their personal clinical experiences, Duffy et al. (1975) reported being uniformly unsuccessful in training aphasics with poor verbal skills to use nonverbal forms of communication.
A review of the results of studies involving gestural training by Peterson and Kirshner (1981) further reveals discrepant findings among researchers.

To date, studies of the effectiveness of gestural training are inconclusive. Nonetheless, a number of researchers who have successfully implemented such training have suggested factors that appear to correlate with the ability to learn a gestural system. For example, Peterson and Kirshner (1981) suggest that such factors include: a desire to communicate, acceptance of gestural communication by family members, the number and frequency of communicative situations, spontaneous attempts to express gestures, and many others. Kirshner and Webb (1981) proposed that aphasics with relative preservation of visual and graphic abilities, such as reading and writing, might be good candidates for gestural training.

Consideration of gestural training is interesting not only with regard to the ability to learn and use the gestures themselves. Some researchers have reported that gestural training led to improved verbal skills. Rosenbek, Collins and Wertz (1976), as cited in Di Simoni (1986), have capitalized on this notion with a type of therapy based on "intersystemic reorganization". This involves the use of one communication modality to assist in the development of others. Rosenbek et al. combined gestural and speech training in a therapy program, which reportedly resulted in improved speech production in experimental patients.

Thus, although the efficacy of gestural training has not yet been clearly demonstrated, such training appears to offer some potential for communication therapy with at least some types of aphasics. Further research will hopefully provide more information on the characteristics of aphasic patients suitable for gestural training, and on the potential benefits of this type of training for improving speech.

In summary, Davis (1986) wrote that "an aphasic person may be linguistically inept but communicatively superb" (p. 262).
New assessment tools and therapy techniques, developed from the study of pragmatics, are paying attention to this observation by Davis and many other researchers and clinicians. The result is a shift of focus from an aphasic's ability to be linguistically accurate to an expanded view of the aphasic's overall communicative ability, including the ability to gesture.

From this review of the literature, it is apparent that the study of gesture is both intriguing and complex. The first section revealed that there is a wide variety of gestural classification schemes in the literature, making comparisons between studies on gesture more difficult. The second section outlined three theories which have been proposed regarding the relationship between gesture and other communication modalities, such as speech. These three theories were referred to as the Leading Modality Hypothesis, the Relative Independence Hypothesis, and the Central Organizer Hypothesis. The third section reviewed studies on gestural communication and aphasia. The fourth, and final, section described pragmatic approaches to clinical assessment and treatment of aphasia, and their value, with particular attention to the role of gesture in these approaches.

Having reviewed the literature, the focus turns now to the purpose of the present investigation.

Statement of Purpose

This investigation focused primarily on one aphasic subject's ability to communicate. The subject's exceptional use of gesture in everyday communication prompted an examination of his ability to produce gestures compared with his ability to produce speech and writing.

The purpose of this study was to examine the expressive communication abilities of this subject with respect to two hypotheses relevant to gestural communication and aphasia.

Hypothesis One: There exists a central organizer which controls communication modalities.
Alternate proposals suggest that one modality leads the others, or that the modalities act in relative independence of one another, as discussed in the literature review.

Hypothesis Two: There is a loss of symbolic functioning in aphasia. Finkelnburg and others propose that such a loss results in parallel deficits across communication modalities.

An additional purpose of this study was to emphasize the importance of a more pragmatic approach to clinical assessment and treatment of aphasia.

This study focused primarily on gestures which appear most closely related to what McNeill and Levy (1982) labelled "iconic" gestures. By definition, these gestures appear to represent some aspect of the content of what is being said. For the subject of this study, however, this definition would likely best be restated as gestures representing some aspect of the content of what is intended to be communicated.

CHAPTER 2

METHOD

Included in this section is a detailed description of the subject of this study, a description of the pilot investigation that prompted further investigation of the communication abilities of this subject, and the methodology of this investigation.

Subject

R.C. was admitted to hospital with a sudden onset of right hemiplegia and global aphasia on February 3, 1991. He was 45 years old at the time. An angiogram, on admission, revealed an abrupt occlusion of the inferior branch of the left middle cerebral artery. A subsequent C.T. scan showed a large cortical infarct in the left middle cerebral artery distribution involving the posterior frontal, temporal and parietal lobes. A transesophageal echocardiogram revealed an atrial-septal aneurysm with a right-to-left shunt during cough. Reportedly, this was likely the source of the embolus. His past medical history was unremarkable.

R.C. is right-handed and, although his family was French and he spoke French until the age of eight, he now speaks only English.
His education following grade 12 included training in the police academy and in business management. Prior to his stroke, R.C. was employed as a police detective. Eighteen months have passed since the time of his stroke, and he has been unable to return to his previous position, but plans are currently being made to find alternate employment. His wife and two daughters provide a very supportive family environment.

The Speech-Language Pathologist at the hospital to which R.C. was admitted made first contact with him three days following his admission. At that time, he presented with fluent speech characterized by neologistic paraphasias and severe auditory comprehension deficits. He was able to perform visual matching and word-picture matching tasks. The following day, the Speech-Language Pathologist found the jargon to be noticeably reduced, although auditory comprehension remained severely impaired. The Boston Diagnostic Aphasia Examination (BDAE) (Goodglass and Kaplan, 1972) was administered over the following four days. Many of the tasks proved impossible for R.C. He did not understand test instructions, and refused to attempt the tasks. During assessment sessions, he often became confused and agitated. He did not appear to understand what had happened to him, and why he needed the speech-language assessment. Limited results of subtests of the BDAE, the Minnesota Test for the Differential Diagnosis of Aphasia (MTDDA) (Schuell, 1965), and informal assessment revealed a severe impairment of auditory comprehension of single words. He was unable to identify common objects, pictures, body parts, letters and colours. He could correctly follow the well-learned simple commands 'close your eyes' and 'open your mouth', but was unable to follow other simple commands. His responses to Yes/No questions were unreliable.
In conversation, he appeared to depend primarily on visual cues and contextual information for comprehension. Verbal expression was difficult to assess as he often did not understand the test instructions. He did not recite automatic sequences, but, instead, attempted to write the days of the week and the months of the year. He was unable to repeat words or phrases, but it was unclear whether he was truly unable to do the task or could not understand the instructions. He could not name real objects or pictures of objects. Neither verbal nor phonemic cuing facilitated naming. He was, however, able to demonstrate functional use of most of the objects. His spontaneous speech was fluent and interspersed with appropriate phrases and questions, such as 'Why do I have to do this?'. Many spontaneous utterances, however, were stereotyped 'empty' phrases, such as 'I don't know.', 'well, let's see...', or incomplete phrases, such as 'I can't really...', 'Why can't anybody...'. He demonstrated severe word-finding difficulties in conversation and, as indicated in the examples above, typically could not complete phrases that he initiated. There was some evidence of phonemic, verbal, and neologistic paraphasias in his speech. Visual communication abilities (reading, writing, and gesture) were also assessed to a limited degree. Although able to perform visual matching and word-picture matching tasks well, he demonstrated much difficulty identifying printed words presented auditorily and identifying printed words spelled orally. Auditory comprehension difficulties were likely a major contributor to these results. R.C. refused to attempt more complex reading tasks. He attempted writing tasks with his right hand. He wrote the numbers from one to twenty, the days of the week, and a portion of the alphabet. He was also able to write his name, address, and the names of his family members with relatively few errors.
He was aware of some of the errors that he made and put brackets around them, but he could not self-correct them easily. He was not able to write the names of objects or pictures presented visually, nor could he write any letters or words to dictation. His gestural comprehension and expression were felt to be intact, and he tended to rely a great deal on visual and gestural information for daily communication needs.

Articulatory agility was found to be unimpaired. The right hemiparesis diagnosed on admission had almost resolved by the second week following the stroke.

Ten days following his admission, R.C. was discharged home. Twelve days later, outpatient language therapy was initiated through a different hospital. He attended therapy sessions approximately three times per week. Therapy activities were designed to facilitate comprehension of spoken and written words and sentences, to facilitate writing of objects' names, and to improve R.C.'s ability to complete, in writing, the missing word(s) in sentences.

Following two months of therapy, his ability to write the names of objects and to complete the missing words in sentences improved significantly. His ability to comprehend single words and simple sentences, with picture support, was also much improved. Overall comprehension and verbal language abilities in conversation were felt to be improving, but a severe word-finding impairment was still apparent. He was unable to verbally name objects on request, and he was unable to repeat words and phrases. He was relying heavily on the use of gesture, drawing, and writing to express ideas, with a good degree of success.

Following two months of outpatient therapy, the focus shifted slightly to work primarily on reading, auditory comprehension, and, to a lesser extent, on increasing communicative effectiveness through a combination of writing, gesture, and speech. For the next two months, however, R.C. was extremely frustrated and depressed.
He had significant difficulty focussing on therapy activities. A supplementary home program was introduced, but he demonstrated very little interest in it, and it was subsequently discontinued. Following this period of depression, R.C. made significant improvements in therapy. Although his auditory comprehension abilities were still markedly impaired, he was able to use contextual cues well, so that this impairment was not always apparent in conversation. His reading comprehension was significantly better than his auditory comprehension, but it too was still impaired. With assistance, he was able to follow complex printed directions, and he demonstrated some ability to respond, using writing or gesture, to questions about simple printed paragraphs. He required large amounts of time to complete these tasks, and he was not consistently correct, but his ability to perform these tasks was significantly improved over initial assessments. His verbal expression still had limited semantic content, but occasional meaningful content was increasing. Stimulability through oral reading, intentional imitation, or sentence completion continued to be poor. He continued to effectively supplement verbal output with written words and drawing to convey meaning to his listener.

As attempts to work on verbal expression were resulting in extremely limited success, by the sixth month of outpatient therapy, activities were generally focused on auditory comprehension, reading comprehension, and written expression. Although it was felt that R.C. had become a more effective communicator since the initiation of outpatient therapy, he continued to show marked impairment in all modalities. Auditory comprehension was still significantly impaired, although he continued to use contextual cues well to facilitate interpretation. Reading comprehension was also impaired, but still to a lesser degree than auditory comprehension. Efforts to produce specific words continued to result in jargon responses.
He frequently used incorrect pronouns and prepositions, but demonstrated little awareness of these errors. In therapy activities, he was able to write some trained sentence structures, but in spontaneous conversation, he tended to write single words, or to draw pictures when unable to find the word. Although he appeared to know what he wanted to say, his written vocabulary was limited primarily to concrete nouns. His written vocabulary, although limited, was significantly more extensive than his spoken vocabulary.

At the time of this study, eighteen months had elapsed since R.C.'s stroke. He was 47 years old. His communication abilities had not changed significantly in the previous five months. Auditory comprehension for new or complex information was still significantly impaired. Reading comprehension was also impaired, but to a lesser extent. In reading comprehension, he occasionally requested that the clinician read the stimulus sentence(s) aloud, suggesting that dual-modality comprehension was best. He continued to rely on nonverbal cues to facilitate comprehension in conversation. Verbal expression was fluent and characterized by empty and incomplete phrases and neologisms, with occasional appropriate phrases and questions. Written expression was less impaired than verbal expression, but was typically limited to single familiar words in spontaneous conversation and present progressive constructions in therapy activities involving strictly written expression. These constructions were worked on extensively in therapy. In spontaneous conversation, he continued to use a combination of speech, writing, drawing, and gesture to express himself.

Preliminary Investigation

R.C. participated in a small pilot investigation prior to the main study presented here.
The purpose of this preliminary investigation was to collect data on R.C.'s production skills in order to define a research task that would capture his unique communication skills. In this investigation, he was asked to describe four individual pictures to one of his current Speech-Language Pathologists. To make the task somewhat more functional in nature, the pictures were held up so that R.C. could see them but his Speech-Language Pathologist could not. R.C. was encouraged to communicate everything that he could about the pictures, using any modalities that he wished. His descriptions were videotaped. The videotape was subsequently transcribed and entered into the CHILDES (MacWhinney, 1991) database. All information that he wrote down on paper was similarly entered. Data entered into the database included information provided through each modality of expression: spoken, written, gestural, and drawn. Orthographic transcription was used for the spoken and written output, and descriptive narration was used for both gestural and drawn output. Following input of all the data into the database, information from each modality was isolated from the rest and analyzed separately to look for evidence of the efficacy of each modality in communication, and to search for trends in the types of information conveyed in each modality. The findings in each modality are outlined below.

There was no evidence of dysarthria or speech apraxia in R.C.'s speech output, but his speech was found to be severely impaired in terms of content. Only rarely could he label any objects or actions in the pictures. His speech was typically fluent and grammatically correct (with the exception of many incomplete sentences), yet was "empty", or devoid of meaning. It was characterized by neologisms and verbal paraphasias. He typically perseverated on the word "eaters", which appeared to replace nouns, and the phrase "down here", which appeared to replace prepositional phrases.
Often, he would say a carrier phrase, such as "it's a  ", and then would gesture or write down the appropriate word to finish the phrase. He often produced pronouns, but these were inconsistent with regard to gender.

R.C.'s written output was mostly limited to nouns, with a few descriptors, such as ages of people in the pictures. His only obvious attempt at a verb ('open') was unsuccessful until the first two letters were provided, at which point he completed the word. Nouns were often written down following a verbal carrier phrase, as mentioned above. His writing was generally legible, with only a few spelling errors.

R.C. made extensive use of gesture in his descriptions. He used his hands to represent the sizes and shapes of objects. For example, on one hand he touched his fingers to his thumb to represent the size and roundness of a cookie. He demonstrated actions performed by people and objects in the pictures, such as opening a cupboard and a stool falling over. He also imitated gestures performed by people in the pictures, such as an index finger being held to pursed lips to indicate to someone else to be quiet. He often used pointing to indicate items in the pictures as well as objects in the room. He also used head nodding and head shaking extensively. His use of facial expressions was fascinating. Not only did he coordinate facial expressions accurately with gestures to illustrate uncertainty and other emotions, but his imitation of expressions on people in the pictures was remarkable. Occasionally, he supplemented his gestures with sound effects. Although much of his gesture conveyed information, there were some hand movements, such as rotating his wrists while moving his fingers, that did not appear to communicate anything. Such gestures often accompanied contentless speech. Perhaps they were attempts to help retrieve appropriate words, almost like pulling them from the air.

R.C.
used drawing to communicate mainly objects, but also some actions, such as 'falling' and 'leaning'. It appeared that often, when he could not find the appropriate word, he would draw the corresponding picture, and this would help him retrieve the written word. His drawing skills were excellent, and he used them effectively not only to convey information about the pictures, but also about objects that were absent.

Overall, considering the severe impairment in his verbal abilities, this pilot study showed that R.C. was able to communicate an amazing body of information. Using a variety of modalities, he was able to effectively convey even small details.

Methodology

The findings of the pilot investigation prompted a more careful study of the relative communicative effectiveness of each of the individual modalities in isolation and in combination with one another. The primary focus of the study was on R.C.'s gestural ability, and its relation to speech and writing.

The study included three parts: 1) the subject's task, 2) observer judgement of the subject performing the task (judging), and 3) analyses of judges' responses. Each of these is described in more detail below.

The Subject's Task

This part of the study was the only part in which R.C. was directly involved. The task involved R.C. in an expressive description task similar to that used in the pilot investigation. Instead of describing individual pictures, however, R.C. was asked to describe sequences of pictures. Each sequence consisted of three colour photo cards depicting highly stereotyped activities. Six sequences were administered in total. R.C.'s performance on four of these six was chosen for subsequent viewing by judges. Table 2.1 outlines the activities depicted in the four chosen sequences.

Table 2.1
Activities Depicted in the Sequences

Sequence 1. A man shaving
Sequence 2. A girl brushing her teeth
Sequence 3. A man writing a letter
Sequence 4.
A man eating cake

In the two sequences that were excluded (a boy drawing a picture, and a girl making a sandwich), R.C. demonstrated more difficulty limiting himself to the allowed modalities, as was required in the different portions of the task.

For each of the sequences, R.C. was instructed to communicate as much information as he could about the sequence using a single modality or combinations of modalities. Each modality was considered in isolation as well as in combination with other modalities in order to identify the relative contributions of each of the modalities to communication. All possible combinations of the three modalities (speech, gesture, and writing) resulted in seven Conditions in total. (The photo cards were from the ColorCards Basic Sequences collection produced by Imaginart (Winslow Press, 1988).) Drawing was excluded from the design, as drawing of objects pictured in the sequences would simply constitute a copying task. The seven Conditions are presented in Table 2.2.

Table 2.2
Test Conditions

Condition 1. Speech + Writing + Gesture
Condition 2. Speech + Writing
Condition 3. Speech + Gesture
Condition 4. Writing + Gesture
Condition 5. Speech
Condition 6. Writing
Condition 7. Gesture

The task required the subject to describe each of the original six sequences in each of the above seven Conditions, for a total of 42 portions to the task.

Most of the task was completed in two sessions in the subject's home. The first session lasted approximately 2 1/2 hours, and the second session approximately 1 1/2 hours. A short third session (30 minutes) was conducted after a therapy session at the hospital where he receives outpatient therapy. The subject was compliant with the demands of the task, and demonstrated a good sense of humour throughout the sessions.

For each portion of the task, the subject was given instructions about which modality or combination of modalities he was allowed to use.
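As a design note, the seven Conditions are exactly the non-empty subsets of the three modalities. A minimal sketch (the "+" labels are chosen here for illustration, not taken from the study materials):

```python
from itertools import combinations

modalities = ["S", "G", "W"]  # speech, gesture, writing

# Every non-empty subset of the three modalities: 2**3 - 1 = 7 Conditions.
conditions = [
    "+".join(combo)
    for size in range(len(modalities), 0, -1)
    for combo in combinations(modalities, size)
]
print(conditions)
# ['S+G+W', 'S+G', 'S+W', 'G+W', 'S', 'G', 'W']

# Six sequences described under each Condition gives 6 * 7 = 42 task portions.
portions = 6 * len(conditions)
```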
The investigator did not proceed further until the subject appeared to understand which were the allowed modalities. One of the six picture sequences was then placed on a table before the subject, and he was asked to describe the sequence using the allowed modality or modalities. Conditions were randomized but, in order to reduce the possibility of confusion, three sequences were administered in each Condition before proceeding to a different Condition. Hence, three sequences in each Condition were administered in the first session, and the remaining three sequences in each Condition were administered in the second session, with the exception of three sequences in the sixth Condition (writing only), which were completed in the third session. Sequences were randomly assigned within each Condition.

All portions of the task were videotaped. Written expression was recorded with a pen on blank paper.

On a few occasions, R.C. produced gestures when the Condition did not allow them. These gestures were edited out of the video, unless they were closely adjoined to other important information and could not easily be edited out. Similarly, R.C. drew some pictures although drawing was not allowed in any of the Conditions (as it would only demonstrate R.C.'s copying ability). Whenever he drew something during the task, he was asked to label the drawing or parts of the drawing that he could. Drawings were removed from written material before presentation to judges.

Judging

This portion of the methodology was designed to determine how effective R.C. was in communicating the 'story line' of the sequenced pictures. That is, the goal was to determine how much information about the sequences of pictures R.C.
could effectively convey to an observer (judge), and how effectively he used each modality in trying to convey various types of information.

Judges were recruited to observe and rate R.C.'s performance on the videotapes, and also to judge the accompanying written responses in those Conditions in which writing was allowed. No restrictions were placed on recruitment of judges, except that they could not have had prior contact with the subject of the study (R.C.). Each judge observed one Condition of all four sequences (one training sequence and three test sequences). Condition assignment was random. Each judge observed three different Conditions. Twenty-one judges in total were recruited, so that each Sequence/Condition combination was observed by three judges.

Prior to seeing the videotape and/or written material, judges were given a packet of materials that included written instructions and four score sheets (see Appendix A). They were told that the subject had a sequence of three picture cards in front of him, and that his goal was to communicate everything that he could about the sequence to a potential observer. The importance of observing carefully for information communicated via all modalities was stressed. Judges were not told that the subject was restricted in the modalities he was allowed to use. Judges were instructed to write down everything they observed on the videotape (and from written material where applicable) into columns labelled Actions, Objects, and Descriptors on the score sheets provided. Actions corresponded roughly to verbs, Objects to nouns, and Descriptors included adjectives, adverbs, and locations. Examples of information that would be suitable for entry into each of these columns were provided for the judges.

Beside the information entered in these columns, in the column labelled Modality, judges were asked to enter the modality, or modalities, through which the information was communicated.
The coding for the modalities was "S" for speech, "G" for gesture, and "W" for writing. If information was conveyed through more than one modality, the information was to be recorded only once, followed by ALL of the modalities through which it was conveyed. Hence, the modality column contained either a single entry (S, G, or W), two entries (S+G, S+W, or G+W), or three entries (S+G+W).

Following the entry of modality information, judges were asked to provide a confidence rating according to how confident they were in the interpretation of the piece of information that was communicated. The rating scale ranged from one to five, with one indicating very little confidence in the judge's interpretation and five indicating certainty in the interpretation.

At the bottom of the score sheet, judges were required to record a description of what they felt the sequence was about. They were also asked to provide any information they felt did not fit into the columns above, and anything they found particularly interesting in what they had observed.

The fourth sequence (a man eating cake) was chosen as the training sequence. The videotape and written material from Condition 1 for this sequence, in which all modalities were allowed, were used for training of judges. This sequence was chosen for training because the variety of information portrayed in it was felt to provide the judges with a good opportunity to see the different types of information they were to watch for in the test sequences, and to learn how to enter the information on the score sheets.

Thus, each judge first underwent a trial/training procedure in which he or she was asked to view the videotape, with accompanying written material, for the training sequence and write down on the first score sheet all information which the judge perceived to be communicated by R.C. When information appeared ambiguous, judges were asked to guess the subject's intentions. Judgements of the task proper were not time limited.
That is, for videotaped material, judges were allowed to review sections of the video as often as they felt necessary in order to record all of the information they perceived was being conveyed about the sequence. After the judge completed the score sheet for the training sequence, the examiner carefully reviewed the videotape and written material with the judge and indicated information that the judge had omitted.

The trial/training procedure was performed to familiarize judges with the nature of their task, to ensure that the judges would be aware of how information could be transmitted via a number of different modalities, and to reinforce that careful observation would be required for the following test sequences in order to be able to record as much information as possible from the subject's attempts at sequence description.

The remaining three of the chosen sequences were used as the test sequences. The procedure for the test sequences was the same as that for the trial/training sequence. The three sequences were provided in succession, allowing time for judges to relax between sequences, if needed. The pace for watching the videotapes and analyzing the written output was totally dependent on the needs of each judge, and varied from 50 to 90 minutes for completion of the training sequence plus the three test sequences.

Analysis

Inclusive Dataset

The information on the score sheets for the three test sequences for all 21 judges was entered into columns in a word processing program. Columns contained the following information: judge number (1-21); sequence number (1-3); Condition number (1-7); whether or not the "gist" of the sequence was perceived (yes=1, no=0); type of information (i.e., the column headings on the score sheets: Action (Act), Object (Obj), or Descriptor (Des)); the information itself (typically a single word or a short phrase); mode(s) through which the information was perceived (gesture=G, writing=W, and speaking=S); and confidence level (1-5).
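One row of this dataset can be pictured as a small record with the columns just listed. A sketch for illustration only; the values below are invented, not an entry from the actual score sheets:

```python
# One row of the Inclusive Dataset, with invented example values.
record = {
    "judge": 7,            # 1-21
    "sequence": 1,         # 1-3
    "condition": 3,        # 1-7
    "gist": 1,             # 1 = gist perceived, 0 = not perceived
    "type": "Act",         # Act, Obj, or Des
    "information": "shaving",
    "modes": "G",          # S, G, W, or a combination such as "S+G"
    "confidence": 5,       # 1-5
}

# Basic range checks of the kind the coding scheme implies.
assert record["confidence"] in range(1, 6)
assert record["type"] in {"Act", "Obj", "Des"}
```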
An additional column (labelled "correctness") was added that indicated whether the information was incorrect (0), correct (1), or essentially meaningless (2) with regard to the sequence being described. Initially, the "information" column contained the words and phrases actually recorded by the judges on the score sheets. These entries were reviewed and were recoded into single words having fewer than eight characters. In this way, entries that had similar meanings but were written in different ways by different judges were coded with the same word. For example, "shaving cream" and "shaving foam" were recoded as "cream". ("Gist" is defined hereafter as the basic schema depicted in the sequences: Sequence 1 - shaving face; Sequence 2 - brushing teeth; and Sequence 3 - writing a letter.)

The Inclusive Dataset was used for analyzing how confident judges were with the information they perceived, and whether the information perceived by the judges was correct or incorrect with respect to the sequence being described.

Condensed Dataset

A second dataset, labelled the Condensed Dataset, was compiled which eliminated all information recorded by only a single judge (out of three possible judges) for each Sequence/Condition combination, and then condensed the remaining two or three similar responses into a single entry. The modality column contained modalities recorded by two or more judges for each piece of information. Finally, all incorrect information, that is, information coded as "0" in the original "correctness" column, was eliminated.
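The condensation rules just described (drop entries noted by only one of the three judges, merge the remaining duplicates into single entries, then discard incorrect items) amount to a simple filter. A sketch with invented example records, not the original data:

```python
from collections import Counter

# Invented example records for one Sequence/Condition combination, in the form
# (judge, information, correctness); correctness: 0=incorrect, 1=correct, 2=meaningless.
records = [
    (1, "shaving", 1), (2, "shaving", 1), (3, "shaving", 1),
    (1, "cream",   1), (3, "cream",   1),
    (2, "eating",  0), (3, "eating",  0),   # agreed on by two judges, but incorrect
    (1, "mirror",  1),                      # recorded by a single judge only
]

counts = Counter(info for _, info, _ in records)
correctness = {info: c for _, info, c in records}

# Keep items recorded by at least two of the three judges, then drop incorrect (0) items.
condensed = sorted(info for info, n in counts.items()
                   if n >= 2 and correctness[info] != 0)
print(condensed)  # ['cream', 'shaving']
```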
This dataset was used to determine the overall amount of accurate information conveyed in each modality; the amount of accurate information conveyed through each modality when all modalities were allowed; modality overlap when two modalities were allowed; the type of information (Actions, Objects, Descriptors) conveyed when only writing was allowed, and when only gesturing was allowed; and the relative use of writing and gesture when one, two, or three modalities were allowed.

Gist Dataset

A third, independent dataset was developed from judges' descriptions, at the bottom of the score sheets, of what the sequences were about. This dataset was named the Gist Dataset, as it included whether or not a judge was able to describe "the gist" of the sequence. This information was used to compare the effectiveness of the subject in conveying the main idea of the sequence in each of the seven Conditions.

Findings from these analyses are discussed in the following chapter in relation to hypotheses on modality organization in communication and the impact of aphasia on the communication modalities for this subject.

CHAPTER 3

RESULTS

In this experiment, the expressive gesturing, writing, and speaking abilities of a single aphasic subject, R.C., were examined and compared within the context of two hypotheses.

First, it was hypothesized that there is a central organizer which controls the output of the different modalities. The alternatives to this hypothesis are either (1) that speech leads the other modalities, or (2) that the modalities operate independently.

Second, it was hypothesized that the various modalities tested are differentially impaired in this aphasic subject, although symbolic functioning remains relatively intact.
The alternative to this hypothesis is that the central symbolic system is disturbed in aphasia, with resulting parallel impairments in the various modalities.

The results are presented in three main sections.

The first section contains the results of analyses performed on the dataset that includes every "piece of information" recorded by each judge on the score sheets. A "piece of information" is defined as a single entry in either the Actions, Objects, or Descriptors column on the score sheet. This dataset was labelled the Inclusive Dataset and was used in analyses of confidence ratings of judges and "correctness" of information.

The second section contains the results of analyses performed on a condensed version of the original dataset. Details of the manner by which this condensed dataset was formed are outlined later in this section. This dataset was labelled the Condensed Dataset and was used in analyses of modality use and the types of information conveyed by the subject.

The third section contains the results from analysis of the judges' descriptions of the overall meaning, or gist, of the sequences, which the judges recorded at the bottom of their score sheets. This dataset was labelled the Gist Dataset.

The data from the three test sequences were analyzed first independently, and were subsequently pooled. Pooled results reflected the results from individual sequences.

Throughout the Results section, 'S' refers to speech, 'W' refers to writing, and 'G' refers to gesture.

Inclusive Dataset

As noted previously, the Inclusive Dataset consisted of all data written on score sheets by the 21 judges. The information included in this dataset is henceforth considered as information perceived by the judges, rather than information conveyed by the subject. Further clarification of this terminology is included in the next section on the Condensed Dataset.
Two major analyses were performed on the Inclusive Dataset: an analysis of the reported confidence ratings, and an analysis of the 'correctness' of information perceived. (Perceived information is defined as any piece of information recorded by at least a single judge for each Sequence/Condition combination; conveyed information is defined as any piece of information recorded by at least two out of three judges for each Sequence/Condition combination.) The results are presented below.

Confidence Ratings for Information Perceived by Judges

Confidence ratings recorded by the judges for each piece of information perceived are summarized in Table 3.1, according to the modality or combination of modalities through which the information was communicated. Judges were asked to rate how confident they felt that what they were recording was what the subject was trying to tell them about the sequence before him, regardless of how many modalities were used to convey the information. That is, their task was to rate their confidence for each piece of information, rather than for each modality; hence the breakdown into modalities and combinations of modalities in Table 3.1. The confidence rating scale provided to the judges is included here.

1 - very unsure, basically a guess
2 - quite unsure, but have a feeling that it is what the subject is trying to convey
3 - about a 50/50 chance that it is what the subject is trying to convey
4 - quite confident, but a small possibility the subject could be trying to convey something else
5 - very confident

Table 3.1
Confidence Ratings for Each Individual Modality and Combinations of Modalities

Modality   Confidence Rating
           5     4     3     2     1
SGW        3     0     0     0     0
SG         11    5     6     0     0
SW         7     0     0     0     0
GW         82    3     1     0     0
S          35    23    18    14    17
G          312   62    39    7     5
W          240   30    20    5     4
Totals     690   123   84    26    26

Percentages of the total number reported for each confidence level are shown in Figure 3.1.
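These percentages follow directly from the Totals row of Table 3.1; a quick recomputation:

```python
# Totals row of Table 3.1: number of pieces of information at each confidence rating.
totals = {5: 690, 4: 123, 3: 84, 2: 26, 1: 26}
n = sum(totals.values())  # 949 pieces of perceived information in all

percentages = {rating: round(100 * count / n, 1) for rating, count in totals.items()}
print(percentages[5])  # 72.7 -- the share of responses rated 5
```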
Overall, the majority of the responses (72.7%) were rated as 5, with very few responses rated as 1 or 2.

Figure 3.1
Percentage of Each of the Five Confidence Ratings Recorded by Judges
[bar chart: percentage of responses at each confidence rating, 5 through 1]

Figure 3.2 shows the confidence ratings for when a piece of information was communicated through only one mode (W, G, S). Results for when information was communicated through combinations of modes (GW, SW, SG, SGW) were not included here, as relatively few instances of overlap of modes occurred. This is examined later in the Results section. Of the information communicated through only one mode, there is a much greater variation in the confidence ratings for speech than in the confidence ratings for gesture or writing. The majority of the gestured and written information was rated as a 5 (73% and 80% respectively), whereas only 33% of the spoken information was rated as a 5.

Figure 3.2
Confidence Ratings for Each Piece of Information Communicated Through Only One Modality
[stacked bar chart: amount of information (0% to 100%) at each confidence rating, for W, G, and S]

'Correctness' of Information Perceived by the Judges

Pieces of information recorded by the judges on the score sheets were coded as one of the following: Incorrect: information bore no relation to the sequence being depicted; Correct: information was, or conceivably could have been, related to the sequence being depicted; or Meaningless: information from standard perseverations produced by the subject, which was meaningless with regard to the sequence being depicted.
Examples in each category for the first sequence (the man shaving) are presented in Table 3.2.

Table 3.2
Examples of Incorrect, Correct, and Meaningless Information Reported for Sequence 1

Incorrect:    house, restaurant, walking, eating
Correct:      man, razor, towel, shaving, rinsing
Meaningless:  down here, very nice

The amounts of information in the three different 'correctness' categories, with respect to entries in the modality column, are found in Table 3.3.

Table 3.3
Amounts of Incorrect, Correct, and Meaningless Information Perceived by Judges Through Each Individual Modality and Combinations of Modalities

Modality   Incorrect   Correct   Meaningless
SGW        0           3         0
SG         2           16        4
SW         0           7         0
GW         0           86        0
S          63          22        22
G          16          407       2
W          3           296       0
Totals     84          837       28

Figure 3.3 shows the relative amounts of correct, incorrect, and meaningless information overall. Most of the information perceived by the judges was correct information; that is, it was either depicted in, or directly related to, the sequence. Almost 9% of the information, however, was incorrect.

Figure 3.3
Percentage of Correct, Incorrect, and Meaningless Information Perceived by Judges
[pie chart: correct 88.20%, incorrect 8.85%, meaningless 2.95%]

The data from Table 3.3 were collapsed to compare the amount of information in each of the 'correctness' categories for each modality. For example, if three pieces of information were perceived through speech plus gesture ("SG"), two pieces through speech alone ("S"), and four pieces through gesture alone ("G"), then the total amount of information perceived through speech would be five pieces, and the total amount through gesture would be seven pieces. Results are shown in Figure 3.4. In speech, the majority of the information perceived was incorrect.
In gesture and writing, most of the information was correct, with writing having a slightly higher percentage of correct information than gesture.

Figure 3.4
Percentage of Correct, Incorrect, and Meaningless Information in Each Modality
[bar chart; x-axis: Modality; y-axis: percentage (0 to 100)]

Condensed Dataset

In the previous section, the complete dataset of all responses recorded by judges was analyzed. This gave results with regard to all of the information perceived by at least one judge.

Recall that each Sequence/Condition combination was seen by three judges. For the Condensed Dataset, if a piece of information was recorded by at least two of these three judges, this information was considered to be clearly portrayed by R.C. Hence, the information in this dataset is defined as information conveyed by the subject, rather than information perceived by the judges.

To establish this condensed dataset, all pieces of information recorded by only one judge for each Sequence/Condition combination were removed. This removed idiosyncratic information perceived by individual judges, and left only data consistent across two or three of the three judges. This data was then condensed, so that each piece of information listed by two or three judges became a single entry. For example, if three of the three judges for the Sequence 1/Condition 1 combination reported a piece of information, such as the Action "shaving", "shaving" was recorded in the new dataset as a single piece of information. For each piece of information, the modality or modalities agreed on by at least two judges were recorded in the modality column. Finally, all incorrect data was removed, so that the condensed dataset contained only accurate information conveyed by the subject.

Modality Use

Table 3.4 contains the counts for modality use in each of the seven Conditions from the Condensed Dataset. The modalities which R.C.
was allowed to use under each Condition are given in parentheses following the Condition number.

Table 3.4
Modality Use in Each of the Conditions

                       Modality
Condition              s      g      w
Condition 1 (SGW)      3      27     13
Condition 2 (SW)       2      2*     29
Condition 3 (SG)       3      38     1*
Condition 4 (GW)       1*     39     27
Condition 5 (S)        0      0      0
Condition 6 (W)        0      0      63
Condition 7 (G)        1*     43     0
Totals                 10     149    133

* These pieces of information were conveyed by modes not allowed in the given Condition but could not readily be edited out of the video.

Modality Use Over All Conditions

The totals for each modality, converted into percentages, are shown in Figure 3.5. Significantly less accurate information is conveyed through speech than through the other modalities.

Figure 3.5
Percentage of Accurate Information Conveyed Through Each Modality
[bar chart; x-axis: Modality; y-axis: percentage of accurate information]

Modality Use When All Modalities Were Allowed (Condition 1)

In the first Condition, R.C. was allowed, and encouraged, to use all three modalities to convey information about the sequences. Figure 3.6 reveals his relative modality use. Most of the information in this Condition was conveyed through gesture, with relatively little information conveyed through speech.

Figure 3.6
Percentage of Information Conveyed Through Each Modality in Condition 1
[bar chart; x-axis: Modality (G, W, S); y-axis: percentage of information]

Modality Overlap When Two Modalities Were Allowed

Various combinations of two modalities were allowed in Conditions 2, 3, and 4 (SW, SG, and GW respectively). Extracting out the accurate data from only these three Conditions, the number of pieces of information conveyed by a single modality was compared with the number of pieces conveyed by two modalities.
Figure 3.7 shows that a significantly greater percentage of the information conveyed in these three Conditions was conveyed through a single modality, rather than through two modalities.

Figure 3.7
Information Conveyed Through 1 Versus 2 Modalities in Conditions 2, 3, and 4
[bar chart; x-axis: Number of Modes (1, 2); y-axis: percentage of information]

Relative Modality Use When One, Two or Three Modalities Were Allowed

Writing

Conditions 1, 2, 4 and 6 allowed the use of writing. In the first Condition, writing plus the other two modalities were allowed, for a total of three modalities allowed. In the second and fourth Conditions, writing plus one other modality was allowed (speech or gesture respectively). In the sixth Condition, only writing was allowed. Relative amounts of information conveyed through writing depending on when writing only was allowed, when writing plus one other modality was allowed, and when writing plus two other modalities were allowed are shown in Figure 3.8. The amounts for Conditions 2 and 4 were averaged for the second column. When three modalities were allowed, relatively less information was conveyed through writing than when two modalities were allowed. Similarly, when two modalities were allowed, relatively less information was conveyed through writing than when only the written modality was allowed.

Figure 3.8
Relative Use of Writing When 1, 2, or 3 Modalities were Allowed
[bar chart; x-axis: Number of Modes Allowed (3, 2, 1); y-axis: amount of information conveyed through writing]

Gesture

Gesture was allowed in Conditions 1, 3, 4 and 7. The amount of information conveyed by gesture when Conditions allowed all three modes versus two modes versus gesture alone is shown in Figure 3.9. The second column is an average of the amount of information gestured in Conditions 3 and 4 (two modes allowed, one being gesture). As for writing in Figure 3.8 above, more information was conveyed through gesture when gesture was the only modality allowed.
The least amount of information came via gesture when all three modalities were allowed.

Figure 3.9
Relative Use of Gesture When 1, 2, or 3 Modalities were Allowed
[bar chart; x-axis: Number of Modes Allowed (3, 2, 1); y-axis: amount of information conveyed through gesture]

Types of Information

When judges were asked to record information that they perceived, they entered the information into one of three distinct columns on the score sheet. The columns were labelled Actions, Objects, and Descriptors. These are defined as the three 'Types' of information that judges were asked to record. Actions corresponded roughly to verbs, Objects corresponded roughly to nouns, and Descriptors included adjectives, adverbs, and locations. Table 3.5 contains the number of each of these different Types of information according to the corresponding entries in the modality column, as well as the total number of each Type of information.

Table 3.5
Types of Information in Each Individual Modality and Combinations of Modalities

            Type of Information
Modality    Actions    Objects    Descriptors    Totals
SGW         1          0          0              1
SG          1          0          1              2
SW          0          3          0              3
GW          9          20         0              29
S           0          1          3              4
G           65         45         7              117
W           19         57         24             100
Totals      95         126        35             256

Types of Information Conveyed Over All Conditions

From the total amount of information conveyed, the percentages for each of the three types of information are portrayed in Figure 3.10. Slightly more Objects than Actions were conveyed. Significantly fewer Descriptors were conveyed.

Figure 3.10
Percentage of Each Type of Information Over All Conditions
[bar chart; x-axis: Type of Information (Actions, Objects, Descriptors); y-axis: percentage]

Types of Information Conveyed When Only One Modality was Allowed

Writing (Condition 6)

In the sixth Condition, R.C.
was only allowed to write in order to convey information. Analysis of types of information conveyed in this Condition (Figure 3.11) reveals that significantly more Objects were conveyed than Actions when only writing was allowed.

Figure 3.11
Percentage of Each Type of Information Conveyed When Only Writing was Allowed
[bar chart; x-axis: Type of Information (Actions, Objects, Descriptors); y-axis: percentage]

Gesture (Condition 7)

In the seventh Condition, R.C. was only allowed to gesture to convey information about the sequences. As shown in Figure 3.12, more Actions than Objects were conveyed in this Condition.

Figure 3.12
Percentage of Each Type of Information Conveyed When Only Gesture was Allowed
[bar chart; x-axis: Type of Information (Actions, Objects, Descriptors); y-axis: percentage]

Gist Dataset

After judges filled out the information in the columns on the score sheet, they were asked to briefly describe what they felt the sequence was about. This description was subsequently coded as correct or incorrect. Considering the three sequences together as a homogeneous group, each Condition was seen by nine judges (three judges x three sequences). Their results are shown in Figure 3.13.

Figure 3.13
Frequency of Gist Perceived in Each Condition
[bar chart; x-axis: Condition (1 through 7); y-axis: frequency of correctly identified gist]

None of the nine judges that observed the speech-only Condition (Condition 5) correctly identified what the sequence was about. Only in two other cases was the gist not correctly identified. One judge who observed the second sequence in the second Condition and one judge who observed the third sequence in the third Condition did not correctly identify the gist of the sequence.

CHAPTER 4
DISCUSSION

The purpose of this study was to examine a single aphasic subject's communicative effectiveness through the modalities of speech, gesture, and writing. The main emphasis was on his use of gesture to effectively convey information.
Although the subject used a wide variety of gestures, including gestures subordinated to the speech rhythm ("beats", McNeill and Levy, 1982), gestures that acted as complete utterances in themselves ("emblems", Ekman, 1977), pointing gestures, and facial expression, he primarily used "iconic" gestures (McNeill and Levy, 1982) to convey particular information in his task in this investigation. Hence, this study focusses on this subject's ability to use iconic gestures to communicate.

It was anticipated that the findings of the investigation would cast light on two hypotheses. The first hypothesis postulated the existence of a central organizer controlling and coordinating the output of different modalities. Support for this hypothesis was expected from findings for this subject. The second hypothesis proposed that aphasia involves a loss of symbolic functioning with resulting parallel impairments across modalities. The results for this subject were not expected to support this hypothesis, but rather they were expected to support relatively intact symbolic functioning with differential impairment of modalities.

The following discussion is divided into four sections. The first section considers the judges, and how they responded to their task. The second section discusses results that support relatively intact symbolic functioning and differential impairment of the modalities for the subject of this study. The third section considers results that support the notion of a central organizer which coordinates the modalities. The fourth and final section looks at the clinical implications of the findings from this investigation.

Judges

Although R.C. was the only subject of this study, the judges themselves provided valuable information regarding the communication abilities and deficits of aphasics.
Observations made by the investigator of the judges performing their task, the confidence ratings reported by the judges, and comments made by the judges at the bottom of the score sheets all proved to be valuable sources of information about the challenges facing the communication partner of an aphasic patient.

The following is an example of R.C. talking about one sequence of pictures. His talking was accompanied by gestures [enclosed in square brackets below the corresponding speech output]. The sequence which R.C. is attempting to describe is the training sequence. It involves a man slicing and eating a piece of cake.

RC: okay.
RC: there's an eaters this big.
[holds hands in cylindrical shape, about 8 inches in diameter]
RC: this down here.
[puts L hand under "cake" depicted by R hand, as if putting a plate underneath]
RC: it it down here.
[holds pen up in air before him]
RC: and go fffft...fffft...
[makes slicing motion with pen through "cake"]
RC: and down here.
[picks up slice of "cake"]
RC: and then down here.
[places slice onto a plate? napkin?]
RC: and eat them.
[lifts piece to mouth, makes chewing motion]
RC: I don't...
RC: what?
RC: I can't...
RC: You know I can't...

"Eaters" was jargon on which R.C. perseverated through all portions of the task. It is also a typical production in his spontaneous speech. It appeared that often when he attempted to retrieve an object name but couldn't, "eaters" was produced instead. It was as if "eaters" was a real word in the subject's lexicon, and emerged when retrieval of a target noun was unsuccessful. Another common perseverative phrase was "down here", which did not always accompany a gesture. This phrase is ambiguous without an accompanying gesture, yet it was produced even in those Conditions in which gesture was not permitted.

An example of his use of sound effects is captured in this sequence ("fffft" for cutting cake).
As in the pilot investigation, he often used sound effects to augment his gestural communication.

One phenomenon, which could be described as a true desire to make sense of the verbal output of the subject, was observed by the investigator as occurring across most of the judges, regardless of how inconsistent the subject's output, and regardless of output from any other modality. For example, several of the judges, in attempting to fill out the score sheets for this training sequence, wrote "ears" (transcribed as "eaters" above) as a piece of information in the Object column, even though nothing else about the sequence (especially not the size depicted in conjunction with production of the word, 8 inches in diameter) had any association with 'ears'. In this and numerous other instances, there was a clear tendency on the part of the judges to consider his speech carefully, even if (or maybe especially if) it was filled with jargon and recurring meaningless phrases. In one case, a judge chose to base her overall description of the sequence on the subject's verbal output, even though she was also given a fairly coherent written description produced by the subject. She chose to ignore the written information, and focus strictly on his speech.

The confidence ratings provided another interesting view of how the judges perceived R.C.'s ability to communicate. As shown in Figure 3.1, the majority of the information recorded by the judges was rated 5 (out of 5) on the Confidence Rating Scale. Thus, judges appeared to be quite confident that most of the information which they were recording was contained in the sequence that R.C. was attempting to describe. A closer examination of each modality (Figure 3.2) shows a difference among the modalities as a function of the confidence ratings of the judges. For gesture and writing, judges were very confident that most of the information they recorded was accurate information about the sequences.
For speech, however, there was much greater variation in confidence ratings. Only 33% of information communicated through speech alone was rated 5 (versus 73% for gesture and 80% for writing). Almost one-sixth of information perceived through speech alone was rated 1, which was defined as "very unsure, basically a guess".

Although judges were generally less confident of information perceived through speech versus gesture and writing, it is interesting to note that of the 33% of the information coded as 5 (very confident), less than half of this information (40%) was accurate. A tendency to want to "trust" in speech, even though fragile, is again evident. These findings underscore the importance of speech in everyday communication.

At the bottom of the score sheets, judges were asked to comment on anything that they found particularly interesting in what they had observed. The most frequent comment concerned the subject's ability to communicate information effectively using gesture. Several judges indicated that the task of interpretation was more difficult when the subject did not use gesture. A number of judges commented on the obvious frustration the subject experienced when he could not convey information effectively through speech. A few judges observed the subject's attention to detail.

Symbolic Functioning and Differential Impairment of the Modalities

In the introduction, it was noted that several investigations of aphasic subjects have led some researchers to suggest that impoverished communication in aphasia is due to a loss of symbolic functioning (Duffy, Duffy and Pearson, 1975; Gainotti and Lemmo, 1976). Finkelnburg (1870), as translated by Duffy and Liles (1979), had actually claimed that "asymbolia" would be a more suitable title for the disorder commonly referred to as aphasia.

Several of the findings in this investigation, however, do not support the notion that symbolic functioning has been destroyed in this subject.
It would be expected that, if symbolic functioning were wiped out, the subject would be unable to use symbolic systems such as gesture, writing, or speech to provide information to an observer. Figure 3.3, however, reveals that 88.2% of the information perceived by judges was coded as "correct" information. That is, most of the information perceived by judges was, or conceivably could have been, related to the sequence being described. Further, results presented in Table 3.5 show that R.C. managed to convey 256 pieces of accurate information in total. These findings are not consistent with a loss of symbolic functioning.

Additional support for relatively intact symbolic functioning comes from judges' recognition of the gist of the sequences. As 21 judges each viewed three sequences, there was a total of 63 opportunities for judges to assess the gist of a sequence. Out of 63 opportunities, the gist was correctly identified 52 times. So, an accurate general description of a sequence was possible over 80% (82.5%) of the time from R.C.'s attempts at describing the sequences. If aphasia resulted in a loss of symbolic functioning, one would expect an aphasic to rarely, if ever, be able to convey such information through symbolic means.

Much of the support for the notion of "asymbolia" came from findings of parallel deficits in the speech and gestural abilities of aphasics. Parallel deficits among the different modalities are not characteristic of R.C.'s communicative abilities. R.C. demonstrates an apparent differential impairment of the modalities, his ability to communicate through speech being significantly more impaired than his ability to communicate through gesture or writing. For example, Figure 3.4 shows that most of the information communicated to judges through gesture or writing was correct information (95.5% and 99.2% respectively). However, only 34.5% of the information communicated through speech was correct.
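The gist-recognition rate cited above (82.5%) follows directly from the reported counts; as a quick arithmetic check (a sketch of the calculation only, using the figures stated in the text):

```python
# Check of the gist-recognition arithmetic reported in the text:
# 21 judges each viewed 3 sequences, giving 63 opportunities to assess gist,
# and the gist was correctly identified on 52 of those opportunities.
judges = 21
sequences_per_judge = 3
opportunities = judges * sequences_per_judge  # 63 opportunities in total
correct = 52
print(opportunities, f"{correct / opportunities:.1%}")  # 63, 82.5%
```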
Much of the information communicated to the judges through speech was incorrect (46.8%). The remainder of the information recorded by the judges included perseverative phrases and jargon that provided no meaningful information about the sequences.

Figure 3.5 reveals a similar discrepancy between effectiveness of modalities. Of the accurate information that R.C. conveyed over all conditions, only 3.4% was conveyed through speech, whereas 51.0% and 45.6% were conveyed through gesture and writing respectively. Similarly, looking solely at the Condition in which all modalities were allowed (Condition 1), very little information was conveyed through speech relative to the other modalities (Figure 3.6). In this Condition, which permitted the use of all modalities, however, there was a large difference between amounts of information conveyed through gesture versus writing. This may be due to the fact that writing takes significantly longer than gesture, as one judge pointed out, making it a less favourable approach for efficient communicative interactions.

None of the nine judges that saw the Condition which allowed only speech (Condition 5) correctly identified the gist of the sequence being described (Figure 3.13). This is the most striking support for the notion of differential impairment of modalities in R.C. In all other Conditions, either eight or all nine judges correctly identified the gist. These differences clearly reveal a more significant deficit in the spoken modality, when compared with the gestural and written modalities.

Although significantly more information is conveyed through writing and gesture than through speech, it is important to note that writing and gesture are not completely intact. They are both impaired in this subject, but speech is significantly more impaired.

Several investigators have reported similar findings of greater impairment in speech than in gesture, and a number of explanations for this can be found in the literature.
Chester and Egolf (1974), as cited in Di Simoni (1986), suggested that nonverbal communication systems are more primitive phylogenetically and ontogenetically than verbal communication systems, and are therefore more resistant to insult. For this reason, nonverbal channels may continue to function postmorbidly.

McNeill (1985) proposed an alternative hypothesis to explain differential impairment of speech and gesture. He suggested that, after a computational stage shared by speech and gesture, speech goes through a more complex transformation, hence the difference between the two modalities. Support for this proposal comes from errors produced by normal speakers. Often, a speaker uses an incorrect word choice, but the accompanying gesture correctly portrays the intended meaning. An example of this scenario, reported by McNeill (1985), occurred in a discussion between mathematicians. In their discussion, one of the speakers used the word inverse when the word direct should have been used, yet the gesture accompanying the word was that for a direct limit. From my own experience, I recall a colleague placing her index finger across her upper lip in synchrony with verbal production of the word mushroom where moustache was clearly intended. Further support comes from the output of aphasic patients. Verbal paraphasias, which are inappropriate word choices, are often accompanied by gestures conveying the appropriate meaning. Even more convincing is the finding that the reverse combination, that is, inappropriate gestures with appropriate word choices, is rare or nonexistent.

That speech undergoes a more complex transformation than gesture after a common computational stage is also supported by what McNeill refers to as "gestural anticipation". Here, gesture occurs during uninterrupted speech output, but precedes its verbal parallel.
McNeill provides the following example of this phenomenon.

they keep on flashing back to Alice just sitting there
[hand moves out and points to location]
(McNeill, 1985, p. 361)

The gesture was initiated at "flashing back", therefore anticipating the verbal referent "there" by at least four words. The moment of the gesture is interpreted by McNeill to be the moment of the shared computational stage. He suggests that, for grammatical reasons, the unpacking of the cognitive representation is delayed in the speech channel, re-emphasizing the postulated complex transformation of speech. He concludes that because gesture undergoes a less complex transformation, it reflects more faithfully than speech the content and the moment of occurrence of the shared computational stage.

A third possible explanation for differential impairment of speech versus gesture can be found in Di Simoni (1986). Here, it is suggested that for spoken words there is a heavier "symbolic load" than for gestures, as gestures tend to be more concrete. Hence, if symbolic functioning is affected in aphasia, speech should be more impaired than gesture.

This third explanation cannot readily account for R.C.'s results. Spoken and written words would both have a heavy symbolic load, and would therefore be expected to be similarly impaired, which was not the case. R.C.'s written output was much more effective than speech in conveying information. Similarly, for McNeill's hypothesis, if speech undergoes a more complex transformation for grammatical reasons, it would be anticipated that writing would undergo a similarly complex transformation, and, therefore, writing deficits would parallel speech deficits. This was obviously not borne out by the findings of this investigation. The hypothesis proposed by Chester and Egolf would also predict similarities between writing and speech, as they would both be less resistant to destruction by cerebral insult than gesture, according to this hypothesis.
None of these proposals, therefore, can adequately account for the findings in this investigation. The hypothesis of Chester and Egolf is appealing, however, in that it makes direct reference to the cerebral insult itself and its possible effects on communication. From this, it might be suggested that different cerebral insults may result in different aphasic characteristics.

This notion of differences in communication characteristics between aphasics is crucial when critically reviewing and analyzing the literature in the fields of gesture and aphasia. No two cerebral infarcts are identical, nor are any two brain structures identical. For this reason, studies involving groups of aphasics can be misleading, and must be interpreted with caution. For example, on initial reading of the study by Gainotti and Lemmo (1976), it appears that their findings support a disturbance of symbolic functioning in aphasia in general. There is no description of the characteristics of the aphasic population, only that "the presence of an aphasic disorder was assessed by means of a standard test of aphasia". They conclude that their data "strongly support the hypothesis that at least in some forms of aphasia the basic defect does not consist in a sensory or in a motor disorder, but in a much more deeply situated disturbance".

The notion of differences among aphasics, as suggested by the phrase "in some forms of aphasia" in Gainotti and Lemmo's conclusion above, is crucial when reviewing this and other articles. Undoubtedly, there are aphasics whose infarcts are large and/or specifically situated which result in deficits in all areas of communication, for example, some global aphasics. However, for other aphasics, it appears that the infarct may differentially affect different modalities. Such appears to be the case for R.C. Hence, generalization of findings such as those of Gainotti and Lemmo (1976) to all aphasics can have serious implications in clinical and research domains.
Peterson and Kirshner (1981) summarized this point nicely in their review of gestural impairment and ability in aphasia:

"Studies of gesture in aphasia have revealed the complexity of the process of evaluation and representation of gestural/pantomimic behavior. Any study of the "average gestural performance" of the "average aphasic" may well find below-average gestural performances, but generalizations cannot be made from these studies to all aphasics. The "average" statistic does not provide insight into the selective potential of some aphasics to learn to use gestures or pantomime as an alternative communication modality."
(Peterson and Kirshner, 1981, p. 345)

In contrast to predictions based on the three hypotheses discussed above, this investigation revealed a closer relationship between writing and gestural abilities than between writing and speaking abilities for this subject. This finding is not unique. A number of studies have found correlations between gesture and visual language modalities (reading and writing) (Varney, 1978; Kirshner and Webb, 1981). Since most of these studies are searching for prognostic indicators of success in gestural training, they typically look at receptive abilities, rather than productive abilities.

It is unclear why gesture and visual language modalities might be more closely related than auditory (speech and auditory comprehension) and visual language modalities for this subject. It could be a result of the particular cerebral infarct of this subject, or a result of his particular cerebral organization. This is an interesting finding that warrants further investigation through other single subject studies.

As mentioned earlier, there is much heterogeneity within the group of patients categorized as "aphasic". Single case studies overcome the problem of heterogeneity within subject groups, and can provide valuable insights into the brain and its functioning post-stroke. In group studies, important individual variations are often lost.
Such variations can reveal critical information for better understanding of cerebral functioning. However, it must be kept in mind that findings from single case studies cannot be generalized to whole populations. Such generalization would be just as dangerous as applying findings of studies on heterogeneous groups to all types of subjects within that group.

In addition to being a single case study, this investigation of R.C.'s communication abilities is different in another fundamental way from the studies that supported "asymbolia". This study focused on R.C.'s ability to perform in various ways, rather than on his deficits. This point was proposed by Davis and Wilcox (1981) and is supported by this investigation. As discussed in the introduction, Davis and Wilcox (1981) concluded that the studies that supported "asymbolia", such as Duffy et al. (1975) and Gainotti and Lemmo (1976), overemphasized the deficit in pantomime recognition. When they reanalyzed the data from these studies in terms of abilities rather than deficits, Davis and Wilcox found that gestural ability was spared relative to verbal ability. So, the focus of the investigators, whether it be on abilities or deficits, can affect interpretation of the findings. Such factors must be taken into consideration when reviewing the literature.

It should be apparent that to read such literature blindly and to make sweeping generalizations can be dangerous. Careful review of such articles is critical in order to effectively incorporate findings into one's own research or clinical work.

In summary, it appears that R.C.'s symbolic functioning is relatively intact. His ability to communicate information through speech, however, is significantly impaired relative to his ability to communicate through gesture or writing; that is, there is a differential impairment of modalities. This may be due to the nature of the cerebral insult.

The Central Organizer Hypothesis

As discussed in the literature review, Cicone et al.
(1979) and several other investigators have proposed that a "central organizer" may initiate and determine the complexity and clarity of both speech and gesture. None of these investigators, however, provide much detail in their studies regarding how such a central organizer might work.

Several of the findings in the present investigation could be considered to lend support to the notion of a central organizer which coordinates the various modalities. These findings are discussed below, followed by a brief account of how the central organizer might function.

Figure 3.7 reveals that when R.C. was allowed to use two modalities, an individual piece of information was typically conveyed through only one of the modalities, rather than through both. That is, there was little overlap of portrayal of information through different modalities. If modalities were not coordinated by a central organizer, as suggested by the Relative Independence Hypothesis presented in the literature review, it might be expected that all, or at least most, of the information to be conveyed could be sent through all available modalities. Hence, a central organizer may function here to coordinate the modalities so that information portrayed through one modality is not unnecessarily portrayed through another modality. Such a central organizer would improve efficiency in communicating. A number of judges reported that in instances in which information was conveyed through both allowed modalities, the information was ambiguous through the first modality. A central organizer may have functioned here to recruit a second modality to convey the information when the first modality failed.

Further support for organization of information through the different output modalities comes from looking at the use of one modality when it is the only modality allowed versus when this modality is allowed along with one or two other modalities.
Figure 3.8 shows the discrepancy between the amount of information conveyed through writing when only writing was allowed and the amount conveyed through writing when one additional modality was allowed. Even less information was conveyed through writing when two additional modalities were allowed. Similar findings are shown in Figure 3.9 for gesture, although the differences for gesture are not as large as those for writing. A similar analysis was not performed for speech since, as noted earlier, only a small amount of accurate information was conveyed through speech. A central organizer might function here to spread the information among more modalities as they are made available for use. For example, when writing alone was allowed, 63 pieces of information were conveyed through writing. So, at least 63 pieces of information could potentially have been conveyed through writing in all conditions allowing writing. But as more modalities were allowed, the central organizer may have functioned in directing information to these other output modalities, leaving less to be directed through writing.

Analysis of the types of information conveyed through the different modalities gives a somewhat different perspective on the functioning of a central organizer. As shown in Figure 3.11, more Objects than Actions were conveyed when only writing was allowed. On the other hand, more Actions than Objects were conveyed when gesture was the only modality allowed, as shown in Figure 3.12. Cicone et al. (1979) also found that action verbs tended to be gestured. It may be that the central organizer chooses an output modality depending on the type of information to be conveyed.

Although this proposal may be appealing for proponents of central organizers, other factors may explain this finding. It may be that different modalities are better suited for conveying different types of information.
For example, it seems intuitively appropriate that actions are more easily depicted through gesture, as both typically involve visually apparent body movements. Objects, however, cannot be directly named by gesture, but rather must be described, which may make them more subject to ambiguity. This hypothesis, that certain modalities are better suited to conveying certain types of information, is further supported by the finding that significantly more descriptors (adjectives, adverbs, and locations) were conveyed through writing than through gesture. It is far more difficult, if not impossible, to convey descriptors like colours and ages through gesture. The results regarding descriptors may have been confounded, however, by the fact that the subject of this study was a police investigator, who would have been trained to write descriptions of the form: "Chinese girl, 11 years old, brown hair," etc.

Another possible explanation for a preference for one type of information over another in gesture and writing is differential impairment of access to different lexicons. R.C.'s Speech-Language Pathologist reported that his verb lexicon appeared to be significantly degraded; his use of verbs was confined to those to which he was repeatedly exposed in therapy. Yet, in the task in this investigation, and in observation of his spontaneous conversation, he was able to gesture verbs well. This suggests that R.C. has an intact representation for verbs, but that access to the verb lexicon for purposes of written expression is impaired. This notion is supported by the fact that, when only writing was allowed, significantly more objects than actions were conveyed.

Returning to the notion of a central organizer, it should be recalled that in the literature review, coordination of modalities was illustrated by complex meanings being divided between speech and gesture (McNeill, 1985).
Such division of meanings between modalities was observed in R.C.'s descriptions, but in a slightly different manner than in McNeill's example included in the literature review. In R.C.'s case, the initial part of an utterance was often conveyed through speech, with gesture or writing being used to complete the utterance. Speech acted almost like a carrier phrase⁶ for the main piece of information, which was conveyed through gesture or writing. For example, R.C. said "This guy is like..." and completed the utterance by rubbing the imaginary stubble on his face, as if to say "needing a shave". A similar phenomenon was observed in the preliminary investigation. On numerous occasions, he said "this is a..." and then wrote down the appropriate word, such as window, girl, or couple. The central organizer may function here again to coordinate the output of different types of information through different modalities. It is as if an initial attempt at producing a complete utterance through speech is thwarted, and the central organizer is then called on to redirect the information through another modality in order to get this information out to the conversational partner.

According to the above discussion, the proposed central organizer may have many functions. An attempt is made now to define more clearly how expressive communication of information may be coordinated by a central organizer. Figure 4.1 illustrates how the central organizer might interact with the different output modalities.

⁶ A "carrier phrase" is a term commonly used in speech audiometry to refer to a phrase which precedes a stimulus word, such as "Say the word ___."

Figure 4.1
Proposed Central Organizer
[Diagram: the Central Organizer linked to the Output Modalities: Speech, Gesture, and Writing]

As shown in the above figure, information destined for output is received by the central organizer from the symbolic system. (No attempt is made here to further define this symbolic system; that is an entirely different area of study.)
The central organizer then organizes the information in order to send it on to an appropriate output modality. The modules labelled output modalities in Figure 4.1 would have a wide array of functions and interconnections with other areas in the brain. For example, the speech output modality must include information from the lexicon(s), syntactic and phonologic organization areas, and motor programming areas. The complexity of organization of each output modality will not be considered here.

The choice of modalities is also largely influenced by a conscious decision-making process. For example, when an individual is faced with a written examination, a conscious decision will be made to pass information out through the written modality, rather than through speech or gesture. In a conversation, information will typically be funnelled through the speech modality, but some information is likely to pass through the gesture modality, to supplement the speech. Hence, the central organizer interacts not only with the output modalities and the symbolic system, but also with other systems in the brain, such as conscious decision-making.

It appears from the data that in R.C.'s case, the infarct has impaired the pathway leading from the central organizer to the speech output modality, the speech output modality itself, or both the pathway and the modality. As writing and gesturing also appear to be impaired, though not to the same extent as speech, it may be that diffuse damage resulting from the trauma has affected these other pathways or modalities.

So, what happens when R.C. tries to communicate? It may be that, when all modalities are allowed, the central organizer directs information to the speech modality (along with a conscious decision-making process). This information hits a block (due to damage from the infarct) along the pathway or at the speech modality.
A message is sent back to the central organizer informing it that the information was unable to pass through this modality to be expressed. Undoubtedly, self-awareness and awareness of the communication partner's understanding (or misunderstanding) of the output information also play roles here. When the central organizer receives the message that transmission has been unsuccessful, it sends the information out through another modality. This set-up can also accommodate the "tip of the tongue" scenario: when information is sent to the speech modality, access to the lexicon from this modality is for some reason denied, so the message is sent back to the central organizer that expression of the information was unsuccessful, and the information might then be sent to the gesture output modality, allowing expression of the information in this way. Meanwhile, the speech modality continues to attempt to access the appropriate word in the lexicon, until finally (usually), access is successful and the word is spoken.

This model can also be used to describe an alternative hypothesis to the proposed loss of symbolic function ("asymbolia") postulated for aphasic subjects demonstrating parallel impairments in all modalities. In such aphasics, it may be that the cerebral insult has resulted in disruption of the central organizer, rather than of symbolic functioning. This proposal was presented in Feyereisen (1988). It might be a particularly appealing proposal for researchers who believe that symbolic functioning is diffusely organized within the brain, making total destruction of it by a localized cerebral infarct unlikely.

This model is admittedly a rough draft of how a central organizer might work. Further research in this area is required in order to refine or refute it.

Clinical Implications

R.C. is a very good communicator. Traditional assessment of strictly linguistic skills, however, would not have revealed this. Much of the information communicated by R.C.
in spontaneous conversation is communicated via gesture, and gestures are not typically considered in traditional aphasia assessment batteries. A more pragmatic approach to assessment, one which incorporates gestural communication, would provide a more accurate portrait of R.C.'s communicative abilities. Unfortunately, many clinicians still focus strictly on the verbal abilities of their aphasic patients. Hopefully, studies such as this one will help them shift their focus to an emphasis on communication using all available modalities.

With respect to therapy, R.C. was already making extensive spontaneous use of natural gesture, so there was no need to introduce gesture to him through therapy. However, for patients with a profile similar to R.C.'s, enhancing natural use of gesture might prove worthwhile. Two strategies through which this might be done are (1) encouragement of natural use of gesture, and (2) consolidation of more commonly used gestures in order to avoid potential ambiguities. PACE therapy, as discussed in the literature review, is an ideal method for incorporating such gestural goals.

Another important consideration in aphasia therapy is the communication partners of the patient. As the judges in this experiment demonstrated, there is a natural tendency to focus on a communication partner's speech. In communicating with patients such as R.C., however, a focus on speech would only prove confusing. Counselling family members and other frequent communication partners, in order to make them aware of the patient's gestural abilities and to encourage them to focus on these abilities, would be an extremely important component of therapy. Difficulties, however, would obviously arise in attempts at conversation with unfamiliar partners.
In such instances, it would be important for these patients to carry some sort of writing materials, not only to convey information, but also to inform the communication partner to pay careful attention to their gestures rather than their speech.

It is hoped that as clinicians are exposed to more and more literature on pragmatic approaches to aphasia assessment and treatment, they will be encouraged to try these approaches with their patients. Such a change in direction can only improve the quality of care provided to patients and, in improving their care, improve their quality of life.

Conclusion

The results of this investigation appear to suggest that R.C. has relatively intact symbolic functioning, and that there is differential impairment of his output modalities. His speech is severely impaired. (Yet judges still tended to have confidence in what he said.) Gesture and writing, although also impaired, are much more effective modalities for communication than speech.

R.C.'s output modalities appear to be coordinated for the function of conveying information. A central organizer which interacts with each of the output modalities has been proposed to account for this coordination of modalities.

Pragmatic approaches to aphasia assessment and treatment focus on communication and consider alternative modes of communication, such as gesture. In order to improve client care, it is hoped that clinicians will consider incorporating pragmatic approaches into current assessment and treatment procedures.

BIBLIOGRAPHY

Aten, J. L. 1986. Functional communication treatment. In R. Chapey (Ed.), Language Intervention Strategies in Adult Aphasia, Second Edition. Baltimore: Williams and Wilkins.

Bates, E. 1979. The Emergence of Symbols. New York: Academic Press.

Cicone, M., Wapner, W., Foldi, N., Zurif, E., & Gardner, H. 1979. The relation between gesture and language in aphasic communication. Brain and Language, 8, 324-349.

Coelho, C. A., & Duffy, R. J. 1987.
The relationship of the acquisition of manual signs to severity of aphasia: A training study. Brain and Language, 31, 328-345.

Daniloff, J. K., Noll, J. D., Fristoe, M., & Lloyd, L. L. 1982. Gestural recognition in patients with aphasia. Journal of Speech and Hearing Disorders, 47, 43-49.

Davis, G. A., & Wilcox, M. J. 1981. Incorporating parameters of natural conversation in aphasia treatment. In R. Chapey (Ed.), Language Intervention Strategies in Adult Aphasia. Baltimore: Williams and Wilkins.

Davis, G. A. 1986. Pragmatics and treatment. In R. Chapey (Ed.), Language Intervention Strategies in Adult Aphasia, Second Edition. Baltimore: Williams and Wilkins.

Delis, D., Foldi, N. S., Hamby, S., Gardner, H., & Zurif, E. 1979. A note on temporal relations between language and gestures. Brain and Language, 8, 350-354.

Di Simoni, F. 1986. Alternative communication systems for the aphasic patient. In R. Chapey (Ed.), Language Intervention Strategies in Adult Aphasia, Second Edition. Baltimore: Williams and Wilkins.

Duffy, R. J., Duffy, J. R., & Pearson, K. L. 1975. Pantomime recognition in aphasics. Journal of Speech and Hearing Research, 18, 115-132.

Duffy, R. J., & Liles, B. Z. 1979. A translation of Finkelnburg's [1870] lecture on aphasia as "asymbolia" with commentary. Journal of Speech and Hearing Disorders, 44, 156-168.

Duffy, R. J., Duffy, J. R., & Mercaitis, P. A. 1984. Comparison of the performances of a fluent and a nonfluent aphasic on a pantomimic referential task. Brain and Language, 21, 260-273.

Egolf, D. B., & Chester, S. L. 1973. Nonverbal communication and the disorders of speech and language. ASHA, 15, 511-517.

Ekman, P. 1977. Biological and cultural contributions to bodily and facial movement. In J. Blacking (Ed.), The Anthropology of the Body. London: Academic Press.

Feyereisen, P., & Seron, X. 1982. Nonverbal communication and aphasia: A review (I. Comprehension, II. Expression). Brain and Language, 16, 191-236.

Feyereisen, P., Barter, D., Goossens, M., & Clerebaut, N. 1988.
Gestures and speech in referential communication by aphasic subjects: Channel use and efficiency. Aphasiology, 2(1), 21-32.

Feyereisen, P. 1988. Non-verbal communication. In F. C. Rose, R. Whurr, & M. A. Wyke (Eds.), Aphasia. London: Whurr Publishers.

Freedman, N. 1972. The analysis of movement behaviour during clinical interviews. In A. Siegman & B. Pope (Eds.), Studies in Dyadic Communication. Elmsford, NY: Pergamon Press.

Gardner, B. T., & Gardner, R. A. 1971. In A. M. Schrier & F. Stollnitz (Eds.), Behaviour of Nonhuman Primates, Vol. IV. New York: Academic Press.

Gainotti, G., & Lemmo, M. A. 1976. Comprehension of symbolic gestures in aphasia. Brain and Language, 3, 451-460.

Geschwind, N. 1965. Disconnexion syndromes in animals and man, Part I. Brain, 88, 237-294.

Glosser, G., Wiener, M., & Kaplan, E. 1986. Communicative gestures in aphasia. Brain and Language, 27, 345-359.

Goodglass, H., & Kaplan, E. 1972. The Boston Diagnostic Aphasia Examination. In H. Goodglass & E. Kaplan (Eds.), The Assessment of Aphasia and Related Disorders. Philadelphia: Lea & Febiger.

Helm-Estabrooks, N., Fitzpatrick, P. M., & Barresi, B. 1982. Visual action therapy for global aphasia. Journal of Speech and Hearing Disorders, 47, 385-389.

Helm-Estabrooks, N., & Emery, P. A. 1988. Non-vocal approaches to aphasia rehabilitation. In F. C. Rose, R. Whurr, & M. A. Wyke (Eds.), Aphasia. London: Whurr Publishers.

Herrmann, M., Reichle, T., Hoene, G. L., Wallesch, C. W., & Johannsen-Horbach, H. 1988. Nonverbal communication as a compensative strategy for severely nonfluent aphasics? A quantitative approach. Brain and Language, 33, 41-54.

Hewes, G. W. 1973. Primate communication and the gestural origin of language. Current Anthropology, 14, 5-12.

Holland, A. L. 1980. Communicative Abilities in Daily Living (CADL): A Test of Functional Communication for Aphasic Adults. Baltimore: University Park Press.

Kendon, A. 1983. Gesture and speech: How they interact. In J. M. Wiemann & R. P. Harrison (Eds.), Nonverbal Interaction.
London: Sage Publications.

Kimura, D. 1973. Manual activity during speaking - I. Right handers. Neuropsychologia, 2, 45-60.

Kimura, D. 1979. Neuromotor mechanisms in the evolution of human communication. In H. D. Steklis & M. J. Raleigh (Eds.), Neurobiology of Social Communication in Primates: An Evolutionary Perspective. Toronto: Academic Press.

Kirshner, H. S., & Webb, W. G. 1981. Selective involvement of the auditory-verbal modality in an acquired communication disorder: Benefit from sign language therapy. Brain and Language, 13, 161-170.

Le May, A., Rachel, D., & Thomas, A. P. 1988. The use of spontaneous gesture by aphasic patients. Aphasiology, 2(2), 137-145.

McNeill, D. 1985. So you think gestures are nonverbal? Psychological Review, 92, 350-371.

McNeill, D., & Levy, E. 1982. Conceptual representations in language activity and gesture. In R. J. Jarvella & W. Klein (Eds.), Speech, Place and Action: Studies in Deixis and Related Topics. Chichester: John Wiley.

MacWhinney, B. 1991. The CHILDES Project: Computational Tools for Analyzing Talk, Version 1.0. Pittsburgh: Carnegie Mellon University.

Peterson, L. N., & Kirshner, H. S. 1981. Gestural impairment and gestural ability in aphasia: A review. Brain and Language, 14, 333-348.

Porch, B. E. 1967. Porch Index of Communicative Ability. Palo Alto, California: Consulting Psychologists Press.

Rao, P. R. 1986. The use of Amer-Ind Code with aphasic adults. In R. Chapey (Ed.), Language Intervention Strategies in Adult Aphasia, Second Edition. Baltimore: Williams and Wilkins.

Schlanger, P. H., & Schlanger, B. B. 1970. Adapting role-playing activities with aphasic patients. Journal of Speech and Hearing Disorders, 35, 229-235.

Schuell, H. 1965. The Minnesota Test for Differential Diagnosis of Aphasia. Minneapolis: University of Minnesota Press.

Skinner, C., Wirz, S., Thompson, I., & Davidson, J. 1984. Edinburgh Functional Communication Profile (EFCP). Queen Margaret College, Edinburgh, Scotland: Winslow Press.

Smith, L. 1987.
Nonverbal competency in aphasic stroke patients' conversation. Aphasiology, 1(2), 127-139.

Varney, N. 1978. Linguistic correlates of pantomime recognition in aphasic patients. Journal of Neurology, Neurosurgery, and Psychiatry, 41, 564-568.

Wepman, J. M. 1972. Aphasia therapy: A new look. Journal of Speech and Hearing Disorders, 37, 203-214.

Wiener, M., Devoe, S., Rubinow, S., & Geller, J. 1972. Nonverbal behavior and nonverbal communication. Psychological Review, 79, 185-214.

Wilcox, M. J. 1978. Aphasics' comprehension of contextually conveyed meaning. Brain and Language, 6, 362-377.

APPENDIX A. INSTRUCTIONS FOR JUDGES AND SAMPLE SCORE SHEET

INSTRUCTIONS FOR JUDGES

You are about to see some videotapes and/or written material showing a subject describing sequences of pictures. You will not be shown the sequences themselves. Each sequence that the subject is depicting consists of three individual pictures. You will be asked to make some judgements on three of these sequences.

In order to identify as much information as possible, you may stop, rewind, and review parts of the videotape as often as you like. You will have use of the remote control for this purpose, and I will be on hand to assist you if necessary.

Please turn to the provided score sheets. Enter your name on each of the score sheets now. The first sheet will be used for the practice sequence, the second sheet for the first test sequence, the third sheet for the second test sequence, and the fourth sheet for the final test sequence.

As you view the videotapes and/or written material, try to identify all of the objects (nouns), actions (verbs), and descriptors (adjectives, adverbs, and prepositions) that the subject is trying to get across, using speech, gesture, and writing. Enter this information as single words in the appropriate columns on the score sheet. Then, enter the modality (or modalities) used to convey this information (S-Speech, G-Gesture, W-Writing).
(Note: For objects and actions, if you feel that a single word might lead to an ambiguous meaning that you want to clarify, include any extra words in brackets (e.g., piece [of cake]). For descriptors, phrases can be used to describe each piece of information, but please keep these concise and clear (e.g., 25 years old).)

Following each modality column on the score sheet, there is a column entitled "Confidence Rating". For each piece of information entered in the Objects, Actions, and Descriptors columns, and the adjacent modality columns, enter the number which describes how confident you are that what you wrote down was actually what was intended by the subject. Use the following scale:

1 - very unsure, basically a guess
2 - quite unsure, but have a feeling that it's right
3 - about a 50/50 chance that it's right
4 - quite confident, but a small possibility it could be something else
5 - very confident

If you are considering several different options for what the subject might be describing, write down all of these options with their respective confidence ratings, and draw a circle around all of these options to denote that they indicate the portrayal of a single piece of information.

If you find there is information that you picked up that doesn't fit into one of the categories on the sheet, enter it in the specified area at the bottom of the page. Then, from the information that you received from the subject, briefly describe what the sequence was all about. Finally, comment on anything that you found particularly interesting in observing the subject and in the task of identifying information conveyed by this subject.

The first sequence will be used for training purposes. You are to follow the above instructions, and then I will review your results with you.
If you have any questions, please do not hesitate to ask them.

SCORE SHEET

Judge's Name: _______

[Score sheet table with nine columns: Objects | Modality | Confidence Rating | Actions | Modality | Confidence Rating | Descriptors | Modality | Confidence Rating]

Please note any additional information that was conveyed to you but doesn't fit into the above categories: _______

Please briefly describe what the sequence was all about: _______

Please comment on anything that you found particularly interesting in what you just saw: _______

APPENDIX B. SAMPLE OF DATA FROM PRELIMINARY INVESTIGATION

@Begin
@Participants: RC Subject, BP Examiner
@Coding: CHAT 1.0 (according to CHILDES specifications)
@Date: March 1992
@Situation: Picture description task
*RC: is it down down now?
%gpx: points to papers on the table
*RC: right down here?
*RC: <it's> [=? is] on?
%gpx: points at video camera
*RC: 0
%gpx: picks up release form and holds it in front of the camera
*BP: okay.
%gpx: laughs
*RC: that's me.
@Stim: Cookie Theft Picture
*BP: can you.
*RC: okay when.
*BP: describe what's happening in that picture?
*RC: 0
%gpx: pauses, looks at BP, looks down to picture
*RC: wha(t) &h all this time?
%gpx: points (L hand) to apron of the mother in the picture
*BP: mmhm.
*RC: what dem [?]?
%gpx: looks at BP
*RC: water down here.
%gpx: points (L hand) to chest of the mother
*RC: &sp the eaters@ down here.
%gpx: points (L hand) to the plate the mother is washing/drying
*RC: they see, you know.
%gpx: shakes head and holds hands up in frustration
*RC: down here down here.
%gpx: follows stream of water to the floor with pencil in R hand
*RC: (it')s down there.
%gpx: points (L hand) to pool of water at mother's feet or directly at mother's feet
*RC: this is like &s slop@ to her.
%gpx: 'this' - points (L hand) to girl that is gesturing 'sh' with her finger; 'slop' - makes pincing motion with fingers and thumb of L hand in the air
%gpx: laughs, frustrated
*RC: I can't sleep.
%gpx: fingers of L hand to forehead, as in thinking
*RC: I can't &x.
%gpx: with fingers pointing toward mouth, L hand pulls away to indicate
speech from mouth
*BP: okay, you can also write RC, if you want.
*RC: what can.
%gpx: fingers of L hand to forehead, shakes his head
*RC: but not tonight.
*RC: rit@
%gpx: looks at BP, tilts head to R, L hand in air, rotates wrist, two fingers up
*RC: not (to)night cause.
%gpx: L hand in air, rotates wrist, alternates number of fingers up
*RC: my time.
%gpx: rotates index finger to point toward BP
*RC: what can't I I can't can I.
%gpx: shakes head looking intensely at BP
*RC: I wish I could.
%gpx: looking intensely at BP
*BP: you can, you can write some down as well.
*RC: well I know.
*RC: this is <a> [=? uh].
%com: I'm not sure if he means 'a' or 'uh'.
%wri: DISHES
*RC: down here.
%gpx: points toward the mother
*RC: it's got a.
%gpx: points to the towel in the mother's hand
%wri: TOWEL with arrow to DISHES
*RC: right?
*BP: mmhm.
*BP: okay, so she's got a towel for dishes.
*RC: yes.
*RC: well.
%gpx: L palm turned up and eyebrows raised as if unsure
*RC: this is a.
*RC: never mind.
%gpx: goes to write something, lifts L palm up, then sits back in chair and puts hands to his sides
*RC: never mind.
%gpx: moves body back to upright, L palm out toward BP and shaking (never mind)
*RC: this down here.
%gpx: points to water in sink
*RC: he's uh got <um> [=?
a].
%wri: WATER
*RC: 0
%gpx: turns picture to show BP what he's pointing to (water in sink)
*RC: (o)kay?
*BP: okay, yup, a water.
*RC: yeah.
%gpx: holds picture up to camera
*BP: now, we we can do that part later.
*RC: I can.
%gpx: holds index finger up to BP, laughing
*RC: 0
%gpx: scratches R eye with left hand
*RC: this is a.
%gpx: serious expression, focuses gaze back to picture
*RC: down down here.
%gpx: points to water flowing from sink
*RC: (o)kay, this this this is a.
%gpx: points to the girl
*RC: an eaters@.
%wri: GIRL
*RC: say(ing) like.
%gpx: moves R index finger to lips, L hand out about 1 ft from face, index, middle, thumb extended
%com: I think he's imitating the girl with her finger to her mouth, and the outstretched hand with fingers extended indicates the boy to whom the girl is directing her gesture
*BP: a girl.
*RC: this time.
%gpx: moves L index and middle fingers to mouth, nods head
%com: it's as if he's confirming BP's response of 'girl', but he may just be passing over it, since he seemed to be gesturing the action, rather than gesturing the noun 'girl'
*RC: and this time is.
%gpx: points to boy
%wri: BOY
*RC: is down here.
%gpx: points to stool
*RC: like.
%gpx: L hand facing palm down about 6 inches above the table, seems to indicate seat of stool
*RC: this is a.
%wri: STOOL
*RC: a her [?]?
%gpx: turns the page he's writing on to BP to show her what he's written
*BP: mmhm.
*RC: <yeah> [=?
(o)kay].
*RC: (o)kay.
*RC: she's up there.
%gpx: points to boy
*RC: behind here.
%gpx: with L hand, motions opening of a cupboard
*BP: mmhm.
*RC: get this.
%gpx: on L hand, touches fingers to thumb to make a circle - appears to represent a cookie
*RC: it's down here.
%gpx: points with pen in R hand to the boy's arms and the cookie jar (hard to see exactly where)
*RC: this is.
%gpx: on L hand, circle made from fingers and thumb again to represent a cookie
%wri: COOKIE
*RC: right?
*RC: &wa one down here.
%gpx: R hand up, L hand pulling away from R hand - appears to represent cookie being passed from boy to girl
%com: 'here' was pronounced 'her'
*RC: and all of a sudden [?].
%com: pronounced 'an all a sun'
*RC: it's going [?] pshshsh@.
%com: 'it's going' pronounced 'squin'
%gpx: L hand, palm open, flips hand slowly - appears to indicate stool falling over
%pic: stool toppling over
*BP: okay, that's the only thing you can't do is draw.
*RC: well, it's going pshshsh@, so.
%gpx: flips L hand over again and slaps palm on table - to indicate stool falling over
*BP: okay.
*BP: okay.
*RC: it's down here.
%gpx: points to window in picture, and slides finger up to show that it's open
*RC: this <nights> [=?
nice].
%gpx: lifts L hand up, palm down - appears to indicate that window is open
*RC: this is a.
%gpx: points to window in picture
*RC: what na [?]?
%gpx: leans forward to look more carefully at the picture
*RC: I thought this was um.
%gpx: points to window in picture
%wri: WINDOW (underlined)
*BP: mmhm.
*RC: like down here.
%gpx: puts L and R hands together, then motions upward with L hand twice simulating a window opening
*RC: I think I know.
*RC: it's down here.
%gpx: points to window in picture with L hand
*RC: it's fff@.
%gpx: upward motion with L hand almost along wall, again simulating motion of a window opening
*RC: is that right?
%gpx: looks at BP and nods, eyebrows raised
*BP: yeah, you mean the window's open?
*RC: yeah.
*BP: yeah.
*RC: yeah.
%gpx: moves L hand up to rest on chin, looking at picture
*BP: it is.
%gpx: looks up briefly at BP
*BP: anything else?
%gpx: continues to fixate on picture
*RC: and how?
%gpx: raises L hand in the air, palm up, looks at BP with an expression of either 'of course, there's a lot more' or 'I don't know what sorts of details you want'
*RC: this, this, this, this, this, this.
%gpx: points to different things outside the window
*BP: no, you don't have to do all that.
*BP: okay.
@End

APPENDIX C.
CONDENSED DATASET

     JUDGE  SEQ  COND  GIST  TYPE  MODE  CONF  CORR  INFO
1    1      1    1     1     act   G     5     1     rinseF
2    1      1    1     1     act   GW    5     1     shaveF
3    1      1    1     1     act   G     5     1     dryF
...
[220 rows in total. Each row records one piece of information identified by a judge: an unlabelled row number, followed by judge number, sequence, condition, gist score, information type (act, obj, des), modality or modalities used (S, G, W), confidence rating (1-5), correctness, and an information label such as rinseF, towel, or brushT.]
 1221	 20 	 3 	 4 	 1222	 20 	 3 	 4 	 1223	 20 	 3 	 4 	 1224	 20 	 3 	 4 	 1103TYPE MODE CONF CORR INFO	obj 	 G 	 5 	 1 	 teeth	obj 	 G 	 5 	 1 	 glass	act	 G 	 5 	 1 	 sipW	act	 G 	 5 	 1 	 rinseM	act	 G 	 5 	 1 	 brushT	obj 	 G 	 5 	 1 	 tbrush	act	 G 	 5 	 1 	 foldL	obj 	 GW 	 5 	 1 	 pen	obj	 G 	 4 	 1	stamp act	 SGW 5	 1 	 writeL	obj 	 SW 	 5 	 1 	 letter	obj 	 G 	 5 	 1	obj 	 GW 	 5 	 1	 envelopeact	 G 	 5 	 1 	 addressEact	 G 	 5 	 1 	 lickEdes	 G 	 5 	 1 	 inEact	 W 	 5 	 1 	 writeLact	 W 	 5 	 1 	 payobj 	 W 	 5 	 1 	 penobj 	 W 	 5 	 1 	 terletobj 	 W 	 5 	 1 	 paperobj	 W 	 5 	 1 	 Pofficeobj 	 W 	 5 	 1 	 envelope	act	 W 	 5 	 1 	 foldL	obj 	 SW 	 5 	 1 	 man	act 	 G 	 5 	 1 	 sealE	act	 G 	 5 	 1 	 insertL	act	 G 	 5 	 1 	 foldL	des	 S 	 4 	 1 	 closer	obj	 G 	 5 	 1 	 envelope	obj 	 G 	 5 	 1 	 letter	act	 G 	 5 	 1 	 writeL	des 	 G 	 5 	 1	obj 	 G 	 4 	 1 	 pen	act	 G 	 5 	 1 	 sealE	obj 	 GW 	 5 	 1 	 paper	act	 G 	 5 	 1 	 lickE	act	 GW 	 5 	 1 	 foldL	obj	 GW 	 5 	 1	act 	 GW 	pen5 	 1 	 writeL	act 	 GW 	 5 	 1 	 insertL	obj	 GW 	 5 	 1	man obj	 G 	 5 	 1 	 address	obj 	 GW 	 5 	 1 	 envelope	obj 	 G 	 5 	 1 	 letter104JUDGE SEQ COND GIST TYPE MODE CONF CORR INFO225 4 3 6 1 act W 5 1 sit226 4 3 6 1 act W 5 1 foldL227 4 3 6 1 des W 5 1 fiffivY0228 4 3 6 1 obj W 5 1 eyes229 4 3 6 1 obj W 5 1 letter230 4 3 6 1 obj W 5 1 address231 4 3 6 1 obj W 5 1 chair232 10 3 6 1 act W 5 1 writeL233 10 3 6 1 des W 3 1 inthree234 10 3 6 1 obj W 5 1 envelope235 10 3 6 1 obj W 5 1 pen236 10 3 6 1 obj W 5 1 man237 15 3 6 1 des W 5 1 red238 15 3 6 1 des W 5 1 gold239 15 3 6 1 des W 5 1 black240 15 3 6 1 des W 5 1 rightH241 15 3 6 1 obj W 5 1 sweater242 15 3 6 1 obj W 5 1 paper243 15 3 6 1 obj W 5 1 desk244 15 3 6 1 obj W 1 1 frame245 3 3 7 1 act G 4 1 sealE246 3 3 7 1 act G 5 1 foldL247 3 3 7 1 obj G 2 1 pen248 13 3 7 1 act SG 5 1 sendL249 13 3 7 1 act G 5 1 insertL250 13 3 7 1 act G 4 1 putEdown251 13 3 7 1 des G 5 1 inE252 13 3 7 1 obj G 4 1 envelope253 13 3 7 1 
obj G 5 1 letter254 16 3 7 1 act G 5 1 lickE255 16 3 7 1 act G 5 1 think256 16 3 7 1 act G 5 1 writeL
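The rows above can be tallied by communication modality. This is an illustrative sketch only, not part of the thesis: it assumes each row is whitespace-separated in the order printed (row number, JUDGE, SEQ, COND, GIST, TYPE, MODE, CONF, CORR, INFO) and that each letter in MODE marks one output channel (e.g. "GW" is a combined gesture-plus-word unit); the channel letters presumably correspond to the gesture, speech, and writing modalities discussed in the thesis, and the small `rows` sample is copied from the dataset above.

```python
from collections import Counter

# A few rows copied verbatim from the condensed dataset above.
# Columns: row, JUDGE, SEQ, COND, GIST, TYPE, MODE, CONF, CORR, INFO.
rows = """\
1 1 1 1 1 act G 5 1 rinseF
2 1 1 1 1 act GW 5 1 shaveF
11 15 1 1 1 obj S 5 1 man
13 6 1 2 1 des W 5 1 sixtyYO
"""

# Count how many information units involve each channel.
# Assumption: every letter in the MODE field names one channel,
# so a "GW" unit is counted once under G and once under W.
channel_counts = Counter()
for line in rows.strip().splitlines():
    fields = line.split()
    mode = fields[6]          # MODE is the seventh field
    for channel in mode:
        channel_counts[channel] += 1

print(channel_counts)
```

A per-condition or per-type breakdown follows the same pattern, grouping on `fields[3]` (COND) or `fields[5]` (TYPE) before counting.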

