AN EVALUATION OF THREE COMPUTER BASED TEST INTERPRETATION SYSTEMS FOR THE WISC-R

By

MURRAY OTTER

B.A., The University of Manitoba, 1972

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF GRADUATE STUDIES (Department of Educational Psychology & Special Education)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
May, 1986
© Murray Otter, 1986

In presenting this thesis in partial fulfillment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the Head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Educational Psychology and Special Education
The University of British Columbia
2075 Wesbrook Place
Vancouver, Canada V6T 1W5

Abstract

The purpose of this study was to evaluate three computer-based test interpretation systems for the Wechsler Intelligence Scale for Children-Revised (1974). The systems were evaluated first in terms of whether they were considered acceptable according to recently proposed guidelines for computerized interpretation, and second in terms of the degree to which they were considered adequate and useful in clinical practice. A rating scale incorporating both sets of criteria was designed for this study by the author. The results indicate a general failure of the systems evaluated to meet acceptable levels of adequacy on either set of criteria. Possible explanations of the poor ratings of these systems, limitations of the study and implications for further research are raised in the Discussion. The study ends with some conclusions regarding computerized interpretation systems for the WISC-R.

TABLE OF CONTENTS

ABSTRACT
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
DEDICATION

CHAPTER I: INTRODUCTION AND BACKGROUND
  The Problem
  The Research Questions
  Definition of Terms
  Summary

CHAPTER II: REVIEW OF THE LITERATURE
  Introduction
  Characteristic Profiles of the Mentally Handicapped
  Characteristic Profiles of the Reading Disabled
  Characteristic Profiles of "Conduct Disorder" Children
  The Diagnostic Utility of the WISC-R - A Recent Study
  Summary

CHAPTER III: METHODOLOGY
  The Sample
  Rating Scale
  Procedures
  CBTI Systems Evaluated

CHAPTER IV: RESULTS
  Analysis of Data: Part 1
  Analysis of Data: Part 2
  Pearson Correlation for Items

CHAPTER V: DISCUSSION
  Summary and Interpretation of Findings: Part 1
  Summary and Interpretation of Findings: Part 2
  Limitations of the Study
  Suggestions for Further Research
  Conclusions

REFERENCES
APPENDIX A: RATING SCALE
APPENDIX B: PEARSON CORRELATION FOR ITEMS
APPENDIX C: EXAMPLES OF COMPUTERIZED TEST INTERPRETATIONS

LIST OF TABLES

1. Interrater Agreement Coefficients by Raters and Systems
2. Interrater Agreement Coefficients by Item and System
3. Means and Standard Deviations for Systems and Items
4. Summary of the Analysis of Variance of Systems by Items (1-9)
5. Means and Standard Deviations for Systems and Items
6. Summary of the Analysis of Variance of Systems by Items (10-15)

LIST OF FIGURES

1. Mean Rating of Systems on Items (1-9)
2. Mean Rating of Systems on Items (10-15)

ACKNOWLEDGEMENTS

I would like to sincerely thank: My thesis chairperson, Dr. Emily Goetz, for her continued guidance, support and encouragement throughout this process. Her concern and commitment of time for her students will always be remembered. Dr. Buff Oldridge and Dr. David Kendall for their guidance and contributions to this thesis. Dr. Ron Jarman for his statistical advice and suggestions. The Coquitlam School Division and Dr. Joe Zaid, The Children's Hospital, for the use of their departments' computerized test interpretation systems. My friends and fellow students who supported me throughout this project. My parents, who fostered in me a sense of determination and a desire to learn. My raters, who gave up a great deal of time to participate in this project. My wife Brenda and my children Lindsay and Evan, for the patience and love they gave me throughout this project. Without your contributions, this thesis would not have been possible. Thank you.

To Brenda, Lindsay and Evan

CHAPTER I
Introduction and Background

The Wechsler Intelligence Scale for Children-Revised (WISC-R) is currently the most popular and well researched psychometric instrument for the intellectual assessment of individual children (Kaufman, 1979b; Sattler, 1982). Although it was designed solely as a measure of "global intelligence" (Wechsler, 1974, p. 1), the diagnostic utility of the WISC-R, through the inspection of subtest patterning and Verbal-Performance discrepancies, has been pursued in the literature over the years. According to Mueller, Matheson and Short (1983), the WISC-R has a diagnostic appeal for clinicians mainly because of the number and variety of reliable subtests (Kaufman, 1979a), its predictive validity for academic achievement (Hale, 1979), and its relatively stable factor composition across age (Kaufman, 1979b), sex (Reynolds & Gutkin, 1980), intellectual levels (Van Hagen & Kaufman, 1975) and ethnic-racial groupings (Gutkin & Reynolds, 1980). Research on its diagnostic utility has been, and will likely continue to be, popular, even though both major texts used by school psychology training programs (Sattler, 1982; Kaufman, 1979b) stress that profile analysis is strictly a hypothesis-generating procedure: the hypotheses should be treated as tentative, formulated in relation to the child's absolute scaled scores, and not referred to as "verifiable insights" (Sattler, 1982, p. 201). Characteristic profiles have been suggested for such groups as the emotionally disturbed (Dean, 1977; Hamm & Evans, 1978; Bortner & Birch, 1969), the conduct disordered (Hale & Landino, 1981; Paget, 1982), the brain damaged (Bortner & Birch, 1969), the mentally handicapped (Naglieri, 1980; Kaufman & Van Hagen, 1977), the learning disabled (Anderson, Kaufman & Kaufman, 1976; Vance & Singer, 1979; Bortner & Birch, 1969), the delinquent (Groff & Hubble, 1981), and the reading disabled (Rugel, 1974; Huelsman, 1970; Vance, Wallbrown & Blaha, 1978). Many researchers, though, urge caution in, if not abandonment of, the use of characteristic profiles for diagnosing exceptional children, because of research problems and possible errors in diagnosis (Miller, 1980; Miller & Walker, 1981; Hale & Saxe, 1983).
It i s important to realize as wel l , that the categorization of exceptional students, i s in i t s e l f a controversial issue. Before examining characteristic prof i les of specif ic groups of exceptional children in chapter two, I would l ike to review two commonly used WISC-R interpretive systems as well as research into subtest patterning. F i r s t I w i l l examine Satt ler 's (1982) and Kaufman's (1979a) methods of WISC-R interpretation. I w i l l then br i e f l y review subtest patterning systems such as Bannatyne's (1968, 1974) recategorization schemes, f i e l d independent/dependent styles, f l u i d vs. crysta l l ized intel l igence, cerebral special ization and cognitive processing research that are commonly used to interpret the WISC-R. Kaufman (1979a) and Sattler (1982) advocate i n i t i a l s t a t i s t i c a l evaluation of prof i l es . Whereas Sattler provides precise differences required for each subtest (Table C-7 p. 568), Kaufman advocates the use of plus or minus three scaled points from the mean of the scale to determine significant strengths or weaknesses. Both authors acknowledge the merits of factor analytic research and incorporate 3 th is research into interpretation of the WISC-R. It appears however that their basic approaches to WISC-R interpretation are quite dif ferent . Whereas Sattler presents a modified version of Rabin & McKinney's (1972) "Successive Level" approach, which involves the examination of the F u l l Scale I . Q . , Verbal and Performance I . Q . ' s , intersubtest scatter, intrasubtest scatter and qualitative analysis in that order; Kaufman emphasizes the i n i t i a l inspection of Verbal Comprehension-Perceptual Organization factors. Only i f s t a t i s t i c a l analysis leads us to reject the factor analytic dichotomy does Kaufman suggest subtest analysis. Both authors draw attention to the d i s t rac tab i l i ty factor (involving Arithmetic, Digi t Span, Coding, Information) indicating that i f there are s t a t i s t i c a l implications for this factor, examiners have to use their c l i n i c a l expertise and observations to interpret the meaning for individual c l i ents . Kaufman encourages using subtests that have common variance in generating hypotheses, eg. Verbal Concept Formation which i s made up of the S imi lar i t ies and Vocabulary subtests. He cautions however that unique subtest interpretations should only be used, providing they have ample spec i f i c i ty , once shared or common hypotheses are systematically rejected. Bannatyne's category system, an approach used by Smith, Coleman, Dokecki and Davis (1977), indicates that reading and learning disabled children have strong Spatial A b i l i t y (Picture Completion, Object Assembly, Block Design), medium Verbal Conceptualization A b i l i t y (Similari t ies , Vocabulary, Comprehension) and weak Sequencing A b i l i t y (Arithmetic, Dig i t Span, Coding), and a l imited fund of Acquired Knowledge (Information, Arithmetic, Vocabulary). Retarded youngsters 4 in contrast have been shown to have no de f i c i t in Sequencing A b i l i t y but a strength in Spatial A b i l i t y that i s offset by an Acquired Knowledge weakness (Rugel, 1974). Other models for WISC-R interpretation such as F ie ld Independent/Dependent styles (Keogh & H a l l , 1974), have implications for the mentally retarded, reading disabled and learning disabled groups (Kaufman, 1979b). A review by Kaufman (1979b) of numerous studies using Horn & Cat te l l ' s (1966) model of f lu id vs. 
crysta l l ized intelligence indicates that diverse groups of children with school related disorders tended to have a Performance > Verbal pattern on the WISC-R. Kaufman (1979b) cautions however, that numerous alternative explanations can account for this pattern. Various processing models have been used as well for WISC-R interpretation. Cerebral special ization researchers (Bogen, 1969; Gazzaniga, 1975; Nebes, 1974; Ornstein, 1978) have distinguished between le f t hemisphere (analytical , log ica l & sequential processing) and right hemispheric (global, h o l i s t i c & non-verbal processing). A similar dis t inct ion between successive and simultaneous processing (Das, Kirby & Jarman, 1975) has been described. Kaufman (1979b) comments that regardless of the model employed i t seems a l l are potential explanations depending on the specif ic individual being assessed. To determine which approach best explains the data, eg. the factor analysis, Bannatyne's regrouping, the f i e l d independent cognitive style , the two modes of processing st imuli e t c . , Kaufman advocates that group prof i les are important but they do not t e l l us about specif ic individuals within the group. 5 We need to approach each individual's prof i le as a specif ic interpretive challenge, to be understood in the context of that ch i ld ' s particular cul tural background and test behaviours, (p. 19) Recently, computerized systems for intell igence test interpretation have become available. These have aroused concern and cr i t i c i sm (Thomas, 1984; Funk, 1984; Mitche l l , 1984; Bush, 1984; Altemose & Williamson, 1981; Matarazzo, 1983). Mitchel l (1984) states, Computer-Based Test Interpretation (CBTI) presents the f i e l d of psychology with i t s most serious and consequential challenge of the next decade, how we react to that challenge w i l l test our true mettle as professionals, and have a major impact on our c r e d i b i l i t y with the publ ic . In my opinion the stakes are about as high as they can be (p. 1). Many of the concerns centre on the accuracy of computerized interpretations, questionable r e l i a b i l i t y and v a l i d i t y , ethical issues, the concern that CBTI's have a false impression of authority and i n f a l l a b i l i t y , as well as on the inab i l i t y of the systems to consider background or test taking factors. Mitchel l (1984) states that "Computer-based test interpretation systems seem to be prime candidates for being nicely packaged and promising a l l sorts of things." He also comments that "Evidence to confirm or contradict the promises are buried in the program i t s e l f which i s seldom available to the test user." Computerized scoring systems for the WISC-R seem to be among the most popular interpretation packages on the market (Thomas,1984). Most WISC-R interpretation systems generate reports from birthdates and subtest scale scores alone. In view of questions raised here about characterist ic prof i l es , prof i le analysis in general, as well as recent concerns about CBTI systems, the WISC-R scoring interpretive 6 systems need to be evaluated careful ly (see Chapter III for description of CBTI systems to be evaluated). To the author's knowledge, only one published study has addressed th is issue. Replogle and Eicke (1985), through the use of a rating scale adapted from Webb, Mi l l er & Fowler (1969) had 35 psychologists rate psychologist's reports versus computer generated reports. 
Reports written by psychologists were based on relevant data provided to them by the authors. The results indicated s ignif icant ly higher ratings for computer generated reports on an item addressing their overal l impression, as well as on items pertaining to verbal-performance discrepancies, accurately addressing weaknesses and relat ive lack of irresponsible interpretation. The authors caution the use of automated reports and suggest further study in th is area. The Problem It i s the purpose of th is study to examine three published computerized interpretive systems of the WISC-R in terms of two sets of c r i t e r i a . In both instances a rating scale w i l l be incorporated to measure to what extent each set of c r i t e r i a i s met. F i r s t each interpretation system w i l l be examined in terms of what i s presently considered acceptable interpretation for computerized interpretation systems. A recent American Psychological Association (A.P.A.) proposal entit led "Guidelines for Computer-Based Tests and Interpretations" (1984) w i l l be used for th is purpose. These guidelines were chosen as they are the f i r s t and only guidelines A . P . A . has developed that pertain solely to Computerized Testing. Guidelines that dealt spec i f ica l ly with computerized 7 "interpretations" were transformed by th is author into a rating scale to allow the guidelines to be measured. To the writer's knowledge this i s the f i r s t time guidelines such as these have been quantified (section A, Appendix). As well as meeting the forementioned c r i t e r i a , i t was also necessary to examine the adequacy as well as the usefulness of the programs thus each system w i l l be examined to deterniine to what extent the computer-based interpretations are adequate and/or congruent with the c l i n i c a l interpretations of the same protocols by experienced c l in i c ians . Pertinent statements were developed for Section B of the rating scale to permit the measurement of these c r i t e r i a . Each system w i l l be examined to determine what, i f any, strengths and/or weaknesses exist between CBTI systems and among rating scale items. The Research Questions In this study the following questions w i l l be addressed in terms of two separate c r i t e r i a : C r i t e r i a 1: Rating Scale Section A 1. To what extent do the computerized test interpretations of each system meet the appropriate requirements of the latest draft of the "Guidelines for Computer-Based Tests and Interpretations" as measured by Section A of the rating scale? 2. To what extent are there signif icant overal l differences between CBTI systems and/or Items as measured by the rating scale, as well as to what extent are there d i f ferent ia l strengths and weaknesses across rating scale Items among CBTI systems? 8 C r i t e r i a 2: Rating Scale Section B 3. To what extent are the computerized test interpretations adequate and/or congruent with interpretations of the same data by experienced c l in ic ians as measured by the rating scale? 4. To what extent are there significant overal l differences between CBTI Systems and/or Items as measured by the rating scale, as well as to what extent are there d i f ferent ia l strengths and weaknesses across rating scale Items among CBTI systems? As noted ear l i er , Verbal-Performance discrepancies, Verbal and Performance scatter discrepancies, as well as subtest patterning are commonly used methods of WISC-R interpretation. 
The six categories of the WISC-R protocols used for the sample were chosen with this in mind. The conduct disorder, mental retardation and specific reading disability categories were chosen because they appear to be common referrals for psycho-educational assessment; if CBTI systems were being used for the WISC-R, protocols of children with these difficulties would often be interpreted by them. The Diagnostic and Statistical Manual of Mental Disorders (DSM-III) (1980) definitions were used for the conduct disorder, mental retardation and specific reading disability categories for two reasons. First, samples used in many studies have been criticized for being too heterogeneous (Miller & Walker, 1981), so I chose to use well defined groups in order not to perpetuate this problem. Second, the DSM-III is an accepted method for the clinical classification of such disorders.

Definition of Terms

The following terms have been defined for this study:

1. Significant Verbal-Performance Discrepancy: a difference of 15 or more points exists between the Verbal and the Performance I.Q. A 15-point discrepancy is significant (p less than or equal to .01) (Sattler, 1982, p. 572).

2. Significant Verbal Scatter Discrepancy: a statistically significant difference exists when a child's scaled score on one Verbal subtest is compared with his/her average scaled score on the Verbal subtests. Sattler provides in Table C-7 the difference required at various levels of significance; the .01 level was used for this study (p. 568).

3. Significant Performance Scatter Discrepancy: a statistically significant difference exists when a child's scaled score on one Performance subtest is compared with his/her average scaled score on the Performance subtests. Sattler (1982) provides in Table C-7 the differences required at various levels of significance; the .01 level was used for this study (p. 568).

The following definitions are taken from the DSM-III (1980).

4. Mental Retardation: the child meets the following diagnostic criteria: A. Significantly subaverage general intellectual functioning: an I.Q. of 70 or below on an individually administered I.Q. test (for infants, since available intelligence tests do not yield numerical values, a clinical judgement of significantly subaverage intellectual functioning). B. Concurrent deficits or impairments in adaptive behavior, taking the person's age into consideration. C. In the case of Mild Mental Retardation, an I.Q. of between 50 and 70 would meet the first requirement (pp. 39-40).

5. Conduct Disorder: the child meets any of the Undersocialized, Aggressive; Undersocialized, Non-Aggressive; Socialized, Aggressive; Socialized, Non-Aggressive; or Atypical Conduct Disorder categories. In a study examining the interrater reliability of DSM-III classifications in children, Werry, Methven, Fitzpatrick, Hamish, and Dixon (1983) found no utility in subdividing the Conduct Disorder categories due to poor reliability; adequate reliability was found only when differentiating between the Oppositional and Conduct Disorder categories.

6. Developmental Reading Disorder: the following criteria are met: A.
Performance on standardized, individual ly administered tests of reading s k i l l i s s ignif icant ly below the expected leve l , given the individual's schooling, chronological age, and mental age (as determined by an individually administered IQ tes t ) . B. The ch i ld ' s performance on tasks requiring reading s k i l l s i s s ignif icant ly below his or her inte l lectual capacity (pp. 93-94). Summary There has been a concern in the l i terature regarding the diagnostic u t i l i t y of the WISC-R. With the recent ava i lab i l i t y of computerized test interpretation systems i t seems imperative to examine the merits of such systems. Three CBTI systems that interpret the WISC-R have been chosen for this purpose. The purpose of the remaining chapters are as follows. Chapter 2 11 is concerned with a review of the literature regarding the diagnostic ut i l i ty of the WISC-R for children who meet the mild mental retardation, conduct disorder and developmental reading disorder categories. The Methodology chapter includes a description of the sample, instruments and procedures with Chapter 4 containing the results and discussion. Chapter 5 includes the summary and interpretation of findings, implications for further research and conclusions. 12 CHAPTER II REVIEW OF THE LITERATURE Introduction Research regarding characterist ic prof i les of exceptional chi ldren, through the inspection of the WISC-R, i s quite evident in the l i terature . A major cr i t i c i sm has been the use of heterogeneous groupings such as "learning d i sab i l i t i e s ." Mi l l er and Walker (1981) commenting on this problem state, "It only confuses the issue to generalize from MBD to another population, such as reading d i sab i l i t i e s ." I chose therefore to concentrate on research that used samples that were well enough defined so as to meet the following categories: mild mental retardation, developmental reading disorders and conduct disorders. It i s the purpose of this chapter to examine the l i terature concerning characterist ic prof i les of children who meet the mild mental retardation, developmental reading disorder and conduct disorder categories (see pp.9-10). Characteristic prof i les based on the WISC-R as well as the WISC have been examined. When drawing conclusions however, more emphasis has been placed on the WISC-R research. A study by Mueller, Mancini & Short (1984) concerning the diagnostic efficiency of the WISC-R i s then discussed, followed by a summary of the chapter. Characteristic Profi les for the Mentally Handicapped Research regarding characterist ic prof i les for the mentally handicapped has been somewhat limited since the revised edit ion of the WISC (1974). Research investigations have generally taken 2 courses, 13 those investigating characterist ic verbal-performance patterns and those investigating patterns through the use of Bannatyne's (1974) recategorization scheme. Vance et a l . (1978) attempted to analyze the cognitive a b i l i t i e s of mentally handicapped children as measured by the WISC-R. Results indicated that the relative strengths or weaknesses of their sample were not restricted to either the Verbal or Performance area. Differences among the subtests were as great within the Verbal and Performance area as between them. It was concluded that individual subjects showed no pattern which could be used for diagnostic purposes. In contrast, s imilar studies using the WISC, have found characterist ic verbal-performance patterns. 
Witkin et a l (1966) found that their group of mentally handicapped boys performed extremely poorly on subtests loading a verbal comprehension factor and re lat ive ly much better on subtests loading an analytic factor. Similar results were obtained by Keogh et a l . (1973) and Belmont, Birch & Belmont (1967). Belmont et a l . suggested that a central de f i c i t in inte l lectual functioning of educable mentally handicapped children (EMH) was related to a de f i c i t in their verbal s k i l l s . Kaufman (1979b) in discussing his three factor approach (Perceptual Organization, Verbal Comprehension, Third Factor) commented that i t has been unable to explain the characterist ic prof i les of mentally handicapped youngsters, who generally perform adequately on Digi t Span and poorly on Vocabulary. Researchers have thus looked for alternative groupings to explain the prof i les that quite often occurred. Bannatyne's (1974) recategorization system has 14 been the most popular alternative which i s evident by i t s relat ive prominence in the l i t erature . C l a r i z i o and Bernard (1981) attempted to determine i f the three factor approach proposed by Bannatyne (1968) would be effective in the d i f ferent ia l diagnosis of groups of exceptional chi ldren, of which the mentally handicapped was one. Results indicated that a three factor WISC-R prof i le was not effective in the d i f ferent ia l diagnosis of mentally handicapped children from other exceptional groups. These results were inconsistent with Si lverste in's (1968) findings that EMH children performed re lat ive ly well on Picture Completion, Object Assembly and Block Design subtests but poorly on Information, Arithmetic and Vocabulary, the latter being the not yet implemented Acquired Knowledge category. Similar patterns of strengths and weaknesses were observed by Kaufman and Van Hagen (1977) on the WISC-R. Nagl ier i ' s (1980) study supports the concept of d is t inct patterns of a b i l i t i e s for EMH children, although the lack of control group as well as a lack of procedural and s t a t i s t i c a l information lessened i t s c r e d i b i l i t y . Nagl ieri reported a subtest pattern of OA>Cd>PC>C>V>S>PA>BD>I>A, which i s inconsistent with other WISC-R interpretative patterns of mentally handicapped children. Webster and LaFayette (1980) examined Bannatyne's revised recategorization system (Spatial, Conceptual, Sequential & Acquired Knowledge) to discriminate among three groups of exceptional chi ldren. The results of their analysis indicated that 100% of the EMH students were predicted to be labelled learning disabled on the basis of the recategorization, even though the four factor model was developed with the intention of being able to distinguish between these two groups. 15 The authors suggest that Bannatyne's recategorization scheme may have some usefulness in differentiating normal learners from handicapped learners but has almost no value in distinguishing specif ic subgroups of handicapped students. Rugel (1974) on the other hand comments on d i s t inct patterns shown by mentally handicapped youngsters indicating they have no de f i c i t in Sequencing a b i l i t y but a strength in Spatial a b i l i t y that i s offset by an Acquired Knowledge weakness. Henry & Wittman (1981) in a similar study concurred with Clar i z io et a l . and Webster et a l . 
, their results indicating that Bannatyne's system was of l i t t l e value in differentiating between EMH and other exceptional groups and i f used could contribute to misdiagnosis. S imi l i ar ly , Schmidt & Saklofske (1983) in examining the diagnostic u t i l i t y of the WISC-R, found no significant patterns of Verbal-Performance discrepancies, subtest scatter or recategorized subtest patterns as proposed by Bannatyne (1974). Of the l i terature reviewed, i t appears that studies examining the diagnostic u t i l i t y of the WISC have concluded that there are characterist ic prof i les of mentally handicapped chi ldren, whether i t be verbal-performance or various subtest patterning (Witkin et a l . , 1966; Keogh et a l . , 1973; Belmont et a l . , 1967; S i lverste in , 1968; Rugel, 1974). In contrast, of the studies examining the diagnostic u t i l i t y for the mentally handicapped, using the WISC-R, the majority (Vance et a l . , 1978; C l a r i z i o et a l . , 1981; Webster et a l . , 1980; Schmidt & Saklofske, 1983) ) have found no patterns which could be used for diagnostic purposes. It appears at best, the WISC-R may be able to differentiate normal from handicapped learners but does not appear to be of value in distinguishing specif ic subgroups 16 of exceptional chi ldren. Characteristic Profi les of the Reading Disabled Research into the u t i l i t y of the WISC-R in diagnosing reading d i s a b i l i t i e s has a long but sparse tradi t ion , as many researchers have chosen to investigate a much broader category of "learning d i sab i l i ty" . Belmont and Birch (1966) concluded that retarded readers, when matched with normal readers for F u l l Scale I . Q . , were characterized by better functioning on the subtests of the Performance Scale and poorer functioning on the Verbal Scale as opposed to Kallos et a l . (1961) & Altus (1956) who found no difference. Characteristic low scores in Arithmetic, Information and Coding were prevalent in the reading disabled l i terature as wel l , during this time (Burks & Bruce, 1955; Al tus , 1956; Sheldon & Cranton, 1959; Dockrel l , 1960; Kallos et a l , 1961). A few years la ter , Bannatyne (1968) suggested that a better understanding of the WISC scores of "genetic dyslexics" (p. 246) could be obtained by grouping the subtests into Spat ia l , Conceptual and Sequential factors rather than Verbal and Performance scales. Bannatyne (1968, 1974) reported that genetic dyslexics scored highest on the Spatial factor, followed by Conceptual then Sequential. Twenty-five studies which reported WISC-subtest scores of disabled readers were reviewed by Rugel (1974). Rugel found the same prof i l e of a b i l i t i e s that Bannatyne had found for dyslexics, with reading disabled populations scoring consistently lower as a group on Arithmetic, Coding and Digi t Span subtests. 17 Once the WISC-R (1976) began to gain popularity, many studies were conducted which continued to support Rugel"s findings. In a sample of reading disabled children, Smith, Coleman, Dockecki & Davis (1977) reported that on the WISC-R, disabled readers showed a pattern of low scores on /Arithmetic, Coding and Digi t Span subtests. Similar results were obtained by Vance, Gaynor & Coleman (1976) who associated poor readers with low scores on the Information, Arithmetic, Coding and sometimes Digi t Span subtests. Johnson & Wallersheim (1977) examined 21 studies involving the performance of reading disabled children on the WISC. 
Of the 21 studies reviewed 14 noted low scores on Information, 18 showed low scores on Arithmetic, 11 reported low scores on Digit Span and 16 studies noted low Coding scores. Johnson et a l . concluded that disabled readers tend to be deficient in these verbal areas requiring the retention of knowledge impressed formally (in school) and in the immediate retention of auditori ly received stimulation. It was observed that their strengths seemed to be in that aspect of verbal comprehension requiring the use of pract ica l judgement. Vance, Wallbrown & Blaha (1978) isolated 5 WISC-R prof i les (Di s trac t ib i l i ty , Perceptual Organization, Language D i s a b i l i t y -Automatic, Language Disab i l i ty - Pervasive and Behavioural Comprehension & Coding) which they used to define a syndrome or cluster of related behaviours. Interpretations were based on the nature of the prof i les along with results of other assessment data (ie. case histories , observation by reading c l i n i c i a n as well as teacher). They concluded that the WISC-R was a useful instrument for describing the a b i l i t y patterns of many reading disabled chi ldren. 18 Wallbrown, Vance & Blaha (1979) followed up by generating remedial strategies based on those (1978) findings. Mi l l e r (1980) c r i t i c i z e d Vance et a l . for throwing out 24 subjects (19%) who did not show "an appreciable amount of var iabi l i ty" in their own prof i l e s , as well as 11 students who did not f i t into one of their syndromes. Mi l l er commented that the five syndromes and remedial strategies were well supported but fa i led to understand the l ink between them and the WISC-R. In their search for characterist ic prof i les of disabled readers, many researchers have cautioned against generating remedial hypotheses from the WISC-R alone. Rykman (1981) found that the amount of scatter on the Verbal, Performance and F u l l Scale of his reading disabled sample, s ignif icant ly exceeded that found with the standardization sample. He f e l t that to characterize a ch i ld based on one prof i l e would be very misleading. Stevenson (1980) found that a functional or process analysis of the WISC-R may be of real value in d i f ferent ia l diagnosis. She commented on the importance of c l i n c i a l observation of children in their approach to tasks, (what strategies are used, how rapidly decisions are made etc.) in providing direct ion for individual remedial instruction based on factor patterns. Wallbrown et a l . (1979) emphasized that other information besides the WISC-R should be incorporated prior to generating remedial hypotheses. They stressed that the WISC-R prof i le never constitutes an adequate basis for generating a remedial strategy, but i f used with other information can provide a valuable source of information about a ch i ld ' s a b i l i t y pattern. Reynold's (1981) f e l t that for any individual c h i l d , the Bannatyne 19 recategorization may not be the most appropriate one. Reynolds stated that the psychologist's primary task i s to locate the most meaningful interpretation for the ch i ld in question, suggesting that interpretation of the WISC-R should always begin with the three major scores (ie. Verbal, Performance, F u l l Scale) due to their strong empirical factor-analytic support. Many researchers have found characterist ic group prof i les for reading disabled children but have found these prof i les non-apparent in individual scores. 
Rugel (1974) comments that a definite pattern of strengths or weaknesses may not always emerge, but i f i t does, w i l l hopefully provide useful diagnostic information in terms of planning and remediation. Huelsman (1970) concluded, While groups of disabled readers tend to show high Performance I .Q . ' s and low scores in Information, Arithmetic and Coding, individual disabled readers generally show no items of this pattern and seldom i f ever, show the complete pattern, (p. 549) Huelsman commented that research should be directed toward defining the possible significance of differences in WISC scores, rather than toward pattern identi f icat ion which he f e l t was re lat ive ly useless. Hale's (1979) investigation indicated that underachieving versus adequate readers could be s t a t i s t i c a l l y separated by subtest differences on the WISC-R, i t s use in individual diagnosis however, was not supported. S imi l iar ly , Decker & Corley (1984) found Bannatyne's prof i l e to be s ignif icant ly more common for their reading disabled group, but appeared to have l i t t l e diagnostic v a l i d i t y for individual chi ldren. Badian's (1981) investigation confirmed ear l ier reports that many disabled readers exhibit a Spatial> Conceptual Sequential recategorized factor prof i le when compared to adequate readers. Although the difference between the disabled 20 and adequate readers was s t a t i s t i c a l l y significant Badian cautioned using the prof i l e to predict or to c lass i fy , as 60% of the reading disabled sample did not show this pro f i l e . It was also found that due to the age related changes in prof i les of poor readers, using the WISC-R to predict later reading achievement, was not suitable. Marling, Kaufman & Tarver (1981) reviewed 245 studies that investigated the performance of disabled learners on the WISC or WISC-R and concluded that as a group characterist ic prof i les may be exhibited but few individual learning disabled children conformed to this pattern. They concluded that WISC-R prof i les may not be useful for d i f ferent ia l diagnosis of learning disabled students. Moore & Wielan (1981) measured the indexes of WISC-R test score scatter for reading referred children and compared these to the standardization sample. Although s t a t i s t i c a l l y s ignif icant differences were found between the reading referred and standardization sample, the magnitude of these differences overa l l , was quite small. Moore et a l . provided evidence, that as a group, reading referred children produced about the same amount of WISC-R scatter as normal children. Whitehouse (1983) examined 25 normal and 25 dyslexic readers with particular emphasis on the group's performance on the coding subtest. Writing and copying speed as well as recognition memory for the number/symbol associates of the coding subtests were assessed. Dyslexics performed s ignif icant ly more poorly than normal readers on the Coding subtest and writing speed task but showed no evidence of impaired memory for the number/symbol associates. Whitehouse suggested that for those who administer the WISC-R to children with 21 suspected or diagnosed reading d i s a b i l i t y , i t may be helpful to supplement the experimental memory and writing speed tasks to help provide an accurate understanding of the low Coding score one i s l i k e l y to encounter in such a c h i l d . 
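Because the studies reviewed in this section repeatedly describe their findings in terms of Bannatyne's recategorized scores, a brief sketch of how those scores are derived may help. The following Python fragment is only an illustration: it assumes scaled scores keyed by conventional subtest abbreviations (the dictionary and function names are the writer's own, not part of any published scoring system) and uses the category groupings given in Chapter I.

    # Minimal sketch of Bannatyne's (1974) recategorization, assuming WISC-R
    # subtest scaled scores keyed by conventional abbreviations.
    BANNATYNE_CATEGORIES = {
        "Spatial": ["PC", "OA", "BD"],            # Picture Completion, Object Assembly, Block Design
        "Conceptual": ["SIM", "VOC", "COMP"],     # Similarities, Vocabulary, Comprehension
        "Sequential": ["ARI", "DS", "COD"],       # Arithmetic, Digit Span, Coding
        "Acquired Knowledge": ["INF", "ARI", "VOC"],  # Information, Arithmetic, Vocabulary
    }

    def bannatyne_scores(scaled_scores):
        """Average the scaled scores within each Bannatyne category."""
        return {
            category: sum(scaled_scores[s] for s in subtests) / len(subtests)
            for category, subtests in BANNATYNE_CATEGORIES.items()
        }

    def shows_reading_disabled_pattern(scores):
        """Check the Spatial > Conceptual > Sequential ordering reported for
        group profiles of disabled readers (Bannatyne, 1968; Rugel, 1974)."""
        return scores["Spatial"] > scores["Conceptual"] > scores["Sequential"]

    # Example protocol (hypothetical scores):
    child = {"INF": 7, "SIM": 10, "ARI": 6, "VOC": 9, "COMP": 10,
             "DS": 7, "PC": 12, "PA": 10, "BD": 13, "OA": 12, "COD": 6}
    categories = bannatyne_scores(child)
    print(categories)
    print("Spatial > Conceptual > Sequential:", shows_reading_disabled_pattern(categories))

As the studies above stress, the Spatial > Conceptual > Sequential ordering describes group averages and is rarely reproduced by an individual child's protocol, so a check of this kind is descriptive rather than diagnostic.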
In summary, i t appears that group prof i les of reading disabled children on the WISC or WISC-R, most commonly exhibit a Spatial> Conceptual Sequential pattern (Bannatyne, 1968). These characterist ic low scores on the Arithmetic, Dig i t Span and Coding subtests however, seldom occur in an individual basis, therefore individual diagnosis based on group patterns does not appear to be warranted. The use of other data ( c l i n i c a l observations, other test measures) to supplement the WISC-R i s frequently advised as opposed to using the WISC-R alone in diagnosing a b i l i t y patterns in "reading disabled" chi ldren. Care must be taken to ensure the use of the most meaningful interpretation for a particular c h i l d , whether i t be the Verbal, Performance & F u l l Scale I . Q . ' s , Bannatyne's recategorization scheme or some other system that provides remedial hypotheses for the c h i l d . Characteristic Profi les of "Conduct Disordered" Children This section reviews the l i terature concerning WISC-R prof i les of "conduct disordered" children. Researchers interested in WISC-R interpretation have sought to isolate various characterist ic prof i les of "conduct disordered" children. Bannatyne's recategorization system, Verbal - Performance discrepancies, as well as various subtest patterns have been used in this quest. Several studies have been done that use the term "emotional disturbance" when describing their sample. Only those studies where i t could be determined that their 22 sample would meet the def ini t ion "conduct disorder",(see p.9-10) were included in this review. In these studies the terms "conduct disorder" and "emotional disturbance" are used to mean one and the same thing. Webster and Lafayette (1980) examined the u t i l i t y of Bannatyne's recategorization system in discriminating between 294 learning disabled (ID), 36 educably mentally retarded (EMR) and 71 emotionally disturbed (ED) children. The results of their analysis revealed that 100% of the ED students were predicted to be labelled LD on the basis of their recategorization. Henry & Wittman's (1981) results were quite s imilar , indicating that Bannatyne*s pattern was of l i t t l e value in differentiating between exceptional groups in which emotionally disturbed children were included. Dean (1977) studied Caucasian males referred for evaluation because of "conduct disorders". Dean concluded that as a group, these students showed lower Verbal than Performance functioning on the WISC-R, as well as greater scatter among Verbal and Performance subtest scores. Dean (1978) t r ied to isolate subtest patterns of the WISC-R that would differentiate between the performance of emotionally disturbed and learning disabled children. He found that children diagnosed as learning disabled scored predict ively lower on the Block Design, Picture Arrangement and Object Assembly subtests and higher on Vocabulary compared to their emotionally disturbed counterparts. This finding suggested a disturbance on the part of the learning disabled children in perceptual integration whereas children with behaviour problems displayed more of a verbal d e f i c i t . Dean's (1978) results were c r i t i c i z e d however, by Coolidge (1983) due to what he f e l t was an innapropriate data analysis for the study. 23 In contrast, Morris, Evans & Pearson (1978), focused primarily on univariate subtest comparisons of the WISC-R. 
Their sample of conduct disordered children displayed s ignif icant ly lower scaled score means across a l l 10 subtests compared to the standardization sample and exhibited consistently developed verbal and non-verbal a b i l i t i e s . Paget's (1982) findings revealed relative strengths in perceptual organization s k i l l s and weaknesses in s k i l l s that involve sequencing, memory and attention. C l a r i z i o & Veres (1983) examined the diagnostic u t i l i t y of Verbal-Performance discrepancies based on Paget's (1982) findings and high Similarit ies- low Information patterns reported by Dean (1977) in the diagnosing of emotional impairment. C lar i z io et a l . ' s study was unique in that i t employed a "normal" control group. This allowed for comparison of patterns in the emotionally disturbed sample with prevalence in the normal sample. The omission of control groups in many of the studies reported causes concern, especially in l ight of Kaufman's (1975) report of frequent large Verbal-Performance discrepancies in the WISC-R standardization sample. C l a r i z i o et a l . (1983) found, using a discriminant function analysis, that the rule of a 12 point difference correct ly identif ied only 63% of the children and the Performance at least 12 points greater than Verbal rule was successful only 66% of the time in identifying emotionally disturbed chi ldren. They concluded that the Verbal-Performance discrepancy would not lead to a useful decision rule regarding the diagnosis of emotional impairment. Similar findings were found regarding the suggested high Simi lar i t ies - low Information pattern. Using a discrimant function analysis they 24 concluded this rule was not useful in diagnosing emotional impairment. Hamm & Evans (1978) attempted to find characterist ic prof i les for a group of emotionally disturbed children by grouping students according to a paradigm offered by Witkin et al.(1962). Using their factors of Verbal Comprehension (Vocabulary, Information, Comprehension), Attention/Concentration (Arithmetic, Dig i t Span, Coding) and Analytic F i e ld (Object Assembly, Block Design, Picture Arrangement) the authors concluded that "Systematic patterns of performance did not appear which distinguished emotionally disturbed children from normal children" (p.190). Their findings regarding attention de f i c i t s , i t should be noted, were inconsistent with Paget's (1982) finds of attention de f i c i t s . Hale & Landino's (1981) study indicated that behaviourally disturbed and normal children cannot be s t a t i s t i c a l l y separated by subtest differences on the WISC-R. Even using the most optimistic analysis they f e l t c l in ic ians would err with one of every three children i f they based their placement decisions on the WISC-R subtest scores. I t appears then, that there i s quite a controversy in the l i terature regarding characterist ic prof i les for emotionally disturbed chi ldren. Despite the lack of attempts to distinguish between types of emotionally disturbed children as well as the lack of control groups in a l l but one study (Clarizio et al.,1983 ) , some implications can be drawn. Bannatyne's pattern does not appear to be useful in the diagnosis of emotional impairment (Webster et a l . , 1980; Henry et a l . , 1981). 
Only two studies reviewed indicated the presence of diagnostic patterns (Dean, 1977; Paget, 1982) both indicating low verbal functioning with the latter indicating weaknesses in s k i l l s involving 25 sequencing, memory and attention. Of the remaining studies, those searching for Verbal-Performance discrepancies (Morris et a l . , 1978; C l a r i z i o et a l . , 1983) as well as subtest patterning (Hamm et a l . , 1978; Hale et a l . , 1981) fa i led to find any signif icant patterns or rules that would distinguish emotionally impaired children from "normal" or other exceptional groups of chi ldren. The results in fact were quite often in direct opposition to each other. Webster et a l . comment that, Interpretation of performance on norm-referenced testing must be supplemented by analysis of the student's actual behaviour and learning styles and strategies in r e a l - l i f e settings. I t i s only through trained c l i n i c a l behavioural observation, coupled with careful scrutiny and analysis of norm-referenced test data that the most appropriate and effective educational interventions may be generated and implemented, (p. 240) The l i terature would seem to indicate then, that c lassifying children based on their performance on the WISC-R alone might very often lead to the misdiagnosis of exceptional students (Hale et a l . , 1981; Hirshoren & Kavale, 1976; Henry et a l . 1981). The Diagnostic U t i l i t y of the WISC-R - A Recent Study A major f a i l i n g within the research, as previously mentioned, i s the questionable a b i l i t y of the WISC-R, through d i f ferent ia l patterns of performance, in distinguishing exceptional individuals from normal groups of chi ldren. (Mueller et a l . , 1984). Mueller et a l . (1984) analyzed the d i f ferent ia l diagnostic efficiency of WISC-R by applying Ke l ly ' s (1923) method of estimating the proportion of differences in excess of chance to a l l possible WISC-R subtest scaled and paired comparisons as well as Bannatyne's (1974) and Kaufman's (1975) cluster and factor comparisons 26 respectively. This procedure was applied to the WISC-R standardization data (Wechsler, 1974). Meuller et a l . (1984) found that basing diagnostic statements upon score differences between individual WISC-R subtests was not warranted. /All of the 66 possible subtest comparisons (Table 1, p . 304-305) fa i led to meet minimum requirements for diagnostic efficiency at at least one age l eve l . Comparing the Verbal and Performance scales however, appeared diagnostically tenable at a l l age levels . The diagnostic efficiency of the two scales improved even more so when the WISC-R was divided into three or four scales based on Kaufman's or Bannatyne's regroupings. Meuller et al.(1984) found the Conceptual, Spatial & Sequential c lusters , those resembling Kaufman's WISC-R factor groupings, were diagnostically ef f ic ient at a l l age levels , while the Acquired Knowledge category lacked diagnostic value when compared with the Conceptual or Sequential c lusters . Thus the four factor Bannatyne (1974) recategorization would appear to add l i t t l e to the Kaufman three factor solution other than permitting a comparison between the Spatial and Acquired Knowledge clusters . Summary In summary, i t appears that the majority of researchers interested in diagnostic WISC-R patterns for mentally handicapped as well as conduct disorder chi ldren, have been unable to demonstrate any signif icant characterist ic pro f i l e . 
Reading disabled children as a group, tend to exhibit a Spatial>Conceptual>Sequential pattern. This pattern however, seldom occurs on an individual basis . Meuller et a l . (1984) concluded that Kaufman's (1975) three factor approach and 27 Bannatyne's (1968) Conceptual, Spatial and Sequential clusters were more diagnostically eff ic ient when compared to the subtest or scale score comparisons. It appears that supplementing the WISC-R with other data and c l i n i c a l observations i s absolutely necessary. Caution i s advised regarding the use of the WISC-R in i so lat ion, as i t may often lead to a misdiagnosis of the c h i l d . 28 CHAPTER III METHCDOLOGY The purpose of the present study i s to evaluate three software systems that purport to interpret the WISC-R in the absence of other diagnostic data. This chapter includes a description of the sample and sampling procedures used followed by a description of the rating scale and the procedures for the study. Next the CBTI systems to be evaluated are described with the remaining chapter devoted to a description of the data analysis. The Sample Protocols The nine WISC-R protocols chosen for this study were selected from assessments done on children at the Education C l i n i c at the University of B r i t i s h Columbia, where the WISC-R was administered as part of the assessment process. The c l i n i c which i s part of the learning f a c i l i t y for graduate students, accepts referrals from private and public inst itutions throughout B r i t i s h Columbia, where psycho-educational assessments ta i lored to each individual re ferra l are provided. The sample protocols were selected according to the following c r i t e r i a : (see Definit ion of Terms, Ch. 1) 1. One protocol had a s ignif icant Verbal-Performance Discrepancy. 2. One protocol had a s ignif icant Verbal Scatter Discrepancy. 3. One protocol had a significant Performance Scatter Discrepancy. 4. Two protocols were selected of children who met the diagnostic c r i t e r i a of "mild mental retardation". 29 5. Two protocols were selected of children who met the diagnostic c r i t e r i a of a "conduct disorder". 6. Two protocols were selected of children who met the diagnostic c r i t e r i a of a "developmental reading disorder". It i s emphasized that these protocols were chosen from completed psycho-educational asessments and not from the WISC-R protocols alone. Protocols that met the c r i t e r i a of the six categories were chosen, as mentioned ear l i er , from assessments completed at the Education C l i n i c . Sample protocols for c r i t e r i a 1-3 were chosen given they did not meet c r i t e r i a 4-6. Protocols were chosen for c r i t e r i a 4-6 regardless of whether they met c r i t e r i a l , 2 , o r 3 . M l extraneous information (ie observations, calculations etc.) were erased and the protocols each given a number. Rating Scale The rating scale used in this study was developed to f a c i l i t a t e the evaluation of three CBTI systems, spec i f i ca l ly to address the research questions presented in Chapter I . A document entit led "Guidelines for Computer-Based Tests and Interpretation " (1984) was chosen to form the basis of Section A (statements 1-9) of the rating scale spec i f ica l ly to address Research Question 1. These guidelines were chosen as they are the most recent, comprehensive and appropriate guidelines available, as well as being sanctioned by the American Psychological Association (APA). 
Mitchel l (1984) c i t ing other documents of a s imi l iar nature commented, "I feel more secure about a document entit led "Guidelines for Computer-Based Tests and Interpretation." These guidelines were developed under the joint 30 auspices of the APA Committee on Professional Standards (COPS) and the Committee on Psychological Tests and Assessment (CPTA). Guidelines from this document that dealt spec i f ica l ly with computerized test interpretation were incorporated into the rating scale. The remainder of the rating scale (Section B) i s devoted to addressing Research Questions 3 and 4. As well as evaluating to what extent the CBTI systems met approved guidelines, i t also seemed necessary to evaluate to what extent the computerized interpretations were adequate and/or congruent with the interpretations of experienced psychologists. Statements were thus constructed to allow an evaluation of the following areas, namely the technical and s t a t i s t i c a l calculations (computations, factor analyses e t c . ) , interpretations as well as recommendations for the c h i l d . Statements were incorporated as well that are concerned with the usefulness of the report, i t s a b i l i t y to save a "user" time, as well as whether the WISC-R computer report i s suff ic ient ly adequate in terms of being distributable to professionals and other involved persons ie.parents, guardians. The guidelines used for Section A as well as statements in Section B were converted into a Likert Scale. The raters were asked to rate to what extent they "Agree" or "Disagree" with each of the 15 statements in the rating scale. Prior to the actual rating of the CBTI systems, the rating scale was pi loted with each of the three raters. This served two purposes. F i r s t , i t served to reduce any semantic ambiguities present. Second, due to the fact i t appears to be the f i r s t attempt to quantify guidelines such as these, feedback from professionals was seen as useful and allowed for modifications prior to the actual rating taking 31 place. For purposes of analysis, ratings were assigned numerical values ranging from "1" for "Strongly Disagree" to "5" for "Strongly Agree". Procedures Three Computer-Based Interpretation Systems for the WISC-R were obtained. B r i t i s h Columbia. Three psychologists, registered in the Province of B r i t i s h Columbia, rated the CBTI systems using the rating scale designed for this study. Scaled and I .Q. scores, as well as test and b i r th dates from each protocol were entered into each of the three CBTI systems. Twenty-seven computerized interpretations were generated, three for each of the nine protocols. One complete evaluation package for each corresponding protocol, was given to each of the raters, along with three manuals, one for each of the CBTI systems. Each package contained one protocol, three corresponding CBTI interpretations, (one from each system), as well as three rating scales with instructions, one for each of the computerized interpretation systems. The computer-based interpretations were random ordered in each package to avoid rater bias. A l l packages, rating scales, protocols and computerized interpretations were labelled to improve organization but most importantly to avoid any confusion or mix-up of information. The packages were given to the raters and returned within six weeks. Rating time averaged 16-20 hours per rater. 32 CBTI Systems Evaluated It i s the purpose of this thesis to evaluate three CBTI systems for WISC-R interpretation. 
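All three packages work from minimal input: the child's birth date, the test date, and the scaled and I.Q. scores recorded on the protocol. As a rough illustration of the kind of logic such systems automate (a hypothetical Python sketch, not the code of any of the three products reviewed), the fragment below computes the age at testing, flags a significant Verbal-Performance discrepancy using the 15-point criterion defined in Chapter I, and classifies subtests with cutoffs like those reported in the first system's manual (low <= 7, average 8-12, high >= 13).

    # Hypothetical sketch of CBTI-style scoring logic; illustrative only.
    from datetime import date

    def age_at_testing(birth: date, test: date) -> str:
        """Chronological age in completed years and months."""
        months = (test.year - birth.year) * 12 + (test.month - birth.month)
        if test.day < birth.day:
            months -= 1
        return f"{months // 12} years, {months % 12} months"

    def classify_subtest(scaled: int) -> str:
        """Cutoffs like those used by the first system described below."""
        if scaled <= 7:
            return "low"
        if scaled >= 13:
            return "high"
        return "average"

    def vp_discrepancy(viq: int, piq: int) -> str:
        """Flag a Verbal-Performance difference of 15 or more points (p <= .01)."""
        diff = viq - piq
        if abs(diff) >= 15:
            direction = "Verbal > Performance" if diff > 0 else "Performance > Verbal"
            return f"Significant discrepancy ({direction}, {abs(diff)} points)"
        return "No significant Verbal-Performance discrepancy"

    # Hypothetical protocol:
    print(age_at_testing(date(1975, 3, 14), date(1985, 11, 2)))
    print(vp_discrepancy(viq=88, piq=105))
    print({name: classify_subtest(s) for name, s in
           {"Information": 7, "Block Design": 13, "Coding": 6}.items()})

The narrative statements, hypotheses and recommendations that the commercial packages print are presumably driven by decision rules of this kind, which is why their interpretive logic is the focus of the evaluation that follows.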
An example of each system's computerized report i s provided in Appendix C. The systems are as follows: THE EXPLORER (Academic Therapy Publications, 20 Commercial Blvd. Novato, Ca l i forn ia , 92947-6191 c.1983.) (System 1) Vance, Booney. (author) The manual states that The Explorer has been "programmed for many different interpretations of subtest and prof i l e patterns" the purpose being to "provide a simple and effective method of analyzing WISC-R scores." The program purports to recognize patterns (areas of strengths and weaknesses) and to print out c l i n i c a l and educational hypotheses (interpretations) with more r e l i a b i l i t y and efficiency than the authors feel most c l in ic ians would be capable of without such assistance. The program as stated in the manual, i s geared to school psychologists, educational diagnosticians and others who are qualif ied to interpret WISC-R results . System Requirements The Explorer i s designed to operate on a TRS-80 Model III or IV (TRS-80 version) or an Apple 11+ or l i e (Apple version). Two disc drives, an 80 column printer and at least 48K of random access memory (RAM) are required. (Note: A program designed to operate on a single disc drive w i l l be available shortly) . A formatted disc i s required for storage of student data and must be in disc drive when using computer. Description The program combines data management features (creating, edit ing, 33 deleting or selecting f i les) with WISC-R analysis . Options for f i l e management are: Input new records, Edit records, Print out data, Report output, K i l l a record (delete), Change f i l e s and Quit. Options for reporting include: Subtest report only (gives record of student's score and indicates whether they are high, normal or low; "low" <=7, average 8-12, "high"=>13); Report with subtest descriptions (gives subtest report with descriptions of a l l subtests); Subtest report, descriptions and hypotheses (gives subtest report and descriptions with the addition of hypotheses, which give suggestions as to why the ch i ld scored low or high - no hypotheses are given for normal range scores); Subtest report, description, hypotheses and scatterplot (in addition to subtest report, description and hypotheses provides subtest and factor scores, a pr int out of a normal curve for IQ scores); as well as Scatter plot only. The Explorer provides a printed report that describes each of the 12 subtests, and prints the ch i ld ' s subtest scores indicating level of performance. Areas of strengths and weaknesses are identif ied through hypotheses accompanied by a scatterplot of subtest performance. The report also identif ies Verbal-Performance IQ discrepancies and provides a series of statements to explain the discrepancy. In addition to the generation of hypotheses, the Explorer categorizes each ch i ld ' s subtest scaled scores into various factor scores, determined by obtaining the average scaled score for a group of subtests associated with a given factor. The manual provides, on pages 13 and 14, the various factors that are evaluated and the subtests on which they are based. Cost: $60 34 WISC-R SCORING AND INTERPRETIVE REPORT (Psychologistics, Inc. , P.O. Box 3896, Indialantic, Flor ida 32903. c.1982) (System 2) Honaker, M. & Harre l l , T . (author). The seven page manual states that the WISC-R report "is designed to provide comprehensive scoring and interpretation" of the WISC-R. The program provides a record of score analyses (part 1) and a narrative report (part 2). 
The derived scores and the narrative report are obtained for the subtest scores and "objective behavioural observation" (optional). The authors state "No knowledge of psychological assessment is required to operate the program and generate a complete interpretive report" but go on to state that the report is intended for professional use and should be interpreted only by professionals trained in intellectual assessment and evaluation. The authors state that their "interpretive logic" is similar in principle to that of Kaufman (1979) and Sattler (1982) and refer the user to those references.

System Requirements

The WISC-R report is designed to operate on an Apple II+ with at least 48K of RAM, one or more disc drives, DOS 3.3 and a serial or parallel interface printer.

Description

The WISC-R Report Output 1 (the Derived Scores) contains the child's age at testing (calculated from date of test and date of birth), subtest scaled scores and Verbal-Performance scale averages, IQ scores and percentiles, the VIQ-PIQ difference and its significance level, factor scores and percentiles, as well as subtest scaled score differences. The WISC-R Report Output 2 (the Narrative Report) begins with a summary of the demographic data and a general interpretation of the WISC-R subtest, scaled and IQ scores. The second section (optional) contains a description of the child and his/her behaviour during the evaluation, based on the behavioural checklist. The third section contains interpretations of subtest and factor strengths and weaknesses, followed by a narrative report delineating the general implications of the evaluation findings.

Cost: $295

WISC-R COMPUTER REPORT (Southern Micro Systems for Educators, P.O. Box 2097, Burlington, N.C. 27216-2097 c.1983) (System 3) Nicholson, C. (author).

The 33-page manual discusses the rationale, philosophy, types of analyses and operation of the program. The purpose of this program, according to the author, is to facilitate report writing, provide the psychologist with information to make better interpretations and recommendations, save report-writing time, and provide the psychologist with quick and accurate information. The author disclaims responsibility for the "program's performance, accuracy or appropriateness for any particular application."

System Requirements

The WISC-R Computer Report is designed to operate on a Radio Shack TRS-80 Model III, an Apple II, II+ or IIe, or an IBM Personal Computer. The microcomputer must have a 48K minimum memory capacity, one disc drive and a compatible printer.

Description

WISC-R IQs, scaled scores and achievement grade equivalent (GE) scores (optional) are entered to generate a report. When the WISC-R scores are used alone, the report contains a brief statement describing what each subtest measures and the student's ability on that subtest, converts subtest scaled scores and IQs into percentiles, and provides Verbal, Performance, and Full Scale confidence intervals. This is followed by the calculation of the child's mental age, expected grade level and theoretical expected achievement level at age 16, a statement of subtest strengths and weaknesses, as well as statements concerning the three major factors: Verbal Comprehension, Freedom from Distractibility and Perceptual Organization. The report next generates hypotheses based on subtest strengths and weaknesses, as well as interpretations based on subtest patterns and factor scores.
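The second and third systems described above both begin by deriving a few quantities directly from the entered data: the child's chronological age at testing from the birth and test dates, and percentiles and confidence intervals for the IQ scores. A minimal Python sketch of that arithmetic is given below; the standard error of measurement and the normal-distribution assumptions are illustrative values supplied here, not figures taken from any of the three programs.

from datetime import date
from statistics import NormalDist

def age_at_testing(birth: date, test: date):
    """Chronological age in completed years and months at the test date."""
    months = (test.year - birth.year) * 12 + (test.month - birth.month)
    if test.day < birth.day:  # the birth day has not yet been reached this month
        months -= 1
    return months // 12, months % 12

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile rank of a deviation IQ under a normal distribution."""
    return 100 * NormalDist(mean, sd).cdf(iq)

def iq_confidence_interval(iq, sem=3.2, confidence=0.95):
    """Confidence band around an obtained IQ, given an assumed SEM."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * sem
    return iq - half_width, iq + half_width

if __name__ == "__main__":
    years, months = age_at_testing(date(1975, 3, 14), date(1985, 11, 2))
    print(f"Age at testing: {years} years, {months} months")
    print(f"Full Scale IQ 112 is at the {iq_percentile(112):.0f}th percentile")
    low, high = iq_confidence_interval(112)
    print(f"95% confidence interval: {low:.0f}-{high:.0f}")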
The last page of the report begins with recommendations for remedial instruction based upon those subtests which were significantly below the expected grade level. The user is referred to a remedial text the author has co-authored. The program will perform "discrepancy analyses" based upon the expected grade level if achievement data are entered.

CHAPTER IV

RESULTS

The present study was designed to evaluate three computer-based test interpretation systems for the WISC-R. Two sets of criteria were established with corresponding research questions. The first criterion (Section A of the rating scale) addresses the degree to which each of the three computerized test interpretations is considered acceptable. The second criterion (Section B of the rating scale) addresses the adequacy as well as the usefulness of the computerized interpretation systems.

To incorporate a Two-Way (Fixed Effects) Analysis of Variance technique, mean ratings were calculated from the three raters in the study. Interrater agreement was calculated on a rater-by-system basis (Table 1) as well as on a system-by-item basis (Table 2) to provide an estimate of the error introduced by the collapsing of the judges' ratings. As can be seen in Tables 1 and 2, sufficient interrater agreement was obtained to justify this procedure.

TABLE 1
Interrater Agreement Coefficients by Raters and Systems

  Raters  System  Percent Perfect  Percent Agreement     Percent
                  Agreement        Within Plus/Minus 1   Disagree
  1/2     1       .407             .370                  .223
  1/3     1       .251             .489                  .260
  2/3     1       .348             .430                  .222
  1/2     2       .103             .526                  .371
  1/3     2       .185             .426                  .319
  2/3     2       .393             .289                  .318
  1/2     3       .607             .237                  .156
  1/3     3       .333             .356                  .311
  2/3     3       .481             .341                  .178

  Mean Percent Perfect Agreement                     .345
  Mean Percent Agreement Within Plus/Minus 1         .393
  Mean Percent Disagree (Difference in Ratings >1)   .262

TABLE 2
Interrater Agreement Coefficients by Item and System

  Item                System 1  System 2  System 3  Marginal Averages
  1                   .60       .60       .73       .64
  2                   .60       .87       .87       .78
  3                   .99       .87       .87       .91
  4                   .87       .87       1.00      .91
  5                   .87       .73       1.00      .87
  6                   .85       .46       .87       .73
  7                   .85       .88       .73       .82
  8                   .78       .69       1.00      .83
  9                   .81       .72       .88       .80
  10                  .75       .70       .75       .73
  11                  .87       .79       .81       .82
  12                  .80       .75       .84       .80
  13                  .75       .76       .81       .77
  14                  .82       .70       .69       .74
  15                  .87       .78       .81       .82
  Marginal Averages   .81       .74       .84

The data were analyzed through a two-way analysis of variance, with the independent variables being the computerized test interpretation systems and the rating scale items. The dependent variable was the judges' ratings. A 3 (systems) by 9 (items) Analysis of Variance was conducted for the initial criterion (Research Questions 1 & 2) and a 3 (systems) by 6 (items) Analysis of Variance for the second criterion (Research Questions 3 & 4). Because a few cells in the design had zero variance, and because the Statistical Package for the Social Sciences (SPSSx) used to analyze the data could not handle within-cell variance of zero, minimal variance was created by the writer. This was achieved by adding a value of .111 to sixteen of the twenty-seven cells in the first nine items, i.e., simulating one higher mean rating for one subject in each of sixteen cells. The effect of creating variance in the data reduced the chance of finding significant differences and thus did not bias the results in the negative direction.
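The design just described, with mean ratings collapsed over the three raters, nine protocols per cell, and a small constant added to any cell showing zero within-cell variance, can be sketched as follows. This is a minimal Python illustration using pandas and statsmodels rather than SPSSx; the simulated ratings are placeholders, and the specific cells adjusted in the thesis are not reproduced.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Simulated mean ratings (averaged over three raters) for a 3 x 9 design
# with nine protocols per cell; the values stand in for the thesis data.
rows = []
for system in (1, 2, 3):
    for item in range(1, 10):
        for protocol in range(1, 10):
            rating = rng.choice([1.0, 2.0, 2.0, 3.0])
            rows.append({"system": system, "item": item,
                         "protocol": protocol, "rating": rating})
data = pd.DataFrame(rows)

# A deliberately constant cell, of the kind that occurred in the thesis data.
data.loc[(data.system == 3) & (data.item == 4), "rating"] = 1.0

def break_zero_variance(df, eps=1/9):
    """Raise one observation slightly in any cell with zero within-cell
    variance, approximating the thesis's .111 adjustment so the ANOVA
    can be computed."""
    df = df.copy()
    for (system, item), cell in df.groupby(["system", "item"]):
        if cell["rating"].var(ddof=1) == 0:
            df.loc[cell.index[0], "rating"] += eps
    return df

data = break_zero_variance(data)

# 3 (systems) x 9 (items) fixed-effects ANOVA on the mean ratings.
model = ols("rating ~ C(system) * C(item)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))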
The results of the data w i l l be addressed in two sections as follows: Analysis of Data, Part One: Research Questions 1 & 2 The mean and standard deviations of the ratings when c lass i f ied by system and items are presented in Table 3. The mean rating for the to ta l groups (243 cases) was 2.048 with a standard deviation of .801. To assess the effects of systems and items on ratings, a 3(systems) by 9(rating scale items; 1-9) Analysis of Variance was conducted. This analysis i s summarized in Table 4. The results of this analysis yielded a main effect for systems, F(2,216)= 118.606, p<.001; a main effect for items, F(8,216)=420.893, p<.001; as well as a s ignif icant systems by items interaction, F(16,216)=180.190, p<.001. A plot of the c e l l means for this data i s presented in Figure 1. Given the nature of Research Question 1, "To what extent do the computerized interpretations of each system meet the 40 TABLE 3 Means and Standard Deviations for Systems and Items Items 1 2 3 4 5 6 7 8 9 1 3.037 2.037 3.963 1.704 1.370 1.704 1.370 1.889 1.815 .111 .111 .111 .111 .111 .111 .111 .236 .176 System 2 2.370 1.370 1.370 1.593 2.037 2.704 1.296 1.815 2.037 .111 .111 .111 .147 .111 .111 .111 .242 .261 3 3.370 1.704 4.259 1.037 2.037 1.704 2.370 2.037 1.296 .111 .111 .364 .111 .111 .111 .111 .111 .111 TABLE 4 Summary of the Analysis of Variance of Systems by Items 1-9 Source of Variation Sum of Squares df Mean Square F System 5.504 2 2.752 118.606* Item 78.134 8 9.767 420.893* System by Item 66.900 16 4.181 180.190 Within cells 5.012 216 .023 * p<.001 41 appropriate guidelines etc.", and due to the research findings i . e . for the most part , a l l fa i led to meet minimal acceptable standards, further post hoc comparisons were not warranted. As can be seen in Figure 1, the three systems for the most part were judged not to meet the prescribed guidelines, obtaining overal l mean ratings of approximately two. The only exception was that of systems one and three who on item 1 obtained ratings of 3.1 and 3.337 respectively and on item three obtained ratings of 3.963 and 4.259 respectively. Acceptable ratings were subjectively defined by the writer as those approximating the "Agree" ratings. Analysis of Data, Part Two: Research Questions 3 & 4 The mean and standard deviations of the ratings when c lass i f ied by system and items are presented in Table 5 . The mean rating for to ta l groups (162 cases) was 2.2675 with a standard deviation of .7612. To assess the effects of systems and items on ratings a 3(systems) X 6(rating scale items; 10-15) Analysis of Variance was conducted. This analysis i s summarized in Table 6 . The results of the analysis yielded a main effect for systems, F(2,144)= 8.169, p<.001; a main effect for items, F(5,144)=20.252, p<.001; as well as a s ignif icant systems by items interaction, F(10.144)= 2.568, p<.01. A plot of the c e l l means for this data are presented in Figure 2 . In order to detentrine where the signif icant system difference(s) was located, a Scheffe post hoc analysis was used to make group comparisons. 
System two obtained s igni f icant ly 42 TABLE 5 Means and Standard Deviations for Systems and Items Items 10 11 12 13 14 15 1 3.370 1.667 1.926 2.148 2.593 1.407 .676 .441 .572 .242 .521 .401 2 3.222 2.444 2.630 2.185 2.481 2.148 .745 .928 .716 .648 .444 .689 3 2.704 2.037 2.593 1.556 2.222 1.481 .716 .455 .222 .333 .624 .294 TABLE 6 Summary of the Analysis of Variance of Systems by Items 10-15 Source of Variance Sum of Squares df Mean Square F System 5.306 2 2.653 8.169* Item 32.886 5 6.577 20.252* System by Item 8.340 10 .834 2.568** * rX.001 ** p<.01 FIGURE 1 Agree S t r o n g l y D i s a g r e e Mean R a t i n g s o f Systems on Items 1-9 S t r o n g l y 5-Agree 4--N e u t r a l 3" D i s a g r e e 2--1 -I tern S t r o n g l y 5-Agree Agree N e u t r a l 3-D i s a g r e e 2-S t r o n g l y 1--D i s a g r e e FIGURE 2 Mean R a t i n g s o f Systems on Items 10-15 10 System 1 X System 2 • System 3 O 11 12 13 I tern 14 15 44 higher ratings than did system one (p<.05) on item 12, which addresses to what extent their recommendations are adequate and/or congruent with what the raters' recommendations would have been. Systems one and two obtained significantly higher ratings than did system three (p<.05) on item 13, which addresses to what extent their WISC-R computer report would be useful in terms of its diagnostic utility. System two obtained significantly higher ratings than did systems one and three (p<.05) on item 15 which addresses to what extent their computerized reports would be adequate, as they stand, for public distribution. It is interesting to note, when looking at the plot of the cell means in Figure 2, that system two and three track each other on a parallel basis with the exception of item 12 where their mean rating is almost identical. This indicates fairly consistent higher overall ratings for system two over system 3. As one would expect, system one appears to be responsible for majority of interaction that exists between these systems. In addressing Research Question 3, however, the aforementioned analysis is put into perspective. Despite the specific strengths of certain systems on certain items, the systems were judged not to meet minimal adequacy levels, obtaining overall mean ratings of 2.27 on items 10-15 of the rating scale. Post hoc comparisons between items were not performed. Pearson correlational matrixes are reported in Appendix 2 for items 1-15 as well as being reported for items 1-15 on a system by system basis. It was apparrent when examining the 45 pooled item correlations that large group differences had confounded the variance. System by system correlations were then examined. Only correlations of .5 or greater with probabilities of >.05 were considered. Upon examination of these correlations (see Appendix B), certain items were observed to have high correlations throughout a l l the systems. They will be presented followed by some hypotheses to account for their existence. Items 1-2, 1-5, 2-5 and 12-13 achieved high correlations throughout a l l three systems. Item one is concerned with the manual reporting the rationale and evidence in support of computer-based test interpretation. Item two states that information should be provided to the users of computerized interpretation services concerning the consistency of classifications. An hypothesis to account for the high correlation may be the inadequacy of the manuals concerning these specific items. 
High correlations on items two and five as well as one and five may be a reflection of the content of these items. These items cal l for information and evidence to be provided to the user regarding interpretive statements. High correlations were obtained on items 12 and 13 which deal with the adequacy of recommendations and the systems' diagnostic ut i l i t y respectively. It is not surprising that when a system is rated low on its adequacy of recommendations that i t may be rated low as well on its diagnostic utility. 46 CHAPTER V DISCUSSION The intent of this study was to evaluate three computer-based test interpretation systems for the WISC-R in terms of two sets of c r i t e r i a . The f i r s t cr i ter ion was to what degree each of the three CBTI systems were acceptable according to recently proposed guidelines, sanctioned by APA, to evaluate computerized test interpretation. The second cr i t er ion addressed the issue of the adequacy and usefulness of these systems. Three expert raters, through the use of the rating scale developed by the writer, evaluated the degree to which each of the CBTI systems met the aforementioned c r i t e r i a . This chapter i s divided into four areas. F i r s t , a summary and interpretation of the results i s discussed. Next the l imitations of the study are presented followed by suggestions for future research. The chapter ends with some conclusions by the writer. Summary and Interpretations of Findings The findings w i l l be presented in two sections: PART ONE: RESEARCH QUESTIONS 1 & 2 Research Question 1 To what extent do the computerized test interpretations of each system meet the appropriate requirements of the latest draft of the "Guidelines for Computer-Based Tests and Interpretations" as measured by Section A of the rating scale? 47 Research Question 2 To what extent are there signif icant overal l differences between CBTI systems and/or items as measured by the rating scale, as well as to what extent are there d i f ferent ia l strengths and weaknesses across rating scale items among CBTI systems? As depicted in Figure 1 (p.43), a l l three systems fa i led to meet the proposed standards for computerized interpretations (items 1-9 rating scale). The systems in general, obtained overal l mean ratings of approximately two. One exception (see Figure 1) was that of systems one and three which on item one obtained "neutral" ratings and on item three obtained "Agree" ratings. Item one states that the computer testing services should provide a manual reporting the rationale and evidence in support of computer-based interpretation of test scores. Item three concerns i t s e l f with the extent to which the or ig ina l scores used in developing interpretive statements are given to test users. Significant system differences were obtained which indicates that certain systems were rated generally higher than others. As wel l , some systems were rated higher on certain items compared to others. As can be seen in Figure 1, the systems were very variable as to their ratings from item to item. This made i t d i f f i c u l t to comment on the superiority of any system(s) on items in general. Further post hoc investigation to determine which systems were rated s ignif icant ly higher than others did not appear to be warranted for two reasons. 
F i r s t , the i n i t i a l research question concerned i t s e l f with to what degree each of 48 the CBTI systems would be considered acceptable interpretation according to proposed guidelines for computerized interpretation. This was achieved by the use of descriptive s ta t i s t i c s , without the need for futher post hoc analysis. Secondly, a l l systems fa i led to meet appropriate standards. Therefore, i t seemed unnecessary to probe further for d i f ferent ia l strengths of some systems on certain items. PART TWO: RESEARCH QUESTIONS 3 & 4 Research Question 3 To what extent are the computerized test interpretations adequate and/or congruent with interpretations of the same data by experienced c l in ic ians as measured by the rating scale? Research Question 4 To what extent are there s ignif icant overal l differences between CBTI systems and/or items as measured by the rating scale, as well as to what extent are there d i f ferent ia l strengths and weaknesses across rating scale items among CBTI systems? The three systems were judged not to meet acceptable c r i t e r i a for items 10-15 of the rating scale (see Figure 2, p43). Systems one and two on item 10 were the only ratings above the neutral mark. Item 10 was concerned with the extent to which s t a t i s t i c a l calculations provided by the computerized reports were adequate and/or congruent with the raters calculations. This indicates that the raters found the s t a t i s t i c a l calculations provided by the CBTI reports to be re lat ive ly more adequate then other components of the report i . e . interpretations and recommendations. Due to significant system and item differences 49 as well as s ignificant interaction effects, further inter-pretation of these results seemed warranted. I w i l l f i r s t discuss the patterns that seemed to emerge when examining the ratings of the three systems on each of the six rating scale items. Next I w i l l offer some hypotheses concerning the nature of the results . The results of this study w i l l then be compared to a s imi l iar study by Replogle and Eicke (1985). As can be seen in Figure 2 (p.43), system two was rated higher than system three on a l l items except for item 12 where s imi l iar ratings were obtained. Item 12 was concerned with the adequacy of recommendations provided by the computerized reports. As Figure 2 indicates, the more variable ratings for system one are responsible for the degree of interaction between the three systems and six items. Although there were no s t a t i s t i c a l l y significant differences between the systems on items 10 and 14, system one obtained the highest ratings. Item 10 was concerned with the adequacy of s t a t i s t i c a l calculations with item 14 addressing to what extent the report would be useful in terms of saving time for the user. System one, although not s t a t i s t i c a l l y s ignif icant , was rated lowest on items 11 and 12 which dealt with the degree to which the interpretations as well as recomendations provided by the computerized reports were adequate and/or congruent with the raters* interpretations. System one was rated lowest as well on item 15, which dealt with the extent to which the raters f e l t the computerized report would be adequate as i t stood for public d is tr ibut ion . 50 Despite the superiority of some systems on certain individual items however, the systems fa i led to meet acceptable levels of interpretation (see Figure 2 ) overa l l . 
Examination of the six items on the rating scale may offer some hypotheses as to why this may be so. Item 10 (Mean ratings = 3.37, 3.22, 2.70 for Systems 1,2,& 3) /As a group, the systems were rated highest on item 10 which dealt with the adequacy of the s t a t i s t i c a l calculations. This re lat ive ly higher rating i s not surprising given the fact that s t a t i s t i c a l calculations can more easi ly be done in the absence of other pertinent data. One can hypothesize that the unacceptable rating on this item may be due to the absence of decision rules. Raters may not have been able to ascertain how and why specif ic scaled score strengths, factor analyses were obtained. Another possible explanation, substantiated by one rater's comments, was that the s t a t i s t i c a l calculations, factor scores, strengths and/or weaknesses were sometimes presented in an ambiguous fashion, thus being d i f f i c u l t to understand. Items 11 (1.67, 2.44, 2.037) and Item 12 (1.921, 2.63, 2.593) Items 11 and 12 dealt with the adequacy of interpretation and recommendations respectively. One could hypothesize that the interpretations and recommendations were congruent with those of the raters but s t i l l were not judged adequate. This was substantiated by the occasional comment to this effect made by raters. Another explanation may be that that the CBTI reports, unlike psychologists, were incapable of making intra-subtest comparisons or evaluating the qualitative aspects of the 51 subjects' responses on the WISC-R protocols. Item 13 (Mean ratings =2.148, 2.185, 1.556) The CBTI reports were rated low on their diagnostic u t i l i t y . It could be that the WISC-R alone does not offer adequate data for diagnosis, a point supported by the l i terature reviewed in Chapter 2. Another explanation may be that the computerized interpretations did not provide the raters with any diagnostic information i . e . subtest comparisons, verbal-performance discrepancies, that were not eas i ly calculated or already apparent to them. Item 14 (Mean ratings =2.593, 2.481, 2.22) Item 14 addressed to what extent the CBTI report would be useful in saving time for the user. One can only presume the raters f e l t the CBTI systems did not have much to offer in terms of saving them time possibly excepting basic s t a t i s t i c a l calculations. More recent CBTI systems are capable of converting WISC-R raw scores to scaled scores. Such systems may have been rated more favourably on such an item. Item 15 (Mean ratings =1.407, 2.148, 1.481) Item 15 addressed the issue of the adequacy of the computerized report for public d is tr ibut ion . This was rated poorly as wel l . Comments made by the raters indicated they f e l t the computerized reports were not innaccurate, but that they judged the reports were not comprehensive or adequate enough for public d is tr ibut ion . For instance, one computerized report tended to l i s t the results as opposed to printing them out in report s ty le . Another concern expressed by one of the raters was about 52 interpretive statements made regarding personality t r a i t s . Such unwarranted statements were f e l t to be potential ly damaging to the examinee. An hypthesis to account for the overal l low ratings for computerized reports i s that computerized reports are limited in the information they are capable of using to generate reports when compared with a trained psychologist. 
Thus the information computerized reports provide to the user about interpretation, recommendations, and diagnostic u t i i t y , may be very l imited. It follows that the reports may also be inadequate in terms of saving time for the user as well as in terms of their adequacy for public d is tr ibut ion. As Replogle and Eicke (1985, p.387) have hypothesized, the conservative nature of the raters also may be a ref lect ion of "professional standards that dictate caution in making decisions on a single test", opting for a more complete assessment prior to decision making. Another hypothesis to consider as well i s the possible ambiguity that the items may have presented. The guidelines for computerized test interpretation, which were used verbatim to minimize content change, were not developed for the purpose of a rating scale but were turned into one by the writer. This i s the f i r s t time to the writer's knowledge that this has been done. A l l guidelines in th is recently proposed document appear to be appropriate for the evaluation of computerized interpretation. It seems to this author, however, that additional guidelines could be devised to expand CBTI evaluation in several important areas. Examples of these are: the use of potential ly damaging statements in reports, 53 concerns regarding the c l a r i t y in format of computerized interpretation, as well as the completeness of the computerized report. Ambiguity may have been introduced by the wording "adequate and/or congruent" in Items 10-12, Section B of the rating scale. This was done intentionally because even i f the s t a t i s t i c a l calculations, interpretations and recommendations were congruent with those of the raters, they may s t i l l have been inadequate. The raters, according to occasional comments, may have had d i f f i c u l t y rating these items due to th i s . To minimize the amount of ambiguity however, the raters prior to rating the systems, were asked to read and discuss the scale with the writer. A l l raters agreed on the content of the rating scale items after this exercise. These results indicate that the use of the three computerized systems examined in this study to generate WISC-R reports, i s not just i f ied according to three expert raters. This finding i s inconsistent with Replogle and Eicke's (1985) results (p.385) which found automated reports to be more highly rated on items pertaining to overal l analysis, Verbal-Performance discrepancies, addressing relative weaknesses and lack of irresponsible interpretation, when compared to reports prepared by psychologists. Although the two studies d i f f e r , there are some possible explanations for the descrepant results . F i r s t , the designs of the studies were quite different . In this study, psychologists rated computerized interpretations against their c l i n i c a l interpretation of the same protocol. Replogle and Eicke compared psychologists' reports with automated reports of the 54 same protocol. The reports in their study were written by a separate group of psychologists who were given demographic, WISC-R test scores and reason for referral information by the author. This study evaluated three CBTI systems while Replogle and Eicke evaluated one, which they authored. Another signif icant difference i s with the rating scale items. Replogle and Eicke 1 s scale was restricted to the analysis section of the computerized reports. The present study addresses other issues e.g. 
usefulness in terms of time saving factors and adequacy for public d i s tr ibut ion . It i s d i f f i c u l t to make further comparisons as many of the rating scale items are not identif ied in their study. This i s disappointing especially since Replogle and Eicke appear to be evaluating their own CBTI system with a rating scale that they have designed. A cr i t i c i sm can be raised regarding possible bias in item construction especially when the items are not provided to the reader. Limitations of the Study The l imitations of this study w i l l be discussed as follows. I w i l l f i r s t discuss the limited general izabi l i ty of the results . The implications of averaging the three raters ratings into one w i l l then be discussed followed by the possible concern for the non-random selection of expert raters. Only three CBTI systems for the WISC-R were evaluated in this study, meaning results cannot be generalized beyond those systems. The three expert ratings in this study were collapsed to 55 form one mean rat ing. This was done to allow for a more manageable analysis of the data. However, the collapsing of these ratings introduced some error into the study. With only three raters in the study, the averaging of ratings when one rater was quite disparate from the other two (e.g. 4,4,1) may have been misleading. Even though two raters gave an item an acceptable rating of "4", the introduction of the th ird rating brought the mean rating into the "neutral" area. Given the adequate "inter-observer agreement" however (p.38), this was not viewed as a serious l imitat ion. The expert raters in this study were not chosen at random. I f randomly selected, the raters may have been more l i b e r a l or more conservative in their ratings. For instance, i f psychologists who routinely use these CBTI systems for WISC-R interpretation had been consulted, the results may have been more l i b e r a l because their choice of system might bias them in that direct ion. Suggestions for Further Research This study and Replogle and Eicke's (1985) study appear to be the only two studies to date that have attempted to evaluate computerized interpretation systems for the WISC-R. Other CBTI systems for the WISC-R w i l l need to be examined in terms of their adequacy and usefulness. Further studies that evaluate the adequacy and usefulness of computerized interpretation systems during complete psycho-educational assessments, would seem to be warranted as wel l . 56 Both this study and Replogle and Eicke's have used psychologists to rate CBTI systems in the absence of other assessment data. In this study, this was done to be fa i r to the three CBTI systems i . e . not providing the raters with much more information than the systems would have. An evaluation of the usefulness of CBTI systems when other assessment data are available, would seem to be an appropriate extension of this study. Concerns such as confidential i ty as well as copyright violations by the CBTI systems must be addressed. Ethica l issues such as Human vs. Automated reports w i l l undoubtedly become more prevalent as computers become more sophisticated (Altemose & Williamson, 1981; Matarazzo, 1986). Software dissemination to the general public i s also an issue that must be addressed as wel l . Those not adequately trained in psychological assessment may tend to rely on computerized interpretations to supplement their assessment s k i l l s . 
With access only to scaled scores, untrained examinees or untrained parents could generate their own reports by simply inserting scaled scores into a CBTI system. The sky i s the l imi t regarding possible areas of research in th is "frontier" period of computerized test interpretation. As CBTI systems appear to be here to stay, future research should endeavour to educate potential "users" of the CBTI systems to their strengths and weaknesses in test interpretation. As Thomas (1984) states, "It i s essential to maintain r e a l i s t i c expectations of what computers can and cannot do" (p.472). This can be achieved through research and the subsequent education of 57 consumers. Conclusions The results of this study support the cautious use of computerized test interpretation of the WISC-R. The three systems evaluated in this study were found to be inadequate in terms of recently proposed APA guidelines for computerized test interpretation. Also , they were unacceptable in terms of their adequacy, usefulness, diagnostic u t i l i t y , time saving a b i l i t y as well as their adequacy for public d i s tr ibut ion . Systems one and two in terms of their s t a t i s t i c a l adequacy, obtained the highest ratings. Although s t i l l not in the acceptable range, they were the only two ratings above the "neutral" mark for items 10-15 on the 9 point scale. Many of the current WISC-R CBTI systems have as their core the diagnosis of psycho-educational d i f f i c u l t i e s . I f one wants to evaluate the CBTI's a b i l i t y to accurately interpret psycho-educational d i f f i c u l t i e s from the WISC-R alone, one must f i r s t be confident in the WISC-R*s a b i l i t y to do the same. The l i terature does not appear to support th is premise. The review of the l i terature in this study concluded that the WISC-R's diagnostic u t i l i t y through the use of pattern analysis was not warranted. Anastasi (1976) supports this view by saying, "Three decades of pattern analysis with the Wechsler Scales have provided l i t t l e support for their diagnostic value." Nevertheless, as Altemose and Williamson (1981, p.369) have stated, "Clinicians continue to use this approach, and the method has been computerized". Therefore one must assume that statements 58 in computerized reports based on pattern analysis must be held suspect. One could conclude that the computerized interpretation of the WISC-R that goes beyond simple c l e r i c a l functions such as the computation of raw scores to scaled and I .Q. scores as well as percentile calculations for instance, i s not jus t i f i ed . In order to protect consumers of psycho-educational assessments involving the WISC-R, the evidence would seem to indicate that i f computerized test interpretation i s used, i t s use should should be restricted to and used in conjunction with c l in ic ians who are identif ied as having training well beyond that of basic test administration. Training in testing and measurement theory, test construction, s ta t i s t i c s and measurement, supervised f i e l d experience in test administration and interpretation are necessary. Every professional should be capable of judging in each instance the va l id i ty of the automated report for his/her c l i ent given the to ta l context of assesment information, as a l l interpretations, computerized or not, are ultimately the responsibi l i ty of the c l i n i c i a n . 
The careful monitoring of the use of computer-based test interpretation systems for the WISC-R by loca l psychological associations and school d i s t r i c t s , i s strongly urged by this writer. 59 REFERENCES Altemose, J . R . , & Williamson, K . B . (1981). C l i n i c a l judgement vs. the computer: Can the school psychologist be replaced by a machine? Psychology in the Schools. 18, 356-363. Al tus , G.T. (1956). WISC prof i l e for retarded readers. Journal of Consulting Psychology. 20, 155-156. American Psychiatric Association (1980). Diagnostic and  s t a t i s t i c a l manual of mental disorders. (3rd ed.) Washington, DC: Author. Anastasi, A . (1976). Psychological Testing 4th ed. New York: MacMillan. Anderson, M . , Kaufman, A . , & Kaufman, N. (1976). Use of the WISC-R with a learning disabled population: Some diagnostic implications. Psychology in the Schools. 13 (3), 381-386. Badian, N. (1981). Recategorized WISC-R scores of disabled and adequate readers. Journal of Educational Research. 25 (2), 109-115. Bannatyne, A. (1968). Diagnosing learning d i s a b i l i t i e s and writing remedial prescriptions. Journal of Learning  D i s a b i l i t i e s . 1, 242-249. Bannatyne, A . (1974). Diagnosis: A note on recategorization of the WISC-R scaled scores. Journal of Learning  D i s a b i l i t i e s . I (2), 272-274. Belmont, L . , & Birch, H.G. (1966). The inte l lectual prof i l e of retarded readers. Perceptual and Motor S k i l l s . 22., 787-816. Belmont, I . , Birch, H . G . , & Belmont, L . (1967). The organization of intell igence test performance in educable mentally subnormal chi ldren. American Journal of Mental Deficiency, 21, 969-976. Bogen, J . E . (1969). The other side of the brain: Parts 1, 2, & 3. Bul le t in of the Los Angeles Neurological Society, 34/ 73-105, 135-162, 191-203. Bortner, M . , and Birch, H.G. (1969). Patterns of inte l lectual a b i l i t y in emotionally disturbed and brain damaged chi ldren. Journal of Special Education. 3 (4), 351-369. Burks, H . F . , & Bruce, P. (1955). The characterist ics of poor and good readers as disclosed by the Wechsler Intelligence Scale for Children. Journal of Educational Psychology. 46, 486-493. 60 Bush, L . (1984, February). CPA's computer ethics posit ion i s analyzed. The Ohio Psychologist, p.1-2. C l a r i z i o , H . , & Bernard, R. (1981). Recategorized WISC-R scores of ID children and d i f ferent ia l diagnosis. Psychology in  the Schools, 18 (1), 5-12. C l a r i z i o , Harvey, H . , & Veres, Valer ie , (1983). WISC-R patterns of emotionally impaired and diagnostic u t i l i t y . Psychology in the Schools. 20, 409-414. Coolidge, Frederick. (1983). WISC-R dicrimination of learning disabled and emotionally disturbed children: An Intragroup Analysis . Journal of Consulting and C l i n i c a l Psychology.51 (2),320. Das, J . P . , Kirby, J . , & Jarman, R . F . (1975). Simultaneous and successive synthesis: An alternative model for cognitive a b i l i t i e s . Psychological Bul l e t in . 82, 87-103. Dean, R. (1977). Patterns of emotional disturbance on the WISC-R. Journal of C l i n i c a l Psychology. 33 (3), 486-490. Dean, R. (1978). Distinguishing learning disabled and emotionally disturbed children on the WISC-R. Journal of Consulting &  C l i n i c a l Psychology. 46. (2), 381-382. Decker, Sadie & Corley, Robin. (1984). Bannatyne's "genetic dyslexic" Subtype: A validation study. Psychology in the Schools.21 300-304, Dockrel l , W.B. (1960). 
The use of the Wechsler Intelligence Scale for Children in the diagnosis of retarded readers. Alberta Journal of Educational Research. 6_, 86-91. Dudley-Marling, C , Kaufman, N. & Tarver, S. (1981). WISC & WISC-R prof i les of learning disabled children: A review. Learning D i sab i l i t y Quarterly. 4, 307-319. Funk, Arnold (1984). Computerized interpretations of individual ly administered tests. Special Education Association Newsletter (B .C. ) , 2, (142). Gazzaniga, M.S. (1975). Recent research on hemispheric lateral izat ion of the human brain: Review of the s p l i t brain . UCLA Educator. 12, 9-12. Groff, M . , & Hubble, L . (1981). Recategorized WISC-R scores of juvenile delinquents. Journal of Learning D i s a b i l i t i e s . 14 (1)/ 7. Committee on Professional Standards (COPS) and Committee on Psychological Tests and Assessment (CPTA) (1984). Guidelines  for computer-based tests and interpretations unpublished. 61 Gutkin, T . , St Reynolds, C . (1980). Factor ia l s imi lar i ty of the WISC-R for Anglos and Chicanos referred for psychological services. Journal of School Psychology, 18 (1), 34-39. Hale, R. (1979). The u t i l i t y of WISC-R subtest scores in discriminating among adequate and underachieving chi ldren. Multivariate Behavioral Research. 14 (2), 245-253. Hale, R . , & Landino, S. (1981). U t i l i t y of WISC-R subtest analysis in discriminating among groups of conduct problem, withdrawn, mixed, and nonproblem boys. Journal of  Consulting & C l i n i c a l Psychology. 49(1),91-95. Hale, R . , & Saxe, J . (1983). Prof i le analysis of the WISC-R. Journal of Psychoeducational Assessment. 1 (2), 155-162. Hamm, H . , & Evans, J . (1978). WISC-R subtest patterns of severely emotionally disturbed students. Psychology in the Schools. 15, 188-190. Henry, S . , & Wittman, R. (1981). Diagnostic implications of Bannatyne's recategorized WISC-R scores for identifying learning disabled chi ldren. Journal of Learning  D i s a b i l i t i e s . 14 (9), 517. Hirshoren, A. & Kavale, K. (1976). Prof i le analysis of the WISC-R: A continuing malpractice. The Exceptional Chi ld , 23 (2) 83-87. Honaker, M. & Harre l l , T. (1984). WISC-R scoring and interpretation  report (Computer Program). The Psychological Corporation. Horn, J . L . , & C a t t e l l , R.B. (1966). Refinement and test of the theory of f l u i d and crysta l l ized intel l igence. Journal of  Educational Psychology. 57_, 253-270. Huelsman, C.(1970). The WISC subtest syndrome for disabled readers. Perceptual & Motor S k i l l s . 30, 535-550. Johnson, D . , Wollersheim, J . (1977). WISC patterns and other characterist ics of reading disabled chi ldren. Perceptual  and Motor S k i l l s . 45, 729-730. Kal los , G . L . , Grabow, J . M . , & Guarino, E .A . (1961). WISC prof i les of disabled readers. Personnel and Guidance  Journal. 39, 476-478. Kaufman, A.S.(1975). Factor analysis of the WISC-R at eleven age levels between 6 1/2 and 16 1/2 years. Journal of Consulting  and C l i n i c a l Psychology. 43. 135-147. Kaufman, A . S . (1979a). Intell igent testing with the WISC-R New York: Wiley-Interscience. Kaufman, A.S. (1979b). WISC-R research: Implications for interpretation. School Psychology Digest, 8., 5-27. Kaufman, A., & Van Hagen, J. (1977). Investigation of the WISC-R for use with retarded children: Correlations with the 1972 Stanford-Binet and comparison of WISC and WISC-R profiles. Psychology in the Schools.14(1),10-14. Keogh, B.K., & Hall, R.J. (1974). 
WISC subtest patterns of educable mentally retarded pupils. Psychology in the  Schools. 1, 296-300. Keogh, B.K., Wetter, J., McGinty, A., Donlon, G. (1973). Functional analysis of WISC performance of learning-disordered, hyperactive, and mentally retarded boys. Psychology in the Schools.10(2),178-181. Matarazzo, J.D. (1983). Computerized psychological testing. Science. 221. Matarazzo, J.D. (1986). Computerized c l i n i c a l psychological test Interpretation: Dnvalidated plus a l l mean and no sigma. American Psychologist 41 (1) 14-24. Miller, M. (1980). On the attempt to find WISC-R profiles for learning and reading d i s a b i l i t i e s . Journal of Learning  D i s a b i l i t i e s . !3_ (6), 52-54. Miller, M., & Walker, K. (1981). The myth of the L.D. WISC-R pro f i l e . The Exceptional Child. 28(2),83-88. Mitchell, James V. (1984). Computer-based test interpretation and the public interest. Presented at the Division 5 APA symposium on the use of computer-based test interpretations: Prospects and problems. Moore, D., & Wielen, 0. (1981). WISC-R scatter indexes of children referred for reading diagnosis. Journal of  Learning D i s a b i l i t i e s . 14 (9), 511-516. Morris, J.D., Evans, J.G., & Pearson, D.R. (1978). The WISC-R subtest p r o f i l e of a sample of severely emotionally disturbed children. Psychological Reports. 42. 319-325. Mueller, H., Matheson, D., & Short, R. (1983). Bannatyne -recategorized WISC-R patterns of mentally retarded, learning disabled, normal, and intellectually superior children: A meta-analysis. The Mental Retardation and Learning  Disability Bulletin.11 (2), 60-78. Mueller, H.; Mancini, G.& Short, R. (1984). An evaluation of the diagnostic efficiency of the WISC-R. Alberta Journal of  Educational Research.30. 299-310. Nagl ier i , J . (1980). WISC-R subtest patterns for ID and retarded chi ldren. Perceptual & Motor S k i l l s . 51 (2), 605-606. Nebes, R.D.(1974).Hemispheric special ization in commisurotonized man. Psychological Bu l l e t in . 81, 1-14. Nicholson, C. (1982). WISC-R computer report (Computer Program) Southern Micro System for Educators. Ornstein, R. (1978). The s p l i t and the whole brain. Human Nature, May. Paget, D. (1982). Intel lectual patterns of conduct problem children on the WISC-R. Psychology in the Schools. 19, 439-445. Rabin, A . I . , & McKinney, J . P . (1972). Intelligence tests and childhood psychopathology. In B.B. Wolman (Ed.) , Manual of Child Psychopathology. New York: McGraw-Hill. Replogle, William & Eicke, F.J. . (1985) . Automated analysis of the WISC-R: A validation study. Journal of School Psychology 23 383-387. Reynolds, C . (1981). A note determining s ignif icant discrepancies among category scores on Bannatyne's regrouping of WISC-R subtests. Journal of Learning  D i s a b i l i t i e s . 14 (8), 468-469. Reynolds, C , & Gutkin, T. (1980). S tab i l i ty of the WISC-R factor structure across sex at two age levels . Journal of  C l i n i c a l Psychology. 3JL (3), 775. Rugel, R.P. (1974). WISC subtest scores of disabled readers: A review with respect to Bannatyne's recategorization. Journal of Learning. £ , 48-55. Ryckman, D. (1981). Searching for a WISC-R prof i l e for learning disabled children: An inappropriate task? Journal of  Learning D i s a b i l i t i e s . 14 (9), 507-526. Satt ler , Jerome, M. (1982). Assessment of children's  intell igence and special a b i l i t i e s . Boston: Al lyn and Bacon. Schmidt, H.P. & Saklofske, D . H . . 
(1983) Comparison of the WISC-R patterns of children of average and exceptional a b i l i t y . Psychological Reports. 53, 539-544. Sheldon, M . S . , & Cranton, J . (1959). A note on a WISC prof i l e for Retarded Readers. Alberta Journal of Educational Research. 5, 264-267. S i lverste in , A . B . (1968). WISC subtest patterns of retardates. Psychological Reports. 23_, 1061-1062. 64 Smith, M., Coleman, J., Dokecki, P., & Davis, E. (1977). Recategorized WISC-R scores of LD children. Journal of  Learning Disabilities. 10 (7), 437-443. Stevenson, L. (1980). WISC-R analysis: Implications for diagnosis and intervention. Journal of Learning Disabilities, 13 (6), 346-349. Thomas, /Alex (1984). Issues and concerns for microcomputer uses in school psychology. School Psychology Review. 13. (4). Vance, Booney. (1983). The explorer (Computer Program). Academic Therapy Publications. Vance, H., Gaynor, R. & Coleman, M. (1976). Analysis of cognitive abilities for learning disabled children. Psychology in the  Schools. 13, 477-483. Vance, H., & Singer, M. (1979). Recategorization of the WISC-R subtest scaled scores for learning disabled children. Journal of Learning Disabilities. 12 (8), 487-490. Vance, H., Wallbrown, F., & Blaha, J. (1978). Determining WISC-R Profiles for Reading Disabled Children. Journal of Learning  Disabilities. 11, (10), 657-661. Van Hagen, J., & Kaufman, A. (1975). Factor analysis of the WISC-R for a group of mentally retarded children and adolescents. Journal of Consulting & Clinical Psychology. 43, 661-667. Wallbrown, F., Vance, H., & Blaha, J. (1979). Developing remedial hypotheses from ability (WISC-R) profiles. Journal of  Learning Disabilities. 12 (1), 59-63. Webb, J.T., Miller, M.L., & Fowler, R.D. (1969). Validation of a computerized MMPI interpretation system (Summary). Proceedings of the 77th Annual Convention of the American  Psychological Association. A , 523-524. Webster, R., & Layfayette, A. (1980). Distinguishing among three subgroups of handicapped students using Bannatyne's recategorization of the WISC-R. Journal of Educational  Research. 13 (4), 237-240. Wechsler, D. (1974). Manual for the Wechsler Intelligence  Scale for Children - Revised. N.Y.: Psychological Corporation. Weery, John S., Methven, James R., Fitzpatrick, Joanne, Hamish, J. & Dixon, R. (1983). The Interrater reliability of DSM-111 in Children. Journal of Abnormal Child Psychology. 11/ (3), 341-354. Whitehouse, C. (1983). Analysis of WISC-R Coding Performance of Normal & Dyslexic Readers. Perceptual and Motor S k i l l s . 51, 951-960. Witkin, H . A . , Faterson, H . F . , Goodenough, D .R . , & Birnbaum, J . (1966). Cognitive patterning in mildly retarded boys. Child Development. 3J., 301-316. APPENDIX A Rating Scale for the Evaluation of Computer-Based Interpretations of the WISC-R Those recommended guidelines which deal spec i f i ca l ly with computerized interpretation were taken from a document entit led "Guidelines for Computer-Based Tests and Interpretation"(1984) and formed the basis for statements 1-8 in section A. The majority of statements (guidelines) are presented in the same format as the or ig ina l document. Some statements were altered somewhat to allow them to be incorporated into an acceptable format for rating. Statements 1,3,5,7&8 are supplemented with explanatory comments while statements 2,4&6 are not. This reflects the format taken in the or ig ina l document. 
The explanatory comments (those that are single spaced) were provided to help put the actual guidelines in proper prospective. When rating however, please rate only the degree to which the computerized interpretations meet the actual guidelines as opposed to the additional comments. Section B contains statements which address the quality of the computerized interpretations. You w i l l be asked to rate these statements based on your previous c l i n i c a l experience. Thank-you for your co-operation! Rater # Protocol # System Section A 1. Computer testing services should provide a manual reporting the rationale and evidence in support of computer based interpretation of test scores. The developer i s responsible for providing sufficient information in the manual so that users may judge whether the interpretive and/or clas s i f i c a t i o n systems are suited to their needs. The WISC-R interpretation system meets the requirements of the above guideline. Strongly Disagree Neutral Agree Strongly Disagree Agree 2. Information should be provided to the users of computerized interpretation services concerning the consistency of classifications, including, for example, the number of classifications and the interpretive significance of changes from one classification to adjacent ones. The WISC-R system meets the requirements of the above guideline. Strongly Disagree Neutral Agree Strongly Disagree Agree 69 3. The or ig inal scores used in developing interpretive statements should be given to test users. In some cases, the matrix of or ig ina l responses should be provided. The manual or in some cases, interpretive report, should describe how the interpretive statements are derived from the or ig inal scores. Professionals who provide assessment services bear the ultimate responsibi l i ty for providing accurate judgements about the c l ients they evaluate. I t may be possible to f u l f i l l these demands without unduly infringing on the testing service's proprietary rights . To evaluate a computer-based interpretation the test user must know at least two facts: a) the source of data on which the interpretive statements i s based; and (b) the test taker's score or scores on the relevant measures. (In addition raw data or item responses w i l l often be very useful.) The f i r s t requirement can be sat i s f ied , where possible, i f the testing service organizes interpretive statements according to the scale on which they are based, otherwise references statements in the report or provides in the manual a l l the interpretive statements in the program l ibrary and the scales and research on which they are based. The second requirement can be satisf ied by printing each test taker's test and scale prof i l e along with the narrative interpretations together, where appropriate, with the or ig inal set of responses. The WISC-R system meets the requirements of the above guideline. Strongly Disagree Disagree Neutral Agree Strongly Agree 4. Interpretive reports should include information about the consistency of interpretations and warnings related to common errors of interpretation. Test developers must provide information that users need to make correct judgements. Interpretive reports should contain warning statements to preclude overreliance on computerized interpretations. The WISC-R interpretation system meets the requirements of the above guideline. Strongly Disagree Neutral Agree Strongly Disagree Agree 5. 
The extent to which statements in an interpretive report are based on quantitative research versus expert c l i n i c a l opinion should be delineated. Some interpretations describe or predict objective behaviour while others describe states of mind or internal conf l ic ts . Some interpretations are quite speci f ic , others very general. Some make statements about the test taker's present condition, others make predictions about the future. Some make use of well established concensually understood constructs, others use common language terms with less c lear ly defined meaning. The type of interpretation deteritrines the kinds of evidence that should be provided to the user. The WISC-R system meets the requirements of the above guideline. Strongly Disagree Neutral Agree Strongly Disagree Agree 6. When statements in an interpetive report are based on expert c l i n i c a l opinion, the names and credentials of the experts along with the theoretical orientation of their interpretations should be provided to users. The WISC-R interpretation system meets the requirements of the above guideline. Strongly Disagree Neutral Agree Strongly Disagree Agree 7. When predictions of particular outcomes or specif ic recommendations are based on quantitative research, information should be provided showing the empirical relation between the c lass i f i ca t ion and the probabil i ty of cr i t er ion behaviour in the validation group. Computerized interpretation systems usually divide test takers into classes. Presentation of the relat ion among classes and the probabil i ty of a part icular outcome i s i s desirable (eg. through an expectancy table) as are va l id i ty coefficients between test scores and c r i t e r i a obtained from studies using conventional administration. The WISC-R interpretation system meets the requirements of the above guideline. Strongly Disagree Neutral Agree Strongly Disagree Agree 72 Note: This comment applies to 8 & 9. Some reports, especially in the area of school and vocational counselling, are meant to be given to the test taker. In many cases, this may be done with limited professional review of the the appropriateness of the report. In such cases developers bear a special burden to ensure that the report i s comprehensible. The reports should contain sufficient information to aid the test taker to understand properly the results , and suff icient warnings about possible misinterpretations with supplemental material provided where necessary. 8. Computer testing services should ensure that reports for users and/or test takers are comprehensible. The WISC-R interpretation system meets the requirements of the above guidelines. Strongly Disagree Neutral Agree Strongly Disagree Agree 9. Computer testing services should ensure that reports for users and/or test takers properly delimit (eg. variables such as age or sex that moderate interpretations) the bounds within accurate conclusions can be drawn. The WISC-R interpretation system meets the requirements of the above guideline. Strongly Disagree Neutral Agree Strongly Disagree Agree S e c t i o n B Based on your c l i n i c a l exper ience , p lease ra te the fo l lowing statements f o r P r o t o c o l # . 10. The extent to which the s t a t i s t i c a l c a l c u l a t i o n s provided by the WISC-R i n t e r p r e t a t i o n report ( i e . conf idence i n t e r v a l s , f a c t o r a n a l y s e s , s t rengths & weaknesses e t c . 
) a re adequate and/or congruent wi th your c l i n i c a l c a l c u l a t i o n s i s h i g h . S t rong ly Disagree Neutra l Agree S t rong ly Disagree Agree A d d i t i o n a l comments i f d e s i r e d : 11. The extent t o which the i n t e r p r e t a t i o n s provided by the WISC-R i n t e r p r e t a t i o n report) are adequate and/or congruent wi th your i n t e r p r e t a t i o n s o f the same p r o t o c o l , i s h i g h . S t rong ly Disagree Neutra l Agree S t rong ly Disagree Agree A d d i t i o n a l comments i f d e s i r e d : 12.The extent to which the recommendations provided by the _ WISC-R interpretation report) are adequate and/or congruent with what your recommendations would be, i s high. Strongly Disagree Neutral Agree Strongly Disagree Agree Additional comments i f desired: 13. The extent to which the WISC-R report would be useful in terms of i t s diagnostic u t i l i t y , i s high. Strongly Disagree Neutral Agree Strongly Disagree Agree Additional comments i f desired: 14. The extent to which the WISC-R report would be useful in terms of saving time for a "user", i s high. Strongly Disagree Neutral Agree Strongly Disagree Agree Additional comments i f desired: 75 15. The extent to which the WISC-R report would be adequate, as i t stands, for public d is tr ibut ion (ie. other professionals, parents/guardians), i s high. Strongly Disagree Neutral Agree Strongly Disagree Agree Additional comments i f desired: A P P E N D I X B 76 o a o m Z a II i Pooled Item Correlations for Svstems 1-3 — C O C O - 4 C n U I 4 k C J o I 8 "8 8 S 3 k SI O M O bioo bioo bioS O M S b ioS o i £ S i m 3 w - Sw2 1 0 1 -J u i O -J CJ I M o - i - O -O w t t o - n c n w ^ i ODWOI - w w O w c o ^ w « i O w < n O v u it i ii c n • o _ O U « ( J U S U M O O M O O -4 — 4 - ^1 c j O -4 — — - 4 01 O w c n o w M cn w co c o w c o ti i o S - £ S S o S - -Soi 2 M ™ • L K ' J ° - M C O u u a O M i k a d a o i S SwS S d S S-1" i " " <°^u 9 - i o c n CO w 01 O w M — w ^1 0 ) w CJ O w CJ II i II II u i m u cn co 8 S 8 o S 8 2 S 2 S S 2 ° m S m m " 2 u S ° U U > 0 " S O w S 2 w 2 3 i 5 8wS> - ~ j m £ , - ~ , u l « -"° n-jm n ,1 J>. — - 4 CD 4k - 4 01 O ^ i C O U > i U O - 4 CD o ^ A cj w t» to w co o w cj a i w - o w & o w 4k U l II I II II It I II S S 2 3 3 2 8 S 3 S S S g " » o " " 2 u ° 2 i 2 S i * g i * o i S c o i g o : : S : i S _ ' o -4 CO CJ "4 CO M »J 6 - g ^4 fO - - J O Q - 4 O l < 3 > w ^ - J w f l ) Q - w f O M w W ^ 4 w c 3 1 £». w CO I ^ c o £"2 S " S g ^3 - w u O W * - W U I O I O M O M - O M O SwS 2 w j SwS S i o - ^ m c j - , t o " - " ^ - ^ ° ° ^ II I ^ 8 S 2 2 S 2 ; S S s s l b M " b u S « " » s u S ' § - i S SCtS S i S 5-2 3 i „ o ; » 2 - 2 3 8 II i Ul — nj— ^K»~irwi^ > • .» O • CJ • CO Ol - O • Ul c j S m t o S ui 8 S i n M S £ A M - O u U l O M M O M O O O M M •Z S w 2 OwS S w S S w S "4 ^ CJ U M o ~i cn 5 -4 ~4 O O O -J CO . 
Pooled Item Correlations for Systems 1-3: Pearson correlation coefficients among the fifteen rating-scale items, pooled across the three systems (coefficient / (cases) / 1-tailed significance).

System 1 Item Correlations: Pearson correlation coefficients among Items 1-15 (9 cases per coefficient; coefficient / (cases) / 1-tailed significance).

System 2 Item Correlations: Pearson correlation coefficients among Items 1-15 (9 cases per coefficient; coefficient / (cases) / 1-tailed significance).

System 3 Item Correlations: Pearson correlation coefficients among Items 1-15 (9 cases per coefficient; coefficient / (cases) / 1-tailed significance).
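The matrices summarized above are ordinary Pearson product-moment correlations computed over the nine rated protocols for each system, with one-tailed significance values reported beside each coefficient. As an illustration only, the sketch below shows how such a 15 x 15 item intercorrelation matrix can be produced; the ratings array is a randomly generated placeholder and the variable names are mine, not part of the thesis materials or of any of the CBTI systems evaluated.

```python
# Minimal sketch: Pearson intercorrelations among 15 rating-scale items,
# computed over 9 rated protocols for one CBTI system.
# The ratings below are randomly generated placeholders, not the study data.
import numpy as np
from scipy import stats

n_protocols, n_items = 9, 15
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(n_protocols, n_items))  # 1-5 Likert-type ratings

r = np.zeros((n_items, n_items))
p = np.zeros((n_items, n_items))
for i in range(n_items):
    for j in range(n_items):
        # pearsonr returns the coefficient and a two-tailed p value;
        # halving the latter approximates the one-tailed significance
        # reported in the appendix for coefficients in the expected direction.
        coeff, p_two = stats.pearsonr(ratings[:, i], ratings[:, j])
        r[i, j] = coeff
        p[i, j] = p_two / 2.0

print(np.round(r, 4))  # coefficient matrix (Items 1-15)
print(np.round(p, 3))  # one-tailed significance values
```

With only nine cases per coefficient, each correlation of this kind is estimated with considerable sampling error.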
APPENDIX C

Examples of Computerized Test Interpretations

WISC-REPORT
PSYCHOLOGISTICS INC.

NAME: PROTOCOL2 TWO    SEX:    SCHOOL: JOHN DOE    GRADE:
DATE OF TEST: 03-19-82    DATE OF BIRTH: 07-06-66    RACE:
EXAMINER: MURRAY OTTER    CURRENT PLACEMENT:    REASON FOR REFERRAL:

THE SCORES LISTED BELOW WERE USED FOR COMPUTATIONS IN THIS REPORT. THESE AGE-CORRECTED SCALED SCORES SHOULD BE CHECKED CAREFULLY FOR ERRORS. IF DISCREPANCIES ARE FOUND, THE ENTIRE REPORT SHOULD BE REPROCESSED.

AGE CORRECTED SCALED SCORES:
INFORMATION 3          PICTURE COMPLETION 10
SIMILARITIES 8         PICTURE ARRANGEMENT 14
ARITHMETIC 9           BLOCK DESIGN 12
VOCABULARY 7           OBJECT ASSEMBLY 7
COMPREHENSION 8        CODING 7
DIGIT SPAN 13          MAZES 11

*** PROTOCOL2'S TEST AGE IS 15 YEARS, 8 MONTHS, AND 13 DAYS ***

VERBAL SCALED SUBTESTS (SCORE AND RANGE):
INFORMATION 3          EXTREMELY POOR
SIMILARITIES 8         BELOW AVERAGE
ARITHMETIC 9           AVERAGE
VOCABULARY 7           BELOW AVERAGE
COMPREHENSION 8        BELOW AVERAGE
DIGIT SPAN 13          ABOVE AVERAGE
AVERAGE VERBAL 8.00

PERFORMANCE SCALED SUBTESTS (SCORE AND RANGE):
PICTURE COMPLETION 10      AVERAGE
PICTURE ARRANGEMENT 14     SUPERIOR
BLOCK DESIGN 12            ABOVE AVERAGE
OBJECT ASSEMBLY 7          BELOW AVERAGE
CODING 7                   BELOW AVERAGE
MAZES 11                   AVERAGE
AVERAGE PERFORMANCE 10.16

WISC-R COPYRIGHT (C) 1974 BY THE PSYCHOLOGICAL CORPORATION. WISC-REPORT COPYRIGHT (C) 1982 BY L. MICHAEL HONAKER. ALL RIGHTS RESERVED.

VERBAL SCALE IQ SCORE 81          10%TILE
PERFORMANCE SCALE IQ SCORE 100    50%TILE
FULL SCALE IQ SCORE 89            23%TILE

95% CONFIDENCE INTERVAL FOR FULL SCALE IQ SCORE = 83 TO 95

VERBAL IQ SCORE - PERFORMANCE IQ SCORE = -19    P<.01

FACTOR SCORES:
VERBAL COMPREHENSION (VCQ) 79             08%TILE
PERCEPTUAL ORGANIZATION (POQ) 105         63%TILE
FREEDOM FROM DISTRACTIBILITY (FDQ) 97     42%TILE

FACTOR DIFFERENCES:
VCQ - POQ = -26    P<.01
VCQ - FDQ = -18    P<.01
POQ - FDQ = 8      (NS)

SUBTEST DIFFERENCES:

SUBTEST SCORE MINUS MEAN VERBAL SCORE:
INFORMATION -5.00       P<.01
SIMILARITIES 0.00       (NS)
ARITHMETIC 1.00         (NS)
VOCABULARY -1.00        (NS)
COMPREHENSION 0.00      (NS)
DIGIT SPAN 5.00         P<.01

SUBTEST SCORE MINUS MEAN PERFORMANCE SCORE:
PICTURE COMPLETION -0.17      (NS)
PICTURE ARRANGEMENT 3.83      P<.05
BLOCK DESIGN 1.83             (NS)
OBJECT ASSEMBLY -3.16         (NS)
CODING -3.16                  (NS)
MAZES 0.83                    (NS)

WISC-REPORT
PSYCHOLOGISTICS INC.
NAME: PROTOCOL2 TWO    SEX:    SCHOOL: JOHN DOE    GRADE:
CURRENT PLACEMENT:    REASON FOR REFERRAL:
DATE OF TEST: 03-19-82    DATE OF BIRTH: 07-06-66    RACE:
EXAMINER: MURRAY OTTER

SUBTEST SCALED SCORES:
INFORMATION 3          PICTURE COMPLETION 10
SIMILARITIES 8         PICTURE ARRANGEMENT 14
ARITHMETIC 9           BLOCK DESIGN 12
VOCABULARY 7           OBJECT ASSEMBLY 7
COMPREHENSION 8        CODING 7
DIGIT SPAN 13          MAZES 11

VERBAL SCALE IQ SCORE 81
PERFORMANCE SCALE IQ SCORE 100
FULL SCALE IQ SCORE 89 (23%TILE)

ON THIS ADMINISTRATION OF THE WECHSLER INTELLIGENCE SCALE FOR CHILDREN-REVISED, PROTOCOL2 OBTAINED A VERBAL SCALE IQ SCORE OF 81 AND A PERFORMANCE SCALE IQ SCORE OF 100. THIS RESULTS IN A FULL SCALE IQ SCORE OF 89 WHICH FALLS WITHIN THE LOW AVERAGE (DULL) RANGE OF INTELLECTUAL ABILITIES. THE FULL SCALE IQ SCORE CORRESPONDS TO THE 23%TILE WHICH INDICATES HE IS FUNCTIONING INTELLECTUALLY AT A LEVEL EQUAL TO OR BETTER THAN APPROXIMATELY 23% OF THE CHILDREN THE SAME AGE.

OVERALL, PROTOCOL2 PERFORMED SIGNIFICANTLY POORER ON ITEMS TAPPING VERBAL COMPREHENSION SKILLS THAN HE DID ON TASKS REQUIRING PERCEPTUAL ORGANIZATION. THE ABILITY TO ATTEND TO, CONCENTRATE ON, AND MANIPULATE NUMERICAL MATERIAL IS SIGNIFICANTLY BETTER THAN PERFORMANCE ON VERBAL COMPREHENSION ITEMS.

EXAMINATION OF PROTOCOL2'S PERFORMANCE ACROSS THE DIFFERENT SUBTESTS INDICATES HE EXHIBITED A PATTERN OF STRENGTH ON ITEMS REFLECTING MENTAL ALERTNESS AND SHORT TERM MEMORY OF NUMERICAL STIMULI AND ON SUBTESTS TAPPING PLANNING ABILITY. A PARTICULAR PATTERN OF WEAKNESS WAS EXHIBITED ON ITEMS REFLECTING THE FUND OF GENERAL INFORMATION AVAILABLE TO PROTOCOL2.

IN COMPARISON TO PROTOCOL2'S OVERALL PERFORMANCE ON VERBAL COMPREHENSION ITEMS, HE EXHIBITED RELATIVE STRENGTH ON SUBTESTS MEASURING:
** SHORT TERM AUDITORY MEMORY AND THE ABILITY TO REMEMBER THE ORDER OF SYMBOLIC MATERIAL

WISC-R COPYRIGHT (C) 1974 BY THE PSYCHOLOGICAL CORPORATION. WISC-REPORT COPYRIGHT (C) 1982 BY L. MICHAEL HONAKER. ALL RIGHTS RESERVED.

SIGNIFICANT RELATIVE WEAKNESSES ON THE VERBAL ITEMS WERE EVIDENCED ON SUBTESTS TAPPING:
** RANGE OF GENERAL FACTUAL INFORMATION

PERFORMANCE ON PERCEPTUAL ORGANIZATION SUBTESTS INDICATES RELATIVE STRENGTH ON TASKS MEASURING:
** ANTICIPATION OF CONSEQUENCES AND TEMPORAL SEQUENCING; INTERPRETATION OF SOCIAL SITUATIONS AND NONVERBAL REASONING

SIGNIFICANT RELATIVE WEAKNESSES WERE NOT EXHIBITED ON ANY OF THE PERCEPTUAL ORGANIZATION SUBTESTS.

IN COMPARISON TO OTHER CHILDREN PROTOCOL2'S AGE, HE EXHIBITED SIGNIFICANT STRENGTHS ON SUBTESTS MEASURING:
** SHORT TERM AUDITORY MEMORY AND THE ABILITY TO REMEMBER THE ORDER OF SYMBOLIC MATERIAL
** ANTICIPATION OF CONSEQUENCES AND TEMPORAL SEQUENCING; INTERPRETATION OF SOCIAL SITUATIONS AND NONVERBAL REASONING

SIGNIFICANT WEAKNESSES RELATIVE TO HIS AGE GROUP WERE EXHIBITED ON SUBTESTS REFLECTING:
** RANGE OF GENERAL FACTUAL INFORMATION
** LANGUAGE DEVELOPMENT AND WORD KNOWLEDGE
** ABILITY TO BENEFIT FROM SENSORY-MOTOR FEEDBACK; CONSTRUCTIVE ABILITY IN ABSENCE OF EXTERNAL MODEL
** SPEED OF MENTAL OPERATION AND SHORT TERM VISUAL MEMORY; ABILITY TO LEARN A NEW VISUAL-MOTOR TASK QUICKLY

IMPLICATIONS: THE FOLLOWING HYPOTHESES CONCERNING TREATMENT AND NEED FOR FURTHER EVALUATION ARE SUGGESTED BY THE PRESENT RESULTS. THESE HYPOTHESES SHOULD BE EVALUATED IN LIGHT OF PROTOCOL2'S CURRENT ACADEMIC FUNCTIONING, CULTURAL AND RACIAL BACKGROUND, AND SITUATIONAL FACTORS THAT MAY HAVE AFFECTED PERFORMANCE.
PRESENT EVALUATION RESULTS SUGGEST THAT PROTOCOL2 MAY EXPERIENCE MILD DIFFICULTY IN PERFORMING AT A LEVEL CONSISTENT WITH PEERS ON ACADEMIC TASKS. SOME INDIVIDUALIZED AND/OR REMEDIAL INSTRUCTION MAY BE NECESSARY IN ONE OR MORE AREAS.

TEST RESULTS INDICATE THAT GENERALLY PROTOCOL2 PERFORMED SIGNIFICANTLY BETTER ON TASKS REQUIRING PERCEPTUAL ORGANIZATION THAN ON ITEMS REFLECTING VERBAL COMPREHENSION SKILLS. VERBAL DEFICITS/DYSFUNCTIONS MAY BE INTERFERING WITH OPTIMAL FUNCTIONING. FURTHER EVALUATION TO ASCERTAIN THE PRESENCE AND EXTENT OF ANY VERBAL RECEPTIVE/EXPRESSIVE DIFFICULTIES IS RECOMMENDED. IN THE CLASSROOM, IT MAY PROVE HELPFUL TO PRESENT MATERIAL THROUGH VISUAL MEANS RATHER THAN VERBAL MEANS, PARTICULARLY IN SUBJECTS WHERE PROTOCOL2 IS LEARNING NEW MATERIAL OR IN AREAS WHERE REMEDIATION IS NEEDED.

MURRAY OTTER
EXAMINER

PROTOCOL2 TWO          MURRAY OTTER
Date of Test   82 yr.  3 mo. 19 day
Date of Birth  66 yr.  7 mo.  6 day
Age            15 yr.  8 mo. 13 day

This computer report was developed by Charles L. Nicholson, Ph.D. It is based on the WISC-R scaled scores, the three IQ's, achievement test results, and standard scores. It also contains interpretations, recommendations, the WISC-R factors and other descriptive statements. Some of these statements should be considered as only HYPOTHESES which should be investigated further with other instruments or observations. The validity of this report depends on the validity of the subtest scores, achievement test results and responses of PROTOCOL2.

EVALUATIONS BASED ON THE SUBTESTS OF THE WISC-R.
Education, cultural knowledge and long term memory is very low.
Ability to see relationships between things and ideas is average.
Ability to calculate and do simple mental arithmetic is average.
Verbal word knowledge, word fluency and judgment is below average.
Practical social knowledge and social judgment is average.
Short-term verbal number memory and attention span is above average.
Ability to separate essential and nonessential parts is average.
Ability to plan ahead, understand sequences of action is above average.
Ability to make an abstract design from its parts is average.
Ability to see and make an object from its parts is below average.
Ability to learn and memorize non-verbal material is below average.
Ability to concentrate and plan ahead non-verbally is average.

WISC-R Subtests, Scaled Scores, Percentiles and IQs

Subtest                Scaled Score    %tile
Information                  3            1
Similarities                 8           25
Arithmetic                   9           37
Vocabulary                   7           16
Comprehension                8           25
Digit Span                  13           84
Picture Completion          10           50
Picture Arrangement         14           91
Block Design                12           75
Object Assembly              7           16
Coding                       7           16
Mazes                       11           63

Verbal Scale IQ         81    10
Performance Scale IQ   100    50
Full Scale IQ           89    23

Based on the Verbal Scale, mental age is approximately 12.6 years; achievement should be about 7.1 grade level; and a theoretical achievement at age 16 should be about 7.4 grade level.
Based on the Performance Scale, mental age is approximately 15.6 years; achievement should be about 10.1 grade level; and a theoretical achievement at age 16 should be about 10.5 grade level.

The WISC-R Verbal Scale shows ability at the dull normal level. WISC-R Performance Scale ability is at the average level. The WISC-R Full Scale shows ability at the dull normal level.

The 95% confidence limits for the Verbal Scale are 74 and 88. The 95% confidence limits for the Performance Scale are 91 and 109. The 95% confidence limits for the Full Scale are 83 and 95. This means that with 95% certainty PROTOCOL2's true Verbal IQ, Performance IQ and Full Scale IQ lie between these limits.

Based on overall ability, weakness is shown in the following areas: general education, cultural knowledge and long term memory.

Based on overall ability, strength is shown in the following areas: short term verbal memory, concentration and attention span; ability to plan ahead, note sequence and consequence of action; ability to construct an abstract design from its parts; ability to attend, concentrate and non-verbal planning ahead.

The 50% discrepancy level based on the Verbal Scale is 3.5. The 50% discrepancy level based on the Performance Scale is 5. The 50% discrepancy level based on the Full Scale is 4.2. Achievement below these grade levels is critical and should be considered in a possible classification of learning disabled.

PROTOCOL2 is able to interpret and organize visually perceived stimuli and materials better than verbal information and stimuli. PROTOCOL2 can function in an undistracted manner. This ability is greater than his/her ability to interpret verbally presented material.

The level of the factors and influences are indicated below. If the factor or influence is significantly above or below the level expected this is also shown, along with its comparison to the mean of all the scaled scores, and the mean of the Verbal or Performance scaled scores, where applicable. Those factors and influences which are significantly above the expected level could be considered as possible assets for PROTOCOL2. Those factors which are significantly below the level expected could be contributing to PROTOCOL2's low performance in school and on the WISC-R. Some of these should be investigated further by other instruments.

The following factors and influences are calculated using Verbal and Performance subtests and are compared to the mean of all the scaled scores.
The rating columns range from Very High, High, Above Average, Average and Below Average to Low and Very Low, with significance flagged against the Verbal or Performance mean and against the Full Scale mean. The factors in this group are: freedom from distractability; sequencing; facility with numbers; freedom from anxiety; cognition (Guilford); reasoning; evaluation (Guilford); distinguishes essential from nonessential details; learning ability; social judgment; concentration; cultural opportunities.

The following factors and influences are based on Verbal scaled scores and are compared to the mean of the Verbal and all scaled scores: memory (Guilford); mental alertness (above the expected level); verbal conceptualization; acquired knowledge; degree of abstract thinking; fund of information (below the expected level); long-term memory; verbal concept formation; verbal expression; extent of reading and/or interests; richness of environment; attention span.

The following factors and influences are based on Performance scale scores and are compared to the mean of the Performance and all scale scores: perceptual organization; spatial; integrated brain functioning; planning ability (above the expected level); visual-motor coordination; 'culture-fair' ability; ability to respond when uncertain; convergent production (Guilford); holistic (right brain) functioning; reproduction of a model; synthesis; visual memory; visual organization without motor activity (above the expected level); visual perception of abstract stimuli; visual perception of meaningful stimuli; cognition style, field dependence-field independence; working under exact time pressure.

Some of the following statements, especially those referring to behavior and adjustment, should be considered as only HYPOTHESES which should be investigated further with other instruments or observations.

PROTOCOL2 has a pattern of subtest scores which resembles those with a generalized state of serious emotional disturbance. Although this pattern of subtest scores is not associated with any particular diagnostic group, only about 10% of the population has this pattern.

The following subtest scaled score(s) are above the level expected. Some POSSIBLE reasons for the elevations are listed.

Digit Span. Possible causes are: good short-term verbal memory; ability to concentrate; number ability; ability to organize and reorganize verbally; possible paranoid personality.

Picture Arrangement. Possible causes are: ability to plan ahead; ability to sequence; ability to note detail; good social knowledge; ability to see consequences of action; responsive to time demands.

Block Design.
Possible causes are: good visual motor ability; ability to visualize well; good ability to integrate parts into an abstract whole; a perfectionist personality; ability to respond to time demands.

Mazes. Possible causes are: good ability to plan ahead; good visual-motor ability; good facility with a pen; good visual tracking ability; a perfectionist personality.

The following subtest scaled score(s) is (are) below the level expected. POSSIBLE causes for the depressed scores are listed.

Information. Possible causes are: lack of exposure to the culture and environment; poor long term memory; a narrow range of interests; lack of educational opportunities; a possible deficit in auditory input; possible repression; a verbal output problem.

Recommendations

Based on the subtests of the WISC-R, suggested remediations are in the TEACHER'S GUIDE by Nicholson and Alcorn, published by Western Psychological Services, 12031 Wilshire Boulevard, Los Angeles, California 90025, for the following subtests at the indicated developmental level of 12-16 years (D): Information (1); and for the following developmental years 12-16 years (D):

There is a significant difference between the VIQ and PIQ with the PIQ higher. The TEACHER'S GUIDE contains some possible reasons (14).

PROTOCOL2 indicated good aptitude to concentrate on oral presentation. Short-term memory is adequate, but perhaps not long term memory. PROTOCOL2 needs to verbalize quietly but aloud when working.

SUBTEST REPORT FOR PROTOCOL2 TWO
WECHSLER INTEL. SCALE FOR CHILDREN
The Explorer

VERBAL-PERFORMANCE IQ DATA
VERBAL-PERFORMANCE DISCREPANCY MAY BE INDICATED.
PERFORMANCE/VERBAL DISCREPANCY: PERFORMANCE SKILLS BETTER DEVELOPED.
VISUAL NONVERBAL MODE BETTER THAN AUDITORY PROCESSING.
POSSIBLE READING OR LANGUAGE DEFICITS.

SUBTEST: INFORMATION
PROTOCOL2 OBTAINED A VERY LOW SCORE OF 3 POINTS. INFORMATION TESTS GENERAL KNOWLEDGE; ALERTNESS AND AMBITION; GOOD MEASURE OF LONG TERM MEMORY. BELOW AVERAGE SCORES MAY SHOW POOR MEMORY; OR LIMITED CULTURAL BACKGROUND; LACK OF INTEREST IN THE SURROUNDING ENVIRONMENT; LACK OF INTELLECTUAL AMBITION.

SUBTEST: SIMILARITIES
PROTOCOL2 OBTAINED AN AVERAGE SCORE OF 8 POINTS. SIMILARITIES TESTS LOGICAL CHARACTER OF THINKING; SEEING RELATIONSHIPS; ABSTRACT GENERALIZATIONS; INFERENCE. HYPOTHESES NOT GIVEN FOR NORMAL RANGE.

SUBTEST: ARITHMETIC
PROTOCOL2 OBTAINED AN AVERAGE SCORE OF 9 POINTS. ARITHMETIC TESTS PERFORMANCE OF REASONING IN A TIME LIMIT. MEASURES CONCENTRATION; ATTENTION; AND MATH. HYPOTHESES NOT GIVEN FOR NORMAL RANGE.

SUBTEST: VOCABULARY
PROTOCOL2 OBTAINED A BELOW AVERAGE SCORE OF 7 POINTS. VOCABULARY TESTS ABILITY TO DEFINE WORDS; MEASURES EXTENT AND FLUENCY OF VOCABULARY AND LONG TERM MEMORY. POOR SCORES MAY INDICATE THAT THE STUDENT'S THOUGHT PATTERNS HAVE BECOME RIGID AND OVERLY CONCRETE AND THE STUDENT MAY NOT BE ABLE TO GRASP ABSTRACT CONCEPTS.

SUBTEST: COMPREHENSION
PROTOCOL2 OBTAINED AN AVERAGE SCORE OF 8 POINTS.
COMPREHENSION TESTS ABILITY TO MAKE JUDGMENTS IN SOCIAL SITUATIONS; USE OF COMMON SENSE AND UNDERSTANDING SOCIETY. HYPOTHESES NOT GIVEN FOR NORMAL RANGE.

SUBTEST: DIGIT SPAN
PROTOCOL2 OBTAINED AN ABOVE AVERAGE SCORE OF 13 POINTS. DIGIT SPAN REQUIRES CHILD TO REMEMBER AND REPEAT A SERIES OF DIGITS. INDEX OF ATTENTION; CONCENTRATION; MEMORY. ABOVE AVERAGE SCORES MAY SHOW THAT THE STUDENT'S ABILITY TO CONCENTRATE IS GOOD. LEVEL OF SUCCESS MAY SHOW ABILITY TO THINK IN A FLEXIBLE MANNER WITH A CORRESPONDINGLY LOW LEVEL OF ANXIETY.

SUBTEST: PICT. COMPLETION
PROTOCOL2 OBTAINED AN AVERAGE SCORE OF 10 POINTS. PICT. COMP. TESTS THE ABILITY TO DETECT MISSING ELEMENTS IN PICTURES. MEASURES ABILITY TO NOTE DETAILS. HYPOTHESES NOT GIVEN FOR NORMAL RANGE.

SUBTEST: PICT. ARRANGEMENT
PROTOCOL2 OBTAINED AN ABOVE AVERAGE SCORE OF 14 POINTS. PICT. ARRANGEMENT TESTS THE ABILITY TO REARRANGE A SET OF PICTURES TO MAKE A SENSIBLE, SEQUENTIAL STORY. HIGH SCORES MAY REFLECT THAT THE STUDENT'S ABILITY TO PLAN AS WELL AS THE STUDENT'S PERCEPTION AND VISUAL PERCEPTION ARE VERY WELL DEVELOPED. HE/SHE IS SOCIALLY ALERT AND HAS WELL DEVELOPED SOCIAL INTELLIGENCE.

SUBTEST: BLOCK DESIGN
PROTOCOL2 OBTAINED AN AVERAGE SCORE OF 12 POINTS. BLOCK DESIGN REQUIRES THE CHILD TO ARRANGE COLORED BLOCKS TO COPY A GEOMETRIC DESIGN. HYPOTHESES NOT GIVEN FOR NORMAL RANGE.

SUBTEST: OBJECT ASSEMBLY
PROTOCOL2 OBTAINED A BELOW AVERAGE SCORE OF 7 POINTS. OBJECT ASSEMBLY REQUIRES THE CHILD TO ASSEMBLE A PUZZLE; IT TESTS THINKING AND WORKING HABITS. A POOR SCORE MAY INDICATE THAT THE STUDENT'S VISUAL MOTOR COORD. IS NOT WELL DEVELOPED AND THAT HIS/HER LOGIC AND REASONING ABILITIES AS APPLIED TO SPATIAL RELATIONSHIPS ARE NOT COMMENSURATE WITH OTHER SKILLS; SPATIAL CONCEPTUALIZATION IS NOT WELL DEVELOPED.

SUBTEST: CODING
PROTOCOL2 OBTAINED A BELOW AVERAGE SCORE OF 7 POINTS. CODING REQUIRES ASSOCIATION OF NUMBERS AND GEOMETRIC SYMBOLS; TESTS CONCENTRATION; LEARNING; VISUAL MEMORY. POOR SCORES COULD INDICATE THAT THE STUDENT HAS POOR CONCENTRATION AND/OR LIMITED VISUAL MEMORY.

SUBTEST: MAZES
PROTOCOL2 OBTAINED AN AVERAGE SCORE OF 11 POINTS. MAZES REQUIRES CHILD TO DRAW A PATH OUT OF A MAZE; TESTS MOTOR CONTROL; CONCENTRATION; IMPULSE CONTROL. HYPOTHESES NOT GIVEN FOR NORMAL RANGE.

SCATTERPLOT FOR: PROTOCOL2 TWO
WECHSLER INTEL. SCALE FOR CHILDREN
IQ axis: 70, 85, 100, 115, 130 (S.D. -2 to +2; %ILE 2, 16, 50, 84, 98). V.I.Q. = 81, P.I.Q. = 100, F.I.Q. = 89.
Subtest and factor scores are plotted on a scaled-score axis running from 1 (LOW) through 8-12 (AVG) to 19 (HIGH).
VERBAL: INFORMATION, SIMILARITIES, ARITHMETIC, VOCABULARY, COMPREHENSION, DIGIT SPAN.
PERFORMANCE: PICT. COMPLETION, PICT. ARRANGEMENT, BLOCK DESIGN, OBJECT ASSEMBLY, CODING, MAZES.
FACTOR ANALYSIS: VERBAL COMP., PERCEPT. ORGAN., BEST MEASURE OF G, DISTRACTABILITY, FIELD DEPEN., FIELD INDEP., VERBAL CONCEPT., SPATIAL, SEQUENCING, ACQ. KNOWLEDGE, R. BRAIN PROCESS., INT. FUNCTIONING, SIMULTANEOUS, SUCCESSIVE, VIS. ORG., VIS-MOTOR COORD., REASONING, RECALL, PERFORM. SCALE COGN., CONV. PRODUCT., MUCH EXPRESS. REQ., LITTLE EXPRESS. REQ.
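The first two reports above quote 95% confidence limits of 83 to 95 for the obtained Full Scale IQ of 89. As a worked illustration only, these limits are consistent with the conventional standard-error-of-measurement calculation, assuming the WISC-R Full Scale standard deviation of 15 and a Full Scale reliability of roughly .96 (approximate manual values, not restated in the reports themselves):

```latex
% Sketch of the confidence limits implied by the reports above, assuming
% SD = 15 and Full Scale reliability r_xx of about .96 (approximate manual values).
\[
  SEM = SD\,\sqrt{1 - r_{xx}} = 15\sqrt{1 - .96} = 3.0
\]
\[
  \text{95\% limits} = IQ_{\text{obs}} \pm 1.96 \times SEM = 89 \pm 5.9 \approx 83 \text{ to } 95
\]
```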
