UBC Theses and Dissertations

Practitioner agreement on problem identification in consultation. Brix, Patricia A. (1994)

Full Text

PRACTITIONER AGREEMENT ON PROBLEM IDENTIFICATION IN CONSULTATION

by

PATRICIA A. BRIX

B.Sc., The City College of New York, 1974
M.A., Cardinal Stritch College, 1985

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF GRADUATE STUDIES (Department of Educational Psychology and Special Education)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
November, 1994

© Patricia A. Brix, 1994

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Educational Psychology and Special Education
The University of British Columbia
Vancouver, Canada

ABSTRACT

This was a descriptive study of the agreements reached by learning assistants and classroom teachers when identifying a student's problem(s) during a consultative problem identification interview. The behavioral consultation research literature suggested that problem identification was a critical component of the problem solving process (Bergan & Tombari, 1976); however, the reliability of information gathered during consultation interviews required further investigation (White & Edelstein, 1991). This study addressed the issue of reliability of the problem identification interview in consultation by examining interrater and interparticipant agreements as to the priority, nature and number of problems identified during the interview.

Nine learning assistance teachers conducted problem identification interviews with each of four classroom teachers from their individual schools regarding students whom the teachers identified as difficult to teach. Participants rated their problem identification interviews with an evaluative rating scale of interview helpfulness, and of levels of problem identification and shared understanding in their interview dyad. Post-consultation interviews with each participant revealed the levels to which each identified the presenting problems in priority by nature and number. Results reported the level to which each interview dyad (N = 36) agreed upon the problem(s) identified. Two raters gave independent ratings to the level of shared understanding of the problem(s) identified by the participants, as well as to the priority, number and nature of the problem(s). Participant-rater agreements were determined for the same variables.

The results reported a moderate level of agreement (kappa = .66) between the participants as to the nature of the highest priority problem. A moderate level of agreement was determined between Rater 2 and the learning assistance teachers (kappa = .67) and the classroom teachers (kappa = .78) regarding the nature of the highest priority problem as well. The implication of these findings suggested that the dynamic process of problem identification is reliable.
However, the process may result in a lack of complete agreement between participants until it produces problem descriptions which are specific enough to allow problem solution to be attempted. A replication of this study is needed to further validate these results. Further research is warranted in order to confirm the level at which problem identification is completed.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
ACKNOWLEDGEMENTS

CHAPTER ONE: INTRODUCTION TO THE PROBLEM
    Prereferral Intervention
    Behavioral Assessment
    Consultation
    Interviews
    The Purpose of This Study
    Significance of the Study
    Definition of Terms
    Scope and Limitations
    Summary

CHAPTER TWO: REVIEW OF THE LITERATURE
    Consultative Problem Solving
    The Initial Interview
    Problem Identification Interview
    Empirical Support for Problem Identification
    Clinical Behavior Therapy Interview
    Interviewer Training
    Consultee Problem Identification
    Current Practices
    Summary of the Problem

CHAPTER THREE: DESIGN AND METHODOLOGY
    Nature of the Study
    Procedure
    Participants
    Setting
    Assignment of Participants
    Instrumentation
    Summary

CHAPTER FOUR: RESULTS
    Demographics
    Participants
    Classroom Teachers/Consultees
    Prior Experience with Consultation
    Summary
    Psychometrics
    Participant Interview Evaluation Measure: Part I
    Interview Helpfulness
    Interview Problem Identification
    Shared Understanding Between Participants
    Post Consultation Interview Agreement
    Interparticipant Agreement
    Interview Helpfulness
    Problem Identification
    Shared Understanding
    Participant-Rater Agreement
    Participant Interview Evaluation Measure: Part II
    Problem Identification by Number, Nature, and Priority
    Number of Problems
    Nature of the Problem
    Reliability of Ratings of the Nature of the Problem
    Other Problems Missed or Ignored
    Summary

CHAPTER FIVE: SUMMARY AND CONCLUSIONS
    Interview Evaluation Measure, Part I
    Interparticipant and Interrater Agreement
    Interview Evaluation Measure, Part II: Number of Problems
    Nature of the Problems
    Implications
    Limitations
    Future Directions

REFERENCES
APPENDICES

LIST OF TABLES

Table 4.1: Participant Demographic Information
Table 4.2: Participant Demographic Information
Table 4.3: Participant Familiarity with Consultation
Table 4.4: Participant Familiarity with Consultation
Table 4.5: Means (and Standard Deviations) of Learning Assistance and Classroom Teacher Ratings of Interview Characteristics Across Schools
Table 4.6: Distribution of Learning Assistants' and Classroom Teachers' Ratings of Interview Helpfulness
Table 4.7: Distribution of Learning Assistants' and Classroom Teachers' Ratings of Problem Identification
Table 4.8: Distribution of Learning Assistants' and Classroom Teachers' Ratings of Shared Understanding
Table 4.9: Interrater Agreement
Table 4.10: Participant-Rater Agreement
Table 4.11: Participant Interview Evaluation, Part II: Mean Number of Problems Reported by Participants in Individual Schools
Table 4.12: Problem Identification: Number of Problems Identified by Participants
Table 4.13: Interrater Agreement on the Number of Identified Problems per Interview
Table 4.14: Problem Identification: Nature of the Problem and Ranking Identified by Participants
Table 4.15: Interrater Agreement on the Nature of the Highest Priority Problem
Table 4.16: Comparison Agreement Between Participants and Rater 2
Table 4.17: Other Problems Unreported During the Interview

ACKNOWLEDGEMENTS

Deus Caritas Est

An undertaking of this kind cannot be, and was not, done alone. I am grateful for the direct and indirect contributions made by so many colleagues, friends, and former students to this work. Notable contributions were made by the following persons, to whom I must express my greatest appreciation.

It is to Dr. William McKee, for his patience and tireless effort in supervising this project from inception to completion, that I owe a tremendous debt of gratitude. Thanks, Bill, for your willingness to share your wisdom, time, and expertise in support of my emergent skills as a researcher. My gratitude also extends to the members of my committee, Dr. Nand Kishor and Dr. Jon Shapiro, for their timely assistance and encouragement. An additional word of thanks goes to the members of the EPSE faculty, staff and graduate students for their support, with special thanks to Dr. William Reynolds, the PRTC staff and Leona Spencer for their assistance along the way.

I wish to acknowledge the contribution of the Learning Assistance Teachers and Classroom Teachers from the Vancouver Catholic Schools for their willingness to risk participation in research by sharing their consultative skills to further the field of educational psychology.

I joyfully owe a continuing debt of gratitude to the Sisters of Charity of St. Vincent de Paul (Halifax) for their willingness to support and share in my dream as it unfolds within our communal commitment to making the love of God visible in the world today. A special acknowledgement goes to Sister Joan Butler in appreciation for her visible support and inspiration.

Long distance thanks extend to the Eastern connection whose love and support continue to sustain me: my parents, my siblings and their families, and especially Hobo: "Belay on. Climbing."

CHAPTER ONE: INTRODUCTION TO THE PROBLEM

The past two decades have brought about a growing number of changes in the field of special education regarding the classification and placement of special needs students. A reflection of these changes is mirrored in the evolving roles of special education service providers, in particular that of the school psychologist. Traditionally, the major role of school psychologists has been the identification and classification of special needs students through standardized assessment procedures. Known as the "gatekeepers," school psychologists are responsible for identifying students in need of special educational placement and students who can return to the regular education stream (Will, 1988). Reschly (1988) reported that up to two-thirds of the school psychologist's time was spent in determining student eligibility for special education.

The growing influx of students through the referral/classification/placement process led to questioning of the efficiency and effectiveness of traditional assessment practices. Will (1986) contended that this practice resulted in the development of two parallel systems of education in which services had become fragmented, reducing opportunities for communication and collaboration of resources.
Another concern with this practice has been the resulting denial of services to "at-risk" students who do not fall within the established educational policy boundaries of "special needs," but for whom the availability of some psychological or educational interventions would mean the difference between a successful and an unsuccessful school experience.

The US Department of Education responded to this growing concern by establishing the Regular Education Initiative (Will, 1988), which proposed the expansion of the regular education system to include all students. School psychologists need not abandon their assessment practices, but should take a proactive stance by using intervention and prevention techniques to assist classroom teachers in their role of educating all students, particularly those with special learning needs. Prior to the Regular Education Initiative, the National Association of School Psychologists issued a position statement challenging the inadequacy of assessment measurement techniques used for categorical placement of students and supporting assistance with student program development (Kratochwill & Sheridan, 1990). These proposed changes suggested the need for a new paradigm of service delivery for all students requiring educational and psychological services in schools.

The need for alternatives to current practices was further supported by empirical research on assessment methods. Lilly (1988) reviewed critiques of the special education and regular education approaches to traditional assessment and classification practices, reporting that such practices did not offer enough empirical evidence to support continued use. Reschly (1988) stated concern with the psychometric adequacy of commonly used assessment instruments. Rosenfield and Reynolds (1990) cited examples of the lack of treatment utility of current assessment practices: standardized assessment results were not easily transformed into practical interventions. Another important consideration of the referral/assessment/placement outcome was its orientation to funding and eligibility issues. Inappropriate diagnoses and mislabelling were not uncommon occurrences, particularly with a large number of referrals.

Despite expressed skepticism with traditional assessment methodology, it continues to be utilized. Alternative assessment methods are increasingly used, but they need further research to establish their effectiveness as supplemental or replacement procedures for traditional ones. Suggested skills to supplement traditional assessment practices have included: more structured approaches to gathering client information from direct observation of behaviors and their surrounding environmental events; increasing the practitioner's knowledge of behavioral change principles and instructional design; and interpersonal relationship skills such as those inherent to consultation methodology (Reschly, 1988, p. 468). These skills typify the methodology of prereferral intervention, behavioral assessment, and consultation, which are among the more popular alternative assessment practices. They support the role of school psychologists as problem-solvers, decision-makers and enablers of other professionals to do the same. Each of these practices shares a similar approach to problem solving and receives the same criticisms as well.

Prereferral Intervention

A focal shift from standardized assessment practice to an intervention-oriented one has received more recent attention and support (Will, 1988).
A prereferral intervention approach identifies problems and intervening strategies which will assist the student to remain in an appropriate environment, either in place of or prior to formal referral. This approach potentially reduces the number of referrals, classifications and placements into special education. Support for this model was demonstrated in the Carter and Sugai (1989) nationwide survey of US State Departments of Education: nearly 75% of the individual states' policies supported some form of a prereferral intervention model. The proposed Special Education Guidelines (1994) in British Columbia encouraged teachers to use this approach prior to consulting with resource personnel.

As popular as it seemed, this model was also reported by Carter and Sugai (1989) as lacking sufficient empirical support to demonstrate its effectiveness in keeping students in the mainstream of education. They encouraged its continued use, however, and suggested alternative practices to supplement those reported in the survey. Kratochwill and Sheridan (1990) favorably reviewed several models of prereferral intervention but also noted that empirical support was lacking. Practitioners were encouraged to continue to use this approach for its beneficial aspects while on-going research was suggested.

Behavioral Assessment

A similar alternative service delivery model is found in behavioral assessment. Practitioners utilize similar skills in gathering information about the client and interfering problems using a functional analysis approach (O'Neill, Horner, Albin, Storey & Sprague, 1990). This process involves targeting a problem behavior, identifying its surrounding events, and introducing an intervention intended to alter the behavior to a desired state. Behavioral assessment utilizes standard assessment techniques of information gathering such as questionnaires, checklists, self-report measures and standardized instruments in order to provide a complete contextual picture of the targeted behavior. Effective practitioners characteristically can apply constructs from social learning theory as a basis for determining solutions to problem behaviors or problematic situations (Alessi, 1989). The interventions developed during this process may be refined throughout, assuring feedback on effectiveness to practitioners.

It is the methodology of behavioral assessment which supports the practice of prereferral intervention, intervention, and prevention program development. This methodology has helped to refocus the role of the school psychologist into a stronger problem solving model in which problems, rather than students, are identified.

Consultation

A third alternative service delivery model, which incorporates aspects of both prereferral intervention and behavioral assessment, is consultation. Consultation methodology is inherent to the practice of assessment and intervention problem solving models. It is an interpersonal influence model which requires dialogue for change to occur between participants. The practice of consultation in a school setting has been given a new and longer look by professionals seeking to re-focus traditional assessment procedures into greater collaborative problem solving activities. The major goal of consultation is that of solving current problems and increasing the ability of its practitioners to solve future problems (Gutkin & Curtis, 1990).
Consultation provides a framework for enhancing traditional assessment methods by providing a structure for voluntary communication in which problem solving can occur among professionals. It is practiced in many occupations. Of the ten models of consultation identified in school settings, the model of behavioral consultation has been noted to be the most widely researched and popular with educators (Medway, 1979; West & Idol, 1987). Consultation methodology underlies the alternative service delivery models suggested for use in the regular education initiative.

Consultation has generated much attention in professional journals and research literature in recent years, especially in its utilization in the field of school psychology. Consultation methodology contains the basic approaches suggested by most alternative service delivery models. It has provided the framework for this study of an essential component of its process: problem identification.

Witt and Elliott (1983) observed that as a total unit, the process of consultation has demonstrated its effectiveness, particularly with regard to its efficiency of professional time and energy. However, in order for consultation to be understood and utilized to its fullest, it is necessary to address each component part separately. From a behavioral consultative perspective, Witt and Elliott (1983) discussed the importance of the initial interview phase of consultation, the entry level into its process of problem solving. The initial interview provides the context for problem identification to occur in consultation. As an integral part of assessment practices, its effectiveness is worthy of further consideration. Identifying a problem for solution is a critical component because, as Lazarus (1973) noted, a faulty diagnosis can interfere with assistance intended for participants. In a consultative framework, this suggested that difficulties with the interview process, especially in problem identification, could impede the progress of consultation.

Interviews

The typical way to gather referral information about a student's problem is through an interview. Initially this interview may be held between classroom teachers and school psychologists to identify and operationalize the problem(s) in order to proceed with a problem solving process. A behavioral interview focuses on systematic targeting of behaviors along with identification of antecedents and consequences, to allow the assessor to conduct a functional analysis of the undesired behavior. An assumption made in this process is that the chances are greater that problem solving will proceed to a successful completion once the problem behavior is identified or targeted (Bergan & Tombari, 1976). This has placed strong emphasis on the interview process facilitating problem identification.

Appropriate and effective use of an interview is a universal concern for its practitioners. Since the interview itself constitutes an assessment practice, albeit an indirect method, it is subject to the same psychometric requirements of reliability and validity as are all standardized assessment procedures. Although not all researchers agree that behavioral assessment methodology need be evaluated by the same traditional psychometric requirements as other assessment procedures, there needs to be some application of empirical analysis by which a successful assessment outcome may be identified and supported.
Gresham and Davis (1988) reported inadequate empirical support for the reliability of behavioral interviews. The studies they cited in support of their claim typically focused on interrater reliability to establish reliability for the interview as an assessment procedure.

Bergan and Tombari (1975, 1976) conducted early research on the reliability of the behavioral consultation interview, demonstrated as part of a sequence of problem solving activities. Their findings suggested that when a problem was identified it would lead to resolution. In their analysis of 806 consultation cases, specific consultant variables present at this initial phase accounted for nearly 41% of the variance in problem identification. In turn, problem identification contributed significantly to plan implementation leading to problem solution. This finding led to the specification of verbalizations made during consultation interviews which would lead to problem identification. A coding system to analyze verbalizations, known as the Consultation Analysis Record (CAR), was devised, and reliability was determined by comparing independent ratings of verbalizations. Interrater reliability of the verbalizations suggested for problem identification was r = .92. Follow-up studies conducted by Bergan and Tombari (1976), Brown, Kratochwill and Bergan (1982) and others supported this practice of establishing reliability through independent ratings of verbalizations, based on their schema of an interview format.

The findings from Bergan and Tombari (1976) contributed to the assumption that problem identification must occur in order for the remainder of the problem solving process to proceed. An interesting finding is that problem identification only occurred in 43% of the cases; the remaining 57% did not have a problem identified, for various reasons. A major reason, cited for 44% of the referrals not continuing, was that a referral for testing or staff evaluation had occurred; the rest were quite variable but not unusual given the circumstances. Bergan and Tombari (1976) concluded that consultative problem solving inevitably led to problem resolution, and that unresolved or uncompleted cases had not identified a problem at the referral stage. Follow-up studies typically included consultant training to ensure that problem identification in consultation would not be overlooked. An issue which remained was the reliability of problem identification itself, as determined beyond the simple acknowledgement of whether or not it had occurred.

The issue of reliability has been addressed primarily as interrater reliability. Haynes and Jensen (1979) noted that interview reliability is not often determined, which could lead to questionable validity for information gathered during the interview. This may be troublesome when identified problems are targeted for intervention. Studies reporting interobserver and interrater agreement on the interview information gathered are noticeably sparse. Gresham (1984) reviewed a number of studies which attempted to investigate interview reliability through interrater reliability. He reported that interrater reliability can be studied through the coding of verbalizations by independent raters, which demonstrates good reliability, as in the coding system developed by Bergan and Tombari. This method addresses the ability of the raters to agree on coding for the interview verbalizations and may not necessarily address the nature of the interview content.
Further in-depth study would be required to examine the level of their agreement or to validate the nature of the problem on which they are agreeing. Interrater reliability can also be studied as agreement between the interviewers and between the raters, as was attempted by Hay, Hay, Angle and Nelson (1979), who reported a lack of interviewer generality with regard to the specific problems addressed in the interviews. Further investigation needed to be done in order to support the reliability of information gleaned from the interview for the development of interventions.

This criticism of problem identification in behavioral interviews was also addressed in the clinical behavior therapy literature. An important finding in a study conducted by Hay, Hay, Angle and Nelson (1979) was that interviewers of the same clients were able to agree on the number of problem areas identified, but not on the nature of the problems identified, although interrater agreement as to the problem areas identified for each interviewer ranged from .90 to .75. Since the purpose of the interview was to target (identify) problem behaviors for which a functional analysis for problem solving could occur, this finding did not support problem identification agreement among the interviewers. Wilson and Evans (1983) presented case studies to 118 professionals and found a mean percentage agreement of 38.6% for the highest priority problem area. A later study conducted by Felton and Nelson (1984) found low inter-assessor agreement for hypothesized controlling variables, even though the problem behavior had been specified to the interviewers prior to conducting the interview.

Recent criticism of interview use comes from White and Edelstein (1991) and Nezu and Nezu (1993), who concluded that there remains a lack of empirical evidence to support the accuracy and reliability claims of the behavioral assessment interview. The underlying principles of psychometric assessment practices require that an instrument first be reliable in order to be valid and assure the practitioner of appropriate results (Anastasi, 1988). Research findings from several approaches raise the question as to whether or not the initial interview is a reliable way of identifying the nature of problems, particularly in selecting them as target behaviors for planning appropriate interventions. Also, problem identification, or the selection of target behaviors for intervention which occurs within this process, may not be accomplished in a way that would validate an identified problem to solve.

A potential way in which to approach the reliability issue would be to conceptualize the reliability of the interview process as consistent observations of problem representations by both participants in the problem identification interview. From a consultative perspective, agreement could be sought between participants as to the problems identified which will be targeted for intervention.

The Purpose of This Study

The purpose of this study was to describe current practices of the initial interview as a means of identifying the nature of a problem or target behaviors within the practice of consultation, and to evaluate its effectiveness in terms of interparticipant and interrater agreements on the nature and number of the problems identified.
This was based on the assumption that an interview conducted by a reasonably skilled interviewer would likely lead to problem identification and to agreement between the participants as to what had been discussed in their interview regarding the nature of the highest priority problem.

This study attempted to respond to the reported needs of researchers to establish greater empirical support for use of problem identification in the initial interview as a reliable problem solving technique. Independent ratings for consultative participants' agreement were obtained from the participants themselves. Interrater agreements were determined from two raters independently rating the participants' responses to problem identification and agreement measures. This study proposed to increase support for current practices of consultation models used in school situations. It attempted to demonstrate the reliability of the process through agreement on problems identified by participants and two independent raters.

In summary, this study attempted to answer the following question: Do practitioners of consultation agree on the problem identified following an initial interview for problem identification in the process of consultation?

Significance of the Study

This study added to current knowledge of the reliability of the initial interview for identifying the nature of a student's (client's) problems. The consulting participants' agreements on the interview outcome were compared with agreements made by independent evaluators (raters) of the interview results. This study attempted to respond to identified research needs of evaluating components of the consultation process leading to successful outcomes, rather than viewing it solely as a unitary process. It obtained further knowledge of consultation efficacy as a problem solving process used by consulting teachers in naturalistic settings.

Definition of Terms

Behavioral Assessment Interview - an interview during which the necessary information to engage in the assessment process is elicited using behavioral descriptors in operational and measurable terms.

Behavioral Consultation Interview - an interview between a consultant and consultee regarding a client, rather than an interview directly between a consultant and client or therapist and client. This interview is characteristically similar to a behavioral assessment interview.

Client - a student with whom the teacher/consultee is experiencing difficulty or finds difficult to teach.

Consultant - a school psychologist or educator with some training and/or experience in conducting a problem solving process in a consultative manner. The consultant usually has direct contact with the consultee and occasionally, if appropriate, with the client. In this study the learning assistance teacher from each school assumed the role of the consultant in conducting the consultation interviews; therefore the terms "consultant," "consulting teacher," and "learning assistant" or "learning assistance teacher" are used interchangeably.

Consultation - an indirect service delivery model of problem solving which occurs through collaborative dialogue between a consultant and consultee regarding a client with whom the consultee is experiencing difficulty. Participants engage in the process voluntarily, and consultees anticipate improvement in their present problem-solving skills as well as in handling future situations.
Consultee - a classroom teacher who is experiencing difficulty with a student and voluntarily seeks assistance with this student through the consultative process. The terms "consultee" and "classroom teacher" are used interchangeably.

Functional Analysis - the process of behavioral assessment in which the target behaviors are identified along with the intervening variables (antecedents and consequences) which are impacting upon these behaviors.

Interrater Agreement - the ability of two independent raters, following a training session by the researcher, to obtain similar results when scoring transcribed protocols.

Intervention - the process of manipulating the intervening variables around target behaviors as a way of bringing about positive behavioral change.

Interview - the verbal interchange or dialogue which occurs between two or more persons as a medium for obtaining relevant information.

Participant Agreement - established by agreement of the consultant and the consultee as to the nature of the client's (student's) problem(s) as stated in independent post-consultation interviews following the consultation interview process of problem identification.

Prereferral Intervention - the practice of identifying problems and conducting a functional analysis from which situational solutions (interventions) are devised and attempted, either in place of or prior to formal referrals for assessment.

Problem - an identified discrepancy between a demonstrated behavior and its desired state.

Problem Identification - the beginning of the consultative problem solving process, which involves the consultant and consultee naming and agreeing upon the target behaviors exhibited by the client to be in need of change.

Reliability - consistency of results over time, place, and persons; the ability of a psychometric instrument to produce the same results again and again, thereby reducing error variance (Anastasi, 1988).

Target Behaviors - those behaviors exhibited by the client which the consultant and consultee identify as interfering with the client's ability to be successful in the classroom.

Scope and Limitations

This study examined participant agreement on the nature of students' problems following the initial interview, a critical component of the problem identification phase in consultation. The consultation model of problem solving was approached primarily from a behavioral perspective, as identified in the research literature since 1970. Volunteer learning assistants and classroom teachers from nine schools in the Catholic Archdiocese of Vancouver participated by conducting initial interviews for problem identification. Post-consultation interviews were conducted by the researcher. Post-consultation interview transcriptions were coded for participant agreement and rated independently by two raters to establish interrater agreement as to the nature of the problem(s) identified. Results reflect current consultation practices used in the Catholic schools in Vancouver.

Summary

Alternative models of assessment challenged the role of school psychology to include ways of providing services to more students following the regular education initiative. Consultation has been one way in which the problem solving aspects of assessment are emphasized. An interview is the context in which problem solving is initiated, and the importance of problem identification is paramount to successful completion of the process.
Studies conducted in several disciplines on the effectiveness of defining a problem in an interview have demonstrated that raters of the interview content generally agree on what was discussed, but that participant agreement on problem areas is considerably lower. This could lead to the gathering of questionable data in an assessment interview, and it calls the reliability of the problem identified into question as well. A new study was proposed to investigate problem identification within an interview context by assessing the level of agreement by participants as to the nature of the highest priority problem.

This study proposed to describe the practice of problem identification in the initial interview as currently used by learning assistance teachers in consultation with classroom teachers. The agreement on a student's highest priority problem(s) following their consultation interview will be determined. This study addressed the following question: Do practitioners of consultation agree on the problem identified following an initial interview for problem identification in the process of consultation?

CHAPTER TWO: REVIEW OF THE LITERATURE

The purpose of this chapter was to review the literature with respect to problem identification as it occurs in the initial interview of consultation. Research literature from several assessment methodologies was reviewed with regard to its use of an interview as an instrument for conducting problem identification.

Consultative Problem Solving

Approaching the practice of school psychology from a consultative framework allows the psychologist to function in a problem solving mode. The process of problem solving is well described from a behavioral perspective, particularly in behaviorally oriented literature (e.g. Kazdin, 1985; Kratochwill, 1985; Martens, 1993). Consultation, notably behavioral consultation, has been identified as an important way of increasing the problem solving skills of both consultants and consultees.

The process which the consultant and the consultee undertake to problem solve is represented in a series of stages which suggests that problem resolution follows sequential movement through designated steps. The first step is known as problem identification, in which consultant and consultee voluntarily engage in a dialogue to define and clarify problems faced by the consultee in relationship to a client. The problem analysis phase requires further discussion of the problem and a functional analysis of its environmental contexts, followed by the formulation and implementation of plans or interventions to solve it. These interventions are then assessed to determine their effectiveness and finally evaluated for the attainment of a positive outcome, with the possibility of redesigning them if the desired outcome is not achieved (Bergan & Kratochwill, 1990; Gutkin & Curtis, 1990; Idol, 1990; Polsgrove & McNeil, 1989; West & Idol, 1987). A fundamental aspect of this approach is that it is goal-directed: the participants may structure the process in light of specified goals toward which they anticipate achievable ends (Bergan & Kratochwill, 1990).

An advantage of viewing the consultation process as a problem solving activity is that the consultant is able "...to operationalize the conceptual basis of the process" (Sloves, Docherty & Schneider, 1979, p. 30).
That is, consultation participants actively address a problem so that it will be "operationalized," reframed, represented or clarified in a structure which will allow it to be examined systematically within the environment. Participants are able to determine and agree upon the presence and function of the problem. This enables a process for successful resolution to occur. In order to clarify consultative problem solving as an assessment practice, it is important to examine the stage at which the problem is identified, the place where "operationalization" begins: the initial interview.

The Initial Interview

The interview is a method used in all forms of assessment. It is a component of the assessment procedure utilized as a basis for referral. An initial interview is the first formal conversation or dialogue which occurs for the purpose of setting the goals of assessment. The interview is more than a nonspecific, cursory meeting between consultant and consultee to gather minimally reliable information about the client (Gresham, 1984). It can assist participants in determining a direction in which to proceed. In behavioral consultation, the initial interview between a consultant and consultee is used for targeting behaviors which are deemed "problems." It allows the collection of information necessary for determining what outcomes the consultee desires as a function of the process, as well as assisting in evaluating these outcomes (Haynes & Jensen, 1979).

The consultation interview is not unlike interviews conducted for the purpose of clinical diagnoses. In addition to gathering background information about a client from a variety of sources, it may include a direct interview with the client. The clinical interview is used to assist in forming a preliminary diagnosis as to the client's presenting problems. The suggested interview format for this situation is a structured one during which the interviewer asks designated questions, typically following a standard set of procedures and guidelines (Nuttall & Ivey, 1986).

As an assessment instrument, the interview is required to demonstrate adequate psychometric properties to assure the clinician and client that reliable and valid information is gathered for diagnostic purposes. The interview would then be subjected to the requirements of reliability and validity in order to ensure that the information gathered will lead to a successful outcome (Gable, Friend, Laycock & Hendrickson, 1990).

In clinical practice, the diagnostic interview is critical to the assessment process.
The authors reported a semi-structured interview format was used i n which mothers of 268 children suspected of having psychiatric disorders were given an opportunity to elicit spontaneously symptoms observed i n their children. 119 children from this group were identified by their mothers as having disorders characterized as neurotic, antisocial, mixed or other based on their behavior for the previous year. This diagnosis was supported, i n part, on mothers' responses when queried whether they believed their child to have a problem and its manifestation by degree of severity and i n comparison wi th other children. The interviews were rated independently by the authors using a 4 point scale designed to rate the presence or absence of symptoms rather than the mothers' responses. Interrater agreement was determined by a second rating given to 80 of the interviews schedules for symptoms and overall severity of the disorder 22 and diagnosis. A correlation of .81 for overall diagnostic agreement was achieved between the two raters. A second interview was conducted wi th thirty-six of the mothers which asked for symptoms exhibited during the same time period as the first interview. Overall diagnostic agreement of ratings between the two interviews was .64. A n interesting finding reported between the first and second interview was that the correlation coefficient for the ratings given the parental perception of childhood disturbance was .43 indicating that a parent was as likely to state she perceived her child as disturbed i n the first as i n the second interview. The researchers suggested that parental inconsistency i n identifying their children's problems would require consideration i n further research activities. Other difficulties which the researchers encountered were rating individual symptoms because of overlap of many symptoms by researcher descriptions. Also , parental descriptions of symptomatic behaviors did not always conceptually match researcher descriptions. For example, the parental description of "overactive behavior" was characteristic of three distinct areas described by the authors. A n additional threat to reliability was interviewers who did not adhere to the standardized administration of the interview resulting i n data which was useless to the process. The researchers concluded that following a structured interview resulted i n reliable overall psychiatric disorder diagnoses and individual symptoms were rated highly reliable when expressed i n specific behavioral terms. They encouraged continued use of a semi-structured interview as a way of obtaining 23 sufficient information regarding the client. Parental inconsistency i n identifying symptoms across interviews was a concern for future research. Gathering reliable and useful information is critical to the interviewer i n establishing a description of the problem(s) which needs to be addressed. This suggested that interviews could be more effective when structured to gather information and that the information may be used effectively when presented i n behavioral terms. This approach is similar to the problem identification interviews which behavioral consultants conduct as a way of representing a problem for solution. Problem Identification Interview The early work i n establishing behavioral consultation as a structure for a problem solving process comes primarily from John Bergan and his associates (e.g. Bergan, 1970; Bergan & Tombari, 1975, 1976; Bergan & Kratochwill , 1990). 
Bergan (1977) provided a comprehensive framework i n which problem solving wi th regard to a client (or student) could occur as part of a structured interview process between a consultant and consultee. The initial interview begins wi th identification or targeting of the problem(s) and moves systematically to resolution(s). Several goals are accomplished by the participants as they move through the initial interview structure for problem identification. Kratochwill (1985) reported that i n the targeting of problem behaviors, the consultant assists the consultee i n describing and identifying the problem(s) of concern. Evans (1985) described the goal of problem identification as involving information gathering, 24 sorting and interpreting i n light of behavioral principles so as to create a true representation of the problematic situation. Martens (1993) stressed the importance of defining the behavior i n observable terms, obtaining estimates of how often it occurs, under what condition, and the beginning of on-going data collection for use i n evaluating treatment effectiveness. A s a key component i n the process, baseline data is used as a measure of current performance level. Data to substantiate problem identification can come from several sources. Standardized test results, work samples, and observations are among those useful for data collection. Witt and Elliott (1983) concur wi th the need for data collection at this stage as one of the important components i n the initial interview for problem identification. Inclusion of all components which researchers suggest are essential to problem identification would be unwieldy were it not for the structured interview and standardized checklists devised by Bergan and Kratchowill . Bergan and Kratochwill (1990, p. 72) offered a structured approach for the initial interview for problem identification by suggesting the following steps: 1. Establish objectives. 2. Establish measures for performance objectives. 3. Establish and implement data collection. 4. Display data. 5. Define the problem by establishing the discrepancy between current performance, as reflected i n the data collected, and the desired performance, as indicated i n the performance objectives. 25 Martens (1993) added another step, that of setting up a second interview time indicative of the participant's desire to continue wi th the process. In order to formalize this approach, Kratochwill and Bergan (1990) developed the Problem Identification Checklist which serves as an interviewer guide through the problem identification interview. Kratochwill (1985) reported that problem identification is completed when the target behaviors have been established and a treatment goal set by the consultant and consultee. Once the behaviors have been targeted it is understood by practitioners that by identifying the problem i n terms that are mutually agreeable to both participants, the problem solving process is anticipated to have a successful outcome. Empirical Support for Problem Identification Gresham (1984) reported that empirical support for using a behavioral interview as a means for identifying a client's problem(s) has been demonstrated primarily as interrater reliability. Application of interrater reliability i n establishing the importance of problem identification leading to successful problem resolution comes from the work of Bergan and Tombari (1975, 1976). They developed a coding system for consultation verbalizations known as the Consultation Analysis Record (CAR). 
Raters are trained to code interview verbalizations i n message units for source (speaker), verbal process (interactions), interview content (behaviors), and control (actions directed by verbalizations). Interviews are coded four times for the presence of data which correspond wi th the subcategories of each of the four areas. The reliability of the interview is 26 determined by the consistency of the raters' coding for specific verbalizations i n the appropriate categories. Bergan and Tombari (1975) reported 96% interrater agreement i n assigning verbalizations to the specific units of observation. The C A R coding system was used to establish the verbalizations necessary to accomplish the specific goal for each stage of behavioral consultation. The criteria for establishing problem identification were derived from the behavior specification utterances which specify the behaviors, the settings where they occur, and procedures for measuring observational specifications. Bergan and Tombari (1975, p. 220) stated that i n addition to discussing behavior, conditions, and measurement procedures, consultation participants would have to agree as to what has been discussed i n each of these areas. Summarization and validational verbalizations would need to be present as an indication of the necessary agreement. The remaining phases of consultation also have specific verbalizations which must occur and be rated i n order to validate manifestation of the specific stage i n the consultation process. Bergan and Tombari (1976) demonstrated the effectiveness of their procedure i n a study of consultant skill and efficiency. This study involved training 11 psychologists to use a four stage model of problem solving to conduct a total of 806 consultation cases wi th classroom teachers during the course of a school year. Effectiveness measures were determined by the interviewers' general efficiency, skill i n applying psychological principles, and interviewing skills. The occurrence of each phase of the problem solving process was noted on the case reporting form which indicated from the interviewer whether or not 27 a phase had occurred. Success was measured by an indication that the goal set i n the problem identification phase had been achieved. Problem identification and problem analysis interviews were used to determine interview skills. This was done using the C A R format. Interrater reliability was established for the content, process and control categories by coding verbalizations from the audio-taped interview. Agreement between two coders on verbalizations from the problem identification and problem analysis interviews was reported wi th a Scott coefficient of .88 and .92 for control, .87 and .90 for content and 1.00 and 1.00 for process. Three multiple regression analyses were performed on all of the data. A significant main finding was the impact of problem identification on the rest of the process. Over 40% of the variance i n the occurrence of problem identification (R = .637) was accounted for by interview-effectiveness and consultant skills as measured from the C A R . In turn, problem identification accounted for almost 59% of the variance i n plan implementation, the next suggested phase i n the process. These findings underlined the importance of problem identification and consultant interview skill i n bringing this about in order for the rest of the consultation process to continue. 
It was interesting to note that the actual identification of the problem was determined by the interviewer responding to the researchers' query as to its occurrence. Only 43% of the 806 cases received problem identification interviews which then contributed significantly to the regression equation. This suggested that problem identification itself may have been perceived to be important 28 although fewer than half of the consultants engaged i n it. The strongest alternative to the use of the problem identification interview was a testing referral. Other reasons cited for not identifying clients' problems varied but were not extraordinary to a school situation. Bergan and Tombari (1976) reported that the alternatives to problem identification varied predictably wi th consultant skil l and efficiency variables so that future research in this area focused on training consultants as effective problem solvers. Clinical Behavior Therapy Interview Establishing adequate reliability for interviews is also addressed i n the clinical behavior therapy literature. A n interrater reliability approach was undertaken i n Hay, Hay, Angle, and Nelson's (1979) investigation of the reliability of problem identification i n behavioral interviews. Their approach differed from Bergan and Tombari's (1976) by investigating agreement between interviewers rather than rating assigned verbalizations. Hay et al. (1979) accomplished this by attempting to ascertain the number of specific problem areas identified across four interviewers of four clients and by agreement among interviewers for the specific problems identified for each client. A n additional purpose to their study was to identify the sources of variance between the interviewers which could potentially lower their agreement. Four clients were interviewed by each of four interviewers for the purposes of conducting a problem identification interview. Following the interview, each interviewer recorded a verbal summary of what had transpired. The content of the interview transcriptions and verbal summaries were coded and rated for problem areas 29 identified directly or items queried by the interviewers. The criteria for establishing the problem areas came from the Cautela and Upper behavioral coding system. This system listed 25 life areas i n which problems could occur wi th specific items listed for each area to be endorsed by the rater. Problem areas were identified when the name of an area or item within an area was named during the interview and the client indicated the frequency, duration, or intensity of interference during the interview. Raters also coded those areas i n which the interviewer asked questions. Interrater agreement on the presence of the problem was necessary for it to be included i n the analysis. Interrater agreements were calculated wi th the exact percentage agreement method. Mean interrater agreement scores for problem areas identified from the interview transcriptions was .90 and for the verbal summaries .83. The agreement score calculated from the transcript for items identified as problem areas when queried was .87. Agreement scores for areas and items queried during the interview was .85 and .75, respectively. When considering the number of problem areas identified, clients differed significantly on interview transcripts, F(3,9)=4.17, p < .05 and on the verbal summaries, F(3,9) = 10.63, p<.01. The interviewers were not found to differ significantly i n either reporting the number of problems or the problem areas identified. 
This suggested the possibility of generalizing across interviewers i n terms of the overall numbers of problem areas identified during the interview. Inter-interviewer agreement reported a different finding. Agreement between interviewers wi th respect to the specific area of problem identification 30 resulted i n a mean agreement score of .55 calculated from the interview transcriptions. For items considered to be problems the mean agreement score per problem area was .40. The mean agreement score between interviewers for problem identification based on the verbal summaries was .48. Further analyses revealed significant differences between the interviewer wi th regard to the number of problem areas they investigated during the interview, F(3,9) = 4.02, p<.05. Agreement scores calculated on the type of questions used resulted i n a mean score of .62 for broad problems areas and .29 for specific items wi th in the problem area. By comparison, the mean agreement score of the clients' consistency i n responding was .86. Further comparisons were done of the verbal summaries to the interview transcriptions which suggested that an average of 28% of the information discussed in the interview was lost i n the summary. The results of this study revealed that interviewers did not identify significantly different numbers of problem areas. Inter-interviewer agreement results report a different finding i n which low to marginal agreement was found for the specific areas discussed in the interviews. Consideration of differing interviewer skills, training and bias were cited as potential reasons for the obtained results. A caution was also made as to the use of a recording device for information gathering to avoid potential loss of data. Suggestions made for future problem identification interviews were that using a more structured interview format and conducting a functional analysis as part of the interview be considered. 31 The findings from Hay et al. (1979) suggested that interviewers, or actual participants, do not share agreement based on the outcomes of their interview which was evaluated by independent raters using a different coding scheme. The findings from Bergan and Tombari (1975) suggested that high levels of interrater reliability came from agreement between raters on interview verbalizations. These findings are of concern because they suggest that interview participants may engage i n a process without a clear indication as to the nature of a problem they are seeking to address. The implications of these results and their comparisons have an impact on the purpose of the behavioral interview and its use as an assessment instrument. It suggested that if participants do not agree on the problem areas discussed there would be a negative impact on targeting a behavior and choosing an appropriate intervention for solution. This would suggest that further study of agreement between participants and raters may clarify how problems are identified at this early stage i n problem resolution. Wilson and Evans (1983) conducted an initial investigation of the reliability of the process of target behavior selection. Their study addressed the question of how treatment goals were formulated i n the initial stages of assessment. One hundred eighteen respondents completed questionnaires comprised of three case studies of children exhibiting severe problems of varying complexities. 
Participants responded to open-ended questions regarding their impression of the child's identified problem, whether treatment was warranted, an indication of treatment goals, and treatment targets in ranked order. Results were reported as interrater agreements on the responses, with two independent raters and a third who was blind to the purpose of the study. The finding of this study which bore upon the issue of reliability of problem identification was that the overall percentage agreement of 38.6% reflected only moderate agreement among the participants as to their general impression of the nature of the highest priority problem presented. This was obtained by determining the mean of each participant's percentage of agreement on the selection of the highest priority problem targeted for resolution for each case. This finding was considered low for intersubject agreement in choosing priority target behaviors. Among the considerations given to future research was a more thorough investigation of the factors which influence agreement. This was an analogue study which used an open-response questionnaire, so it is uncertain whether the findings generalize to naturalistic settings. It does, however, lend support to the Hay et al. (1979) finding of low agreement between participants regarding the identification of the nature of problems. A study which addressed reliability through agreement between assessors was conducted by Felton and Nelson (1984). Their purpose was to examine assessor agreement with regard to identifying hypothesized controlling variables in conducting functional analyses and designing treatment plans. Six assessors were randomly assigned to one of two groups, categorized as interview only or interview with role-play and questionnaires. All were required to conduct a comprehensive behavioral interview on the same three "clients." Problem definitions were supplied in order to focus attention on identifying controlling variables. Following the interviews, each assessor's list of controlling variables and treatment proposals was compared with the other two from their group, and interassessor agreement was determined using a percentage agreement formula. Interrater agreement, based on two graduate psychology students' independent judgements using the same formula, was determined to be .90 and followed a correction procedure for disagreements. Decisions were made in two steps: first by initial independent agreements between judges and then by their discussed agreements following from their disagreements. Mean inter-assessor agreement across clients, variables, and treatment plans was .41 for the interview-only group and .40 for the group using multiple devices. Individual variable agreements for the two groups ranged from mild agreement of .21 and .24 for organism variables (past history, physiological problems) to greater agreement on treatment proposals of .59 and .62. The researchers reported that the major factor affecting the low agreement among assessors was differences in the specific questions asked by each of the assessors. These findings also support those of Hay et al. (1979). They suggest low reliability when comparing agreement between participants rather than agreement between raters. Considerations put forth by the researchers were to seek ways of improving agreement among assessors, in particular through the training of interviewing skills.
An additional consideration concerned the importance of agreement in targeting behaviors: were participant agreement shown to be unimportant, there would be no concern about deriving interventions from it or about its support for establishing treatment integrity.

Interviewer Training

Follow-up on the findings of Bergan and Tombari (1975, 1976) and Hay et al. (1979) is evident in the number of studies which approach the problem from a training perspective. In light of the findings of the Hay et al. (1979) study, Brown, Kratochwill, and Bergan (1982) addressed the issue of training interviewers in a specific set of skills designed to reduce interviewer variability in questioning. This study introduced a standard format for training in interviewing skills designed to elicit problem definitions. A secondary purpose was to test a standardized interviewing skills training package. Four graduate students were trained in the model, demonstrated on three students who role-played clients. Three observers were trained to use the Problem Identification Checklist and rate consultants' verbalizations accordingly. Interobserver agreements were calculated from observers one and two; observer three's agreements (as calibrating observer) were calculated over one third of all the sessions across all phases. Interobserver agreements were calculated for each of the four categories and subcategories of verbalizations. The range over the four trainees for the behavior category was .77 to .98; the behavior setting range was .82 to .86; the observation category ranged from .98 to 1.00; and the summarization category ranged from .78 to 1.00. The findings support Bergan and Tombari's (1975) claim of reliability for problem identification when determined by interrater reliability, and demonstrate that training can be effective in improving interviewer skills. Brown et al. (1982) concluded that the issues of agreement across interviewers raised by Hay et al. (1979) could be accounted for by the standardization of interviews, thus preventing the loss of relevant data. Brown et al. (1982) continued to address the reliability question in terms of interviewer verbalizations assessed by raters on their problem identification measure. They did not address the issue of agreement among interviewers (or interviewees) as to the content of their discussions. A similar study was conducted by Duley, Cancelli, Kratochwill, Bergan, and Meredith (1983), which addressed the training of interview skills as they generalized to a motivational analysis interview (identification of reinforcements) using the training package described in Brown et al. (1982). This study also responded to the lack of a functional analytic approach in Hay et al. (1979). Interviewing was conducted in an analogue situation. Observers received training to code verbalizations. The results obtained were the probability of interrater agreement across the baseline (.95), posttraining (.96), and generalization (.88) phases of the study, analyzed with Bergan's quasi-equiprobability model. A social validation component was added and assessed using three expert judges, giving the interviews an additional rating to support analysis of skill development by training, interjudge agreements, and the correlation between ratings and target skills. In the final analysis, 88% of the variance was shared between the percentage of skill improvement demonstrated and the social validation ratings across participants and interviews.
This study further supported the findings of Brown et al. (1982) on the importance of skill training for interviewers. It demonstrated that generalization of skills over time was possible in an analogue situation. Although it was an attempt to respond to an element missing from the Hay et al. (1979) study, the functional analytic approach, it did not directly address the issue of participant agreement as to the nature of the problems discussed in the problem identification interview. Further support for training and for the generalization of skills to other disciplines was demonstrated in the study conducted by Keane, Black, Collins, and Vinson (1982). Clinical pharmacy students received training in behavioral interviewing skills in order to conduct interviews with hospital patients. The purpose of using a behavioral approach was to demonstrate the effectiveness of gathering information using functional analysis regarding patient compliance with medication. Students were randomly assigned to three treatment groups consisting of two levels of training and a control group. Interview skills were taught and assessed with regard to content and style. Content referred to patients' medical histories and style to interviewers' use of specific skills. Reliability was determined by interrater agreements using the percentage agreement formula for content area occurrences. Pearson correlations were used for style components. The results favored the group which received behavioral rehearsal training and practice to improve their interview skills. Comprehensive training enabled them to increase significantly their use of open-ended questioning, as determined by a repeated measures ANOVA showing a significant groups x time interaction, F(2,32) = 9.13, p < .01. Further analysis at the generalization phase revealed a significant reduction in the use of close-ended questions, F(2,31) = 4.02, p < .05, and in the amount of interviewer speaking time, F(2,31) = 3.70, p < .05. In conclusion, Keane et al. (1982) suggested that training was essential to effective interviewing, had potentially good generalizability to other related fields, and, when divided into content and style areas, may have more interactive influence than first thought. Keane et al. (1982) suggested that Hay et al.'s (1979) results may be reflected in the variability of content and style, and that emphasis may be placed on interview content and interviewer style. Following training, interviewers were able to generate more content material, but this was assessed only by interrater agreement and not by agreement between the participants, which is what the Hay et al. (1979) study suggested.

Consultee Problem Identification

A somewhat different focus was offered by the following two studies, which approached consultation problem identification from the perspective of the consultee. Curtis and Watson (1980) assessed changes in the problem clarification skills of consultees following consultation interviews with high-skilled and low-skilled consultants. Twenty-four classroom teachers were assigned randomly to one of eight consultants. Consultants were assessed for their skill levels following initial neutral interviews. Further skill training was given to the already highly skilled consultants to maximize the differences. Following three consultation sessions, the consultees working with the highly skilled consultants were found to have improved problem clarification skills.
Results were based on the verbalizations scored using Bergan's (1977) Consultation Analysis Record and a Problem Identification Checklist designed to assess consultees' problem clarification skills. The areas of problem identification which did not reach significance were the index of content relevancy and the percentage of factual utterances. The results offered further support for the importance of problem identification skill training but suggested, where improvement was not significant, that some skills are already part of a teacher's repertoire without formal training. These skills, such as the data collection component of problem solving, did not demonstrate any improvement following consultation training. This study marked a shift in focus from the consultant's problem solving to that of the consultee in response to the consultant's training. The researchers attempted to describe the consultees' ability to identify or clarify the problems of actual students in the problem identification phase of consultation. The results suggested that consultees' skills require further consideration, particularly when attempting to clarify a problem, and that their input at this level may be a critical factor in identifying their students' problems. This study was also important in that it involved actual consultants and classroom teachers discussing the teachers' attending students. The primary goal of Cleven and Gutkin's (1988) study was to increase consultees' skills in problem solving using a cognitive modeling approach. Their procedure involved training university students to write problem definition statements after viewing a demonstration of various levels of interviewing on a series of videos. Reliability was established by ratings made of 10 pilot cases. Of interest in this study is the use of six raters for interrater reliability who were blind to the nature of the study and the group placement of the participants. The dependent measures of the Behavioralness, Goal, and Process scales were each rated with three-point criteria from the Problem Definition Description Questionnaire. Behavioralness required the inclusion of multiple behavioral examples stated in concrete, observable, and measurable terms. The Goal scale rated the presence and clarity of a goal statement. The Process scale required that participants develop a concrete behavioral problem definition whose components were prioritized, with a goal statement included. Levels of scoring criteria were given for responses which included all components with varying degrees of clarity. Percentage agreement was determined by checking the raters' scores against the researcher's. Reliability coefficients were reported for the scales of Behavioralness (.98), Goal (.92), and Process (.92). Additional reliability checks were done by having each evaluator rate five additional protocols, and their ratings were compared with the original. Subsequent percentage agreements were Behavioralness = .94, Goal = .95, and Process = .92. The results were supported by the qualitatively better behavioral problem descriptions written by those who viewed the cognitive model of problem solving. That is, their problem identification statements contained specific, relevant behavioral examples in observable and measurable terms, and their statements were goal-directed. This study endorsed cognitive modelling as an effective training mode for consultees in the consultative process.
It suggested that improved skills may also contribute to problem identification in problem solving. The training of skills directed toward the consultee suggests that effective problem identification could reflect agreement between participants, given that they were able to define the problems in terms which would be useful to both when initiating the process.

Current Practices

The current literature speculated on the continued use of a problem solving approach in behavioral assessment/consultation. White and Edelstein (1991) noted that research support for behavioral assessment interviews was limited. Their concern was addressed in support for the investigatory interview, which closely resembled the format followed in behavioral assessment and consultation. They suggested the use of accuracy as a determinant of interview reliability. They also endorsed the practice of training interviewers in the skills needed to establish inter-interviewer agreement. Nezu and Nezu (1993) suggested a problem-solving sequence as a potential solution to the ongoing concern of identifying and selecting target behaviors on which to focus intervention activities. They described an approach which recognizes multiple causalities as a backdrop for choosing more appropriate target behaviors. Their goal in target selection was future-oriented, directed at effective functioning. Included in their approach is the use of the SORCK model for assistance in identifying target behaviors and their relational variables. This approach and model await empirical support. The model considered the dynamic dimension of the interview process and acknowledged the difficulties of the contextual situation in which problem solving occurs.

Summary of the Problem

The purpose of this study was to address an issue of empirical support for the interview as an assessment instrument. Problem identification established in the initial interview was reported to be a critical component in reaching a solution. Reliability for this process has been demonstrated in the behavioral assessment/consultation literature using the standardized format introduced by Bergan (1977) and Bergan and Kratochwill (1990). Interrater reliability was established for the structured interview process by comparison (agreement) of ratings of participant verbalizations made during the interview using a standardized coding scheme. The clinical behavior therapy literature, represented by Hay et al. (1979), reported strong interrater agreement on the problems discussed in an interview but could not establish inter-interviewer agreement on the nature of the problem areas discussed in an initial interview situation. Bergan and Tombari's (1975) findings regarding the importance of problem identification led to the development of structured interviews and the coding of verbalizations to support problem identification. This approach has helped to establish interview reliability based on interrater scoring. Variability, a threat to reliability, was reduced by training raters, who were able to code transcriptions with near-perfect accuracy to meet the requirements of the structured format. Strong support for training interviewer skills as another way of reducing variability was demonstrated in the literature. Training was also recommended for consultees, contributing additional support for this approach. Subsequent studies reporting inter-interviewer or inter-participant agreement were not able to demonstrate strong agreements.
Trained raters achieved strong interrater agreements for these interviews; however, the participants themselves did not seem to agree on the content areas of their interview. In fact, this approach was rarely considered in the literature. This suggested that participants may not reach agreement on the nature of the problem identified in the interview. Consequently, the problem may not be identified during the interview, which may in turn interfere with the problem solving process. The present study attempted a response to the reliability issue by assessing the level of agreement of participants as a way of establishing support for successful problem identification in the initial interview.

CHAPTER THREE: DESIGN AND METHODOLOGY

Nature of the Study

This study was designed to focus attention on the initial interview component of behavioral consultation. The study assessed the reliability of problem identification in the initial consultation interview by obtaining a measure of agreement from the participating classroom teacher and school-based consultant (learning assistance teacher) as to the nature of the problem they discussed. Interrater agreement was also assessed, using two independent ratings of the interview agreement results, to determine whether the participants agreed upon the nature of the problem considered to be of the highest priority for the student discussed. The study followed a descriptive, nonexperimental design in which participants were asked to engage voluntarily in a consultative interview about a student whom the teacher would characterize as difficult to teach. In order to facilitate generalization of the results of this study to consultation practice in a naturalistic school environment, the study was conducted in the participants' schools and a real student's problems were discussed. Participants were asked to conduct the interview in the same manner in which they usually address problematic situations regarding their students. The researcher videotaped the interview. Immediately following the consultation interview, participants were asked independently for a verbal description of the nature of the problem(s) discussed in the interview and to indicate the order of priority given to each one, if more than one was discussed. Each participant was asked to provide a rating of the extent to which the interview was helpful in identifying the student's problem(s), the extent to which the identified student's problems were adequately identified, and the extent to which the interview was successful in achieving a shared understanding of the problem associated with the identified student. Participants were asked to complete a demographic information questionnaire and a problem identification evaluative checklist to aid in the evaluation of the interview process. The problem identification evaluation measure used was an adaptation of one used in a program evaluation model reported by Knoff (1982a, 1982b) and was similar in style to the parent interview conducted by Graham and Rutter (1968). The tape-recorded interview of the participants' verbal responses to the researcher's question was transcribed for an evaluation of the representation of the problem and for rating agreement between the consultant and consultee as to the nature of the student's problem.
The researcher and a trained research assistant coded each transcription for the level of agreement between the learning assistance teacher and classroom teacher on their independent descriptions of the problem(s) identified during their problem identification interview. The two raters also provided ratings using the same evaluation form as the participants. This procedure of data collection and analysis was similar to that followed by Hay et al. (1979) and Felton and Nelson (1984), and to that suggested by Nezu and Nezu (1993).

Procedure

Participants

The participants in this study were 45 educators currently employed within the separate school system, the Vancouver Archdiocese Catholic Schools, a member of the Federation of Independent Schools Association of British Columbia. There are currently 45 schools operating in the Archdiocese. Recruitment was conducted through an explanatory letter sent to each principal inviting the voluntary participation of the individual school's consulting teacher/learning assistance teacher and four classroom teachers. Additional support in seeking voluntary participation was offered by the Office of the Superintendent of the Vancouver Catholic Schools. Agreement to participate in the study was formalized by each participant's signing a letter of informed consent. Participants' signatures indicated their willingness to take part, having been informed of the purpose of the study, that all information shared during the research study would be held in confidence, and that they were free to withdraw from the study at any time. During the period of the consulting teachers' participation, researcher substitution was offered, but not required, for any students scheduled to be seen by the consulting teacher at the time of the interview. Nine volunteer consulting teachers (learning assistance teachers) representing nine schools situated in the Catholic Archdiocese of Vancouver participated. Their participation required that they conduct problem identification interviews and respond to questions regarding the process and outcomes of each consultation interview. Consulting teachers conducted interviews with four classroom teachers from their schools. Each consultation interview focused on a student who was then experiencing difficulties in the participating teacher's classroom. While the classroom teacher participated in the consultation interview, the researcher provided a certified substitute teacher through a prearrangement with the individual school's principal and learning assistance teacher.

Setting

All interviews and data were collected at the participants' individual schools. Typically these interviews were conducted in the learning assistant's room, the usual setting for such contacts and a location offering the required level of privacy. The video camera was set up by the researcher in close proximity to the participants, but the researcher was not present during the actual interview. The interview usually lasted between 25 and 35 minutes, not including the time needed for prior instructions to be given and for the post-consultation interviews and written measures to be completed.

Assignment of Participants: Consultants and Consultees

The consultants (learning assistance teachers) asked four classroom teachers (consultees) employed at their school to participate voluntarily in an initial consultation interview to discuss one student (client) in each teacher's class.
The identified student was one whom the teacher described as difficult to teach and about whom the participants had had no previous formal discussions. Prior to each interview, the researcher met with both participants simultaneously to review the procedures. At this time an overview was given of what would be expected by their participation in the study. Each participant received a copy of all the measures he/she was expected to complete. Participants were directed to complete the demographic questionnaire before the start of the interview and the evaluation questionnaire at the end of the interview, in alternation with the post-consultation interview. Directions were read to the participants and clarified by inviting participants to conduct the interview just as they typically would. (A copy of the directions is included in Appendix #1.) They were encouraged to focus on describing what about the student was currently causing him/her to be difficult for the classroom teacher to teach. Once the participants had completed the demographic questionnaire and indicated they were ready to begin, the researcher turned on the video camera and left the room. At the conclusion of the interview, the researcher was summoned back into the room. The post-consultation interview was typically conducted with the classroom teacher first, while the learning assistance teacher completed the evaluation form in another room. Once the first interviewee completed the post-consultation interview, he/she left the room to complete the evaluation measure. The learning assistant then returned and was administered the post-consultation interview. Completed evaluation measures were returned to the researcher soon after the post-consultation interview concluded.

Instrumentation

Measures:

1. Demographic Questionnaire

An information questionnaire was administered to the learning assistance teacher and to the classroom teacher. The information requested was the participants' age (range), sex, level of educational degree, number of years teaching, current title, and prior knowledge of, or training in, consultation practices. This information was used for descriptive purposes. The participants' responses are reported in percentages, categorized as learning assistance teachers (consultants) and classroom teachers (consultees). (A copy of the participant demographic information form is included in Appendix #2.)

2. Problem Identification Interview Evaluation Scale

This measure required the consultation participants to evaluate their interview session in terms of consultation process efficacy, adequacy of problem identification, and shared understanding of the nature of the problem. Responses were endorsed by strength using a four-point Likert-type scale in a format similar to that reported by Graham and Rutter (1968). This measure was intended to be completed independently by the participants following their consultation interview. In the majority of cases, the classroom teachers completed their evaluation scale after the post-consultation interview with the researcher. The learning assistance teacher completed the evaluation scale in another location while the classroom teacher was being interviewed by the researcher. The first section of the written evaluation was divided into two parts. Part I restated to the participants the main goal of a problem identification interview, which was to formulate a comprehensive description of the major problem or problems.
Participants were then asked to respond to three questions regarding the focus of the interview as a useful assessment measure of problem identification. The participants were to endorse the level at which they found the interview to succeed in reaching its goal for the particular variable stated. A four-point Likert-type scale was used for each question. The first point on the scale was to be endorsed if the topic in question did not occur at all, or was not addressed at all. The second point was endorsed if it was somewhat addressed, the third point required that it be mostly addressed, and the fourth point was interpreted as completely addressed.

Interview Helpfulness

The first question asked participants to rate the level at which they found the interview to be helpful in identifying the student's problem. This question was selected as an indication of the usefulness of the interview, a measure of consumer satisfaction. The ratings offered for endorsement ranged from "One" (Not At All Helpful) to "Four" (Completely Helpful).

Problem Identification

The second question required an outcome evaluation of the extent to which the participants believed they had adequately identified the student's problem(s) as a result of the interview. This question evaluated the extent to which problem identification was perceived by the participants to have occurred. Responses were rated at four points ranging from "One" (Not At All Identified) to "Four" (Completely Identified).

Shared Understanding/Agreement

The third question addressed the main goal of the study, which was to identify the participants' agreement as to the nature of the problem(s) discussed in the interview. The question required the participant to rate what he/she believed to be the level of shared understanding with his/her interview partner. The four-point responses for this question ranged from "One" (Not At All Understood) to "Four" (Completely Understood).

Problem Areas in Priority

Part II of the evaluation required participants to recall up to three of the problems discussed in the interview, beginning with the problem they considered to rank highest in priority in their discussion. These priority rankings were similar in nature to those used by Cleven and Gutkin (1988) in their Problem Definition Questionnaire. The problems cited were to be categorized primarily as either academic or social/emotional/behavioral. General descriptors were given for both categories in a checklist format in which participants were asked to endorse how they perceived the problems to be interfering. Three descriptors were offered for problems academic in nature: content area deficit, production deficit, and specific skill deficit. The descriptors offered for the social/emotional/behavioral problems were: social skill deficit, behavioral excesses, behavioral deficits, and personality variables. Brief examples were cited with each descriptor to assist in matching the problem descriptor with the problem presented during the interview. Three separate sheets, indicating the rank of the problem to be reported, were provided, one for each problem. Interview participants completed their evaluations after their interview session.

Additional Problems

A final section of the evaluation requested the interview participants to identify any other problems which they believed to be present with the student but not mentioned during the consultation session.
Those who endorsed the presence of unreported problems were asked to specify whether the problems were missed or ignored during their interview session. (A copy of the entire interview evaluation scale is provided as Appendix #3.)

Analysis of the Evaluation

In order to assess the levels of agreement reached between the participants, the kappa statistic was used to determine overall agreement between the learning assistance teachers and classroom teachers on their response ratings for each of the three areas. Kappa is often used to calculate agreement in preference to the percentage agreement formula (agreements / (agreements + disagreements) x 100) typically used in behavioral assessment. The use of Cohen's Kappa was suggested by Lee and Suen (1984) because it corrects for the possibility of agreement by chance, which the percentage agreement formula cannot do and which may therefore yield inflated results. Kappa statistics reported in this study were calculated using Cohen's formula as presented in Suen and Ary (1989) and Lee and Suen (1984). An interpretation of the level of acceptability for agreement percentages was offered by House et al. (1981) as that generally accepted by researchers in the behavioral sciences: average agreement at or above 70% (.70) is considered necessary, above 80% (.80) adequate, and above 90% (.90) good. Further analyses were conducted using chi-square analyses of group responses, considering the association between the participants and their responses. Significant results were reported with alpha set at < .10 rather than the more conventional standard of < .05 so as to equalize the balance between Type I and Type II errors. As suggested by Cascio and Zedeck (1983), this is a way of increasing power for a small sample size. Contingency coefficients are reported as an indication of the strength of the relationship between the raters and the status of their ratings.
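To make these two indices concrete, the following minimal sketch computes percentage agreement and Cohen's kappa for two sets of categorical ratings. It is offered only as an illustration of the standard formulas; the function names and example ratings are hypothetical and are not the study's data or analysis procedure.

from collections import Counter

def percentage_agreement(r1, r2):
    # agreements / (agreements + disagreements) x 100
    agree = sum(a == b for a, b in zip(r1, r2))
    return 100.0 * agree / len(r1)

def cohens_kappa(r1, r2):
    # Chance-corrected agreement: (p_o - p_e) / (1 - p_e), where p_e is the
    # agreement expected by chance from each rater's marginal distribution.
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical four-point ratings (levels 1-4) for ten interview dyads
rater1 = [3, 4, 3, 3, 4, 3, 4, 3, 3, 4]
rater2 = [3, 4, 3, 4, 4, 3, 3, 3, 3, 4]
print(percentage_agreement(rater1, rater2))  # 80.0
print(cohens_kappa(rater1, rater2))          # approximately .58

Because the chance term p_e is subtracted before rescaling, kappa can never exceed the raw proportion of agreement, which is why the chance-corrected values reported in Chapter Four are more conservative than the corresponding percentage agreements.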
3. Agreement Indicator: Post-Consultation Interview

Following the consultation, each participant was interviewed by the researcher or an assistant. In this interview each participant was asked to describe what he/she believed to be the nature of the student's problem. The participants were asked to summarize the content of their consultation interview and to present the problems discussed in order of priority. Although the post-consultation interviews were conducted independently, the classroom teacher was typically interviewed first, immediately following the consultation session. Participants completed the remaining evaluation forms while waiting to be interviewed by the researcher or immediately after their post-consultation interview with the researcher. Responses from each participant were tape recorded and later transcribed. Two raters reviewed the written transcripts to code the post-consultation interviews for mutual agreement between the respondents' descriptions of identified problems, particularly the problem which was given the highest priority. In those cases where respondents identified multiple problems, the level of participant agreement was coded for the problem the participant described as of highest priority.

Independent Interview Evaluators: Raters

Two raters, the researcher and another rater, independently examined the transcriptions of the participants' responses collected at the end of the interview for agreement as to the nature of the identified problem(s). In addition to an overall rating of the post-consultation interview, the raters completed an evaluation form identical to the second part of the form completed by the participants, identifying the number and nature of the problems recalled by the interview participants. Both raters were knowledgeable about the goals and procedures of the interview format in the consultation process. Approximately two hours of training was provided by the researcher on the procedures for identifying targeted problems and scoring for agreement between consultant and consultee, through the use of pilot transcriptions. Training consisted of viewing a sample interview videotape, listening to the audiotaped responses of the pilot participants' individual post-consultation interviews, and rating sample transcripts of the interview dialogue between the researcher and the participants. Individual responses to the pilot transcriptions were compared to assure agreement between the raters prior to their individual rating of the collected data.

Agreement Ratings

The ratings of agreement were based on the learning assistants' and classroom teachers' descriptions of the student's problems given in their independent post-consultation interviews. Two trained raters independently coded/rated the level of agreement reached by the learning assistant and classroom teacher. The rating was accomplished in a two-step process. The first step was a rating of the individual post-consultation interviews for the nature of the problem and its accompanying descriptions, the number of problem areas identified, and the priority given when more than one problem was identified. Each of the 72 post-consultation interviews was selected in random order for this evaluation. Interview tapes with the accompanying transcriptions were rated independently by the two raters through completion of the problem identification forms, with descriptors, for up to three problems in priority. These were the same problem identification evaluation forms completed by the participants after their consultation session. Use of the audiotape of the post-consultation interview and the rating of the transcriptions were based on the description provided by Hay et al. (1979) of their assessment of problem identification in an initial interview. The written problem identification measure was similar to the measure of consultation evaluation developed by Knoff (1982a, 1982b) for a retrospective assessment of problem identification in a clinical interview. The second step in the rater evaluation required that the two raters independently listen to and read the transcripts of both participants' post-consultation interviews about the same student. The raters evaluated the level of agreement between the descriptions provided by the two participants. Each rater coded the interview dyads on a four-point scale of "One" (Complete Disagreement), "Two" (Mostly Disagree), "Three" (Mostly Agree), and "Four" (Complete Agreement). Complete agreement indicated that both participants described the same problem, with the same descriptors, in the position of highest priority in their post-consultation interviews; if other problems were discussed, the priority given them by both participants also matched. (See Appendix #5 for an example of a transcript which was rated as having complete agreement.)
A rating of "mostly agree" (level three) was based upon participant agreement on the nature of the highest priority problem but with dissimilar descriptions, or a lack of descriptions, to support the choice of problem. Typically the first and second priority problems were reversed in their endorsements, and other indicated problems did not reflect the same order of priority. (See Appendix #6 for a transcript which was rated as "mostly agreed.") A rating of "mostly disagree" (level two) described different problems receiving the highest priority endorsement, remaining problems which did not share the same priority level, and problem descriptors which were completely dissimilar. A rating of "complete disagreement" (level one) described completely dissimilar problems receiving the highest priority position, with no agreement evident in the nature, number, priority ranking, or descriptions of the problems. (Examples of level one and level two ratings were not available from the transcripts, since no interview dyad received those endorsements from the raters.)

Summary

The purpose of this chapter was to describe the nature of this study of participant agreement on problem identification in the initial interview process of consultation. The study described the current practice of problem identification within the initial consultation interview as used by participants from nine Catholic schools. Consultants and consultees reported their shared understanding (interparticipant agreement) of the nature of a student's problems following their consultation interview and the completion of evaluative measures. Raters independently rated the post-consultation interviews for participant agreement on the nature and number of the problems identified.

CHAPTER FOUR: RESULTS

This chapter presents the results of the study of participant agreement between the consultant (learning assistance teacher) and consultee (classroom teacher) regarding the nature of the school-based problems of a client (student) described as difficult to teach.

Demographics

Participants

Learning Assistants/Consultants

The participants (N=45) in this study were nine learning assistance teachers and thirty-six classroom teachers from the nine schools from which voluntary participation had been requested. The learning assistance teachers conducted individual problem identification interviews with each of four classroom teachers from their schools. Table 4.1 summarizes the demographic information obtained from the consultant/learning assistance teachers. All nine learning assistants were female, most in the age range of 31-40. Six of the learning assistants held Bachelor of Education degrees and two held Master's degrees. Their mean number of years in the field of education was 11.55 (SD=4.79). The learning assistants served students from primary to secondary grades.
Table 4.1: Participant Demographic Information
Learning Assistance Teachers/Consultants (N=9)

Descriptor                                N     Mean    SD     Percent
Sex
  Female                                  09                   100
Age
  Under 30                                01                   11.1
  31 - 40                                 05                   55.5
  41 - 50                                 03                   33.3
Level of Educational Training (N=8)
  BEd                                     06                   75.0
  MA                                      02                   25.0
Years in Education                              11.55   4.79
Level of Students
  Primary                                 02                   22.2
  Secondary                               01                   11.1
  Primary/Intermediate                    05                   55.5
  Intermediate/Secondary                  01                   11.1

Classroom Teachers/Consultees

Thirty-six classroom teachers participated as consultees and responded to the demographic questionnaire, as summarized in Table 4.2. One half of the classroom teachers were under the age of 30; their mean number of years in the education field was 9.27 (SD=7.73). Twenty-three teachers (63.8%) held Bachelor of Education degrees. Primary through secondary level students were served by the participating classroom teachers: teachers of primary level students represented 44.4% (n=16) of the classroom teachers, teachers of intermediate students comprised 25% (n=9), and 19.4% (n=7) taught at the secondary level.

Table 4.2: Participant Demographic Information
Classroom Teachers/Consultees (N=36)

Descriptor                                N     Mean    SD     Percent
Sex
  Female                                  31                   86.1
  Male                                    05                   13.8
Age
  Under 30                                18                   50.0
  31 - 40                                 07                   19.4
  41 - 50                                 08                   22.2
  Over 50                                 03                   8.3
Level of Educational Training
  BA                                      06                   16.6
  BEd                                     23                   63.8
  BA + credits                            06                   16.6
  MA                                      01                   2.7
Years in Education                              9.27    7.73
Level of Students
  Primary                                 16                   44.4
  Intermediate                            09                   25.0
  Secondary                               07                   19.4
  Primary/Intermediate                    03                   8.3
  Intermediate/Secondary                  01                   2.7

Participants with teaching assignments on combined levels comprised 11% (n=4).

Prior Experience with Consultation

Participants were questioned as to their previous experience with, training in, and rating of consultation as part of their occupational function. The learning assistance teachers' responses are reported in Table 4.3.

Table 4.3: Participant Familiarity with Consultation
Learning Assistance Teachers (N=9)

Descriptor                                            N     Percentage
Formal Training in Consultation Practices (N=8)
  Yes                                                 02    25.0
  No                                                  06    75.0
Prior Experience with Formal Consultation Practices
  None                                                02    22.2
  Some                                                02    22.2
  Frequent                                            05    55.5
Rating of Prior Consultation Experience (N=7)
  Positive                                            06    85.7
  Neutral                                             01    14.2

Of the eight respondents, two (25%) had been formally trained in consultation through workshops and/or university courses; the remaining six (75%) had no previous formal training. (Two reported informally that they had gotten information on consultation from colleagues and knew about it through professional development seminars, but did not feel their experience warranted formal endorsement.) Seven (77.7%) of the nine learning assistants had prior experience in practicing consultation at their schools, and five (55.5%) reported it as their usual approach to problem solving with colleagues. Two of the learning assistance teachers had not used consultation in a formal manner prior to their experience in this study. In rating their prior consultation experiences on a three-point scale (positive, neutral, or negative), six learning assistants (85.7%) endorsed a positive rating and one (14.2%) endorsed the neutral rating; no one endorsed the negative rating. Table 4.4 reports the classroom teachers' responses to queries regarding their prior experience with consultation. Seven classroom teachers (19.4%) had received some formal training in consultation practices, either through workshops or professional development seminars.
Twenty-nine classroom teachers (80.5%) reported no formal training. Prior experience with the practice of consultation was reported by twenty-seven classroom teachers (74.9%), ten of whom (27.7%) reported frequent experience with this method of practice.

Table 4.4: Participant Familiarity with Consultation
Classroom Teachers (N=36)

Descriptor                                            N     Percentage
Formal Training in Consultation Practices
  Yes                                                 07    19.4
  No                                                  29    80.5
Prior Experience with Formal Consultation Practices
  None                                                09    25.0
  Some                                                17    47.2
  Frequent                                            10    27.7
Rating of Prior Consultation Experience (N=33)
  Positive                                            22    66.6
  Neutral                                             11    33.3

On the three-point scale rating prior experiences as positive, neutral, or negative, 33 classroom teachers responded in total; two-thirds of the respondents endorsed the positive rating and one-third gave a neutral endorsement. No one endorsed the negative rating.

Summary

The study sample included 9 learning assistance teachers and 36 classroom teachers whose combined mean number of years in the field of education was 10.41 (SD = 1.61). Seventy-five percent of the learning assistance teachers, who conducted the consultation interviews as consultants, had not received formal training in consultation practices, and only about half of the learning assistants used consultation as their usual problem solving approach. Two of the learning assistants had never used consultation prior to participating in the study. Eighty percent of the classroom teachers, who participated as consultees, had not received any training in consultation, and approximately 25% did not use consultation. The participants were thus relatively untrained in the use of consultation. Few participants reported frequent use of this format for problem solving, but those who did use consultative problem solving reported moderately positive past experiences. This was not an "expert" sample of consultation practitioners.

Psychometrics

Participant Interview Evaluation Measure: Part I

Both members of the learning assistant-classroom teacher interview dyad completed the three-item Interview Evaluation: Participants scale immediately following their consultation interview or post-consultation interview with the researcher. (A copy of the scale is included in Appendix #3.) This measure required the participants to respond, using a four-point Likert-type scale, to three questions regarding three aspects of their consultation interview. Responses to the three questions reported the extent to which the process was helpful, how thoroughly the problem was identified, and the level of understanding of the problem shared between the participants. Four rating levels were given for each question, and the scoring of the levels was similar across questions. A "1" response meant no occurrence of, or complete disagreement with, the topic in question; "2" represented somewhat of an occurrence or moderate disagreement; "3" was a stronger endorsement of the occurrence and moderate agreement; and "4" was the strongest endorsement of an occurrence and/or complete agreement. Table 4.5 reports the means and standard deviations of the participant responses to the three interview evaluation questions by individual school. The overall mean rating of learning assistants on interview helpfulness was 3.22 with a standard deviation of 0.59, and the classroom teachers' mean rating was 3.14 with a standard deviation of 0.80.
Responses across schools showed considerable variability, within a range of 3.00-3.75 for learning assistants and 2.75-3.75 for classroom teachers. One third of the schools showed exact mean and standard deviation agreement between dyad participants. Four of the learning assistance teachers gave the same level three rating to each of their four interviews, while the classroom teachers who participated in these interviews varied in their ratings of interview helpfulness.

Table 4.5: Means (and Standard Deviations) of Learning Assistance and Classroom Teacher Ratings of Interview Characteristics Across Schools

School    Participant   Helpfulness   Identification   Understanding
1         LA            3.00 (0.82)   3.00 (0.82)      2.50 (0.58)
          CT            3.00 (0.82)   3.25 (0.50)      3.25 (0.50)
2         LA            3.50 (0.57)   2.50 (0.58)      3.00 (0.82)
          CT            3.50 (0.57)   3.50 (0.58)      3.25 (0.96)
3         LA            3.00 (0.00)   3.00 (0.00)      3.00 (0.00)
          CT            3.25 (0.50)   2.75 (0.50)      3.50 (0.58)
4         LA            3.25 (0.50)   3.00 (0.82)      3.50 (0.58)
          CT            3.75 (0.50)   3.00 (0.82)      3.75 (0.50)
5         LA            3.75 (0.50)   3.25 (0.50)      3.25 (0.50)
          CT            3.75 (0.50)   3.25 (0.96)      3.75 (0.50)
6         LA            3.00 (0.00)   3.00 (0.00)      2.75 (0.50)
          CT            3.00 (0.82)   2.75 (0.95)      3.25 (0.96)
7         LA            3.50 (0.57)   3.00 (0.00)      3.25 (0.50)
          CT            2.75 (0.50)   3.25 (0.50)      3.50 (0.58)
8         LA            3.00 (0.00)   3.50 (0.58)      3.00 (0.00)
          CT            2.50 (1.00)   3.00 (0.82)      3.25 (0.50)
9         LA            3.00 (0.00)   3.00 (0.82)      3.50 (1.00)
          CT            2.75 (0.96)   3.00 (0.00)      3.25 (0.50)
Overall   LA            3.22 (0.59)   3.02 (0.56)      3.08 (0.60)
          CT            3.14 (0.80)   3.08 (0.65)      3.42 (0.60)

The overall mean of the problem identification ratings was 3.02 (SD = 0.56) for the learning assistants and 3.08 (SD = 0.65) for the classroom teachers. Participants from one school reported the same mean and standard deviation scores for this area. The range of scores for learning assistants (2.50-3.50) showed considerable variability, as did the narrower range for classroom teachers (2.75-3.50). The overall mean rating of participant shared understanding (agreement) was 3.08 (SD = 0.60) for the learning assistants and 3.42 (SD = 0.60) for the classroom teachers. This area showed the greatest difference between the participants' ratings of the three. Learning assistants' ratings ranged between 2.50 and 3.50 and classroom teachers' between 3.25 and 3.75, with considerable variability indicated by the standard deviations. The results indicated heterogeneity between individual participants' ratings and across schools. The learning assistants were constant for each school and, consequently, their ratings across each area were more homogeneous than those of the classroom teachers. One learning assistant gave each interview the same rating across all areas. Three learning assistants gave the same ratings to interview helpfulness and problem identification but varied on shared understanding.

Interview Helpfulness

A summary of the consultation participants' ratings of their satisfaction with the helpfulness of the interview in defining students' problems is presented in Table 4.6.
The majority of the learning assistants' ratings were "mostly helpful," with a smaller percentage indicating their interviews were "completely helpful." Eight percent of the learning assistants' responses assigned a level two rating, "somewhat helpful," to their interview with the classroom teacher.

Table 4.6: Distribution of Learning Assistants' and Classroom Teachers' Ratings of Interview Helpfulness

Participant   Percent of Ratings
              1      2      3      4
LAT           0      8      61     31
CT            0      25     36     39

Note: Ratings of helpfulness were on a 4-point scale where "1" = Not At All Helpful, "2" = Somewhat Helpful, "3" = Mostly Helpful, and "4" = Completely Helpful.

The classroom teachers' ratings were more evenly distributed: 25% of the classroom teachers rated their interview at level two, 36% at level three, and 39% at level four. Although a substantial proportion of the classroom teachers found their interview to be completely helpful (39%), 25% rated their interviews as only somewhat helpful. The learning assistants' responses were generally more favorable as to the helpfulness of the interview in defining students' problems.

Interview Problem Identification

A summary of the learning assistants' and classroom teachers' evaluative ratings of the degree to which they identified student problems during the interview is presented in Table 4.7.

Table 4.7: Distribution of Learning Assistants' and Classroom Teachers' Ratings of Problem Identification

Participant   Percent of Ratings
              1      2      3      4
LAT           0      14     70     16
CT            0      17     58     25

Note: Ratings of problem identification were on a 4-point scale where "1" = Not At All Identified, "2" = Somewhat Identified, "3" = Mostly Identified, and "4" = Completely Identified.

The majority of the learning assistants' responses (70%) endorsed the interview as having mostly identified the problem(s) discussed; the remaining 30% were almost evenly divided between "somewhat" and "completely" identified. Over half of the classroom teachers' endorsements (58%) were ratings of mostly identified, with the remainder divided between somewhat identified and completely identified; one quarter of the classroom teachers' responses endorsed complete identification of the problem. The distribution of percentages suggests that the learning assistants and classroom teachers endorsed the same levels of agreement as to the degree of problem identification reached in their interview, but differed in the strength of their endorsements at each level.

Shared Understanding Between Participants

A summary of the learning assistance teachers' and classroom teachers' ratings of the degree of shared understanding of student problems between consultation participants is presented in Table 4.8.

Table 4.8: Distribution of Learning Assistants' and Classroom Teachers' Ratings of Shared Understanding

Participant   Percent of Ratings
              1      2      3      4
LAT           0      14     61     25
CT            0      6      47     47

Note: Ratings of shared understanding were on a 4-point scale where "1" = Not At All Understood, "2" = Somewhat Understood, "3" = Mostly Understood, and "4" = Completely Understood.

More than half of the learning assistants' ratings of shared understanding from their interview were designated as "mostly understood."
Twenty-five percent of their endorsements were ratings of "complete understanding," and the remainder were designated as "somewhat understood." Ninety-four percent of the classroom teachers' ratings were evenly divided between "mostly" and "completely" understood; only a small percentage endorsed a "somewhat understood" rating. The distribution of percentages suggests that the learning assistants' perspective had greater variability and reflected more disagreement than that of the classroom teachers.

Post Consultation Interview Agreement

The first level of analysis of these ratings was to investigate the level of interrater agreement in order to establish the interrater reliability of the ratings between the two raters. Interrater agreement was determined by comparing the ratings assigned by the two independent raters to each pair of post-consultation interviews. (A copy of the rating scale is provided as Appendix #4.) All ratings fell between levels three and four, "mostly agree" and "complete agreement," respectively. The distribution of ratings is summarized in Table 4.9.

Table 4.9: Interrater Agreement

Interrater   Percent of Ratings          Overall Agreement Kappa
             1      2      3      4
Rater 1      0      0      53     47     0.78
Rater 2      0      0      64     36

The raters demonstrated an interrater percentage agreement of .89, calculated by the percentage agreement formula (number of agreements / (number of agreements + disagreements)). In order to correct for chance agreement, Cohen's Kappa was also calculated for the interrater agreement. The level of agreement based on the analyses of the post-consultation interview transcriptions and accompanying tapes was K=.78, which meets the level considered necessary for overall agreement. All of Rater 1's and Rater 2's responses were at levels three and four, indicating that they rated all interviews as agreements. Rater 1's ratings were almost evenly divided between "mostly agree" (53%) and "complete agreement" (47%). Rater 2 placed a stronger endorsement on mostly agreeing (64%) than on complete agreement (36%). Neither rater assigned ratings of level one or two, which would have indicated a disagreement between interview participants. When compared with the consultation participants' ratings of shared understanding/agreement on the problem(s) identified in the interview, the results indicated that the participants assigned more disagreement than the raters. The learning assistants and classroom teachers assigned ratings of "somewhat agreed" to 14% and 6% of the interviews, respectively, indicating some disagreement in how they perceived their dyad's shared understanding of the problem(s) identified in the interview. The raters independently reported that the post-consultation interviews reflected only two levels of agreement (either "mostly" or "completely") between the participants. The strongest overall level of support was the rating of "mostly understood" given by the learning assistants and the raters to the shared understanding reached as a result of the interview; the classroom teachers divided their ratings evenly between mostly and completely shared understanding. There appears to be general agreement among participants and raters that the interview participants were mostly in agreement as to the problems discussed, but given the diversity of rating assignments a general rather than a specific conclusion is drawn.
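Table 4.9 reports only each rater's marginal distribution, but kappa also depends on how the two raters' codes pair up interview by interview. The following sketch uses one hypothetical joint table that is consistent with the published marginals (53%/47% and 64%/36% over 36 dyads) and with the .89 percentage agreement; the cell counts are a reconstruction for illustration, not the study's raw data.

# One joint table consistent with Table 4.9's marginals (hypothetical):
# rows = Rater 1 (level 3, level 4); columns = Rater 2 (level 3, level 4)
joint = [[19, 0],
         [4, 13]]

n = sum(sum(row) for row in joint)            # 36 interview dyads
p_o = (joint[0][0] + joint[1][1]) / n         # observed agreement = 32/36, about .89
r1 = [sum(row) / n for row in joint]          # Rater 1 marginals: about .53, .47
r2 = [sum(col) / n for col in zip(*joint)]    # Rater 2 marginals: about .64, .36
p_e = sum(a * b for a, b in zip(r1, r2))      # chance-expected agreement, about .51
kappa = (p_o - p_e) / (1 - p_e)               # about .77, near the reported .78

The small gap between this reconstructed value and the reported .78 comes from rounding the published percentages back to whole counts.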
For the purpose of subsequent analyses involving the independent raters, the ratings provided by Rater 2 were used.

Interparticipant Agreement

The second level of analysis, investigating agreement between the participants' ratings of interview helpfulness, problem identification, and shared understanding, applied the Kappa statistic for chance-corrected percentage agreement and chi-square analyses of their response agreements.

Interview Helpfulness

In order to explore similarities between the learning assistants' and classroom teachers' responses as to how helpful they found the interview in facilitating problem identification, a Kappa statistic of agreement between the learning assistance teachers' and the classroom teachers' responses was calculated at K = .26, a low level of agreement. This reflected the variation in responses between learning assistants and classroom teachers as to the level of interview helpfulness. When the participants' responses were analyzed as a group, the chi-square calculation was significant (χ²(2) = 5.674, p < .10), indicating that an association could be made between the participants and their responses. That is, the pattern of responses for learning assistants was significantly different from that of classroom teachers. The contingency coefficient of .27 indicates the presence of a modest relationship between the raters (participants) and the status of their ratings. The learning assistants found the interview to be more helpful in identifying student problems than did the classroom teachers.

Problem Identification

In order to investigate further the level of problem identification which occurred between the participants during the interview, Kappa was calculated for participant agreement at K = -.06, indicating essentially no agreement beyond chance between the learning assistants and classroom teachers in their ratings of the extent to which problems were identified in their interview. When the participants' responses were analyzed as a group, the chi-square calculation was not significant (χ²(2) = 1.039, p = .595), so no association could be made between the ratings and the status of the raters. This indicates that their disagreement in ratings, as noted in the Kappa result, was not related to their participant status.

Shared Understanding

Further investigation of the shared understanding of interview participants on the problem identified in the interview resulted in a Kappa calculated at K = .05, indicating weak agreement between the learning assistance teachers and classroom teachers. This suggests that the level of shared understanding was viewed differently by the participants, and is consistent with the variability between the ratings given by the participants to the shared understanding or agreement reached from their interview.

In order to investigate the similarity of the ratings of shared understanding provided by the learning assistance teachers and classroom teachers as a group, a chi-square analysis was conducted on their ratings. This analysis yielded a significant chi-square (χ²(2) = 5.42, p < .10), indicating differences between the pattern of response ratings given by learning assistance teachers and that of the classroom teachers. A contingency coefficient of .26 indicated a modest relationship between the ratings and the status of the raters. As a group, the learning assistants viewed the shared understanding or agreement reached during the interview differently from the classroom teachers.
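The chi-square computations above can be illustrated, and in the case of interview helpfulness reproduced, from the published percentages. Converting each group's Table 4.6 percentages to counts out of 36 interviews gives approximately (3, 22, 11) for the learning assistants and (9, 13, 14) for the classroom teachers across levels two to four (level one was never endorsed). A sketch of the computation follows (Python with SciPy, added for illustration only; the count reconstruction is ours, not the thesis's):

    import math
    from scipy.stats import chi2_contingency

    # counts reconstructed from the Table 4.6 percentages (N = 36 per group);
    # rows = participant group (LAT, CT), columns = rating levels 2-4
    table = [[3, 22, 11],
             [9, 13, 14]]

    chi2, p, dof, expected = chi2_contingency(table)

    # contingency coefficient C = sqrt(chi2 / (chi2 + N)), the thesis's
    # measure of association between rater status and rating level
    n_total = sum(sum(row) for row in table)
    c = math.sqrt(chi2 / (chi2 + n_total))
    print(f"chi-square({dof}) = {chi2:.3f}, p = {p:.3f}, C = {c:.2f}")
    # chi-square(2) = 5.674, p = 0.059, C = 0.27

This reproduces the reported χ²(2) = 5.674 and the contingency coefficient of .27 for interview helpfulness, which lends some confidence to the reconstruction.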
Participant - Rater Agreement

Further investigation of the shared understanding or agreement between participants was conducted by comparing the participants' ratings of agreement on problem identification with the second rater's overall ratings of participant shared understanding as expressed in the post consultation interview. The results of the comparisons are summarized in Table 4.10.

Table 4.10: Participant - Rater Agreement

Response       Percent of Ratings           Kappa
Level           1     2     3     4
LAT             0    14    64    22          0.14
R2              0     0    64    36
CT              0     6    47    47          0.17
R2              0     0    64    36

Note: Ratings of Agreement were on a 4-point scale where "1" = No Agreement, "2" = Somewhat Agreed, "3" = Mostly Agreed, and "4" = Complete Agreement.

Rater 2's endorsements of the level of agreement between participants were distributed between level three ("mostly agree") and level four ("complete agreement"). Sixty-four percent of the ratings given by Rater 2 and 64% of the ratings given by the learning assistants were level 3 ratings of "mostly agree"; the actual number of interviews to which both the learning assistants and Rater 2 assigned a level 3 rating was 15. However, the learning assistants endorsed 14% at a level two rating, "somewhat understood," a response level not endorsed by the rater. The rater endorsed complete agreement (level 4) with 36% of the ratings, compared with 22% for the learning assistants. When rating levels for individual interviews were compared with the learning assistants' ratings, a percentage agreement of .56 was reached; corrected for chance agreement, the Kappa statistic was K = .14, indicating little agreement between the learning assistants and Rater 2 on specific interviews.

Comparisons between the classroom teachers' and Rater 2's percentages revealed greater overall agreement in the distribution of their endorsements between level three and level four. Classroom teachers were evenly divided in their ratings of levels three and four of participant shared understanding, whereas Rater 2 gave stronger endorsement to level three, "mostly agree." Classroom teachers assigned 6% of the ratings to level two, "somewhat understood," a response level not used by Rater 2. The comparison of the classroom teachers' ratings with Rater 2's ratings using Kappa indicated little agreement (K = .17) between rating sources for individual interviews. They appeared to rate the shared understanding/agreement of the interview differently, which resulted in greater variability in ratings. Although the comparisons of the learning assistants with the second rater and of the classroom teachers with the second rater appear similar, the low Kappa in each case suggests that participants and rater were using different criteria to evaluate shared understanding. These findings suggest that ratings of post consultation interview agreement do not accurately predict participants' ratings of their shared understanding of problems identified in the consultation interview.

Further chi-square analysis of interview shared understanding was conducted on the interparticipant agreement with Rater 2. This analysis yielded a significant chi-square (χ²(4) = 9.78, p < .10), indicating a significant difference between the participants and the rater in the association between rater status and ratings.
The participants (learning assistants and classroom teachers) were more likely to distribute their ratings of understanding across three levels of response, including some indication of a "somewhat understood" interview situation. Rater 2's distribution fell between levels three and four, indicative of stronger agreement between the participants. A contingency coefficient of .29 indicated a modest relationship between the ratings and the status of the raters. This finding supports the conclusion that the participants and rater used different criteria from which to evaluate the shared understanding of the problems identified during the interview. It suggests that the interview participants were more likely to identify potential disagreements than the rater.

Participant Interview Evaluation Measure: Part II

Problem Identification by Number, Nature, and Priority

The second part of this measure was a checklist which allowed participants to identify up to three student problems they may have discussed during the interview. The first part of the checklist was aimed at broad problem identification as to the nature of the student's problem, and further descriptors could be endorsed to assist in clarifying some general characteristics. Participants were asked to decide whether the problem discussed during the interview was academic or social/emotional/behavioral. Next, they were to endorse one or more of the descriptors provided for greater clarification. In addition, participants were asked to rank the problems in order of highest priority if more than one problem was selected.

Number of Problems

The mean number of problems reported by participants within individual schools, followed by the overall mean, is reported in Table 4.11.

Table 4.11: Participant Interview Evaluation: Part II — Mean Number of Problems Reported by Participants in Individual Schools

School     Participant     Problem Mean (SD)
1          LAT             1.25 (0.50)
           CT              1.75 (0.50)
2          LAT             1.75 (0.96)
           CT              2.00 (0.82)
3          LAT             1.75 (0.96)
           CT              1.50 (0.58)
4          LAT             2.00 (0.82)
           CT              2.00 (0.82)
5          LAT             2.25 (0.50)
           CT              2.00 (0.00)
6          LAT             2.75 (0.50)
           CT              1.75 (0.50)
7          LAT             1.75 (0.50)
           CT              2.25 (0.50)
8          LAT             2.75 (0.50)
           CT              2.50 (0.60)
9          LAT             1.75 (0.50)
           CT              2.25 (0.50)
Overall    LAT             2.00 (0.75)
           CT              2.08 (0.64)

The learning assistants and classroom teachers reported approximately the same number of problems (M = 2.00 and 2.08, respectively), given the opportunity to identify up to three problems per student. The participants of one school shared the same mean and standard deviation; participants from the remaining eight schools varied in their responses, as noted in the distribution of their responses by percentages.

Table 4.12 summarizes the number of problems identified by the participants, by percentage of occurrence. Participants were provided the opportunity of identifying up to three problem areas.

Table 4.12: Problem Identification: Number of Problems Identified by Participants

           One Problem        Two Problems       Three Problems
           N    Percent       N    Percent       N    Percent
LAT       10      28         16      44         10      28
CT         6      17         21      58          9      25

Kappa = .10

Approximately half of the students discussed in the interview by either participant were identified with two problems. The classroom teachers' distribution of the number of problems was more heterogeneous than the learning assistants' distribution. Two or more problems were reported more often than a single problem, according to the distribution of percentages.
The percentage agreement between the participants was .53, and the Kappa statistic was calculated to correct the participants' agreement on the number of problems identified for chance. The result, K = .10, reflects the diversity of the distribution and is indicative of low agreement.

Table 4.13 summarizes the interrater agreement on the number of identified problems per interview and compares the number identified by the second evaluator (Rater 2) with the participants' responses. The Kappa for agreement between Rater 1 and Rater 2 on the learning assistants' responses was 0.50, indicating moderate agreement. Interrater agreement on the classroom teachers' responses using Kappa was 0.31, indicating mild agreement. The low Kappa scores suggest that the measure was insufficiently sensitive to variations in problem descriptions and that its criteria for description were inadequate.

Table 4.13: Interrater Agreement on the Number of Identified Problems Per Interview

              One Problem        Two Problems       Three Problems
              N    Percent       N    Percent       N    Percent
LAT: R1      10      28         13      36         13      36
     R2       7      19         15      42         14      39
Kappa = .50

CT:  R1       6      17         16      44         14      39
     R2       4      11         17      47         15      42
Kappa = .31

Rater 2 with the Learning Assistants:
LAT          10      28         16      44         10      28
R2            7      19         15      42         14      39
Kappa = .28

Rater 2 with the Classroom Teachers:
CT            6      17         21      58          9      25
R2            3       8         18      50         15      42
Kappa = .10

Table 4.13 also summarizes Rater 2's distribution of the number of problems identified as compared with the participants' distributions. Using the chance-corrected Kappa, moderately low agreement (K = .28) was found when comparing the learning assistants' distribution with Rater 2's. Both gave their strongest endorsements to two and three problems, with close agreement in identifying two problems in approximately half of the interviews. Comparison with the classroom teachers revealed weak agreement (K = .10) with Rater 2's endorsements. The greatest diversity of responses occurred between the rater and the classroom teachers as to agreement on the number of problems identified. Meaningful interpretation of these results is reduced by the low interrater agreements reported.

Nature of the Problem

Results of the identification of the nature of the highest and second highest priority problems are reported in Table 4.14. (Endorsements for a third problem were considerably lower and were not included in the analysis.)

Table 4.14: Problem Identification: Nature of the Problem and Ranking Identified by Participants

HIGHEST PRIORITY PROBLEM:
                 Academic          Soc/Emot/Beh
Participant      N    Percent      N    Percent
LAT             15      42        21      58
CT              17      47        19      53

SECOND PRIORITY PROBLEM:
                 Academic          Soc/Emot/Beh
Participant      N    Percent      N    Percent
LAT             26      62        10      38
CT              19      60        12      40

Percentages reported a near-even split between academic and social/emotional/behavioral problems for both participant groups, slightly favoring social/emotional/behavioral descriptions over academic descriptions for the highest priority problem identified. A Kappa statistic to confirm participant agreement on the highest priority problem was calculated at K = .66, indicative of moderately strong agreement. This is a chance-corrected index of the participants' agreement when asked to decide whether the highest priority problem was academic or social/emotional/behavioral; their raw percentage agreement was determined to be .83. This finding suggests that participants were somewhat more likely than not to agree on a general description of the highest priority identified problem.
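As an internal consistency check, this Kappa follows directly from the figures already reported. With raw agreement p_o = .83 and the marginal proportions of Table 4.14, the chance-expected agreement is

    p_e = (.42)(.47) + (.58)(.53) = .505

so that

    K = (p_o - p_e) / (1 - p_e) = (.83 - .505) / (1 - .505) ≈ .66,

matching the reported value (rounding aside).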
Percentages reported for the second priority problem indicated somewhat stronger endorsement of academic over social/emotional/behavioral problems by both the learning assistants and the classroom teachers. All learning assistants assigned a second problem to all students discussed in the interview, and 31 of the 36 classroom teachers reported a second problem for the same students.

Reliability of Ratings of the Nature of the Problem

Interrater reliability was determined, as described earlier, by the two raters using the participants' post consultation interviews as the source of information for the ratings. Each participant's post consultation interview was rated separately from that of the other member of the dyad in order to reduce the potential influence of the other's representation of the student's problem. The interrater agreements on the participants' responses to the nature of the problems identified are summarized in Table 4.15. Interrater agreement was determined at a Kappa of .61 when comparing the nature of the problems identified by the learning assistants, and .66 when comparing those identified by the classroom teachers. Although these scores reflect moderate agreement on the type of problem given highest priority by the participants, the agreements were not strong enough to support meaningful conclusions. These findings also suggest that caution be used in further analyses of the results.

Table 4.15: Interrater Agreement on the Nature of the Highest Priority Problem

                          Academic          Soc/Emot/Beh
                          N    Percent      N    Percent
Learning Assistants (N=35):
R1                       11      31        24      69
R2                       18      51        17      49
Kappa = .61

Classroom Teachers (N=36):
R1                       13      36        23      64
R2                       17      47        19      53
Kappa = .66

Further analyses were conducted using the participants' ratings of the highest priority problem in comparison with the second rater's ratings of the participants' post consultation interviews. The results are summarized in Table 4.16. Moderately strong agreement was found between Rater 2 and the learning assistants (K = .67) and between Rater 2 and the classroom teachers (K = .78). Since the interrater agreement was only of moderate strength on the participants' ratings of the nature of the highest priority problem, the comparative data of the participants with the rater, although strong in appearance, are interpreted with caution.

Table 4.16: Comparison Agreement Between Participants and Rater 2

                  Academic          Soc/Emot/Beh
                  N    Percent      N    Percent
Rater 2 & LAT:
LAT              15      42        21      58
R2               19      53        17      47
Kappa = .67

Rater 2 & CT:
CT               17      47        19      53
R2               17      47        19      53
Kappa = .78

An interesting finding in these comparisons is the strong level of agreement between the classroom teachers and the second rater. A possible explanation is that they used similar criteria for identifying a problem as academic versus social/emotional/behavioral. The low interrater reliability established for this measure requires caution in interpreting this finding; however, it suggests there may be potential for stronger agreement than the results indicated, depending upon the way in which problem identification is operationalized and whether more stringent criteria are used.

Other Problems Missed or Ignored

The final stage of the checklist evaluation form completed by the participants asked them to report whether any other problems were known about the student which may have been missed or ignored by either participant during the interview. Responses are presented as percentages in Table 4.17.
Table 4.17: Other Problems Unreported During the Interview

                                              Percentage
Problem Description     Participant    N      Yes      No
Other Problems          LAT           35     22.8     77.1
                        CT            36     19.4     80.5
Problems Missed         LAT           25     20.0     80.0
                        CT            27      3.7     96.2
Problems Ignored        LAT           26     11.5     88.4
                        CT            27      0.0    100.0

Fewer responses were made to these final questions regarding the content of the interview. Nearly twenty-five percent of the participants' responses indicated that other problems did exist for the students in question. Subsequent responses to the remaining questions were inconsistent, and it was difficult to detect a pattern for analysis.

Summary

Analyses were conducted on the data collected from the nine learning assistance teachers and thirty-six classroom teachers who participated in this study of problem identification in consultation. Demographic information regarding educational background and prior experience with consultation in a school setting was collected initially from the participants. Approximately 75% of the participants from each group reported some experience with consultation prior to their involvement in this study. Nearly all participants lacked formal training and extensive practice; therefore, this was considered a nonexpert sample of consultation practitioners.

Learning assistants were asked to conduct problem identification interviews with four classroom teachers from their schools regarding actual students in attendance. Following the interviews, each participant was interviewed separately for a summary statement as to what problems were identified during the interview. These post-consultation interviews were transcribed and rated independently for agreement between the participants as to the nature, number, and priority of the problem(s) discussed. Interrater agreement on the participant dyads' overall agreement or shared understanding of the interview content was calculated with Cohen's Kappa at 0.78.

Interparticipant agreement was investigated using a participant evaluation of the interview on the variables of interview helpfulness, problem identification, and shared understanding of the interview content. Significant differences were determined in the association between participant status and the ratings given by learning assistants and classroom teachers on interview helpfulness and shared understanding. The second rater's evaluations were compared with the learning assistants' and classroom teachers' responses, but the resulting Kappa statistics indicated agreement too weak to support any conclusive predictions regarding the participants' agreement.

Interparticipant agreement was also investigated for the number and nature of the problems reported in the post consultation interview and in the second part of the evaluation form. Participant agreement on the number of problems reported was calculated with a Kappa which was too low to establish strong agreement. Participant agreement on the nature of the problem was also calculated with Kappa (K = .66), which indicated moderate agreement. Interrater agreements were too low for comparisons with the participants in the areas of the nature and number of reported problems, reflecting the measure's insufficiency in detecting the nature and number of the problems discussed. In light of this, caution is taken in interpreting the agreement between the participants, since the raters were unable to demonstrate an acceptable level of agreement using Kappa.
The majority of the interview evaluations indicated that further problems were present with the student and not discussed, but there was no consistent pattern from which to evaluate the responses adequately.

CHAPTER FIVE: SUMMARY AND CONCLUSIONS

The purpose of this study was to describe practitioner agreement on problems identified following an initial interview for problem identification in the process of consultation. The study was undertaken in response to criticism of problem identification in behavioral consultation. Studies of problem identification reported in the behavioral consultation literature lacked generalizability of agreement across interviewers as to the nature of the problems discussed in the interview (Gresham & Davis, 1988). The current study described the levels of agreement between consultant and consultee (learning assistants and classroom teachers) when engaged in identifying a client's (student's) problem during an initial interview of consultation.

Agreement, or reliability, of problem identification was operationalized in the study in several ways. The primary measure of the reliability of problem identification was the rating of participant agreement on post consultation descriptions of problems discussed during the consultation interview. Two trained raters reviewed the consultation participants' descriptions of the problems they identified during their interview. Interparticipant agreement was determined by comparing the learning assistants' and classroom teachers' responses to the interview evaluation measure, which evaluated the interview and the identified problems by nature and number. The reliability of problem identification was further assessed by comparing participants' ratings with the second rater's for overall agreement on problem identification, problem descriptions, and the number of problems identified.

Interview Evaluation Measure Part I:

Following their post consultation interviews, participants were asked to complete an evaluation of their interview in terms of helpfulness, level of problem identification, and shared understanding within their dyad. There was considerable variability in the responses from both groups when examining the mean scores. The classroom teachers' responses were more heterogeneous than those of the learning assistants. It is noted that the endorsements of the learning assistants, who participated in four interviews to the classroom teachers' one, were most consistent across their individual ratings for helpfulness, less so for problem identification, and least consistent in rating the level of understanding of identified problems shared by the participants. This suggests some validity for the utility of the interview process as an initiator of the representation of a student's problem, even where greater agreement between participants on a more complete identification may still be sought.

The variability of responses for interview helpfulness is reflected in the distribution of the endorsement percentages. The classroom teachers generally gave lower helpfulness ratings to the interview than did the learning assistants. This suggests that learning assistants found the interview to be more helpful in identifying students' problems.

The second question in the evaluation measure specifically addressed problem identification, and the percentages of endorsement ratings showed that participants disagreed upon the degree to which the problem had been identified.
Learning assistants generally supported a lesser degree of problem identification than did classroom teachers. This suggests that the learning assistants were less likely to identify the presenting student's problem in the same way as the classroom teachers. Endorsements of the participants' shared understanding of the problem discussed during the interview reflected some disagreement between the participants. Classroom teachers generally gave more support to levels of agreement, while the learning assistants' responses suggested some disagreement, or a less complete agreement, on what was discussed during the consultation interview. The findings of this study suggest that learning assistants found the interview helpful in initiating an emerging identification of the problem while acknowledging that their understanding of the problem was not in full agreement with that of the classroom teachers.

Interparticipant and Interrater Agreement

Reliability of problem identification was measured by interparticipant and interrater agreements, using the Kappa statistic for comparisons of chance-corrected agreements and chi-square analyses for comparisons of the associations between the raters and their ratings on the three-item, four-point rating scale. This second level of analysis of interview helpfulness, problem identification, and shared understanding revealed low levels of agreement between the participants on their ratings of the helpfulness of the interview (K = .26) and shared understanding of the problems discussed (K = .05). Significant differences between the learning assistants and classroom teachers suggested that their ratings of the interview could be predicted from their status as raters.

The learning assistants and classroom teachers disagreed on the level to which the problem was identified in the interview. However, there was no significant difference between the participants on their problem identification rating levels; the distribution patterns of their responses appeared similar. Both groups of participants gave their strongest endorsement to problem identification at a "mostly" but not "completely" identified level. By comparison, more support was given to mostly identified problems by the learning assistants; conversely, the classroom teachers gave slightly more support to the problem as "completely" identified.

Interrater agreement was established on the raters' evaluation of the post consultation interview. A four-point scale was used by the raters to evaluate each participant's post consultation interview statement for a comprehensive description of the problem. Raters evaluated the overall agreement between the participants based on their statements about the highest priority problems discussed. The Kappa statistic yielded .78, which represents an acceptable level of agreement, indicating that agreements occurred more frequently than would be expected by chance.

Additional support for the reliability of agreement on a shared understanding of the problem was sought in order to confirm the interrater and interparticipant agreements. As a further measure of assessing the reliability of these agreements, the second rater's evaluations of overall interview agreement were compared with those of the learning assistants and classroom teachers. Rater 2's endorsements were divided between the two levels of overall agreement signifying agreement between participants.
The participants, however, indicated some disagreement by using a partial rating closer to disagreement than to agreement. The Kappa scores calculated indicated that agreements between the ratings were only slightly more frequent than could be expected by chance. These results suggest that the participants were using criteria for determining shared understanding that were different from the measure's specifications. Further investigation of ways to uncover participant agreement is warranted.

Interview Evaluation Measure Part II:

Number of Problems

Following their post consultation interview, the participants were asked to respond to a problem identification measure which provided a global description of the nature of the problem and the number of problems specified for the student. Problems were identified as either academic or social/emotional/behavioral. A list of general descriptors was provided for each category, which the participants could endorse as further clarification of the problem(s). Participants were given the opportunity to identify and describe up to three problems in order of priority for each student. Most participants assigned at least two problems to the students discussed during the interview.

Nature of the Problems

The highest priority problem was designated as the problem to which, the participants indicated, the most attention was given in the interview. Descriptions of the highest priority problem characterized it as having the greatest discrepancy between expected and actual performance (behavior) and, typically, as a trigger of other specified problems. In order to establish the reliability of agreement on the highest priority problems, interparticipant and interrater agreements were calculated. Interparticipant agreement on the highest priority problem was calculated at K = .66, indicating moderately strong agreement between the learning assistants and classroom teachers on the nature of the highest priority problem. Interrater agreement on the nature of the highest priority problem resulted in K = .61 for the learning assistants and K = .66 for the classroom teachers. These are moderately strong agreements which nevertheless fall below acceptable agreement levels, suggesting that caution be used in interpreting the findings.

The interparticipant agreement determined by Kappa is of interest: it reports a chance-corrected agreement of .66 between the learning assistants and classroom teachers on the description of the highest priority problem as either academic or social/emotional/behavioral (their raw agreement was .83). This implies that a basic level of agreement on the problems identified can occur in the problem identification interview. It also suggests that the general criteria used to identify the problems were insufficient to facilitate agreement beyond a basic level.

Participants were also asked if any problems were ignored or missed during their interview. Most agreed that not all problems were discussed, but there was no detectable pattern in their responses which could be evaluated further in this way.

The research findings of Hay et al. (1979) reported that the only acceptable agreement across interviewers was on the number of problems identified for each client. This was not the case for the present findings, however. The interview evaluation measure imposed a limitation of three possible responses, and the structure for responding did not seem to capture a description adequate to discern how many problems could be accurately identified.
Hay et al. (1979) used a version of Cautela and Upper's checklist of 25 problems, each with a specific list of symptoms/descriptors sensitive to more subtle differences in the problem descriptions offered. A comprehensive instrument similar in nature would potentially have made the comparisons easier to interpret.

Implications

The findings of this study revealed that agreement was present regarding the problem(s) identified by participants during their interview and reported in the post-consultation interview. It is noted, however, that the level of interparticipant agreement was mild and only moderately supported by the interrater agreement. This suggests that a replication of the study should be done in order to validate these findings as support for the reliability of problem identification by participant agreement.

The findings of this study suggest that the participants did not have strong agreement on the outcome of their problem identification interview. There was, however, an indication that the participants moderately agreed on the nature of the problem discussed, which may partially account for the variability in the participants' responses regarding their perception of shared understanding and interview helpfulness. Support for this finding comes from Bergan and Tombari (1976), who reported that problem identification occurred in only 43% of consultation cases. Their successful cases provided support for problem identification as a critical element needed for problem solution.

The participants in this study did not give full endorsement to complete problem identification or shared understanding. The variability of responses was more indicative of an emerging agreement which, in time and with more feedback, could potentially lead to agreement in identification. The participants agreed on the nature of the highest priority problem in 83% of the interviews (K = .66). This suggests the unfinished quality of problem identification. The lack of agreement between the participants as to the level of problem identification reached in the interview suggests that problem identification is in itself a dynamic process of stages, better accomplished as it moves toward a more complete stage characterized in part by agreement between participants.

The concept of agreement may also need to be further clarified. Few of the participants chose complete or absolute agreement as a characteristic of their problem identification. This suggests that, if problem identification is possible without absolute agreement, a level of acceptability would have to be determined at which identification is sufficient for problem solution to occur.

Limitations

The findings suggested that several limitations of the study interfered with a complete description of the reliability of problem identification. The evaluation instrument used to aid in the identification and clarification of the problems was problematic in that it provided limited support in articulating the problems; its descriptors were too vague to assist either the participants or the raters. Use of an established measure similar to the Cautela and Upper instrument used in the Hay et al. study may offer more reliable problem representation, assisting the participants and raters toward stronger agreement and a more comprehensive description of presenting problems.

Another issue raised is that of training interviewers.
Empirical studies following Bergan and Tombari (1975) and Hay et al. (1979) typically included a training component to ensure that problem identification would occur. Interviewing skills, particularly in eliciting appropriate verbalizations for problem identification, were the focus of several follow-up studies (Brown et al., 1982; Kratochwill et al., 1989). It is noted from the participants' demographic information that only two of the learning assistants received formal training in consultation, although many more of them used it. Two learning assistants were also using a consultation model for the first time in this study. The different styles of procedure in problem identification may have confounded the process, particularly if the participants were unable to agree on an identified problem during their interview. The number of participants was small; a larger sample, including a greater variety of experience, would be needed to draw any further conclusions.

Future Directions

Further investigation of the problem identification component of consultation is still needed. The literature suggests that interviewer training and the standardization of the interview format will lead to problem identification (Brown et al., 1982). A further descriptive study of the interview process itself would prove helpful in attempting to establish causal relationships between the problem identification interview and its successful outcome.

Consideration may be given to Nezu and Nezu's (1993) suggested model for targeting problem behaviors. The model offers a multidimensional approach to problem solving which considers a wider variety of intervening variables. This model is still theoretical in nature and may be unwieldy to attempt in a school setting. It does, however, challenge problem solvers to reconceptualize their practices and find new perspectives. The dynamic nature of problem identification within a consultative problem solving process deserves greater emphasis and future research consideration.

REFERENCES

Alessi, G. (1988). Direct observation methods for emotional/behavior problems. In E. Shapiro & T. Kratochwill (Eds.), Behavioral assessment in schools (pp. 14-75). New York: Guilford Press.

Bergan, J. R. (1990). Contributions of behavioral psychology to school psychology. In T. Gutkin & C. Reynolds (Eds.), The handbook of school psychology (2nd ed., pp. 126-142). New York: Wiley & Sons.

Bergan, J., & Tombari, M. (1975). The analysis of verbal interactions occurring during consultation. Journal of School Psychology, 13(3), 209-226.

Bergan, J., & Tombari, M. (1976). Consultant skill and efficiency and the implementation and outcomes of consultation. Journal of School Psychology, 14(1), 3-14.

Bergan, J. (1977). Behavioral consultation. Columbus, OH: Merrill.

Bergan, J., & Kratochwill, T. (1990). Behavioral consultation and therapy. New York: Plenum Press.

Berk, R. A. (1979). Generalizability of behavioral observations: A clarification of interobserver agreement and interobserver reliability. American Journal of Mental Deficiency, 83(5), 460-472.

Brown, D., Kratochwill, T., & Bergan, J. (1982). Teaching interview skills for problem identification: An analogue study. Behavioral Assessment, 4, 63-73.

Brown, D., Pryzwansky, W., & Schulte, A. (1991). Psychological consultation: Introduction to theory and practice (2nd ed.). Boston: Allyn & Bacon.

Caplan, G. (1970). The theory and practice of mental health consultation. New York: Basic Books.
Carter, J., & Sugai, G. (1989). Survey of prereferral practices: Responses from state departments of education. Exceptional Children, 55(4), 298-302.

Cascio, W., & Zedeck, S. (1983). Opening a new window in rational research planning: Adjust alpha to maximize statistical power. Personnel Psychology, 36, 517-526.

Cleven, C., & Gutkin, T. (1988). Cognitive modeling of consultation processes: A means for improving consultees' problem definition skills. Journal of School Psychology, 26, 379-389.

Cone, J. (1981). Psychometric considerations. In M. Hersen & A. Bellack (Eds.), Behavioral assessment: A practical handbook (2nd ed., pp. 38-68). New York: Pergamon Press.

Conoley, J., & Conoley, C. (1991). School consultation: Practice and training (2nd ed.). Boston: Allyn & Bacon.

Curtis, M., & Watson, K. (1980). Changes in consultee problem clarification following consultation. Journal of School Psychology, 18(3), 210-221.

Dougherty, A. M. (1990). Consultation: Practice and perspectives. California: Brooks/Cole Publishing Co.

Duley, S., Cancelli, A., Kratochwill, T., Bergan, J., & Meredith, K. (1983). Training and generalization of motivational analysis interview assessment skills. Behavioral Assessment, 5, 281-293.

Evans, I. M. (1985). Building systems models as a strategy for target behavior selection in clinical assessment. Behavioral Assessment, 7, 21-32.

Felton, J. L., & Nelson, R. O. (1984). Interassessor agreement on hypothesized controlling variables and treatment proposals. Behavioral Assessment, 6, 199-208.

Gable, R., Friend, M., Laycock, V., & Hendrickson, V. (1990). Interview skills for problem identification in school consultation: Separating the trees from the forest. Preventing School Failure, 35(1), 5-10.

Graham, P., & Rutter, M. (1968). The reliability and validity of the psychiatric assessment of the child: II. Interview with the parent. British Journal of Psychiatry, 114, 581-592.

Gresham, F. (1984). Behavioral interviews in school psychology: Issues in psychometric adequacy and research. School Psychology Review, 13(1), 17-25.

Gresham, F., & Davis, C. (1988). Behavioral interviews with teachers and parents. In E. Shapiro & T. Kratochwill (Eds.), Behavioral assessment in schools (pp. 455-493). New York: Guilford Press.

Gresham, F., & Kendell, G. (1987). School consultation research: Methodological critique and future research directions. School Psychology Review, 16(3), 306-316.

Gutkin, T., & Curtis, M. (1990). School-based consultation: Theory, techniques, and research. In T. Gutkin & C. Reynolds (Eds.), The handbook of school psychology (2nd ed., pp. 577-611). New York: Wiley & Sons.

Hay, W., Hay, L., Angle, H., & Nelson, R. (1979). The reliability of problem identification in the behavioral interview. Behavioral Assessment, 1, 107-118.

Haynes, S., & Jensen, B. (1979). The interview as a behavioral assessment instrument. Behavioral Assessment, 1, 97-106.

House, A., House, B., & Campbell, M. (1981). Measures of interobserver agreement: Calculation formulas and distribution effects. Journal of Behavioral Assessment, 3(1), 37-57.

Idol, L. (1990). The scientific art of classroom consultation. Journal of Educational and Psychological Consultation, 1(1), 3-22.

Kanfer, F. (1985). Target selection for clinical change programs. Behavioral Assessment, 7, 7-20.

Kazdin, A. (1985). Selection of target behaviors: The relationship of the treatment focus to clinical dysfunction. Behavioral Assessment, 7, 33-47.
Keane, T., Black, J., Collins, F., & Vinson, M. (1982). A skills training program for teaching the behavioral interview. Behavioral Assessment, 4, 53-62.

Knoff, H. M. (1982a). Evaluating consultation service delivery at an independent psychodiagnostic clinic. Professional Psychology, 13(5), 699-705.

Knoff, H. M. (1982b). The independent psychodiagnostic clinic: Maintaining accountability through program evaluation. Psychology in the Schools, 19, 346-353.

Kratochwill, T. (1985). Selection of target behaviors in behavioral consultation. Behavioral Assessment, 7, 49-61.

Kratochwill, T., & Bergan, J. (1990). Behavioral consultation in applied settings: An individual guide. New York: Plenum Press.

Kratochwill, T., Elliott, S., & Rotto, P. (1990). Best practices in behavioral consultation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology-II (pp. 147-170). Washington, DC: National Association of School Psychologists.

Kratochwill, T., & Sheridan, S. (1990). Advances in behavioral assessment. In T. Gutkin & C. Reynolds (Eds.), The handbook of school psychology (2nd ed., pp. 328-364). New York: Wiley & Sons.

Lazarus, A. A. (1973). Multimodal behavior therapy: Treating the "basic id." Journal of Nervous and Mental Disease, 156, 404-411.

Lee, P., & Suen, H. (1984). The estimation of kappa from percentage agreement interobserver reliability. Behavioral Assessment, 6, 375-378.

Lilly, M. S. (1988). The regular education initiative: A force for change in general and special education. Education and Training in Mental Retardation, 23(4), 253-260.

Mannino, F., & Shore, M. (1986). History and development of mental health consultation. In F. Mannino, E. Trickett, M. Shore, M. Kidder, & G. Levin (Eds.), Handbook of mental health consultation (pp. 3-28). Maryland: NIMH.

Martens, B. K. (1993). A behavioral approach to consultation. In J. Zins, T. Kratochwill, & S. Elliott (Eds.), Handbook of consultation services for children: Applications in educational and clinical settings (pp. 65-86). San Francisco: Jossey-Bass.

Medway, F. (1979). How effective is school consultation? A review of recent research. Journal of School Psychology, 17, 275-282.

Ministry of Education. (1994, July). Special education services: A manual of policies, procedures, and guidelines (Draft 2). Province of British Columbia: Special Education Board.

Nelson, R., & Hayes, S. (1981). Nature of behavioral assessment. In M. Hersen & A. Bellack (Eds.), Behavioral assessment: A practical handbook (2nd ed., pp. 3-37). New York: Pergamon Press.

Nezu, A., & Nezu, C. (1993). Identifying and selecting target problems for clinical interventions: A problem-solving model. Psychological Assessment, 5(3), 254-263.

Nuttal, E., & Ivey, A. (1986). The diagnostic interview process. In H. Knoff (Ed.), The assessment of child and adolescent personality (pp. 105-140). New York: Guilford Press.

O'Neill, R., Horner, R., Albin, R., Storey, K., & Sprague, J. (1990). Functional analysis of problem behavior. Illinois: Sycamore Publishing Co.

Polsgrove, L., & McNeil, M. (1989). The consultation process: Research and practice. Remedial and Special Education, 10(1), 6-13, 20.

Reschly, D. (1988). Special education reform: School psychology revolution. School Psychology Review, 17(3), 459-475.
Rosenfield, S., & Reynolds, M. C. (1990). Mainstreaming school psychology: A proposal to develop and evaluate alternative assessment methods and intervention strategies. School Psychology Quarterly, 5(1), 55-65.

Sloves, R., Docherty, E., & Schneider, K. (1979). A scientific problem-solving model of psychological assessment. Professional Psychology, 10, 28-35.

Suen, H. (1988). Agreement, reliability, accuracy, and validity: Toward a clarification. Behavioral Assessment, 10, 343-366.

Suen, H., & Ary, D. (1989). Analyzing quantitative behavioral observation data. New Jersey: Erlbaum Associates Publishers.

West, J. F., & Idol, L. (1987). School consultation (Part 1): An interdisciplinary perspective on theory, models and research. Journal of Learning Disabilities, 20(7), 388-407.

White, S., & Edelstein, B. (1991). Behavioral assessment and investigatory interviewing. Behavioral Assessment, 13, 245-264.

Will, M. (1986). Educating students with learning problems: A shared responsibility. Washington, DC: U.S. Department of Education.

Will, M. (1988). Educating students with learning problems and the changing role of the school psychologist. School Psychology Review, 17(3), 476-478.

Wilson, F., & Evans, F. (1983). The reliability of target-behavior selection in behavioral assessment. Behavioral Assessment, 5, 15-32.

Witt, J., & Elliott, S. (1983). Assessment in behavioral consultation: The initial interview. School Psychology Review, 12(1), 42-48.

APPENDICES

APPENDIX #1

DIRECTIONS FOR CONSULTATION PARTICIPANTS

Prior to the Start of the Interview:

You are asked to conduct an interview for the next thirty minutes about a student whom you have identified as difficult to teach. Please conduct your interview in the same way in which you would begin to discuss a student with problems at your school in order to reach a solution. At the end of the interview the researcher will ask you to state what you have identified as the main difficulties this student presents in teaching.

Post Consultation Interview Question/Statement:

You have just met with a colleague in a consultation session about ________. Please recall the main problem or problems identified during this consultation interview about ________. Being as specific as you can, please summarize what problem or problems were identified during this interview. If more than one problem was identified, be sure to tell which problem was the most important or given the highest priority.

APPENDIX #2

Identification Number: ________

PARTICIPANT DEMOGRAPHIC QUESTIONNAIRE

1. Age:   20-30   31-40   41-50   Over 50

2. Sex:

3. Highest degree level of educational training received:

4. Total number of years in teaching or other educational roles:

5. Occupational Title: (If more than one title is applicable, please indicate by an approximate number of years in each area.)

   Title                       Currently Held    Formerly Held
   Teacher
   Administrator
   Special Educator
   Teaching Assistant
   Learning Assistant
   Counsellor
   School Psychologist
   Consulting Teacher
   Specialist (specify area):

6. Indicate the level of students whom you are now teaching, or with whom you have the most daily contact.   Primary   Intermediate   Secondary

7. Have you had any prior experience with the consultation process?   None   Some   Frequent

8. How would you rate your prior experience with consultation?   Positive   Neutral   Negative

9. Have you had any formal training in consultation?   Yes   No

10. If so, how did you receive this training?
    University Course   Pro-D   In-Service   Workshop

APPENDIX #3

Identification Number: ________

INTERVIEW EVALUATION: PARTICIPANTS

The goal of a Problem Identification Interview in consultation is to formulate a comprehensive description of a major problem or problems which may then be targeted for the development of an intervention.

1. Was this interview helpful in identifying the student's problem?
   Not At All Helpful   Somewhat Helpful   Mostly Helpful   Completely Helpful

2. As a result of this interview, to what extent was/were the student's problem(s) adequately identified?
   Not At All Identified   Somewhat Identified   Mostly Identified   Completely Identified

3. As a result of this interview, to what extent do you feel that you and the other participant have a shared understanding of the major problem(s)?
   Not At All Understood   Somewhat Understood   Mostly Understood   Completely Understood

4. Use the outline below to describe the student's major problem(s) identified in your interview. (Please check only those items discussed in your interview. If more than one major problem was identified, use the separate sheets provided for each problem, up to three.)

HIGHEST PRIORITY PROBLEM

A. First decide, was the identified problem primarily: (Choose one)
   ACADEMIC   OR   SOCIAL/EMOTIONAL/BEHAVIORAL
   (If the problem was primarily ACADEMIC, complete section B only. If the problem was primarily SOCIAL/EMOTIONAL/BEHAVIORAL, complete section C only.)

B. How would you describe the problem(s) identified as ACADEMIC:
   Content Area Deficit (i.e., general weakness noted in an area of study)
   Production Deficit (i.e., incompletion of assignments, tasks, etc.)
   Specific Skill Deficit (i.e., noted lack of skills in a specific area)

C. How would you describe the problem(s) identified in the areas of SOCIAL/EMOTIONAL/BEHAVIORAL:
   Social Skill Deficit (i.e., interpersonal relationship difficulties)
   Behavioral Excesses (i.e., frequent responses to stimuli)
   Behavioral Deficits (i.e., infrequent responses to stimuli)
   Personality Variables (i.e., behaviors characteristic of negative self-evaluation)

SECOND PROBLEM IN PRIORITY

A. Then decide, was the second identified problem primarily: (Choose one)
   ACADEMIC   OR   SOCIAL/EMOTIONAL/BEHAVIORAL
   (If the problem was primarily ACADEMIC, complete section B only. If the problem was primarily SOCIAL/EMOTIONAL/BEHAVIORAL, complete section C only.)

B. How would you describe the problem(s) identified as ACADEMIC:
   Content Area Deficit (i.e., general weakness noted in an area of study)
   Production Deficit (i.e., incompletion of assignments, tasks, etc.)
   Specific Skill Deficit (i.e., noted lack of skills in a specific area)

C. How would you describe the problem(s) identified in the areas of SOCIAL/EMOTIONAL/BEHAVIORAL:
   Social Skill Deficit (i.e., interpersonal relationship difficulties)
   Behavioral Excesses (i.e., frequent responses to stimuli)
   Behavioral Deficits (i.e., infrequent responses to stimuli)
   Personality Variables (i.e., behaviors characteristic of negative self-evaluation)

THIRD PROBLEM IN PRIORITY

A. Then decide, was the third identified problem primarily: (Choose one)
   ACADEMIC   OR   SOCIAL/EMOTIONAL/BEHAVIORAL
   (If the problem was primarily ACADEMIC, complete section B only. If the problem was primarily SOCIAL/EMOTIONAL/BEHAVIORAL, complete section C only.)

B. How would you describe the problem(s) identified as ACADEMIC:
   Content Area Deficit (i.e., general weakness noted in an area of study)
   Production Deficit (i.e., incompletion of assignments, tasks, etc.)
   Specific Skill Deficit (i.e., noted lack of skills in a specific area)

C. How would you describe the problem(s) identified in the areas of SOCIAL/EMOTIONAL/BEHAVIORAL:
   Social Skill Deficit (i.e., interpersonal relationship difficulties)
   Behavioral Excesses (i.e., frequent responses to stimuli)
   Behavioral Deficits (i.e., infrequent responses to stimuli)
   Personality Variables (i.e., behaviors characteristic of negative self-evaluation)

5. Are there other problems for this student which are as or more important than those identified but did not surface during the consultation interview?   Yes   No
   Were these problems missed?   Yes   No
   Were these problems ignored?   Yes   No

APPENDIX #4

Interview Identification Number: ________
Rater Number: ________

RATERS' EVALUATION OF PARTICIPANT AGREEMENT OF IDENTIFIED PROBLEMS, PART I

The goal of a Problem Identification Interview in consultation is to formulate a comprehensive description of a major problem or problems which may then be targeted for the development of an intervention. Based on your reading of this post-consultation interview transcript, rate the extent of agreement between the consultant and consultee on the student problem(s) identified in the post-consultation interview.

   Complete Disagreement   Mostly Disagree   Mostly Agree   Complete Agreement

APPENDIX #5

EXAMPLE OF AN INTERVIEW WHICH RECEIVED AN INTERRATER AGREEMENT RATING OF COMPLETE AGREEMENT ("4")

POST CONSULTATION INTERVIEW TRANSCRIPTS

Learning Assistance Teacher

Okay, one of the problems that we talked about first was that ... um ... (Student) has difficulty understanding any kind of abstract questions ... um ... involving problem solving, use of logic. And despite, uh, giving her some help on a one-to-one, either the teacher or the teacher assistant (TA), uh, she didn't seem to be able to understand any better. Um, so giving concrete examples, um, she seemed to kind of, uh, shut down and not be able to take in what was being taught. And then we came to the second problem which is an aspect of it, is that, um ... the real problem seems to be the emotion, her emotional response to not understanding and then receiving help. So, she seems to be nervous ... um ... not able to take in the, sort of adapted instruction and she, she tends to kind of shut down, and nod and say, "Yes, I've understood it." When, in fact, the next minute when you ask her a question it's obvious she has not understood it.

Query: How does she say she's shut down. I mean, what ...

LAT: Just kind of a glazed look ... um, nodding ... smiling ...

Query: Like this ...

LAT: (agrees)

Query: Okay, what did you agree was the highest priority problem?
A n d the mother also feels very ... u m ... guilty and ... and concerned abut (Student) because she's quite different for her brother and how she learns and her achievement. So that we thought that might be taking a look t the family interactions and what's happening at home might help us to ... to ... to, uh, find out a way of dealing wi th the emotional response. But she seems to be really self-conscious and have ... have some self-esteem problems around dealing wi th the fact that she ... she doesn't learn easily and her brother does. POST CONSULTATION INTERVIEW TRANSCRIPTS Classroom Teacher 117 The main problem with (Student) is ... um ... there are two main problems, two problems, but to get to the second one you need to tackle the first one and that is . . . u m ... her emotional state when confronted wi th difficulty. She is, um, quite nervous, and very insecure, and, um, the problem that, that I come up wi th is how to approach her. I feel that sometimes I do intimidate her and, as wel l , um, just, uh , reading into her .. . her facial expressions I find it very difficult. She just comes across wi th this ... wi th a blank look that's ... that seems very hard for ... for me to read. A n d sometimes she agrees that she does understand something when ... when, uh, when the next question is ... is ... to ... to quiz her on that. She ... she very wel l doesn't understand. So to get through the nervousness and that's what we need to tackle first. A n d then dealing wi th , once we've tackled that coming up wi th ways that, she's very good wi th her rote memorization, but when it comes to problem solving ... or ... u m ... abstract thinking, things like that, she finds it very difficult to put things into categories and ... and problem solve. So that's ... Query: So that'd be the second ... CT: . . . the second .. Query: Okay, so you need to get the main one, is that ... CT: That's right. Query: .. . the emotional. CT: That's right, get through that and then we can deal wi th her learning and difficulties that ... she ... that she has there. 118 Query: Okay .. . w i l l you have her next year ... likely? Or ...? CT: N o ... no. Probably not ... probably not. Query: But this is still something that's good to be addressed. CT: That's right, yup. I ... I taught grade 8 and 9 this year and next year I just have the grade 8's 'cause we're starting a new program so unfortunately I won' t get to see some of the kids that I had this year. So ... Query: A n y other problems that come up i n your discussion, wi th (LAT) or ...? CT: N o , I think that's it in a nutshell, that's about it, I guess?! Query: Okay .. . APPENDIX #6 119 EXAMPLE OF A N INTERVIEW WHICH RECEIVED A N INTERRATING AGREEMENT RATING OF MOSTLY AGREED ("3") POST CONSULTATION INTERVIEW TRANSCRIPTION Classroom Teacher For the student that we just discussed, I would say that ... u m ... the main, I would say the main concern that I have for h im is the sense of responsibility for his own learning and I think that that's really at the root of most of the other concerns. That's ... um ... once he gets responsible for his own learning I think that that 'll improve. H e ' l l be more organized, once he's more organized he ' l l have a better time at studying. Once he starts studying a little bit more, it's going to help h i m i n the classroom. A n d ... u m ... He 's actually ... I enjoy h im as a student. He can be challenging but yet he's refreshing because I can see that he's got it there. 
He just hasta, sort of, scratch the surface, and ... um ... and continue to build his confidence because he's very, sports-wise, he's excellent and, and he enjoys sports and does well in it. And socially with his peers he's popular and the kids like him. I think a lot of the time the academics get in the way, and because he doesn't realize that he might not be on the same level as the rest of his friends, that a lot of the time that gets him down, and rather than work on that he just figures, "Well, I'm popular, maybe I'll just be funnier or make a joke about it." And ... I think that he's gotta get over that fear factor? And a lot of times he has to really feel that competence and trust with his teacher. Um ... because, you know, he's said that and, and I think that once he feels comfortable and he realizes, okay, well, y'know, she's there to help me, she's not going to be upset with me because I didn't get the homework done, um, that he feels better about it and he can say, "Well, (Teacher), I had a problem with THIS in the assignment," and then he knows that ... um ... that I'll say, "Okay, well, let's extend the deadline. What could be a conceivable deadline for you? Two days?" "Oh, I can do it in two days." And a lot of times, if he's got the idea that ... um ... you're not there to catch him up on a mistake, that you're there to assist him, he'll meet you. Um ... he'll meet you halfway and he'll work well for you. And that's ... that's the big thing that I wanna see for him for the end of this year and next year, is that he doesn't lose that competence that he's gained in the year, because I can just see just from September to now ... um ... how much improvement that he's made. Part of it could be because, you know, HE'S 13!!! You know, there's that sense of maturity that's being built in as well. But, uh, yeah, that's ... that's my goal for him, is just to make sure that ... um ... he realizes that it's his learning, that he has to be more responsible for it and not depend on, you know, mom to pick up the homework or someone to tell him what the assignment is last minute. Or to finish it up in the morning. And I think that that will get him on the road to more small successes.

POST CONSULTATION INTERVIEW TRANSCRIPTION

Learning Assistance Teacher

Okay, uh, there's a few problems that were identified — one was that, um ... oh yeah, he (Student), um, is still having trouble accepting responsibility for his own learning and um ... and that, um, he doesn't respond well to being placed in a situation where he needs support, and he is not getting support from his family in that area. They both feel that it's ... it's more derogatory and will not help him; they don't want to stigmatize him. And, um, also another problem that he was having is ... is, in general, is getting his ideas down on paper. He seems to be much more of a verbal learner and, uh, I think the main problem, again it's ... it's a bit of a stand-off, there, is ... is the problem of him accepting, accepting ... um ... help and not feeling ... um ... and not letting it affect his self-esteem. And also getting information down on ... on paper were the two, uh, major problems that we're having, that we found.
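NOTE ON COMPUTING AGREEMENT FROM THESE RATINGS

The appendices above show the 4-point scale on which raters judged participant agreement. For readers who want to see how a chance-corrected agreement coefficient such as Cohen's kappa can be derived from such paired ratings, the following is a minimal sketch in Python. The ratings shown are hypothetical illustrations, not data from this study, and the helper function is written here for exposition only.

    from collections import Counter

    def cohens_kappa(rater1, rater2):
        # Cohen's kappa: chance-corrected agreement between two raters
        # who each assign one categorical rating per item.
        n = len(rater1)
        # Observed proportion of exact agreement.
        p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
        # Expected chance agreement, from each rater's marginal frequencies.
        c1, c2 = Counter(rater1), Counter(rater2)
        p_e = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical ratings on the Appendix #4 scale
    # (1 = complete disagreement ... 4 = complete agreement).
    rater_1 = [4, 3, 3, 4, 2, 4, 3, 1, 4, 3]
    rater_2 = [4, 3, 2, 4, 2, 4, 3, 2, 4, 3]
    print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.71 for this example

The same computation applies to nominal judgments, such as the category assigned to the highest priority problem, where each "rating" is a category label rather than a scale point.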
