UBC Theses and Dissertations

Understanding the clinical evaluation practices of a clinical nurse teacher : a critical descriptive-exploratory… Mahara, Mary Star 2001


UNDERSTANDING THE CLINICAL EVALUATION PRACTICES OF A CLINICAL NURSE TEACHER: A CRITICAL DESCRIPTIVE-EXPLORATORY CASE STUDY

by

MARY STAR MAHARA
B.S.N., The University of British Columbia, 1980

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN NURSING in THE FACULTY OF GRADUATE STUDIES, The School of Nursing

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 2001
© Mary Star Mahara, 2001

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

The University of British Columbia
Vancouver, Canada

ABSTRACT

The importance of clinical evaluation is well established. However, many facets of clinical evaluation require further investigation, including how clinical nurse teachers (CNTs) actually evaluate students in clinical courses. The purpose of this research was to inquire into the clinical evaluation practices of a CNT in a manner that captured the complexity of these practices, promoted her understanding of her practices, and contributed to a general understanding of clinical evaluation in nursing education. A critical descriptive-exploratory case study was selected as the research method because of its potential to achieve all three purposes.
Data were collected from seven tape recordings using a modified think-aloud technique, nine semi-structured interviews, examination of the participant's weekly anecdotal (instructor) notes, five students' final evaluation documents, and a concept map of the participant's evaluative practices and influencing variables. Data collection and analysis occurred concurrently in a recursive and cyclical manner. Themes, issues, and questions that arose from the preliminary analysis of each week's set of data were compared to units of data from previous tapes and interviews and were discussed and clarified with the participant each week. Understandings and questions from each week's interview were then used to guide the next data collection set. At the end of the data collection period the data were analyzed further and the findings were discussed and validated with the participant.

Clinical evaluation emerged as a complex and dynamic process that was embedded in the teaching-learning process. The participant, J, utilized a number of practices for collecting data and determining its meaning with respect to the student's level of performance and the teaching and evaluating strategies that should follow. The data indicated that J attempted to be accurate, objective, fair, comprehensive, and caring in her evaluative decision making, particularly when an evaluative decision could result in the student failing the course. J had made many changes to her evaluative practices over her years as a CNT. Her greatest gains were in developing awareness of the impact of her practices on the student's performance and in her ability to create an educative environment in the clinical area where evaluation was considered an important part of the student's clinical learning experience.
J's data collection and evaluative decision making practices were influenced by her ability to find opportunities to sample student practice, her skill at data collection, the amount of time she had with each student, the nature of her relationship with the student, and whether the intent of her thinking was teaching or evaluating. The number of clinical days she had with the students turned out to be a major determinant of her ability to compile an accurate and comprehensive picture of the students' practice. What J chose to focus on in evaluation was influenced by her own way of practicing nursing, how nursing was practiced on the unit, the expectations of the workplace/employers and the licensing body, and the ideal view of nursing practice envisioned in the curriculum. Finally, the findings raised several questions about the evaluative practices of CNTs which have implications for both nursing education and further nursing research.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
ACKNOWLEDGMENTS

CHAPTER ONE: Introduction
  Introduction
  Definition of Terms
  Problem Statement
  Methodology
  Significance of the Study
  Organization of the Thesis

CHAPTER TWO: What is Currently Known
  Introduction
  Overview of the Problem of Clinical Evaluation
  Aspects of Clinical Learning that Have Influenced Evaluation Practice
  Evaluation as Measurement
    Outcome Assessment
    Criterion-Referenced Rating Scales
    Standardized Performance Exams
    Evaluation of the Affective Domain
    Reliability and Validity of Objectified Measures
    Summary
  Effective Evaluation Practices
    Effective and Ineffective Clinical Evaluation Behaviours
    Effective Evaluation Practices
  Formative and Summative Evaluation Practices
    Clinical Learning as Evaluation
    Feedback as a Key Formative Evaluation Practice
    Questioning and Observation
    Teaching or Evaluating?
    Summary
  Qualitative Evaluation
    The Practical and Emancipatory Interests
    Interpretive-Criticism Model
    Student Self-Evaluation
    Self-Evaluation as a Developmental Process
    Validity of Self-Evaluation
    Portfolios
    Journals
    Summary
  Influences on Evaluative Decision Making
    The CNT's Perspective
    Preparation for Clinical Evaluation
    Clinical Teaching Experience
    Intuition
    Student Variables
    The Borderline and Failing Student
    Institutional Policy and Standards
    CNT Reflexivity
    Summary
  Conclusion

CHAPTER THREE: Research Design
  Introduction
  Research Method
    Case Study Approach
    Critical Constructivist Inquiry
    Reciprocity
    Dialectical Theory Building
  The Case
    Participant Selection
    Time Boundaries
    Criteria for Selection of the Case
    Recruitment
  Data Collection
    Modified Think-Aloud Technique
    Interviewing
    Document Analysis
    Concept Map
  Data Analysis
    Data Management
    Data Analysis Procedure
  Rigor
    Trustworthiness
    Authenticity
    Procedures to Ensure Trustworthiness and Authenticity
    Generalization
  Role of the Researcher
    Problem of the Insider Role
    Researcher Reflexive Diary
  Ethical Considerations
  Limitations of the Study
  Conclusion

CHAPTER FOUR: The Findings
  Introduction
  The Case: J
    Background in Nursing Practice and Education
  Figure: Evaluation Practices and Influences: J
  The Study Context
    The Rotation Format
    The Unit
    The Group
  Evaluative Decision Making
    Data Collection
      Sampling
        The "Plan"
        Tracking
        Clinical Assignment
        Time
        Pattern
      Data Collection Practices
        Observation
        Discussion and Questioning
        Written Work
        Staff
        Clients
        Previous Knowledge
        Intuition
        Students' Ability to Discuss Their Practice
        Other Sources
    Focus of Evaluation
      Clinical Judgment Domain
        Knowledge
        Assessment
        Decision Making
        Charting
        Skills
        Organization
        Safety
      Health and Healing Domain
      Teaching-Learning Domain
      Collaborative Leadership Domain
      Professional Responsibility Domain
      Influences on the Focus
        Personal Practices
        Institutions
    Decision-Making: What the Data Means
      Normative Referenced Evaluation
      Criterion Referenced Evaluation
      Safety
      Assistance and Cueing
      Contextual Variables
    Teaching and Evaluating: "Pieces of the Pie"
      Feedback
        Verbal Feedback
        Written Feedback
      Formative Decision Making
      Formalizing the Teaching and Evaluation Distinction: The LC
      The Final Evaluation Process
    Student Input into Evaluative Decisions
      View of Student Input
      Self-Evaluation in the Clinical Area
      Students' Written Work
      Student Final Self-Evaluation
      Formative in Summative
      Bias
  Conclusion

CHAPTER FIVE: Discussion, Conclusions, and Implications
  Introduction
  Discussion of the Research Findings
    Data Collection
      Sampling, Safe Practice, and Level of Supervision
      Clinical Assignment
      Rotation Length
    Data Collection Methods
      Observation
      Evaluating Student Knowledge and Thinking Processes
    Focus of Evaluation
    Deciding What the Data Means
      Outcome-Based Evaluation Systems
      Qualitative Evaluation Systems
      Expectations, Standards, and Criteria
      Contextual Variables
    Teaching and Evaluating
    Partnership in Evaluation
  Conclusions
  Implications for Nursing Education and Research
    Nursing Education
    Nursing Education Research
  Conclusion

REFERENCES
APPENDIX A: Concept Map - J
APPENDIX B: Consent Form: Clinical Nurse Teacher
APPENDIX C: Consent Form: Students
APPENDIX D: Practice Appraisal Form: Semester V

LIST OF FIGURES

Figure: Evaluation Practices and Influences: J
Appendix A: Concept Map - J

ACKNOWLEDGMENTS

I would like to acknowledge several people whose
assistance and support were crucial in enabling me to construct and complete this study. Firstly, I wish to acknowledge my committee members, Drs. Barbara Paterson, Janet Storch, and Carol Jillings, for helping me to learn the research process and to develop as a scholar. I am greatly indebted to Barbara for sticking with me through the many disruptions in my life that threatened to derail this research. She truly has the patience of a saint. I also appreciated her editorial magic, which came when I needed it most. Finally, she is living proof that the roles of teacher and evaluator are not separate, nor are they incompatible, at least not in graduate education. I also need to thank my Department Chair and Associate Dean for their support and for the consideration I received when I needed help. I especially want to acknowledge the contribution made by my colleagues and friends. These people helped me wherever they could, so that I could concentrate on data analysis and writing, and they kept in touch with me even though it looked as though I had abandoned them. I am most grateful for the encouragement and support I received from my healing circle of friends, and from VH, my newest colleague and friend. These very special women are the embodiment of caring. I would also like to thank my two (mostly) wonderful teenagers, John Paul and Rheanne, for continuing to grow and develop despite the absence of a mother for the past four months, and my life-partner, Martin, for weathering my emotional storms patiently, and with love. Finally, I wish to dedicate this thesis to J: nurse clinician, educator, and colleague extraordinaire. J generously opened her evaluation and teaching practices to be examined, questioned, and critiqued by me, in order to further our understanding of how students are evaluated in the clinical area.
Despite her heavy workload during the semester that we collected and analyzed the data, she made our research work a priority and always gave more than 100% to the project. The success of case study research depends heavily on the case that is selected for study. I believe that this case study was enhanced by J's openness and her ability to reflect on her practice and my questions, even when it "made her head hurt". I am grateful for the opportunity I had to study clinical evaluation with J.

CHAPTER ONE
Introduction

Any institution wanting an overflow audience for a symposium need only plan a program called "Evaluation of Nursing Students in the Clinical Area". (Woolley, 1977, p. 308)

Teaching and learning in a clinical setting are essential components of the educational process in the practice disciplines. In nursing, students spend a major portion of their weekly educational hours in clinical learning. The complex and multifaceted nature of clinical education presents unique challenges to the practices of teaching and evaluation (Karuhije, 1997; Stokes, 1998). The direct observation of students engaged in actual practice in unpredictable clinical environments presents challenges to the objectivity and consistency of student evaluation (Benner, 1982; Friedman & Mennin, 1991; Ross et al., 1988; Wood, 1986).
Other challenges faced by clinical nurse teachers (CNTs) include creating a positive learning environment in a setting where they have limited authority, determining learning when course objectives are often subordinated to the work requirements of the clinical agency, managing the instructional dynamics of interacting with the multiple others involved in health care work, being answerable to several parties (the educational institution, the clinical agency, their licensing body, students, staff, clients), and dealing with the dual responsibility of ensuring adequate learning opportunities for students while also ensuring that the students' clients receive safe and effective nursing care.

Clinical evaluation has assumed increased significance in recent years as nurse educators respond to a cry from both the education and health care sectors for increased accountability, definitive standards, and cost effectiveness. "Doing evaluation and doing it well matters in pragmatic terms because bad products and services cost lives and health, destroy the quality of life, and waste the resources of those who cannot afford waste. In ethical terms, evaluation is a key tool in the service of justice, in programs as well as in personnel evaluation" (Scriven, 1991, p. 43). The outcomes of clinical evaluation decisions have serious implications for students in terms of self-esteem and protection of their rights to pursue a livelihood, for patients and clients with regard to protection from unsafe practice, and for teachers concerned with potential charges of unfair evaluative practices and the legal repercussions of decisions where students are wrongly failed (or passed) in a clinical practice course (Cohen et al., 1993; Orchard, 1994b). An understanding of how to evaluate students effectively within the complex learning environment of a clinical setting is beginning to emerge (Reilly & Oermann, 1992; Stokes, 1998).
Early studies on teacher effectiveness delineated and compared the teaching practices of effective and ineffective CNTs (see the literature reviews in Bergman & Gaitskill, 1990; Morgan, 1991; Stokes, 1998). Studies consistently list fair, objective, and supportive evaluative practices as one of the characteristics of an effective CNT. Other studies have connected students' perceptions of the evaluative practices of CNTs to the students' experiences of clinical learning. CNTs whose teaching practices were perceived as arising from an "evaluative" rather than an "educational" focus were reported to have a negative impact on student learning (Diekelmann, 1992; Flagler, Loper-Powers & Spitzer, 1988; Hedin, 1989; Loving, 1993; Wilson, 1994).

CNTs develop and utilize a variety of evaluative practices for collecting, interpreting, and judging information about the overall quality of a student's clinical practice. The evaluation of a student's practice involves the collection of data through various means, the analysis and interpretation of the data according to standards, the conclusions and judgments that are arrived at about the student's practice, and recommendations as to what action should follow. But evaluative practices are more than just a set of techniques and methods; an ideological core informs them. Evaluative practices reflect and promote certain values and beliefs about what is and what ought to be (Bricker-Jenkins, 1997). A CNT's worldview informs how evaluation is framed and how evaluative decisions and conclusions about students are made. Little has been written about either the actual evaluative practices of CNTs (i.e., how CNTs view clinical evaluation and how they make evaluative decisions about students) or the sources of influence on a CNT's evaluative practices. The absence of theory and research is especially noticeable with respect to clinical evaluation within today's critical-interpretive nursing curricula.
This shift in the theoretical basis of nursing education provides for a different conceptualization of evaluation (Bevis & Watson, 1989; Diekelmann, 1992). Critical-interpretive curricula are informed by concepts such as dialogue, meaning-making, multiple ways of knowing, critical reflection, and egalitarian nurse-client and teacher-student relationships. There is a belief in the socially constructed nature of reality, an appreciation of how CNT and student interact and shape one another, and a recognition of the situational constraints and power relations that shape teaching-learning and nursing practice. A critical-interpretive approach to evaluation moves beyond the traditional quantitative methods for measurement of clinical skill levels to include qualitative processes for explicating and judging a student's clinical practice. Blomquist (1985) believes qualitative evaluation is "process-oriented, exploratory, expansionist, inductive, holistic, and assumes a dynamic reality" (p. 8). Theoretically, CNTs utilizing a critical-interpretive approach to evaluation would examine not only the student's performance but also the influence of the clinical environment and the teacher's practices on the performance. CNTs and students would co-create a picture of the student's practice and then determine what the practice means in relation to professional standards. The authority and power of the teacher would be acknowledged and utilized to create a relationship that empowers students to assume increasing responsibility for participating in the description and judgment of their practice. "Dialogue and negotiation are key processes for establishing and maintaining the process and rules of the evaluative relationship and for maintaining the integrity of the interactions to assure the quality of the evaluation process and product" (Mahara, 1998, p. 1343). The importance of clinical evaluation is well established.
However, many facets of clinical evaluation require further investigation, including how CNTs actually evaluate students in clinical courses. The purpose of this critical descriptive-exploratory single case study was to describe the clinical evaluation practices of a CNT working with undergraduate students in a critical-interpretive baccalaureate nursing curriculum, to explore the basis of these practices, and to further the CNT's understanding of her practices.

Definition of Terms

Evaluation is both a process and a product; the word "evaluation" refers to the process of systematically and objectively determining the merit, worth, and value of things, and it also denotes the products of that process (Scriven, 1991). Clinical evaluation represents a particular application of the broader discipline of evaluation, specifically a combination of the fields of performance evaluation (evaluation of student work) and personnel evaluation (evaluation of students). Performance appraisal is the assessment by another person(s) of an individual's performance based on observation and expectations of performance or achievement. As a type of performance appraisal, clinical evaluation can be defined as the assessment, by a CNT, of a nursing student's performance while in a clinical setting (Stewart, 1991).

Clinical nurse teacher - a full-time teacher of undergraduate nursing students who teaches in a nursing practice setting as well as in the classroom.

Clinical evaluation practices - those thoughts, actions, or behaviors which contribute to evaluative decisions about a student nurse's clinical practice within a particular clinical course, including the collection and interpretation of data and the drawing of conclusions and/or making of judgments about the meaning of the data in order to arrive at evaluative judgments about the student's clinical performance.
Clinical nursing education - learning by baccalaureate nursing students in a clinical setting, involving the care of patients/clients/residents within a health care agency under the supervision of a faculty member from the university or university-college nursing program.

Problem Statement

The clinical evaluation practices of a CNT working within a critical-interpretive curriculum were examined by addressing the following two research questions:

1. How does a CNT evaluate the clinical performance of students?
2. What are the sources of influence on these evaluative practices?

Several research sub-questions were generated from the main questions. These questions are as follows:

a. What are the CNT's values, beliefs, assumptions, knowledge, and expectations about nursing practice, clinical teaching, and evaluation, and how do they influence/inform the CNT's clinical evaluation practices?
b. How do the CNT's past experiences as a student, as a nurse, and as a teacher influence/inform the CNT's evaluative practices?
c. How does the nature of the student (e.g., age, gender, ethnicity, ability, attitude, level in the program) influence/inform the CNT's evaluative practices?
d. How does the organizational context (e.g., patient acuity level, nature of staffing, staff's past experience as/with students in clinical learning, support of administration) influence/inform the CNT's evaluative practices?
e. How does the organizational context of the educational institution (e.g., faculty turnover, mission and philosophy, support of its nursing programs, funding situation) influence/inform the CNT's evaluative practices?

Methodology

The philosophy and approach proposed for this study are briefly overviewed here and will be expanded on in chapter three, in relation to the research design.
This case study research was based in a critical constructivist ontology and epistemology in which reality and knowledge are viewed as human constructions that are value-laden and embedded within a social context. According to Lather (1991), the goal of critically-oriented paradigms is to produce emancipatory knowledge and empower the researched, "...to encourage self-reflection and deeper understanding on the part of the researched at least as much as it is to generate empirically grounded theoretical knowledge" (p. 60). Emancipatory knowledge increases awareness of the contradictions distorted or hidden by everyday understandings and "...enables people to change by encouraging self-reflection and a deeper understanding of their particular situations" (Lather, p. 56).

Research from a critical-constructivist perspective should involve more than a single interview and should entail an interactive approach to the research that invites reciprocal reflexivity and critique (Lather, 1991). Meaning is constructed through researcher-participant dialogue and negotiation. In this study, data collection and analysis were ongoing and cyclical, and meaning was co-constructed by the CNT and researcher. Themes, issues, and questions arising from each set of data were discussed and clarified with the CNT in an attempt to describe what was occurring in her clinical evaluation of students and why practices were carried out in a certain way. Apparent relationships between influences and evaluative practices were reviewed and examined further with the CNT, and guided the next data collection set. Critical research is always concerned with improving practice (Kincheloe, 1995). Aided by increased consciousness of the tacit ideological assumptions and sociopolitical values that direct her practice, the CNT was positioned to enter the next clinical situation with a "self-reflexive critical awareness-in-action" (Reason, 1994, p. 325).
She was better positioned to choose practices that were congruent with her stated values and beliefs and to understand the world and the way it is shaped in order to transform it (Lather, 1991). According to this theoretical perspective, the underlying assumptions of clinical nursing education are:

1. CNTs have their own ideas, models, or frameworks for attributing meaning and giving explanation to the world they experience (i.e., people have personally constructed realities that mediate their ways of thinking, values, and perspectives on the world).
2. Historical, political, social, and ideological structures and processes significantly shape social institutions, including baccalaureate nursing education programs and health care agencies.
3. Clinical evaluation practices are social constructions based in educational and administrative discourses that can justify and/or legitimize the practices as educationally worthwhile.
4. Clinical nursing education is a powerful force in shaping students' constructions of their nursing practice.
5. CNTs can learn to be self-reflexive about their world and their action within it.

Significance of the Study

The practical and scientific significance of this study is to contribute to the body of knowledge of clinical evaluation in nursing education, particularly evaluation within critical-interpretive curricular philosophies. This study has the potential to provide a rich description of the interrelatedness of sociocultural and interpersonal influences on a CNT's thoughts, actions, and behaviors concerning clinical evaluation. As critical research, the study should promote the participant's understanding of her clinical evaluation practices. The findings should also contribute to the education of other CNTs through their implications for the teaching and development of effective evaluative practice.
The findings of this study should assist CNTs to better understand their own practices and how their individual perspectives influence their approach to clinical evaluation.

Organization of the Thesis

Chapter one has introduced the purpose and significance of the proposed research. Chapter two includes a review of related literature and a discussion of the framework that guides this study. Chapter three presents a rationale for the research design of the study and a discussion of the case, the generation and analysis of data, strategies to ensure rigor, ethical considerations, and limitations of the study. Chapter four presents the findings of the study. In chapter five, selected findings of the study are discussed with respect to how they relate to what is currently known. Chapter five also includes a discussion of the implications that the study findings have for nursing education and future directions for research.

CHAPTER TWO
What is Currently Known

When research is undertaken from a critical perspective, the researcher clarifies and articulates a priori allegiances to theoretical perspectives on the subject under study because there is no neutral point from which the researcher can study reality (Lather, 1991). The researcher poses questions, constructs the study, and makes meaning of the data from a particular understanding of the topic. The purpose of the literature review for this case study is to summarize the current state of knowledge about clinical evaluation in order to clarify my understanding of clinical evaluation in nursing education, and to outline a location where the research process can enter and to which I can return in order to add the findings that result and to pose new questions. Literature on clinical evaluation that has been accessed for this literature review includes empirical research and the theorizing, formal and informal, of academics.
Other sources of information on what is known about clinical evaluation arise from the knowledge-in-action of other CNTs and their students, which I have collected informally through discussions over the years, as well as formally through graduate course work in nursing education courses. Finally, my personal knowing, as a CNT with 15 years of experience in evaluating the clinical performance of students, has been influential in the interpretation of all sources of knowing about clinical evaluation and in the structuring of this literature review.

A review of the literature was undertaken based on guidelines for the review process suggested by Cooper (1982). The target population of studies for this review was identified as research and theoretical papers dealing with clinical evaluation in nursing education. Search words used were clinical teaching, clinical supervision, nursing education, clinical evaluation, and clinical assessment. As an understanding of issues in clinical evaluation developed, a selective review of the literature from medical education and, to a lesser extent, teacher education was done where studies were identified that contributed to an understanding of the issues. Perspectives from the field of evaluation theory were considered by reading from the general education literature on performance, personnel, program, and curriculum evaluation and through reading Scriven's (1991) text, Evaluation Thesaurus. The primary search strategy was an on-line computer search of abstracting services and citation indexes using the CD-ROM databases CINAHL, MEDLINE, HEALTH, and ERIC. The second search strategy was to check references cited in articles that were obtained to ensure that seminal works were not overlooked. The accessible population, as discussed by Cooper (1982), describes those studies the reviewer is "pragmatically able to obtain". What is accessed is compared to the target population to consider how they might differ.
The target population included studies and theoretical reviews on clinical evaluation, while the accessible population consisted of studies and reviews that were available from the University of British Columbia and University College of the Cariboo libraries. Almost all publications were available. The target and accessible populations were compared to determine the degree to which the reviewed studies represented clinical evaluation in diploma, associate degree, and baccalaureate programs; addressed the major issues of clinical evaluation as identified by Woolley (1977) and Wood (1982); represented the wide variety of settings CNTs teach in; included both quantitative and interpretive research approaches; and included the nursing studies mentioned most frequently in previous reviews and in the reference lists of research studies on clinical evaluation in nursing.

A comparison of the target and accessible populations against these criteria shows that this review is based on a balanced representation of published research and theory to date. In this chapter I present a summary of what is known about clinical evaluation in nursing. It begins with an overview of five problems that complicate clinical evaluation in nursing education and a discussion of the development and testing of evaluative practices aimed at promoting objectivity and consistency of clinical evaluation. What is known about the evaluative practices of CNTs is examined next, through a review of research on the clinical teaching behaviors of CNTs and studies on the clinical learning experiences of nursing students. Qualitative practices such as the interpretive-criticism evaluation model, self-evaluation, portfolios, and journals are reviewed, including problems arising from student participation in clinical evaluation.
Following is a consideration of certain internal and external variables that influence a CNT's evaluative practices, including preparation for clinical teaching, experience level, the CNT's perspective, and institutional policies and standards. Overall, a picture of evaluative practices in nursing education and central influences on evaluation ideology emerges. The chapter concludes with a summary of the limitations of the current theoretical and research base for teaching and evaluating students in the clinical area and suggests how this proposed research could contribute to the body of knowledge of clinical evaluation in nursing education.

Overview of the Problem of Clinical Evaluation

This section introduces several characteristics of clinical teaching-learning that have made evaluation of students problematic and have influenced the development of evaluative practices. The five areas of difficulty include the complexity of nursing practice, the many variables that can affect student performance, the inherent subjectivity of the observation process, sampling issues, and the conflicting roles of teacher and evaluator. Each of these five issues will be briefly discussed to provide a context for understanding past and current evaluative practices in nursing education.

Aspects of Clinical Learning that Have Influenced Evaluation Practices

Clinical evaluation of students is complicated by the complexity of nursing practice. Clinical practice involves thought and action that incorporates many steps, complex strategies and goals, and many possible courses of action. Clinical practice often involves the simultaneous use of cognitive, affective, and psychomotor knowledge and skill, some of which is invisible and/or tacit. As a result, CNTs have had to develop evaluative practices that are capable of capturing all that is nursing practice. This has not been easy.
Many aspects of nursing practice such as artistry (Curl & Koerner, 1991), clinical decision making (Orchard, 1992; Radwin, 1995; Tanner, 1988), and the affective domain (i.e., values, beliefs, feelings, and dispositions) are difficult to assess (Andrusyszyn, 1989; Dawson, 1992). On the other hand, some areas such as student knowledge base and psychomotor skills are more easily measured and have ended up being the focus of many evaluative practices. A second problem of clinical evaluation is that a student's performance is affected by multiple variables in the clinical setting, many of which are outside the student's control (clinical resources, staff attitudes, unexpected changes in patient status, intervention by other health team members). A student's clinical performance is also shaped, in part, by the CNT's actions and interactions with the student, patients, and the clinical staff (Oermann, 1996; Paterson, 1991; Paterson & Groening, 1996). In addition, variations in the quality and frequency of learning opportunities create a situation where students do not receive comparable clinical experiences (Wood, 1982; Orchard, 1994b). Thus evaluative practices have had to be sensitive to the contextual nature of student clinical performance. The most common evaluative practice in clinical settings is direct observation of student performance and, because the observational process is subjective, it is vulnerable to many distortions and biases of human perception (Orchard, 1994b). Pavlish (1987) comments wryly on the problem of subjectivity in clinical evaluation: "one of the difficulties stems from the fact that the educators try to observe in an objective manner to make subjective decisions and then often defend those subjective decisions with objective data" (p. 338). In addition, CNTs hold individualized expectations for students' performance and often rely on their own nursing practice experiences in judging their students' performance (Orchard, 1992).
This has led CNTs to attempt to develop objective and fair evaluative practices, including expectations that are reasonable (students should be able to meet them), applied consistently and equally to each student, and clearly communicated to students at the start of the course (Orchard, 1992). Evaluative practices must be capable of accurately describing and interpreting a student's current practice as well as making predictions about future capabilities. The validity of evaluative decisions can be questionable because CNTs are making inferences about a student's general competence from a limited number of clinical situations. "Professional competence is neither visible nor tangible. Instead we infer its presence or absence from measurements - some crude, some precise - that are assumed to be good indicators that certain people 'have' or 'can demonstrate' competence under certain circumstances" (McGaghie, 1991, p. 7). Yet, because of the pressures of short rotations, many CNTs are being required to make premature judgments about students based on inadequate and selectively sampled data (Paterson, 1991). In addition, CNTs have little theoretical or empirical guidance as to what samples of student performance can best help them generalize who can and cannot meet professional practice standards. The final problem of clinical evaluation is that students are being evaluated while they are learning. CNTs have had to develop evaluative practices that fit within their dual roles of teacher/mentor and evaluator/judge. However, there is much confusion over which role should predominate, and when, and whether the roles are actually incompatible. There has been a difference of opinion among CNTs as to whether the focus of evaluation should be on clinical learning or clinical performance.
And, as will be seen, there is also much evidence to suggest that CNTs view the clinical area as a place for faculty to evaluate students rather than teach them and that this perspective has had a negative effect on the stress level of CNTs and students, and even that it has stunted the development of initiative and autonomy in graduates, thus impeding progression of the profession. Finally, in their role as teachers, CNTs form close relationships with students in clinical courses and, whereas the CNT-student relationship can facilitate a student's clinical learning, this close relationship can also be problematic in terms of its potential to bias evaluative judgments, both positively and negatively. The earliest solutions to the problems of clinical evaluation were attempts to objectify and standardize evaluation. To this end, a great deal of effort was put into the utilization of psychometric testing theory to develop quantitative evaluative practices for the collection and interpretation of data about a student's clinical performance.

Evaluation as Measurement

Early efforts to address the challenge of subjectivity and inconsistency in clinical evaluation were directed toward objectifying clinical practice and standardizing assessment procedures (Bower, Line, & Denega, 1988; DeVore, 1993; Girot, 1993b; While, 1991; Wood, 1982, 1986; Woolley, 1977). Objectivity was attractive as a solution to the problems of evaluation because it represented fairness, reliability, predictability, order, and control, whereas subjectivity implied bias, inconsistency, and arbitrariness. To increase objectivity of the observation process, CNTs had to reduce complex clinical behavior to manageable units that could be measured using standardized data collection instruments (Benner, 1982; Friedman & Mennin, 1991).
Criterion-referenced rating scales and standardized performance tests are two evaluation methods based in theory from the fields of educational technology and industrial psychology; both fit nicely with the quantitative philosophy of outcome-based evaluation systems.

Outcome Assessment

Evaluation systems based on outcomes have a central focus on the specification and documentation of student performance. The outcomes movement is based in a concern with the ability of educational institutions to prepare competent individuals for the current and future needs of society in a time of economic constraints and increasing diversity of the student population. Outcome assessment provides government funding agencies with a means to increase accountability among educators and administrators for documenting academic effectiveness and stewardship of public resources (Krichbaum et al., 1994; Lenburg, 1991). CNTs working within such evaluation systems are required to define and state standards of student clinical performance and then develop criterion-referenced tools to measure student performance against the standards. Standardized performance examinations are also an integral component of outcome-focused evaluation systems (Luttrell, Lenburg, Scherubel, Jacom, & Koch, 1999; Lenburg & Mitchell, 1991; Woolley, Bryan, & Davis, 1998).

Criterion-Referenced Rating Scales. The aim of evaluative practices involving checklists and rating scales is to identify and record observable and measurable behaviors. The CNT focuses on observing students' clinical performance in order to document the presence of various behaviors (Gomez, Lobodzinski, & Hartwell West, 1998; Reilly & Oermann, 1992).
Because the early rating scales suffered from vague definitions, descriptions, or criteria for the reference points, criterion-referenced standards were developed in order to increase the reliability and accuracy of observation as a method for evaluating students' clinical performance, provide the discrimination needed to assign letter grades for clinical courses, and increase student understanding of expected behaviors. Bondy's levels of competency have been utilized by many CNTs, and hers is one of the few scales to specify a theory base and to have been empirically tested (Bondy, 1983; 1984; Coates & Chambers, 1992; Donoghue & Pelletier, 1991; Krichbaum et al., 1994; Wiles & Bishop, 2001). Bondy developed a detailed description for five levels of competency (independent, supervised, assisted, marginal, and dependent) based on three broad characteristics of practice: professional standards (safety, accuracy, effect, and affect), qualitative aspects of the performance (degree of skill development such as use of time, space, equipment, and expenditure of energy), and type and amount of CNT assistance or cues needed to perform the behavior. She then conducted a well designed study to test the accuracy and reliability of her criterion-referenced evaluation tools (Bondy, 1983; 1984). Using an experimental design she compared the accuracy and reliability of clinical evaluations by groups of CNTs using a criterion-referenced rating scale and groups using a rating scale with no criteria or descriptions. Both groups of CNTs were asked to evaluate videotaped portrayals of a student's clinical performance in three different clinical situations (medication administration, dressing change, interviewing) at Bondy's five levels of performance. All CNTs also provided a global rating of student performance. Two-thirds of the CNTs participated in a retest 6-8 weeks later where they evaluated the videotapes a second time.
CNTs who used criteria were more accurate in evaluating the level of student performance than those who did not use criteria (in both global and calculated ratings). As student level of performance increased, the beneficial effect of using criteria was more pronounced. All CNTs gave student performance the highest ratings in the interviewing scenario and the lowest scores in response to the dressing change scenario. Bondy (1984) believed this reflected the tendency of CNTs to be more lenient with behaviors that are more open to subjective interpretation, for example, the affective domain, and more critical as behaviors become tangible and measurable. In both experimental and control groups, the correlation between global and computed scores from the evaluation tool was low. Test-retest values were significantly higher for the CNTs using the criteria, but only at the three lowest levels of performance (representing the failing and borderline performance levels). There were no differences between experienced and inexperienced CNTs (experience defined as three or more semesters of clinical teaching), suggesting that training rather than experience may be the way to increase rater reliability in clinical evaluation. Both CNTs and students in Bondy's studies expressed satisfaction with the rating scale. The CNTs felt they were able to more accurately describe and classify the strengths and limitations of a student's performance, give students specific diagnostic feedback, and provide students with a tool they could use to validate self-perceptions of their performance. Participants felt the rating scales made expectations of performance levels clear to students from the onset and reduced the perceived subjectivity of clinical evaluations.

Standardized performance exams.
Among the health professions, the best known and most widely researched performance examination is the objective structured clinical examination (OSCE) (Lenburg & Mitchell, 1991; McKnight et al., 1987; Nicol & Freeth, 1998; O'Neill & McCall, 1996; Ross et al., 1988; Roberts & Brown, 1990; Van der Vleuten, Norman, & DeGraaff, 1991; Woolley et al., 1998). The OSCE is designed to assess clinical skills performance in simulations of actual clinical situations, thus eliminating most of the problems associated with evaluation of students in the clinical area. It is common for student scores on performance tests to be lower than scores obtained on traditional methods of skills evaluation (Lenburg & Mitchell, 1991; McKnight et al., 1987; Roberts & Brown, 1990). There is also a low correlation between scores on performance exams and those on other testing methods (Harper, Roy, Norman, Rand, & Feightner, 1983; MacRae et al., 1995; Roberts & Brown, 1990). Supporters of performance exams suggest that scores are lower because standardized testing procedures provide a more rigorous method of data collection, thereby resulting in less inflated scores (Lenburg & Mitchell, 1991; McGuire, 1988; Roberts & Brown, 1990).

Evaluation of the Affective Domain

There is agreement that evaluation of affective aspects of a student's clinical practice should be a critical component of clinical teaching. Certain attitudes and values are outlined in professional standards and codes of ethics and students must demonstrate these in their practice. Difficulty arises when attempting to define and evaluate less visible and more private areas such as inwardly held values and beliefs, feelings, sensitivity, caring, empathy, openness, reflectiveness, and self-awareness.
A wide range of evaluative practices, both quantitative and qualitative, have been used to assess the attitudinal component of clinical practice, including direct observation of students in their practice (with and without checklists and rating scales), observation and rating of student performance on standardized performance exams, self-evaluation, CNT anecdotal notes, critical incident and/or paradigm case analysis, individual and group discussions, written examinations, and journals (Andrusyszyn, 1989; Dawson, 1992; Green, 1994). There are conflicting views in the literature as to the effectiveness of evaluative practices for assessing competencies in the affective domain and whether this domain can be included in summative evaluation. Quantitative evaluation methods allow CNTs to obtain objective evidence of student achievement; however, the subjective and context-sensitive nature of attitudes and values limits the degree to which these aspects of practice can be objectified. Dawson (1992) is critical of the validity of the various assessment methods in assessing the affective domain. He believes that attitudes are not readily identifiable, amenable to being taught, or capable of being assessed at a minimally acceptable level, and therefore should only be evaluated formatively. He claims that assessment of attitudes is always made by inference from an observable activity. CNTs are assuming that the associated behaviors automatically represent the underlying attitude. It is possible to conclude that students know the correct attitude, but not that they value it or that they will enact it in their future practice. Andrusyszyn (1989) believes that summative evaluation should only be done on areas of practice that can be quantified. She recommends that the growth-producing or developmental elements of affective learning be used for formative evaluation only.
But, unlike Dawson, she believes that certain aspects of the affective domain can and should be quantified in order to be evaluated summatively; for instance, professional behaviors related to honesty, safety, morality, ethics, and confidentiality. I believe that the affective domain must be evaluated formatively and summatively and that this can only be accomplished by using both quantitative and qualitative evaluative methods.

Reliability and Validity of Objectified Measures

While standardized measuring systems have improved reliability ratings and are generally perceived by students and teachers to be more fair and objective than traditional 'subjective' methods, these evaluative methods suffer other threats to reliability and validity. Problems that have been identified with standardized exams include the high levels of student anxiety generated, the limited range of behaviors that can be objectified, a tendency of these evaluative methods to reinforce the rote memorization of skills checklists, and the artificiality factor; that is, the more structured the simulated exam, the less it resembles real life practice situations (Benner, 1982; DeVore, 1993; Friedman & Mennin, 1991; McGaghie, 1991). Standardized tests have been found to have low predictive validity (Norman et al., 1991; McGaghie, 1991; Roberts & Brown, 1990). Findings from studies in both nursing and medicine suggest that performance in one problem or case does not necessarily predict performance on other problems or cases, particularly in situations that test problem solving and clinical reasoning ability rather than simple clinical skill acquisition (Dauphinee, 1995; Ross et al., 1988; Reed, 1992).
In several studies there were low correlations between scores on standardized performance tests, marks from written and oral tests of knowledge, and global ratings of performance in the clinical setting, suggesting that each evaluative method assesses different aspects of clinical performance (Harper et al., 1983; Roberts & Brown, 1990; MacRae et al., 1995).

Summary

Quantitative evaluative practices are an important evaluative method and should be included as one component of a comprehensive plan for the clinical evaluation of nursing students, because objectified methods of evaluation enable CNTs to standardize their evaluations of several aspects of students' clinical performance and to document those evaluations. The ability to document student achievement is critical with respect to the regulatory and academic components of clinical evaluation. However, the limitations of quantified methods of evaluation make these practices insufficient, on their own, as a basis for decisions regarding student clinical performance. It is clear that criterion-referenced scales and standardized performance examinations are best suited to observable behaviors such as psychomotor skill performance and that nursing practice involves much more than skills performance.

Effective Evaluation Practices

There is a large body of nursing research on clinical teaching behaviors (for example, see literature reviews in Bergman & Gaitskill, 1990; Oermann, 1996; Stokes, 1998). Effective evaluative practices should promote the development of skilled practitioners, promote student independence, and increase students' self-confidence as nurses (Diekelmann, 1992; Flagler et al., 1988; Gomez et al., 1998; Hedin, 1989; Loving, 1993).
While some of this research has focused specifically on clinical evaluation, most often researchers look at the broader topic of clinical teaching, with evaluation emerging as one of several practices influencing a student's experience of clinical learning and development as a nurse. In some studies, evaluation is specified as a category in the findings, and in others, it is necessary to review the findings carefully to identify references to evaluative practices. A CNT's evaluative practices and her/his view of the purpose of clinical teaching and evaluation result in the creation of a learning environment that is experienced by students in predictable ways. A consistent finding from several studies is that certain evaluative practices are perceived as restrictive/punitive and others as growth-enhancing, depending on whether the clinical learning experience is viewed primarily as educative or evaluative. CNTs who created an educative learning environment appeared to utilize evaluative practices of a formative nature, whereas CNTs perceived as creating an evaluative learning environment adopted evaluative practices that were summative.

Effective and Ineffective Clinical Evaluation Behaviors

The focus of this body of research has been to identify those teaching behaviors which enhance or obstruct student learning in clinical settings in order that CNTs may function more effectively, and to assist in curriculum development for graduate programs preparing CNTs (Reeve, 1994). Another purpose of this research has been the development of valid and reliable tools for use in the evaluation of CNTs, both formative (as a source of learning for CNTs) and summative (for purposes of promotion and merit decisions). Twelve such studies were reviewed for this chapter.
Most of the studies utilized survey methods to identify characteristics of effective CNTs (Brown, 1981; Knox & Mogan, 1983; O'Shea & Parsons, 1979), compare the relative importance assigned to behaviors by students, faculty, and graduates (Knox & Mogan, 1985; Mogan & Knox, 1987; Pugh, 1988), as well as to develop and determine the reliability of tools for evaluating CNT effectiveness (Fong & McCauley, 1993; Reeve, 1994; Zimmerman & Westfall, 1988). Two studies incorporated short periods of direct observation of CNTs in practice (Mogan & Warbinek, 1994; Pugh, 1986b). Two studies attempted to replicate Brown's and Mogan & Knox's lists of behaviors respectively (Bergman & Gaitskill, 1990; Nehring, 1990), and one study was undertaken in relation to effective behaviors in associate degree programs (Sieh & Bell, 1994). Many of the studies have a small to moderate sample size. Faculty sample sizes ranged from 22-63, with most being between 24-29. Student sample sizes varied from 82-393. The number of behaviors on the instruments ranged from 22 to 53. Overall, data has been collected and analyzed from a large number of BSN (and to a lesser extent, diploma and associate degree) faculty and students, and from a wide variety of nursing programs (large and small, private and public institutions) from across the United States and Western Canada. The 12 studies that were reviewed had more similarities than differences in the types of behaviors considered effective/ineffective by faculty, students, or graduates. While there are differences in methodology and instruments, considered as a group, these studies provide an impressive amount of information on student and faculty perceptions of effective evaluative practices (Oermann, 1996). Despite limitations such as sample size and use of convenience samples, research on effective teaching behaviors has been useful for identifying the range of evaluative practices of CNTs.

Effective evaluative practices.
Certain evaluative practices are consistently noted, by both faculty and students, as effective clinical teaching behaviors (Brown, 1981; Knox & Mogan, 1985). Evaluative practices identified as effective include: setting realistic expectations and clearly communicating these to the student, clarifying course objectives at the beginning of the experience, being fair and objective in evaluation (evaluating students using appropriate evaluation criteria and objectively identifying student strengths and weaknesses), and utilizing effective practices for the provision of feedback. In most of the 12 studies, the CNT-student relationship was identified as an important determinant of perceived faculty effectiveness in relation to evaluation. This finding is important because many of the qualitative studies that followed from the research on effective practices also linked evaluative practices with the CNT-student relationship and interpersonal qualities of the CNT. A consistent finding from the research on clinical teaching and evaluation is the importance of a CNT's ability to provide positive feedback and suggestions for improvement, and to focus on learning rather than testing. These latter two practices reflect the dual role of CNT as teacher/evaluator and highlight the important distinction between formative and summative evaluation practices.

Formative and Summative Evaluation Practices

A CNT's evaluative practices can be shown to reflect two distinct purposes of clinical evaluation: education (formative evaluation) and regulation (summative evaluation). Each purpose has different underlying assumptions about the role of the CNT, what data should be obtained and toward what end, and who should be involved in providing and interpreting data for decisions about a student's clinical performance. The purpose of formative evaluation is educative; to provide feedback regarding student progress to guide teacher and student in further learning experiences.
Formative evaluation is diagnostic in nature and occurs throughout the learning experience; information is used to assist in correcting deficiencies and/or promoting abilities, it is not subjected to the grading process, and student self-evaluation is encouraged as part of the process (Reilly & Oermann, 1992). There is also a view that certain aspects of practice are appropriate for formative purposes only (i.e., not for summative); for example, the affective domain, esthetic knowing, self-awareness, and reflective thinking (Curl & Koerner, 1991; Dawson, 1992; Landeen, Byrne, & Brown, 1995; McGuire, 1988; Pierson, 1998). Clinical evaluation is also concerned with decisions as to the progression and graduation of nursing students (Gomez et al., 1998). Summative evaluation provides information on the extent to which learning objectives were met, occurs at the end of the learning experience, and is often used in the determination of grades (Reilly & Oermann, 1992). Summative evaluation functions to maintain professional standards and protect the public by assuring that graduates are qualified to practice as autonomous professionals. CNTs are charged with the responsibility of protecting the public, present and future, from unsafe students. Licensing boards assume that if a student successfully completes her or his education, the graduate is competent. CNTs are evenly divided over the role of student self-evaluation in the summative decision making process; some support student input into summative decisions, and others oppose it (Arthur, 1995; Best, Carswell & Abbott, 1990; Burnard, 1988b; Green, 1994; Pavlish, 1987).

Clinical Learning as Evaluation

In several studies, certain evaluative practices of CNTs are viewed by students as a major source of anxiety and a block to their learning (Diekelmann, 1992; Flagler et al., 1988; Kleehammer, Hart, & Keck, 1990; Loving, 1993; Pagana, 1988; Wilson, 1994; Windsor, 1987).
One researcher goes as far as to suggest that the evaluative practices of some CNTs may be unethical (Theis, 1988). In a survey of 204 BSN students' perspectives of unethical teaching behaviors, 50% of reported incidents occurred in the clinical setting. Incidents such as reprimanding the student in front of others, "drill-questioning", and being asked to gather evaluative data on another student were examples of evaluative practices seen to violate the ethical principle of respect for persons, while blatant displays of student favoritism and unfair evaluations were given as examples of violation of the principle of justice. The results of several other studies on student perceptions of clinical learning suggest that clinical learning environments perceived as evaluative resulted in high levels of student stress and anxiety. In addition, observation and questioning emerge as the main evaluative practices associated with an evaluative context, whereas evaluative practices concerned with feedback predominate in the clinical environments perceived by students as educative. Flagler et al. (1988) documented the possible impact of a CNT's evaluative practices on student self-confidence. Results from a survey of 139 BSN students suggest certain evaluative practices hinder the development of students' self-confidence. Evaluative practices that affect students negatively include giving no or mostly negative feedback, lack of specifics in feedback, observing student practice without warning, observing nursing care with a view to evaluate, asking questions about patient care at random times, intimidating behaviors (belittling comments, being unapproachable, "drill-quizzing"), and evaluative practices focused on what was wrong or omitted rather than what was correct.
Negative evaluative practices appeared to be rooted in a view of clinical teaching as primarily designed to "weed out" students who would be poor nurses, rather than a view of clinical teaching as learning. Students in Flagler et al.'s study felt threatened by the CNT rather than assisted, which resulted in students hiding from the CNT. The students perceived that the CNT wrote down everything they said or did in an evaluative "black book". This is similar to what Pagana (1988) found in her survey of 262 BSN students on stresses and threats of their initial medical surgical clinical experience. Students believed the instructor was watching and evaluating their every move. Students in Kleehammer et al.'s research (1990) also reported faculty observation and evaluation as anxiety producing. These researchers stress the anxiety-producing nature of faculty observation and evaluation and recommend that observation and evaluation be done in a supportive, non-threatening manner and be used for formative guidance, not just summative purposes. In a grounded theory study of 22 BSN nursing students, Loving (1993) found that the development of students' clinical nursing judgment was hindered in instructional contexts he called "evaluative", whereas the development of judgment was facilitated in a "learning" context. The link between teaching practices and development of judgment was theoretical, as no empiric measures of clinical judgment were used. A student's identity as a competent beginning nurse requires an environment that is primarily learning-centered. Faced with what students viewed as an evaluative environment (one in which the CNT is seen as primarily evaluative in interactions or with an emphasis on written forms of evaluation), students were motivated to achieve an external validation of their competence; i.e., their efforts were motivated toward obtaining the CNT's approval.
Potential evaluative consequences were the basis for student decision making about patient care and educational activities. Students did not risk experimentation for fear it would result in failure. A significant finding in Loving's (1993) study is that the students believed clinical learning took place outside the context of the CNT-student interaction. To look good as a nurse, students looked to patients, family members, and the staff nurses for feedback. Students often went to nursing staff, rather than to the CNT, with questions or requests for supervision of skill performance. In addition, students considered two types of evaluation of their clinical practice, each having a different outcome: the evaluation of their clinical performance by the instructor resulted in a grade, whereas their own self-evaluation of their performance, which focused on the quality of nursing care that they delivered to their patients, resulted in a sense of satisfaction and confidence. Wilson (1994) came to similar conclusions in her ethnographic study of 12 BSN students and two CNTs. She found that students differentiated their goal of "looking good as a student" from that of "looking good as a nurse". Looking good as a student arose from students' desire to look good to the CNT because of a perception that the CNT was always evaluating them. Students were unsure what knowledge or behavior the CNT was expecting. Having the right answers in the student-instructor interaction was the most commonly identified criterion for looking good as a student. Students approached or avoided instructors based on how confident and competent they felt. Student-CNT interactions were approached as an examination rather than a learning experience. Students believed it was not so much what you knew that counted, but rather what the teacher thought you knew. Diekelmann (1992) undertook a program of interpretive research on the lived experience of teachers and students in nursing education.
She found that learning is viewed as testing by nursing students and faculty, and that this limits students' learning while consuming an inordinate amount of student and faculty time and energy. The curriculum revolution that occurred in nursing education in the 1980s was based, in part, on a critical response to the premise that learning driven by behavioral objectives inevitably leads to a focus on evaluation. The negative consequences for the teaching and learning of nursing practice of locating evaluation at the centre of clinical teaching-learning (regardless of whether this move is actual or perceived) are a central point in the arguments for a move from behavioral-based to educative-caring curricula for nursing education (Bevis & Watson, 1989). However, because educative-caring programs place student-teacher interactions at the centre of clinical teaching, CNTs must familiarize themselves with evaluative practices that students perceive as helpful, and those that result in student avoidance of interaction with their CNTs. Feedback as a Key Formative Evaluation Practice Evaluative practices related to the provision of feedback emerge from most studies as important CNT teaching behaviors (Bergman & Gaitskill, 1990; Brown, 1981; Kirschling et al., 1995; Krichbaum, 1994; Pugh, 1988). Characteristics of effective feedback are related to its quantity (given frequently), quality (specific, supported by evidence gathered about student performance, constructive, and given in a supportive, honest, forthright, and positive manner), usefulness (feedback is on student progress with specific suggestions for improvement), and timing (immediate, in private).
In studies on effective teaching behaviors, students and faculty agree that the following feedback practices discriminate between best and worst CNTs: corrects mistakes without belittling, gives constructive evaluations without embarrassing students, corrects students tactfully, does not criticize the student in front of others, makes specific suggestions for improvement, and provides constructive feedback on student performance (Fong & McAuley, 1993; Mogan & Knox, 1985; Sieh & Bell; Zimmerman & Westfall, 1988). Most CNTs agree that formative evaluation practices are an integral part of the teaching-learning process; however, several studies have indicated that CNTs' practices are not consistent with their beliefs (Reilly & Oermann, 1992). Participants (204 BSN students) in Mozingo, Thomas, and Brooks' (1995) survey research expressed an overwhelming desire for more positive feedback from their CNTs. These students believed that CNT feedback helped them to increase their feelings of self-confidence. Pugh (1988) surveyed 50 faculty and 358 students from eight BSN programs. Students perceived that they received less feedback than they needed. The students wished to receive more frequent and specific feedback as to how they were doing and what they needed to work on. This group of BSN students wished for frequent formative evaluation, not just a summative evaluation at the end of the term. Faculty responses indicated that the CNTs also believed feedback and formative behaviors to be important. However, when Pugh observed each faculty member for one clinical day, she found that faculty gave relatively little positive feedback or formative evaluation of the students' performances (Pugh, 1986a; 1986b). Mogan and Warbinek (1994), on the other hand, noted a preponderance of positive feedback as opposed to corrective statements in the short amount of time they spent observing 12 CNTs. Becoming skillful at giving feedback appears to be a developmental process for CNTs.
In Wolff's (1998) grounded theory research with 11 CNTs, participants identified the transitions they experienced with respect to their ability to provide feedback. When lacking self-confidence in their abilities as CNTs, they were uncomfortable in giving, or incapable of providing, constructive feedback to students. The CNTs reported a lack of knowledge of how and when to give feedback, saying it was difficult to ascertain when students needed feedback. This was especially so when the CNT was unsure about what was expected of the students. The CNTs also reported that, initially, they gave a great deal of negative feedback, most likely because it was much easier to see what was wrong. As they matured as CNTs, they learned to be more positive and to focus on the things that were really important rather than on what the students were not doing. The CNTs felt that giving feedback was easiest when they knew the level of clinical expectations and could convey these clearly to students at the beginning of the semester. Questioning and Observation Questioning and observation emerged from several qualitative studies as evaluative practices that lead students to experience clinical learning as evaluative. Morgan (1991) surveyed a group of nine CNTs on what they said they did in the direct client care period. The results showed that the CNTs used mostly questioning, observation, and written work to evaluate students and that the CNTs had difficulty separating teaching activities from evaluation activities. Most of the CNTs made reference to practices that assess what the student already knew, for example, observing or talking with the student. When asked how they would proceed in a clinical teaching scenario, assessment activities (talking, discussing, and asking) were the predominant teaching strategy discussed by the CNTs. All the CNTs said they would assess the student's ability to perform the clinical activity or assess where the CNT could help.
When Mogan and Warbinek (1994) observed 12 CNTs in their teaching, these researchers found that verbal interactions (questioning, directing, and explaining) were more frequently observed than demonstrating or assisting with patient care. The researchers felt that, overall, this group of CNTs utilized a directive rather than a problem-solving teaching style; for instance, their questioning included more requests for facts and closed (yes/no) questions than questions that would get at cause and effect or elicit information about clinical decision making. Many CNTs feel justified in their use of questioning; however, it is not clear that CNTs realize the impact of questioning practices on the students. Pugh (1986b) found that CNTs questioned students extensively in order to assess their knowledge base. One of the CNTs in Paterson's (1991) study was drilling students with questions because she believed that it was the only way to find out what they knew in order to prevent them from making major errors in patient care. CNT questioning practices were reported to be upsetting to the students in Pagana's (1988) research. One student said you are "trying to be as prepared as possible so when your instructor drills you, you will hopefully be able to answer all 1000 questions" (p. 422). In some studies, the constant testing of their knowledge and ability to do things right was perceived by some students to result in a fear of failing and a decreased sense of self-worth and competency (Flagler et al., 1988; Loving, 1993; Pagana, 1988). Teaching or Evaluating? The evaluation of students' performance should be made on the basis of both their consistency of performance and their professional growth throughout the semester being assessed; thus, incidents that occurred early in the rotation should not show up on the summative evaluation (Infante, 1985; Orchard, 1992).
It is not unusual, however, for CNTs to rate students on the basis of a single critical incident of their performance or, as one of the CNTs in Paterson's (1991) study put it, to "beat students to death with the same mistake" (p. 237). Many CNTs believe that explicitly separating teaching from evaluating emphasizes the need to balance the two purposes of evaluation (Infante, 1985; Orchard, 1992; Packer, 1994; Pavlish, 1987; Schoenhofer & Coffman, 1994). As will be discussed later, a proposition of the interpretive-criticism model of evaluation is that the separation of teaching and evaluating is no longer an issue if evaluation is used as teaching (Bevis, 1989). In Infante et al.'s (1989) research, students in the experimental group (receiving clinical learning experiences guided by Infante's clinical teaching model) were involved in formative evaluation in the first seven to eight weeks of clinical experience, during which time the students could focus on orienting themselves to the clinical settings and the clinical course. They were guided by CNTs and preceptors who focused on formative aspects of evaluation, for example, providing students with information on their strengths and weaknesses and methods of improving their performance. During the last two to three weeks of the clinical experience, the students knew their performance was being evaluated by faculty for grading purposes. Infante et al. suggest that the clear separation of formative and summative evaluation may have been conducive to the experimental students' stronger academic performance in comparison to that of the control group, who experienced a "traditional" clinical rotation.
Pavlish (1987) developed a model for clinical evaluation, based in educational psychology, that uses level of learner maturity as the criterion for CNT decisions around when to teach and when to evaluate, when to use formative and summative evaluations, what type of measure to use for evaluation, and how self-evaluation pertains to clinical evaluation. The model was never tested empirically and does not appear to be mentioned in any further publications dealing with clinical evaluation. Summary Much is known about certain evaluation practices utilized by CNTs. There is evidence that CNTs utilize a narrow range of evaluative practices in their clinical teaching (Mogan & Warbinek, 1994; Morgan, 1991; Pagana, 1988; Paterson, 1991; Pugh, 1988). Observation and questioning are the two most common practices identified from the studies that were reviewed here. The ability to clearly articulate course and CNT expectations to students, along with the provision of helpful feedback, has been noted as an evaluative practice that facilitates learning and contributes to a positive clinical experience for students. The distinction between formative and summative evaluation practices is important not only because it is linked to students' experiences of the clinical learning environment as growth-producing or stress-producing, but also because each evaluative purpose creates certain learning needs for CNTs in relation to their evaluative practices. This body of research on evaluative practices is limited by its use of survey and interview as primary data collection methods. Further studies are recommended, utilizing data collection methods that capture what it is that CNTs actually do in their work with students in the clinical area. The literature is also lacking in studies that link evaluative practices to student outcomes. Understanding the types of practices used by CNTs is incomplete without an assessment of the effectiveness of each with respect to desired outcomes of clinical teaching.
What is the result of certain practices? Which evaluative practices facilitate the development of independent, confident, and competent nurses? Finally, the majority of the studies that were examined appear to involve CNTs in behavioral-based curricula. In the next section I review what little literature there is on the evaluative practices of CNTs in the newer curricula and the types of problems that these practices bring with them. Qualitative Evaluation The trend over the past three decades in the field of program and curriculum evaluation has been to decrease the focus on measurement, psychometric testing, and prediction when assessing educational programs and, instead, to utilize qualitative modes of inquiry based in description and interpretation (Guba & Lincoln, 1989a; Whitely, 1992). A similar move has been taking place with respect to clinical evaluation in nursing education, where alternate models of evaluation have been proposed based on interpretive and critical worldviews (Bevis, 1989; Blomquist, 1985; DeVore, 1993; Diekelmann, 1988; Girot, 1993a; Malek, 1988; Rideout, 1994). Evaluative practices are based in certain beliefs about the purpose of knowledge and action (Patterson, 1996). Habermas (1979) identified three fundamental interests in which knowledge and action can be grounded. Each interest describes the guiding motive that shapes and directs one's actions and behaviors. The way we carry out the work of evaluation depends on whether a technical, practical, or emancipatory interest prevails (Hedin, 1989). The quantitative evaluative practices of behavioral curricula arise from the technical interest (Bevis & Watson, 1989; Patterson, 1996). The technical interest is concerned primarily with controlling and managing one's surroundings and is the basis of the positivist paradigm. The purpose of knowledge in the technical interest is explanation, prediction, and control.
Evaluative practices within this interest are important for assessing those aspects of nursing that can be quantified (Bevis, 1989). The next section concerns evaluative practices based in the practical and critical interests. The interpretive-criticism model of evaluation is discussed because it provides direction for the development of evaluative practices that incorporate subjectivity and provide an alternate view of the roles of CNT and student in evaluation. Student self-evaluation, portfolios, and journals are examined for their usefulness in evaluating areas of clinical practice that are difficult to access, such as reflective thinking, aspects of the affective domain, and lived experience, and to illustrate how subjectivity and student participation complicate qualitative evaluative practices. The question of whether subjective sources of evaluative data should be limited to formative evaluation or extended for use for summative purposes is a central question for CNTs with regard to student self-evaluation and the use of portfolios and journals. The Practical and Emancipatory Interests The practical interest, which underpins the interpretive paradigm, has understanding as its basic orientation. Knowledge and action are concerned with interpretation and understanding of the meanings people attach to actions and events. Knowledge is produced through our interactions with others and our environment in the making of meaning, and this interpreted meaning assists the process of making judgments about how to act rationally and morally (Habermas, 1979). When subjectivity and the subjective become central to knowing and acting, alternative practices are required for collecting information and making decisions about students' competency. Interpretive evaluative practices allow for a broader definition of evidence that acknowledges subjectivity, qualitative judgment, and more than one right answer (McGaghie, 1991).
Evaluative practices involving storytelling, dialogue, reflection, and meaning-making are considered necessary in order for CNTs to appreciate the constructed nature of meaning and for understanding the lived experience of students. Evaluative practices underpinned by an emancipatory view of knowledge encourage the exploration of power issues in CNT-student evaluative relationships. Evaluative practices with an emancipatory intent are concerned with recognizing barriers to student and CNT participation in evaluation. A partnership view of evaluation works to shift some of the power and influence from the CNT to the student. The use of egalitarian evaluative processes is said to be empowering and transformative for students, as such processes are thought to develop student responsibility for evaluating their own progress, and students are emancipated from some of the constraints of an authoritarian system (Bevis & Watson, 1989; Patterson, 1996). A version of Eisner's model of educational connoisseurship has been put forth as an evaluative process that is representative of both practical and emancipatory knowledge interests (Bevis, 1989; Goldenberg, 1994; McGaghie, 1991). An interpretive-criticism model requires a participatory CNT-student evaluative relationship and incorporates qualitative evaluative practices. Interpretive-Criticism Model. In the interpretive-criticism model of evaluation, the CNT acts as a role model of connoisseurship while teaching students to do the same. A connoisseur, or critic, is an expert, both in the subject area under examination and in the art of appreciation (Bevis, 1989). A nursing critic is able to discern subtle, important qualities and characteristics of nursing practice and to make informed and discriminating judgments as to the merit of the practice (Goldenberg, 1994). In this model, criticism is a shared CNT-student formative and summative evaluative activity.
CNT and student work jointly to create a picture of the student's practice and determine what the practice means in relation to professional standards. Involvement in this method of evaluation requires a great deal of knowledge, skill, and confidence on the part of the student. Sophistication in perceiving and interpreting nursing situations requires clinical experience (Bevis, 1989). In addition, judging necessitates the application of criteria and standards, which presupposes knowledge of the content of nursing (Goldenberg, 1994). Thus, students will have differing abilities to take part in the criticism process depending on their clinical level in the program. And, as is discussed later in relation to self-evaluation, students also vary in their ability to self-critique and to participate in collaborative evaluative relationships depending on their individual level of cognitive and personal maturity. Therefore, a criticism model of evaluation is necessarily a graduated one in which the CNT is highly directive with beginning students, becoming less so as students amass the clinical and evaluative experiences they need to contribute to the process with increasing levels of competence. Criticism requires a great deal of CNTs as well. To participate in this type of evaluative process requires an expert's grasp of the theory and practice of nursing and of evaluation and criticism (Bevis, 1989; Goldenberg, 1994). Criticism involves the processes of seeing, describing, rendering, interpreting, and judging (Bevis, 1989). CNT and student must work together to develop the knowledge and skills required to notice and describe a clinical experience in rich detail, to determine what aspects of nursing practice are reflected in the description, and to judge their practice by comparing it against standards of quality care.
CNTs must also know how to utilize narrative and dialogue to construct and interpret clinical situations, how to use criticism as an evaluative tool, and how to teach the process to students. This is no small feat, and there is little guidance in the literature as to the specifics of teaching this complex evaluative process. Interpretive-criticism attempts to merge the concepts of formative and summative evaluation in that participatory criticism is considered a teaching-learning activity as well as an evaluative one (Bevis, 1989; Green, 1994). The process of criticism is intended to assist students to develop nursing knowledge and skill as well as to critique their use of knowledge and skill in actual practice. Thus evaluation is used as a method to teach both nursing practice and self-evaluation. Viewing evaluation as teaching is one way to address the issue of the CNT's dual role of evaluator and teacher. When the CNT's primary role is that of expert critic and co-learner, rather than evaluator, it lessens the need for students to view their interactions with the CNT as a "test" (Bevis, 1989). The joint determination of a student's success in meeting clinical requirements requires a sharing of responsibility and power. In reality, this sharing is not total because, although some of the power and authority of the CNT can be shared with the student, by virtue of position and expertise the CNT alone is accountable to the public, the profession, and the educational institution for evaluative decisions. In instances where the students are unable to accurately see, interpret, or judge their practice, the CNT's view will be privileged in the final decision (Bevis, 1989). As can be seen, trust is a central issue in the use of qualitative evaluation practices. Participation in evaluation can be an anxiety-provoking experience for students, and they will sometimes present a less-than-accurate account of their thoughts and actions.
This knowledge may cause CNTs to approach participatory evaluative relationships cautiously. Given the results from the earlier studies on students' perceptions of clinical evaluation, we also know that students have good reason to be wary of their CNTs. Evaluative learning environments that focus on perfect practice and the avoidance of error at all cost push students to create favorable accounts of their practice and to avoid interactions with the CNT. For collaborative evaluative relationships to work, all persons must be aware of those aspects of nursing and evaluation that create barriers to trust. Reed and Procter (1993) express their view on what is needed for trustful self-evaluation in nursing: The pressure to concoct a rationale for behavior will be largely determined by the culture of nursing and of assessment. If the culture of nursing is seen as intolerant of deviations from orthodoxy, and assessment is an exercise in identifying these deviations, then assessees who are skilled at presentation will camouflage their activities under the current professional rhetoric, and those less skilled (or most honest) will approach the assessment with fear and dread. In both cases the assessment will not reflect anything more than the ability to articulate practice in the currently ideologically acceptable way. (p. 181) Student self-evaluation and the evaluative methods of portfolios and journals are examples of interpretive, interactive, and participatory views of evaluation. Each is alleged to be beneficial to student growth and development and to expand both student and CNT ability to evaluate a student's practice. Each also presents similar threats to the quality and validity of the evaluative process and product, threats associated with their subjective and participatory nature. Student Self-Evaluation The concept of professionalism involves responsibility, accountability, autonomy, and self-regulation.
Self-evaluation is used in many nursing education programs as a means by which to develop and monitor these aspects of professional behavior (Bartels, 1998; Best, Carswell, & Abbott, 1990; Leino-Kilpi, 1992). Self-evaluation is said to provide opportunities for students to examine their progress, identify their strengths and weaknesses, seek feedback from others, and plan to improve in areas where needed (Green, 1994). Having participated in directing the evaluation, drawing conclusions, and recommending a plan of action, the student is believed to have good reason to support the suggestions for future practice and areas of evaluation (Green, 1994). As a result, students should develop as self-directed, self-motivated, self-reflective, and analytical professionals (Abbott, Carswell, McGuire, & Best, 1988). However, self-evaluation can also be an anxiety-producing experience. For example, students in Abbott et al.'s (1988) study stated that, although they valued a participatory role in evaluating their clinical performance, they saw self-evaluation as a more threatening experience than clinical evaluation itself. Anxiety results when students do not feel prepared to evaluate themselves and when they perceive that evaluation means to describe the negative aspects of their clinical practice (Abbott et al., 1988). Self-Evaluation as a Developmental Process As with becoming a connoisseur, self-evaluation is thought to be a developmental process (Abbott et al., 1988; Green, 1994; Pavlish, 1987). Students need instruction and practice in order to become comfortable with performance expectations and to develop an understanding of the self-evaluation process. Students also require other skills that are associated with maturity, for example, an ability to reflect on themselves and their practice, self-awareness, and an ability to self-disclose and receive feedback from others (Green, 1994).
Pavlish's (1987) model provides CNTs with guidance as to how to foster the development of students' self-evaluation abilities and when to include self-evaluation in the summative process. As students develop understanding of the domains and competencies of clinical practice, they are able to participate more in evaluating their own abilities. The role of the CNT is to provide feedback on student performance, including strengths and weaknesses, as a means of assisting students to understand how to evaluate their own performance. Thus, beginning students are dependent on the CNT to evaluate their abilities and have a limited ability to participate in self-evaluation. As students mature, their ability to evaluate their own performance is characterized by interdependence. Students still receive regular feedback from the CNT but are more aware of their strengths and weaknesses. Student self-evaluative data are still used for formative purposes only. During high learner maturity, students are ready to be independent in assessing their abilities. At this stage, self-evaluation data are very meaningful and can be added to evaluation data from the CNT in the making of summative judgments. There is evidence to suggest that little systematic effort has been made to teach self-evaluation skills to students in professional health science education programs (Arthur, 1995; Leino-Kilpi, 1992). Abbott et al. (1988) undertook a survey of students and CNTs in one hospital school of nursing to explore perceptions of self-evaluation. The results suggested that self-evaluation practices were inconsistent across the CNTs. Students indicated they found it difficult, were not well prepared, and did not fully understand the concept; as a result, they were not always sure what was expected of them in self-evaluation. Leino-Kilpi (1992) found discouraging practices in her study of self-evaluation in nursing education in Finland.
She observed the final summative clinical evaluation sessions of 81 graduating nursing students, their teacher (from a university school of nursing), and their clinical supervisor (staff nurses who supervise students in the clinical area). The evaluation process used in this school of nursing required student self-evaluation as one component. Leino-Kilpi found that the teacher's attempts to encourage and support student self-evaluation occurred only at the beginning of the sessions; the teacher then took over the evaluative discussion while the student assumed a passive role. Leino-Kilpi concluded that the teachers in her study did not have the skills needed to promote student self-evaluation. The above information suggests that CNTs need a clear understanding of the self-evaluation process and that they also need to develop practices that assist students to learn how to self-evaluate. Believing this, Best et al. (1990) developed a framework for teaching "collaborative evaluation". The framework outlines three phases of student development in self-evaluation, specifying CNT and student roles and specific instructional techniques for each phase. Their model is similar to Pavlish's, except that the use of collaborative evaluation is restricted to formative evaluation. Validity of Self-Evaluation Validity is usually determined by using the teacher's evaluation as the true rating and comparing the degree to which the student's rating matches the teacher's. Study findings on the validity of self-evaluations are equivocal (Green, 1994; Hay, 1995). In some research, students have been found to underrate or overrate themselves in comparison to their teachers; in others, student ratings have been found to correlate highly with their teachers' ratings. Arthur's (1995) review of research on student self-evaluation in the health care disciplines presents conflicting results regarding the validity of student self-evaluations.
She reviewed four studies comparing student and faculty evaluation scores and found high test-retest reliability in one study and little or no agreement in another; students' ratings were higher than the teachers' in some studies and lower in others. Krichbaum (1994) found that the 36 students in her study rated themselves higher than their preceptors did on individual descriptors and overall estimates of performance in a critical care rotation in both preclinical and postclinical ratings; however, the differences were smaller on the postclinical ratings. On the other hand, a sizable number of the students in Abbott et al.'s (1988) study felt that self-evaluation made them focus on weaknesses rather than strengths, resulting in self-criticism and an underrating of their performance. Other authors report a similarity between student and teacher ratings. Green (1994) reviewed five studies in higher education, including one small study in nursing, and concluded that self-assessments were similar to teacher assessments, with no consistent tendency for students to overgrade themselves. She also comments that even when research demonstrates differences between the marks of teachers and students, the differences are no more significant than those that occur between individual teachers. Jackson (1987) studied the dialogue journals of two CNTs and their 12 BSN students and four CNTs and their 23 associate degree students as a means to enhance CNT-student communication in clinical settings and as a formative evaluation tool. In her phenomenological analysis she found the students' views of themselves were comparable to those of the faculty, except in the case of three students who ended up failing the course. Hay (1995) offers one explanation for the similarity of tutor evaluations and student self-evaluations in a problem-based learning medical program. In his study, students initially evaluated themselves much higher than tutors did.
Over time, the tutors came to rate the students higher until the two ratings were closer in agreement. Hay suggests that the close-knit culture of PBL tutorial groups leads tutors and students to become experienced with each other's expectations so that their evaluations become more similar. They informally negotiate an acceptable compromise. One of the frequently stated concerns about self-evaluation is that students may inflate their evaluations to impress instructors, or, because of the judgmental nature of clinical evaluation, students will deny inadequacies or problems in their practice because they cannot afford to admit them (Reed & Procter, 1993). McGuire (1988) reviewed six studies on self-evaluation, including the literature reviews of each study, and concluded that "virtually all report significant agreement between self-ratings and supervisor ratings and adequate rigor in the former, when self-assessment is employed as one component in a program of formative evaluation" (p. 260). McGuire also states that results from general studies of self-reports suggest that the validity of self-evaluations diminishes as their impact on career opportunities increases. Best et al. (1990) agree, believing that summative decisions such as grades, salary increments, and promotions are instances where the price of personal objectivity and honesty may be too high, and that it is thus unfair to ask people to self-evaluate in these cases. Viewed through a critical lens, self-evaluation can be seen as a method whereby nursing education helps students become competent "accounters" of themselves and their work. As part of a two-year study of a diploma nursing program, Campbell (1995) observed how clinical evaluation processes help students learn to produce learning and nursing as textual reality (textual accounts are substituted for an experienced actuality in the learning environment).
In Campbell's interpretation, students' ability to participate in self-evaluation was dependent on their skill in seeing their nursing actions as instances of the concepts on their evaluation form. The clinical objectives of the evaluation form reflect ideal nursing practice based on the theory of what is possible, rather than the reality of nurses' work. As a result, students learn to create the "appearance" of adequate nursing practice. The CNTs were seen teaching students how to correctly "account" for their performance by assisting them to use the evaluation categories to "name" their performance. Students were also learning how to make questionable experiences "count" as compliance with the performance categories. Campbell observed that what the students actually did could be very different from how they made it look in their "official" accounts of their practice. This means that students learn to "stretch" the account of their actions to make their real practice fit the evaluation form. For instance, one student stated that when there are no clear examples of a performance category, then you need to "shovel it". Even though there is disagreement about the validity of student self-ratings and whether self-evaluative information should be used for summative decision making, self-evaluation is seen by most educators as a valuable means of communication between CNTs and students (Abbott et al., 1988; Green, 1994; Reilly & Oermann, 1992). The various methods of self-evaluation provide information that is hard to obtain by other means. Specifically, portfolios and journals have been studied as methods for developing and evaluating areas of nursing practice that cannot be quantified.

Portfolios

A portfolio is a focused, purposeful collection of student work that provides evidence of learning, progress, and achievement over time (Wenzel, Briggs, & Puryear, 1998). The contents of a portfolio can be specified by the teacher or determined by the student.
Portfolios are viewed as a beneficial method for promoting personal responsibility for learning, promoting and documenting student growth and achievement over time, developing and assessing self-reflection, tracking and promoting continuous lifelong learning, and promoting collaboration between CNT and students in the evaluation process (Jensen & Saylor, 1994; Mitchell, 1994; Wenzel et al., 1998). Portfolios are a well-established evaluation method in a wide variety of other fields such as business, art, architecture, design, and journalism. The use of portfolios as an evaluation method in nursing education has been found to result in some of the purported outcomes; however, portfolios also suffer from the same problems as all qualitative evaluation practices: they are anxiety producing for students, their accuracy as a data source depends on student developmental level, and CNTs are not sure how to utilize them effectively. Utilizing Benner's (1984) stages of proficiency, Wenzel et al. (1998) developed a framework to guide the development and evaluation of portfolios. Similar to the other developmental frameworks discussed in this section, novice level students start out with much more teacher direction, including specific guidelines for content selection for the portfolio. As competency in portfolio development and nursing skills builds, students become more independent in selecting materials for assessment. The framework specifies that portfolios of novice students be used for formative evaluation only, whereas those of graduating students should be included in their summative evaluations. Jensen and Saylor (1994) studied the usefulness of portfolios as a mechanism to facilitate reflection and assess students' professional development in three groups of nursing and physiotherapy students.
They found reflection played a central role in many of the portfolios in that the portfolios enabled the students to reflect upon their professional development throughout the semester. However, the quality of the portfolios was found to vary greatly depending on the amount of effort invested and the student's level of maturity. Students who were pressed for time or who had lower expectations for the assignment turned in portfolios that were "almost useless" (p. 354). Some of the students were unsure what they should include, which created anxiety until they became familiar with the process. Mitchell (1994) found that her sample of 24 midwifery students were mostly negative about portfolio use. Only the students who were more adept at expressing their personal thoughts and feelings were found to be positive about the portfolios. Many students expressed uncertainty over what was expected of them, and several reported being uncomfortable with having to record personal feelings and emotions. Students tended to find it a time-consuming chore rather than a growth-producing experience. The majority of students did not find that portfolios motivated them to learn, nor were the portfolios seen as contributing to self-awareness of their strengths and weaknesses. The evaluative nature of the portfolios affected several students negatively. Some students reported that they included what they thought the assessor wanted, whereas others said they did not want to express their innermost thoughts and feelings to a person who was going to be evaluating them. The tutors who assessed the portfolios thought it was possible to see personal and professional growth over time but found assessment difficult due to the subjective nature of portfolios. Mitchell (1994) felt that summative evaluation could have a profound effect on the way students contribute to portfolio use.
She also believed that the inability to quantify the various elements of the portfolio made grading of portfolios impossible. Jensen and Saylor also believe that portfolios are not suitable for grading (summative purposes); however, they support the use of portfolios as a means to evaluate student learning (formative evaluation). These two researchers suggest CNTs attempt to overcome the lack of quantitative criteria by utilizing qualitative methods, such as criticism. They do admit that qualitative evaluation of student learning remains underdeveloped.

Journals

Journals have been used extensively in nursing education both as a teaching and an evaluation practice to promote the development of critical and reflective thinking, help students make sense of and learn from their clinical experiences, and assist students to develop self-awareness (Brown & Sorrell, 1993; Burnard, 1988a; Callister, 1993; Cameron & Mitchell, 1993; Landeen, Byrne, & Browne, 1992; 1995; Paterson, 1995; Pierson, 1998; Sedlak, 1992). Journals are said to provide a means for dialogue between CNT and student, for exploring student thinking and attitudes, and for understanding students' lived experiences of their nursing practice and clinical learning. Research is beginning to show the valuable contribution that journals can make to a CNT's clinical evaluation practices. The ability of journals to assist CNTs and students to access subjective areas such as attitudes, values, feelings, and reflective thinking for qualitative analysis is demonstrated in several studies. Landeen et al. (1992; 1995) subjected the journals of 18 nursing students to both quantitative and qualitative analyses. These researchers found that the journals were rich with descriptions of the students' thoughts and attitudes. Sedlak (1992) examined the journals of 20 students to identify the learning needs of beginning nursing students.
In her discussion of the contents of these journals it is easy to identify aspects of student learning and nursing practice such as self-evaluation, self-awareness, empathy, and personal feelings. The benefits of journals for helping students make meaning of clinical experiences are discussed by Callister (1993). When analyzing the journals of some of her students, she found clear evidence of student thinking, feelings, and attitudes. Student journals have been found to provide evidence of reflective thinking in studies by Davies (1995), Richardson and Maltby (1995), and Wong, Kember, Chung, and Yan (1995). The latter two studies are important because they provide evidence that student reflection can be determined empirically from journals. Richardson and Maltby (1995) studied the diaries of 30 second-year BSN students and concluded that diary writing promotes the concept of reflective practice and assists students with the development of skills in reflection and learning. A framework based on Mezirow's (1981) levels of reflectivity was used to guide analysis of the diary contents. Of interest is that, although they found evidence of all of Mezirow's levels of reflectivity, 94% of the examples were at the lower levels. The researchers believe that diaries are an acceptable and effective means of monitoring and evaluating student reflection and learning in a community placement where contact and direct observation are limited. Wong et al. (1995) developed and tested a coding system for reflective journals and, in a study involving 45 post-RN students, found that student writing can be used as evidence for determining the presence or absence of reflective thinking. The study utilized a conceptual framework derived from the work of Boud, Keogh, and Walker (1985), and Mezirow (1991).
While they found that a gross determination was straightforward and reliable (determining whether a student was a non-reflector, reflector, or critical reflector), finer levels of discrimination were found to be problematic and not reliable. Journal writing is another learning/evaluation process that is thought to be developmental. Reflective writing is a skill that is thought to develop with practice and feedback, and one that appears to require a certain level of student maturity (Burrows, 1995; Paterson, 1995; Pierson, 1998). In their research, Landeen et al. (1995) found that CNT feedback on how to reflect rather than report, and trust between CNT and student, contributed to the development of the students' abilities to be reflective. However, there were a few students who appeared unable to reflect on an experience in any depth, regardless of feedback. Based on a consideration of educational theory on the cognitive development of students and her own experience as a nurse educator, Burrows (1995) concluded that students under 25 may lack the cognitive readiness and experience necessary for mature critical reflection. In her work with journalling, she has observed that older students tended to be more capable of seeing the worth of reflection, whereas the younger students did not keep up their journals, citing lack of time, boredom, and the limited usefulness of the exercise. Journals must address the formative/summative question in the same manner as other qualitative practices. Most CNTs believe that journals should not be formally evaluated, as this may discourage innovation and freedom of expression. For example, a number of students in Richardson and Maltby's (1995) research stated they felt unable to describe their feelings of discomfort because they knew they were being assessed; "these people are passing or failing you or judging whether you're a good nurse" (p. 239).
In their critical analysis of journal writing in nursing education, Wellard and Bethune (1996) concluded that rather than promote growth, journals encourage students to focus on reproducing existing ways of knowing. They believe that reflective writing is used as a method of surveillance that forces students to articulate the school of nursing's view of nursing practice. On the other hand, some CNTs believe that evaluation of journals is important because it demonstrates that student effort is valued. Some suggest that students may not maintain their writing if evaluation is limited to formative purposes (Burrows, 1995). In Richardson and Maltby's (1995) interviews, many of the students admitted they would not participate in journalling if the journals did not have to be submitted for inspection by their tutors.

Summary

There is no shortage of theoretical speculation on qualitative evaluation practices; however, the body of empirical research in this area is sparse. While the connoisseurship model has been attempted in classroom teaching with graduate students (Goldenberg, 1994), there are no reports of the use of this evaluation model in clinical teaching. Its value as a basis for evaluation practices remains speculative. With respect to egalitarian CNT-student relationships, anecdotal evidence indicates a general agreement that a partnership model works well for the purposes of formative evaluation; however, CNTs report difficulties when it comes to sharing power with students in summative decision-making (Hornak, 1997; Paterson, 1998). Most of the claims about the outcomes of self-evaluation also remain at the level of theory, as they have not been empirically tested (Best et al., 1990; Leino-Kilpi, 1992). There is still too little research on student outcomes in relation to interpretive practices such as portfolios, journalling, and CNT-student dialogue.
Influences on Evaluative Decision Making

Many variables can affect what a CNT will choose to observe and what is actually observed, how data are interpreted, and the judgments and decisions made about the meaning of a learner's performance (Hepworth, 1991; Orchard, 1992; Paterson & Groening, 1996; Reilly & Oermann, 1992). A CNT's evaluation practices have been found to be influenced by several factors (Duke, 1996; Krichbaum, 1994; Orchard, 1992; 1994b; Paterson, 1991; Stewart, 1991). Internal variables influencing evaluative decision making include the CNT's perspective on clinical teaching and evaluation, educational preparation and experience in teaching/evaluating, personal values and beliefs about students and their practice, level of self-awareness, and ability to reflect on her/his actions. External variables such as characteristics of the students and the policies and standards of the educational institution have also been linked to evaluative decision making. What follows is an examination of what is known about the effect of each group of influences on the evaluation practices of CNTs. Overall, there is evidence to suggest that the evaluative decision making of many CNTs is inconsistent and sometimes arbitrary, being based on factors other than the student's actual performance. This is particularly so with respect to summative evaluative decisions.

The CNT's Perspective

As already discussed, findings from several studies suggest that a CNT's perspective on the purpose of clinical teaching influences the type of evaluative practices that will prevail. This is particularly so if the CNT views clinical instruction as synonymous with clinical supervision (Morgan, 1991).
A focus on supervising students leads CNTs to develop practices that are aimed at protecting patients from student errors, for instance, extensively questioning students about what they know or don't know before allowing them to proceed with client care (Mogan & Warbinek, 1994; Morgan, 1991; Pugh, 1986b). An important outcome of Paterson's (1991) study was the documentation of several differing perspectives on clinical teaching. In her research, the various perspectives arose from knowledge and value claims in regard to teaching, and these views influenced how CNTs approached their teaching actions, including their evaluation practices. CNTs with a narrow applied science view of clinical teaching, that is, operating from task mastery and ability-evaluative perspectives, emphasized the gatekeeping function of their clinical teaching. The focus of this group of CNTs was on reproducing in students behaviors and patterns of thought that the CNTs felt were appropriate to nursing practice. These CNTs located evaluation at the centre of their clinical teaching. In contrast, the CNTs practicing from a broader view believed the purpose of clinical instruction was to foster the capability of students to make decisions in their nursing practice. The teaching practices of CNTs operating from professional-identity mentoring and moral responsibility perspectives focused on the professional development of students; that is, empowering student thinking, facilitating feelings of competence and worth, and forming partnerships to meet student learning needs. Evaluation appeared to be formative in nature throughout the rotation, with summative evaluation occurring only at the end. A group of CNTs interviewed by Hedin (1989) also appeared to hold professional-identity or moral responsibility perspectives. These CNTs had been identified by their department heads and chairs as "expert" nurses.
The criteria used to determine expertise and the number of CNTs interviewed were not reported by the researcher. The CNTs saw themselves as resource person, limit and standard setter, and evaluator. The category of evaluator was not discussed in the paper; however, evaluation practices could be identified from some of the data that was reported. The CNTs appeared to focus on formative processes and participatory evaluative relationships. They believed the goals of their clinical teaching were to ensure quality and safe patient care and to facilitate student learning. One CNT believed her evaluative practices should model what she hoped students would do in nurse-client interactions; if she evaluated students without validating her interpretations of what was happening, this would not reflect the humanistic interaction that she expected in nurse-client interactions.

Preparation for Clinical Evaluation

Evaluation of students has been reported by many CNTs as one of the most stressful and anxiety-producing aspects of clinical teaching and a role they feel the least prepared to perform (Duke, 1996; Lankshear, 1990; Paterson, 1991; Wood, 1987). Few have had the opportunity to study and develop the specialized knowledge and skill required for clinical evaluation, and few have backgrounds in evaluation and measurement (Duke, 1996; Karuhije, 1986; 1997; Orchard, 1992; Wood, 1986). Novice CNTs were found to be particularly prone to evaluation anxiety because of a lack of confidence in their ability to accurately evaluate students (Duke, 1996; Wolff, 1998). The eight diploma CNTs in Stewart's (1991) research reported that they had developed their evaluation practices from their personal experiences as students, the study of clinical teaching, trial and error, and suggestions from colleagues.
Stewart stated that during the research interviews, she had the impression that, although the CNTs had sincere intentions to do otherwise, they did not seem to operate from a foundation of theory, research, or formal education in their evaluative practices. She felt that limited formal educational preparation for clinical evaluation and a lack of confidence in the task seemed to increase the influence of subjectivity in the evaluative judgments of these CNTs. All of the sessional CNTs in Duke's (1996) research were found to be without the educational preparation needed to cope with the complexities of clinical teaching. Because of insecurity about their teaching abilities, especially in relation to making final decisions about student progress, this group of CNTs did not feel confident to act on their observations. Instead, they often assumed responsibility for poor student performance, attributing the student's shortcomings to a personality conflict or to something the CNT must have done wrong. Lack of confidence also resulted in the CNTs "hedging" when giving negative feedback to students, despite the CNT's belief that the behavior was grossly unacceptable. The lack of confidence in evaluative decision making seen in the sessional CNTs may have also been due to their lack of clinical teaching experience, as will now be discussed.

Clinical Teaching Experience

Paterson's (1991) research provided some information on the differences between novice and experienced CNTs with regard to evaluation practices. New CNTs in her study tended to emphasize skill performance in their evaluation, used formal check-off lists, and were more likely to supervise students directly than were experienced CNTs (but only the experienced CNTs indicated concern that direct supervision could impact the student's performance negatively). Novice CNTs assessed the skill and affective domains separately, whereas experienced CNTs assessed these two areas at the same time.
Novice CNTs also attributed narrowly, often to lack of ability or effort, whereas experienced CNTs considered many more possible causes of a student's clinical performance. Paterson felt that novice CNTs lacked the cognitive and experiential frameworks necessary to respond appropriately to the complexities of clinical teaching. The CNTs in Wolff's (1998) study also reported a developmental process whereby confidence in their ability to effectively teach and evaluate students grew as they gained theoretical knowledge and experience with clinical teaching. These CNTs also attributed their evolving abilities to opportunities to reflect upon their experiences and to supportive collegial and institutional environments. The CNTs described how they developed evaluation practices for working with borderline and failing students. Their initial lack of knowledge and experience with evaluation made borderline students troublesome and anxiety-provoking. As novices, they saw what they wanted to see and "rubber stamped" students, except when the student's clinical behavior was blatantly unsafe or unskillful. The CNTs felt they lacked the self-confidence needed to trust their judgment of the students' performances. Because they were uncertain, they required support from experienced CNTs to reinforce their interpretation of student behavior. It has been suggested that one of the differences between novice and expert practitioners is in how they use intuition to guide their clinical decision making (Benner, 1984). In the anecdotal literature a similar developmental process is suggested regarding the use of intuition by expert CNTs (Blomquist, 1985; Malek, 1988). Theoretically, as CNTs become more experienced, they are better able to grasp student situations as a whole, to separate relevant from irrelevant information, to see patterns, and to recognize what is salient in clinical situations with students.

Intuition
There is some evidence that experienced CNTs utilize intuitive practices in their evaluation of students and that this way of knowing develops with time and reflection (Paterson, 1991; Wolff, 1998). CNTs in Paterson's (1991) research stated they often acted on hunches to assess a student's level of supervision, learning needs, or level of competence. Often the CNT was not able to immediately articulate the source of the impression about a student, describing it as an ill-defined, vague, or "gut" feeling. The CNTs would then set out to validate their initial impression through the use of other assessment strategies. The inexperienced CNTs were less confident in their ability to perceive situations accurately by means of intuition. This was also true of the sessional CNTs in Duke's (1996) study. Although these CNTs reported that they developed "gut" feelings about certain students (apparently only those with problems), they stated they were uncomfortable with the subjectivity of this source of knowing and did not believe they could legitimately act upon intuitive knowledge when making decisions about student performance. If these CNTs could not objectify their evaluative decisions, they passed students regardless of their knowledge that the student was not performing at the required level. The CNTs in Wolff's (1998) study believed that as they matured, their repertoire of evaluation practices expanded to include intuition. As novice CNTs they were not sure what they were seeing. They reported intuitively knowing that something was not what they should be seeing, but they did not know how to articulate it (i.e., what it meant with respect to the clinical evaluation form). Like Benner's novices, the CNTs were initially rule bound, carefully following evaluation guidelines as determined by the program.
Once the CNTs had internalized course objectives, evaluation seemed to become second nature, and this understanding of the "whole" of the clinical course substantiated and reinforced their abilities to intuitively evaluate their respective students. As experienced CNTs they had a greater intuitive ability to identify which students had the potential to succeed and to immediately assess students as borderline. Similar to the CNTs in Paterson's research, they stated that they listened to their gut reactions and substantiated these feelings with ongoing observations and other sources of evaluative data.

Student Variables

There is a notable absence of models or frameworks that describe how CNTs make evaluative decisions in their actual practice. Of the four clinical evaluation models discussed in this review (Krichbaum, 1994; Orchard, 1994a; Paterson, 1991; Pavlish, 1987), only Paterson's includes such a description. Paterson (1991) derived the Crystallization Model of Clinical Teaching from her ethnographic study of six CNTs, in which she uncovered a pattern in how the CNTs made evaluative decisions about certain students. Paterson labeled this process "crystallization" to represent the solidifying of a position on a student's potential; i.e., whether the student would pass or fail. The CNTs in her study responded to unusual, unexpected, or inconsistent student behavior or attitudes by attributing the student's behavior to certain causes. Attributional dimensions of locus, stability, controllability, and globality were assigned by the CNT, and these determined CNT behaviors in the situation; that is, whether they responded with compassion and understanding, or anger. These CNTs' practices were influenced by their beliefs as to whether the cause of the student's behavior was internal or external, that is, whether the cause of the behavior problem was within or outside of the student's control.
If the problem behavior was believed to be within the student's ability to change, it was viewed much more harshly than something over which the student had little control. For instance, if the student's problem was thought to be the result of anxiety (something the student could not easily control), the CNT responded with compassion. Regardless of attribution, though, CNTs tended to respond with anger if a patient was jeopardized or potentially jeopardized by the student's action. The CNTs also made global judgments on student potential based on certain characteristics. Students who lacked ability were seen as hopeless; anxious and defensive students were described as hard to help; and uncaring and lazy students were viewed by CNTs as the worst of all students. Knowledge of the student was found to affect the attributional process. If the CNT knew the student well, she sometimes attributed behavior that would usually be ascribed to internal causes to external factors, in order to respond compassionately. This was particularly so for students who came to the CNT for help in dealing with personal and school-related concerns. Duke (1996) reports a similar finding. The CNTs in her research often acted "motherly" when they knew a student was having personal difficulties, even to the extent of excusing the unsatisfactory performance of these troubled students. The crystallization process has serious consequences for certain students because it results in a summative judgment about the student's potential that determines the student's ability to progress in the educational program. The crystallized judgment also influences whether the CNT helps the student to succeed or withholds help. When the CNTs in Paterson's (1991) study decided that the student would pass, the CNTs exerted effort to help the student to succeed.
Conversely, when the decision was that the student would fail, the CNTs focused on collecting data to substantiate the crystallized judgment, ignoring data that contradicted the decision. Most of the CNTs crystallized in the last two weeks of a clinical rotation, although early crystallization (e.g., in the first week) occurred in cases where the student was believed by the CNT to lack effort, fail to heed advice, demonstrate uncaring behavior toward other students and/or patients, or had a previous history of borderline performance and was continuing to have difficulties. Early crystallization was also likely to occur with novice CNTs or when the CNT had previous negative experience with borderline/failing students. All CNTs tended to delay crystallization in cases of beginning students, graduating students, and instances where they perceived the student lacked ability but was caring, or when students appeared to be motivated to learn or were demonstrating effort; these students were all evaluated up until the last day. Stewart (1991) identified numerous factors that influenced the clinical evaluation practices of the CNTs in her study. Characteristics of the student emerged as a major influencing factor. Student variables included personal traits (gender, ethnicity, and age), student behaviors (ability to express self, laziness, lateness, neatness, caring, preparedness for clinical, showing remorse/concern when an error was made, poor grammar and overuse of medical terminology in patient teaching, clinical speed), first impressions, and physical appearance (facial expressions and hairstyle). The CNTs in Stewart's research were also influenced by their personal experiences as students, nurses, and CNTs; input from others about a student (staff, teachers, patients); the student's patient assignment (heavy, light, reasonable); and expectations of the educational and the health care institutions.
Basing her view on anecdotal information, Reed (1992) concludes that students' personality and interpersonal skills affect the performance rating they receive. In her experience, students who are articulate and have pleasing personalities do better in evaluations regardless of their ability to perform clinical skills. Clinical assessors have also reported that students who express confidence in their work to assessors are assumed to be safe in practice by inference rather than observation (Clifford, 1994; Girot, 1993a). Student characteristics appear to have a substantial impact on evaluative decisions, particularly summative judgments. It is general knowledge in nursing education that there are a number of problem students who are not identified as such as early as they should be, or who receive a passing grade when they should have failed (Duke, 1996; Lankshear, 1990; Luttrel et al., 1999; Paterson, 1991). For instance, one half of the CNTs in Paterson's (1991) research reported that they were frustrated by having to fail a student in the last rotation because other CNTs had let the student move on. These CNTs recounted that it is not unusual to encounter weak students who have nothing documented about their problems, even though their difficulties were common knowledge in the informal CNT network. Paterson also found that two of the CNTs in her study would make a judgment that the student was able to succeed notwithstanding the grave concerns they had over some areas of the student's performance. Similarly, the inexperienced CNTs in Duke's (1996) research also downplayed their concerns over poor student performance in order to move the student on to the next rotation.
A CNT's unwillingness or inability to assign a failing grade has been linked to the close relationship that develops between CNT and student, the negative consequences of a failing decision for the student, issues surrounding the validity of such decisions given short rotations and the subjective nature of the clinical evaluation process, and potential negative consequences for the CNT.

The Borderline and Failing Student

Duke (1996) believes that the close CNT-student relationship made it difficult for the CNTs in her study to fail students. At times, knowing about a student's personal problems led the CNTs to excuse unsatisfactory performance. This group of CNTs also felt their roles of counselor, teacher, role model, and evaluator conflicted. This was especially evident in cases of the borderline or unsatisfactory student, where the CNT was faced with the incongruence of encouraging a student while the possibility existed of not passing that student. On the other hand, in Infante's "atraditional" model of clinical teaching, she recommended extended CNT-student relationships in which students and CNTs would work together over an entire semester and, when possible, the entire academic year. Theoretically, the benefits of this intensified relationship are to allow CNTs and students to become better acquainted, to promote the development of a strong working relationship, and to make the evaluation process less threatening (Infante et al., 1989). In Lankshear's (1990) study of clinical assessors and academic teachers, she found a variety of reasons for failing to fail a student. Many clinical assessors did not fail a student because they could not give a negative decision like this to a student face-to-face, especially "nice" students. It is known that CNTs often feel like they are "passing sentence" when evaluating students (Duke, 1996; Paterson, 1991; Wood, 1987).
Some of the CNTs in Duke's study said they did not want to make a decision that could halt or alter the student's choice of career. Indeed, Paterson suggests that CNTs may be reluctant to fail a student because a failed clinical course is a potential termination point for a nursing student; the student must usually withdraw from the entire nursing program until the clinical course is offered again. Schoenhofer and Coffman (1994) believe that their nursing school's curriculum philosophy directs CNTs in how to proceed in relationships with failing students. Students should be carefully advised along the way that a time frame exists, be provided with multiple legitimate opportunities to engage productively with the CNT in particular learning experiences, and be advised in a timely fashion that course objectives were not being accomplished at satisfactory levels. Honest feedback is communicated to the student in an environment of continuing to learn and of knowing the student as a person of value. Likewise, the experienced CNTs in Wolff's (1998) research felt that confidence in their ability to identify the expected level of student performance and to evaluate students fairly, their understanding of the curriculum philosophy, school of nursing policies, and their legal and professional obligations, and their caring for the student as a person of worth enabled them to inform students of a failing grade in a respectful and supportive manner. Some of the clinical assessors in Lankshear's (1990) research also believed that it was not right to come to a decision to fail a student in such a short amount of time (e.g., in an eight-week rotation). This group of assessors believed that students should not be evaluated until near the end of the rotation and that the first portion of the rotation was for settling in and learning. They were also of the view that the clinical setting was primarily for learning, not evaluating.
An exception to this belief was those situations in which a student's problems were serious and consistent, in which case the weak student needed to be identified early on in order to protect the safety of the patients. Short clinical rotations were also found to pressure the CNTs in Paterson's (1991) research to make early decisions about whether a student would pass or fail, often before there was enough evaluation data to accurately arrive at such a decision. Decisions as to whether students pass or fail are difficult in cases where the student's clinical performance is inconsistent. In one faculty workshop, 30 CNTs were given a scenario about a student whose attention to the psychosocial care of her patients was poor, in a rotation that had a clinical focus on this aspect of patient care. The CNTs were asked to make a decision as to whether to pass or fail the student (Brozenec, Marshall, Thomas, & Walsh, 1987). Many chose to pass the student and just as many chose to fail her. CNTs who chose the failing option did so because the student clearly did not meet the clinical objectives of the evaluation tool. Reasons given for passing her were more varied and included: other areas of her clinical performance were satisfactory, her poor behavior (or improvement) was inconsistent (i.e., she had been improving lately and could continue to do so), and the instructor may have been partly to blame for the student's poor performance in that she had not provided enough opportunities for the student to meet the course objectives. Decisions to fail students are also difficult when safety is not the issue. All of the CNTs in the Brozenec et al. (1987) workshop agreed that safety issues were an example where a failing decision is clear-cut and easier to make. Likewise, Lankshear (1990) found that the assessors in her study would not fail a student on "softer" objectives like communication or attitude alone.
However, if students had an attitude problem, some assessors would "nit-pick" until they found enough safety evidence to justify a failing grade (e.g., try to find minor errors such as the student leaving a brake off of a bed). This is similar to Paterson's discovery that once a CNT had crystallized a negative judgment about a student, (s)he would look for evidence to justify the decision. In Bondy's (1984) study, CNTs rated student performance highest in the interviewing scenario and lowest in the dressing change scenario. Bondy believes this reflects the tendency of CNTs to be more lenient with behaviors that are more open to subjective interpretation (the affective domain) and more critical as behaviors become tangible and measurable. In Duke's (1996) study of sessional instructors, she found that CNTs would not fail a student in an area of clinical practice that could not be objectified. For example, if the student had a problem in the affective domain, CNTs gave the student the benefit of the doubt. They were clearly not comfortable with their conclusions because they were "subjective"; in other words, the CNTs felt they needed to be able to justify a failing decision to others with "facts". The process of obtaining sufficient documentation to justify a failing grade was a stressful one for the CNTs in Paterson's research. There was no evidence in the studies reviewed that suggested that CNTs feel this way when having to justify a high grade for exceptional students. Often CNTs do not make evaluative judgments they know are correct because of external pressures. Fear of gaining a negative reputation with students, faculty, and staff has been cited as a factor in CNT reluctance to fail students (Lankshear, 1990; Paterson, 1991). Summative decisions have also been found to be influenced by the CNT's previous experience with evaluations.
Following a negative experience with failing a student, some of the CNTs in Paterson's research considered student performance in such a way so as not to have to fail the next student (Paterson, 1991). Several assessors in Lankshear's (1990) study also reported not failing students in order to avoid negative consequences for themselves. A major negative consequence for these assessors was a review of the evaluative decision, including having to be interviewed by a manager or taking "flak" from managers or peers. CNTs in Paterson's research were also influenced in their decision making by previous experiences with their decisions being appealed. In these cases, the appeal process was seen as something to avoid. However, the appeal process is an important part of any system of evaluation. The CNTs in Wolff's research felt that their increased confidence in dealing with borderline and failing students was often accompanied by a greater understanding of the institutional processes influencing their decisions to fail or not fail students, for instance, practices related to due process and the student appeal procedure (clearly identifying and delineating areas of concern, developing learning contracts, and documenting observations). These CNTs also reported that their ability to assign failing grades was facilitated by a supportive institutional environment. The role of institutional policies and processes in the evaluation practices of CNTs and what constitutes an effective and supportive evaluation system are under-researched areas in the evaluation literature.

Institutional Policy and Standards

The evaluation processes mandated by the educational institution shape a CNT's evaluation practices in significant ways.
Strong evaluation standards and procedures are thought to ensure a sound evaluation process that provides opportunities for faculty to strengthen the educational program, helps students know what is expected of them, and removes, with minimum difficulty, those students who are not progressing satisfactorily or who do not appear to be "salvageable" (Orchard, 1994a; Short, 1993). There is evidence that many schools of nursing lack evaluation standards and procedures and that when these do exist, the resultant emphasis on documentation of observable behaviors forces CNTs to focus on evaluation and make premature summative judgments. Based on a review of the literature on evaluation practices and systems for appealing evaluative decisions, Orchard (1994a) developed a model that captured the elements involved in the formulation of clinical evaluative judgments. According to Orchard's judgment model, the decision making process about a student's clinical performance consists of three interdependent components. The academic component involves CNT practices related to data collection, the comparison of data against pre-established academic standards, and reaching an academic conclusion. The administrative component includes those practices concerned with fairness and impartiality of the evaluative decision making (the procedures used to collect data and the process used to formulate and communicate to the student the CNT's perception of the student's performance). The final element is the natural justice component of evaluative decision making, which directs how standards are applied and what methods are used to make students aware of their clinical performance. Each component should be used to provide guidance to the institution and individual CNTs in developing and maintaining effective evaluation standards and practices.
Orchard (1994a) utilized her model to survey 81 Canadian schools of nursing in order to assess the relationship between institutional policies and procedures and student clinical evaluation practices. She found a general lack of standards and guidelines for the evaluation of students in clinical settings. A majority of the programs did not have clear, objective, and formally recorded standards against which CNTs measured students' performance. While many institutions had policies and procedures governing the conduct of students in clinical settings that could be used as criteria for the evaluations, there were few established standards for the actual process of conducting the evaluations. Orchard also recommended that all institutions have clearly stated behaviors upon which to determine performance expectations (i.e., outcome assessment and/or criterion-referenced evaluation models) to promote consistency in interpretation of clinical objectives and their translation into expected student behaviors. The problem of consistent practices has been noted by other researchers. The CNTs in Paterson's study found problems with the criteria used by other CNTs to evaluate students, stating their concern over the diversity of CNTs' interpretations of the various categories in the evaluation record. Complaints included the following: all students passed term one no matter what their level of performance, the written feedback on the student evaluation forms did not always tell them very much about students, there were inconsistencies in the amount that CNTs wrote, and important information about students did not always show up in the evaluation form. Institutional policies may operate mainly to protect the institution against litigation by disgruntled students (Orchard, 1994b). The result is that the institution requires CNTs to document student progress carefully, especially regarding failing students.
One of the CNTs in Paterson's (1991) study stated that she only had to justify evaluations when the student was failing. "Programs that place excessive emphasis on written documentation to support the achievement of each objective by students can lead instructors to transfer from their dual role of teacher/evaluator to a singular role of evaluator. This role alteration can result in instructors evaluating students' clinical performance without providing students with learning time to achieve mastery of the skills being assessed during the semester" (Orchard, 1994b, p. 249). Paterson (1991) found this to be the case in her study. CNTs in her research were expected to arrive at a judgment at the end of each rotation, in the form of a written summative evaluation. This institutional requirement made the CNTs focus their teaching efforts on the collection of sufficient data to complete the detailed evaluation form. Because they needed objective data, the CNTs were limited to evaluation practices such as observation and questioning. The CNTs felt pressured to write summative evaluations even though they did not feel they had sufficient data. Short clinical rotations (e.g., four weeks) were thought by Paterson to contribute to the CNTs' tendency to make early judgments about student potential. In her study, once the CNTs had formed a generalized assessment of a student, for instance, passing-failing or caring-uncaring, all data collected from that point in time were utilized to reinforce that judgment, or as one CNT put it, to "build your case".

CNT Reflexivity

Awareness of the various influences on their evaluation practices is thought to be essential in order for CNTs to improve the quality of the clinical evaluation process (Hepworth, 1991; Hunt, 1992; Wong & Wong, 1987).
While studying subjectivity in clinical evaluation, Stewart (1991) documented several processes that eight CNTs said they utilized to minimize the impact of subjectivity on their evaluation of students: reflexivity, verbal discussions with students, comparing data with clinical objectives, using skills checklists, using objective terminology that was descriptive in nature, and reconsidering their data when re-reading or re-writing their evaluative notes. This group of CNTs also claimed that, as awareness of their subjectivity increased, the influence of subjectivity on their evaluative decision-making decreased. The CNTs in Wolff's (1998) research felt that engaging in reflective thinking helped them to gain meaning and insight from their clinical teaching experiences and to develop a sense of themselves as competent CNTs. The ability to reflect on one's evaluation practices and to make changes may be selective. For example, if the CNTs in Paterson's (1991) research received data that contradicted their crystallized judgment of a student, or if information suggested their evaluation was inaccurate or unfair, the CNT sometimes reflected on her perspective of clinical teaching and revised it accordingly. But Paterson also thought that the CNTs were not aware of the fact that they might be making incorrect attributions, for example, that apathy could be due to causes other than laziness. Similarly, the CNTs in Wolff's (1998) research said that once they had informed students of borderline status, they focused on gathering evidence to support their assessment rather than focusing on learning, indicating that they did not have a process for addressing the possibility that their initial decision as to the student's level of performance could be in error. Thus, the ability to reflect is not sufficient; there must also be an awareness that one's evaluation practices may be inaccurate and/or ineffective and a commitment to critically examine them.
Summary

We know some things about several sources of influence on the evaluation practices of CNTs. Developing skill in clinical evaluation appears to require education, experience, awareness of the many subjective influences (particularly personal values, beliefs, expectations, and biases around students), and a commitment to reflexivity. The available literature shows that summative decision making is a difficult and little understood process, particularly in the case of borderline and failing students, and that institutional policies and standards can be a double-edged sword. Finally, aside from Paterson's (1991) research, most of what is known has been gathered through surveys or interviews with CNTs; there has been little empirical evidence gathered from CNTs as they actually participate in evaluative decision making. There is still much to be learned about how CNTs develop expertise in clinical evaluation. Questions to be answered include: How do experienced CNTs utilize intuition in evaluation? What is the result of CNT reflection on evaluative decision making processes? What should be included in educational programs for CNTs? What is the best way to support novice CNTs in developing effective evaluation practices? And how can we increase consistency in evaluation without creating an evaluative focus that turns clinical teaching into an ongoing evaluation session?

Conclusion

In this chapter I have presented my review of the literature on clinical evaluation in nursing education. We know that CNTs develop and utilize a variety of practices for collecting, analyzing, and interpreting data, and for reaching conclusions about the overall quality of a student's clinical practice, and we are beginning to understand how to evaluate students effectively within the complex learning environment of the clinical setting, given the myriad of variables that can influence evaluative processes.
Overall, there is substantial information about quantitative methods for evaluating students' clinical progress (checklists, rating scales, standardized performance examinations, and written tests). While much has been written about practices for collecting data, little is known about the actual processes utilized by CNTs as they interpret evaluation data and make judgments about the value of a student's clinical practice. There is also a lack of literature on how CNTs collect, interpret, and judge data about a student's performance from a qualitative perspective. Research around evaluation practices incorporating ideas such as dialogue, meaning-making, multiple ways of knowing, reflection, and participation is still in its infancy, as is information on how the clinical environment and the CNT's practices shape and influence a student's performance. The current body of knowledge is also limited by research methodologies consisting primarily of surveys, reports of perceptions, interviews, and self-reports, with few research designs incorporating methods to gather data from CNTs as they are actually involved in the evaluation of students. There is a need for further study on clinical teaching and clinical evaluation to establish a theoretical and research base for teaching and evaluating students in the clinical area. This study proposed to contribute to this body of knowledge by providing empirical evidence of the actual practices of a CNT working within a critical-interpretive curriculum. The methodology should allow for detailed description of the interrelatedness of sociocultural and interpersonal influences on the CNT's thoughts, actions, and behaviors concerning clinical evaluation. This study should also contribute to the CNT's understanding of her clinical evaluation practices, thereby providing a rich opportunity for praxis.
Information gained from this study should also contribute to the education of other CNTs by its implications for the teaching and development of effective evaluative practices.

CHAPTER THREE

Research Design

The research design is "the logical sequence that connects the empirical data to a study's initial research questions and, ultimately, to its conclusions" (Yin, 1994, p. 19). In this chapter, the design for this research is discussed, including the methodological framework, strategies selected for data collection and analysis, considerations around the role of the researcher, processes to promote rigor, procedures for addressing ethical considerations, and limitations of the study.

Research Method

The purpose of this research was to inquire into the clinical evaluation practices of a CNT in a manner that captured the complexity of these practices, promoted the CNT's understanding of her practices, and contributed to a general understanding of clinical evaluation in nursing education. A critical single case study was selected as the research method because of its potential to achieve all three purposes.

Case Study Approach

Case study method allows for the systematic and holistic exploration of a CNT's actual clinical evaluation practices within their contextual setting. Case study is "a strategy for doing research which involves an empirical investigation of a particular contemporary phenomenon within its real life context using multiple sources of evidence" (Sharp, 1998, p. 785). Yin (1994) suggests a case study is suitable where "what" and "why" research questions are posed, where there is no requirement for control by the researcher over events, and where the material sought is contemporary. The primary advantage of a case study approach is the depth and breadth of information it can provide.
Case studies produce a wealth of descriptive information, thus allowing for an intimate knowledge of a participant's thoughts, feelings, intentions, past and present actions, and environment (Polit & Hungler, 1995). Case studies are also useful for understanding the influence of social institutions, structures, and ideologies on a participant's practices; "thickly described case studies take multiple perspectives into account and attempt to understand the influences of multilayered social systems on subjects' perspectives and behaviors" (Gilgun, 1994, p. 371). In addition, case studies provide the opportunity to explain relationships that might not emerge in other types of design. The intensive probing that characterizes case studies often leads to insights concerning heretofore unsuspected relationships (Lewis, 1995; Sharp, 1998). A case study was the approach used in this research because the inquiry questions were about the "what and why" of clinical evaluation practices. The actualities of this area of clinical teaching in nursing have not been widely studied or described. The what (actual practices) and the why (influences) were examined by describing and exploring a CNT's evaluation practices as these occurred in the real-life context of a current clinical teaching experience with a group of students. A secondary goal of this study was to develop knowledge of clinical evaluation through focusing on one CNT's particular practices. Findings from this study should be able to be used to understand and even account for the evaluation practices of other CNTs. In addition to generating a deep understanding of the particular, a case study can also clarify, expand, and modify what is already known about clinical evaluation (Dale, 1995; Gilgun, 1994; Yin, 1994). "The case study approach focuses on understanding the particular in addition to the general...and has the unique potential to integrate, rather than dichotomize the particular and the universal within science...." (Meier & Pugh, 1986, p. 197). More is said about this in the discussion of generalizability. However, "case study is not a methodological choice, but a choice of object to be studied" (Stake, 1994, p. 236). Thus, a methodological framework is required to guide the inquiry process. A critical constructivist ontology and epistemology were selected as the philosophical underpinnings of the study to reflect the researcher's view of the nature of reality, how we can learn about the world of human action, and the purpose of human inquiry (Schwandt, 1994).

Critical Constructivist Inquiry

The world of lived reality and situation-specific meanings that constitute the general object of investigation is thought to be constructed by social actors. That is, particular actors, in particular places, at particular times, fashion meaning out of events and phenomena through prolonged, complex processes of social interaction through history, language, and action. (Schwandt, 1994, p. 118)

A critical constructivist perspective was chosen to underpin this study because it enabled the researcher to understand the complex world of clinical evaluation from the point of view of those who live it, encouraged self-reflection and deeper understanding on the part of the researcher and participant, and was capable of generating empirically grounded theoretical knowledge. Critical research methods are a "...change enhancing, interactive, contextualized approach to knowledge-building...." (Lather, 1991, p. 53). A critical constructivist view holds that reality and knowledge are socially constructed, contextual, dependent on interpretation, and imbued with power relationships (Lather, 1991; Reason, 1994).
From this perspective, researchers seek to make clear how participants construct meaning, to clarify what and how meanings are embodied in the language and practices of the participants, and to uncover the political nature of knowledge, knowledge development, and our day-to-day experiences (Lather, 1991; Schwandt, 1994). The clinical evaluation practices of CNTs are more than just a set of techniques and methods; these practices are located in social traditions and the personal histories of the CNT (McTaggart & Garbutcheon Singh, 1986). CNTs operate from personally constructed realities that inform how evaluation is framed and how evaluative decisions and conclusions about a student's clinical performance are made. Thus, a CNT's evaluative perspective reflects particular traditions, habits, and values, as well as their ideological roots. In critical inquiry, evaluation practices are understood as social constructions based in educational and administrative discourses that legitimize certain practices as educationally worthwhile. Educational discourses are statements, ideas, rituals, practices, and social relations that become routinized in familiar, accepted patterns, gradually coming to constitute the status quo (Kemmis & McTaggart, 1988). In addition, the discourses informing evaluation in nursing education are often implicit, unacknowledged, and unquestioned. As a result, many CNTs are unaware of their own purposes, theories, and behavior and the consequences of these for their work (Reason, 1994). Critical inquiry attempts to uncover, understand, and attenuate the influence and constraint of these hidden forces controlling human action (McTaggart & Garbutcheon Singh, 1986).
A central goal of the inquiry is to "alleviate oppression by spurring the emergence of people who know who they are and are conscious of themselves as active and deciding beings, who bear responsibility for their choices and who are able to explain them in terms of their own freely adopted purposes and ideals" (Lather, 1991, p. 105). Social change in the form of improved practice is a desired outcome of critical research. Critical inquiry aims to promote greater teacher self-understanding and an appreciation of the social forces that shape the educational context (Streubert & Carpenter, 1995). This results in emancipatory knowledge, bringing everyday practice knowledge to the forefront of our consciousness, supplemented by an awareness of the ideological construction of our consciousness and the educational and political results of such construction (Lather, 1991; Reason, 1994). Common characteristics of critical approaches are that they involve collaboration between researcher and participants, are iterative and cyclical, attempt to solve problems in practice, are oriented to change, and strive for theory development (Hyrkas, 1997; Reason, 1994). Critical inquiry is guided by a belief in the democratic ideal (Streubert & Carpenter, 1995). Participants are considered partners in the research relationship; they should benefit from participating in the research and be empowered by both the process and outcome of the study. Lather (1991) suggests that there are three interwoven issues to consider in the design of critical research: the need for reciprocity, engaging in dialectical theory building, and how to deal with questions of validity. The first two issues are addressed in the following section as an introduction to the discussion of methods for data collection and analysis. Issues of validity, or as Lather puts it, how to construct research designs that are "vigorously self-reflexive", are discussed in the section on rigor.
Reciprocity

In critical inquiry, methods for data collection and analysis reflect a belief in the social construction of the research encounter and the social, dialogic nature of inquiry (Lather, 1991). Thus, the research process and researcher-participant relationship should involve reciprocity, mutuality, and negotiation. Lather (1991) describes what she considers necessary for "full reciprocity" in critical research: interactive, dialogic interviewing that requires self-disclosure on the part of the researcher (Anderson, 1991; Oakley, 1981); sequential interviewing to facilitate collaboration and deeper probing of research issues; negotiating meaning by recycling description, emerging analysis, and conclusions with the participants; and creation of conditions that enable participants to question the taken-for-granted beliefs and the authority culture has over all of us. There is also reciprocity in that both researcher and participant gain from the inquiry process. The researcher gains knowledge relevant to the research findings and knowledge that enhances practice, while the participant benefits by gaining insight into her situation and discovering ways to deal with it. Empowerment of the participant should result from being actively involved in the research. Participation should enable her to broaden her abilities and be better informed when making choices about her practice (Muscari, 1994). Mutuality directs researcher and participant to negotiate aspects of the inquiry process, for instance, the specific arrangements around time and place for interviews, how the relationship should be constructed, what contributions the participant should/can make to the study, and how and when the research duo will validate findings and conclusions.
Negotiation occurred throughout the inquiry, beginning with initial process issues and/or concerns and continuing through a dialectic of iteration, analysis, critique, reiteration, and reanalysis that led eventually to the joint construction of the findings. It was expected that the research would result in the researcher and participant co-constructing a picture of the CNT's evaluative practices and then determining what the practices mean in relation to underlying values, beliefs, and assumptions.

Dialectical Theory Building

Knowledge development should proceed by allowing the data to generate concepts and propositions while at the same time realizing the a priori theoretical frameworks that are guiding us. Theory should grow out of context-embedded data without letting theory become the "container into which the data must be poured" (Lather, 1991, p. 62). The researcher needs to be self-reflexively aware of the role that theory plays in what and how data are generated and understood and must continually question both the theoretical and social construction of her own and the participants' thoughts and experiences (Streubert & Carpenter, 1995). The researcher needs to look for shared tacit assumptions influencing what are being constructed as data, or "the research may serve to re-create them 'as if' they were factual" (Thorne, 1997). A dialogically reciprocal design facilitates understanding of the participant's world and also serves as a corrective against the tendency to insert the researcher's preconceptions into the processes of meaning-making. The design should facilitate what Lather (1991) calls "collaborative theorizing". Researcher and participants are involved in a reflexive critique of the experiences, concepts, and theoretical explanations that are unfolding from the inquiry.
Making the various explanations for an experience explicit increases the researcher's and participant's understanding of those observations and interpretations (Streubert & Carpenter, 1995). Finally, Lather advises that researchers develop awareness of how to examine participants' self-understandings without appearing as the all-knowing demystifier of ideology. The researcher-participant relationship must be such that participants feel able to challenge and question the researcher and experience dialogue and negotiation as a consensual meaning-making process and not as a disguised presentation of the researcher's predetermined interpretation of an event. Emancipatory research should serve an energizing, catalytic role; the results should be both meaningful and useful for the participants, allowing for a view of how larger social issues are embedded in the particulars of their everyday practice.

The Case

In qualitative research, the researcher deliberately selects a sample of individuals who are theoretically representative of the culture, role, or position needed for the study (Polit & Hungler, 1995). Stake (1994) stresses the importance of proper case selection to the design of a case study. The case needs to be a good representative of the population of potential cases in order to offer an opportunity to learn about the phenomenon of interest, in this instance, the clinical evaluation practices of a CNT. Because this case study was about an individual, participant selection was central to the quality of the research (Muscari, 1994). In selecting a participant, the researcher needed to recognize that potential informants within a population do not have an equal amount of knowledge or equivalent experiences and, furthermore, some members of the group are more willing to be interviewed and to share their experience than others (Morse, 1991). Data collection decisions are also inherent in the case selection.
Specifying time boundaries to define the beginning and end of the case is necessary in order to determine the limits of data collection and analysis (Yin, 1994).

Participant Selection

Participant selection was determined by the purposes of the research as well as constraints such as subject availability and time. To start with, because the study involved frequent in-depth interview and discussion sessions, the CNT needed to teach in a nursing program that was located near the researcher. This meant that it was highly likely the researcher would be working with a colleague. The difficulties inherent in studying a peer are discussed in the sections on the role of the researcher and the limitations of the study. "A good participant is someone who is undergoing the experience to be studied and who is able to reflect and provide detailed information" (Muscari, 1994, p. 223). Thus, this research required a CNT who was able, interested, and willing to engage with the researcher to identify and articulate her practices and reflect on the underlying values, beliefs, and assumptions underpinning her thoughts and actions. One of the major difficulties of critical research rests in the defensiveness of human beings and their ability to produce self-fulfilling and self-sealing systems of action and justification (Reason, 1994). Thus, a suitable CNT should have a level of self- and ego-development at which she was aware that perceptions, including her own, are always framed by assumptions, and that such assumptions can be tested and changed (Reason, 1994). Likewise, Lather (1991) discusses the fear, dislike, and hesitance that most people have when sorting through their lives and rebuilding according to their own values and choices. She stresses that researchers must have strategies for maintaining respect for the participants while they explore and challenge their assumed worldview.
Time Boundaries

"The case researcher faces a strategic choice in deciding how much and how long the complexities of the case should be studied" (Stake, 1994, p. 238). The amount of time that a researcher should spend studying a case is variable (Bogdan & Biklen, 1992). The duration of the study must be long enough to collect sufficient data for describing and understanding the case, while at the same time being short enough to fit within the resources of the researcher. In addition, the longer the study time, the more data that must be managed and analyzed. Too much data has been recognized as one of the disadvantages of the case study method (Yin, 1994). For the purposes of this study, it was proposed that the case extend over a six-week clinical teaching block for the following three reasons. First, this is the usual duration of clinical rotations within the local nursing programs. Except for unusual situations, a CNT expects to arrive at an evaluative judgment about a student's clinical performance within this time period. Second, six weeks allowed data to be collected with respect to the clinical evaluation of an entire clinical group of students. In the nursing program from which the CNT was selected, this meant a group of seven to eight students. Thus, a six-week time block provided a degree of variety in potential evaluative situations. And third, the resources of the researcher did not permit a longer study time.

Criteria for Selection of the Case

Based on the preceding discussion, the following criteria were chosen to guide the selection of a CNT for the study. The CNT must:
1. Teach a clinical course within the Collaborative Nursing Program in British Columbia (CNPBC).
2. Teach a clinical course at least six weeks long.
3. Be a faculty member at a university-college in the interior of British Columbia.
4. Teach students in a clinical course requiring the CNT to directly supervise students in the clinical area.
5.
Have at least five years of experience in the clinical evaluation of students.
6. Be able to discuss the philosophical underpinnings of a critical-interpretive curriculum.
7. Be able to discuss clinical evaluation of nursing students.
8. Be interested in reflecting on and uncovering the basis of her clinical practices.
9. Be willing to work closely with the researcher in analyzing and interpreting her practices, and thus be open to critique of this aspect of her being.

Recruitment

To recruit the case, the researcher contacted the associate dean of the local school of nursing to request permission to address faculty regarding the research project. The researcher then presented details of the proposed study to interested faculty, and the participant, J, volunteered. She met all the criteria except that the six weeks of clinical practice were condensed to three weeks in the clinical area (with twice as many clinical hours each week). The fact that J was keenly interested in participating and was willing and able to discuss and reflect on her practices made her an appropriate participant.

Data Collection

In case study research, many choices must be made about persons, places, events, documents, and other materials to observe or otherwise study. Decision making about the type and amount of data to collect is referred to as internal sampling (Stake, 1994). Several principles guide sampling decisions. It is important to sample widely so that a diversity of data types is explored (Stake, 1994; Yin, 1994). The researcher may cast a wide net in hopes of capturing significant data sources, or sampling can be directed by the researcher's pre-understanding of what is already known or suspected about the topic being studied. In addition, sampling choices should be guided by the quality of data produced by the various sources, as "...some pieces of data are simply richer and deserve more attention" (Bogdan & Biklen, 1992, p. 67).
Discerning which sources are richest may occur through trial and error or through prior knowledge of the case, context, and/or study topic. As is typical of this type of inquiry, data generation began in a broad, comprehensive way and became more focused as insights and understanding emerged (Burns & Grove, 1997). As data were gathered, the researcher and participant formed tentative interpretations. These interpretations were then used to focus successive sampling decisions and data generation. Representativeness of the data was ensured by sampling a variety of evaluation situations from J's practices, utilizing multiple methods. Situation and document selection proceeded both inductively (J generated data from her actual practice) and deductively (guided by the literature on clinical teaching/evaluation and the researcher's previous experience with variables thought to influence evaluation practices). Data were generated for this study through the use of four methods: (a) tape recordings using a modified think-aloud (TA) technique; (b) semi-structured interviews with J following analysis of each think-aloud tape; (c) review of written evaluation documents; and (d) concept mapping. Findings in a case study are more likely to be convincing and accurate if based on different sources of information. Multiple sources of evidence essentially provide multiple measures of the same phenomenon, thus increasing validity (Gilgun, 1994; Muscari, 1994).

Modified Think-Aloud Technique

A modified TA procedure was chosen as the primary method of data collection because of its capability for producing rich, in-depth, and accurate data about the participant's everyday world in a relatively non-intrusive fashion (Fonteyn & Fisher, 1995; Paterson, Thorne, Crawford, & Tarko, 1999). Think-aloud is a data gathering method in which participants verbalize their thoughts during the performance of a cognitive task.
This method has been used in education, psychology, medicine, and nursing to describe the information that individuals concentrate on and how it is structured during clinical reasoning and decision making (Fonteyn, Kuipers, & Grobe, 1993). The think-aloud method can produce two types of verbal reports. Concurrent verbal reports result when subjects talk aloud or think aloud while performing cognitive tasks. Concurrent reports provide direct verbalization of what the individual is actually thinking about during the cognitive task (Fonteyn & Fisher, 1995). Retrospective reports result when the participant talks aloud after the task has been completed. This requires retrieval of information from past experiences and might provide inconsistent or incomplete information about one's thinking during a specific task, although it might provide a more complete description of one's reasoning strategies. Concurrent TA data augmented with retrospective data obtained through a follow-up interview conducted subsequent to the TA session is thought to provide a more complete and detailed description of subjects' reasoning during a problem-solving task (Fonteyn et al., 1993). The study of clinical teaching has traditionally utilized direct methods of observation such as participant observation (Paterson, 1991). Using the TA method in a clinical setting has some of the advantages of direct observation without several of the disadvantages. The main advantage is that data collection in an actual clinical setting provides an opportunity to study the contextual background of the CNT's behavior. Think-aloud is also useful because a volunteered statement from an informant is less likely to reflect the researcher's biases and preconceptions than one which is made in response to a question by the researcher (Paterson, 1991). The think-aloud method avoids some of the difficulties of participant observation, such as remaining unobtrusive and staying in the role of researcher vs.
that of clinical teacher, nurse, colleague, and/or consultant (Paterson, 1991; Pugh, 1986b). As a research method, TA was originally used with case scenarios and other clinical simulations. Research participants talked aloud as they worked through the clinical scenario, and their verbalizations were recorded on audio or video tapes. Since then, Fonteyn and Fisher (1995) have used the TA method for studying clinical reasoning in an actual setting and found it could be done without disrupting nursing care or ward routine. In a series of studies, Fonteyn and Fisher had participants carry a hand-held tape recorder into which they talked aloud as they were thinking about and/or providing direct patient care. The researchers also took field notes and questioned the nurses shortly after TA recordings to clarify their actions and specific decisions that had been made, or to understand the significance of patient data that had been mentioned frequently during the TA. A modified TA technique has been used to collect qualitative data for content analysis in research into the decision making of persons living with diabetes (Paterson et al., 1999). The participants carried a portable tape recorder with them for a specified time period and made daily recordings of their thinking and feelings. As part of the interview process that followed analysis of transcripts of each tape-recording session, the researcher had the opportunity to clarify or expand on what was said in the tape. Paterson et al. found that participants in the diabetic study had no difficulty, a week after the taping, recalling what they had recorded. The participants believed this was because the detail they had provided in the recording served as a reminder of the situation. In this study, J was provided with a voice-activated portable tape recorder and instructed in the data collection procedure as described below.
The quality of data depended on the participant's ability to remember to think aloud and her ability to articulate her thoughts and feelings (Fonteyn & Fisher, 1995; Paterson et al., 1999). In an attempt to improve the quality of the TAs, specific guidelines for the think-aloud procedure were reviewed before the data collection period, an opportunity was provided for J to practice the method before data collection officially began (practice carrying the portable tape recorder in a pocket and thinking aloud while working with students), and feedback was provided after the initial recordings were received and reviewed (Fonteyn & Fisher, 1995; Paterson et al., 1999). J was instructed to carry the tape recorder with her at all times, for the entire period of one clinical rotation. She was directed to record her thoughts and feelings as follows:
1. You should make recordings whenever evaluative thoughts, feelings, and questions occur during the course of clinical teaching. You should record additional thoughts and so forth immediately following the clinical day.
2. It is likely you will reflect on student evaluation at random and unspecified times in between clinical teaching sessions (e.g., driving to the office, during dinner, at bedtime). To capture these important thoughts, you are encouraged to record your reflections on evaluative incidents whenever they occur, and as close as possible to the actual occurrence of the thoughts.
3. You should record thoughts and experiences that you perceive to be important and that may relate to evaluation of some aspect of the student's performance. You will initially be given general directions as to the type of data to collect based on a broad conception of what constitutes clinical evaluation.
You should talk aloud any of your thoughts, feelings, concerns, and questions related to the collection of data about a student's clinical performance, the analysis and interpretation of the data, the conclusions and judgments arrived at about the student's practice, and recommendations as to what action should follow.
4. You should record as many details as possible. Suggested areas include:
- the situation (client factors, location of incident, presence of others, what occurred)
- staff (involvement, nature of staffing, staff's past experience as/with students, administrative support)
- your interactions with the student, staff, client, or others
- student reasoning, actions, interactions, verbal and non-verbal communication, and performance
- feelings expressed (verbal and non-verbal) by actors (in and out of the presence of others)
- your thoughts/questions about the situation, student, and performance, before, during, or after
- your personal experiences and beliefs around nursing practice and education
- themes, summary statements, and further comments about the situation
5. As data collection and analysis proceeds, you and the researcher may decide on specific areas that you should address in your recordings.

Seven TA tapes were received and transcribed by me, verbatim (except for pauses, and "oh's" and "ums"). Three TAs were recorded during the weeks in the clinical area. J was usually not able to record in the clinical area, and most of the TAs were done at the end of the clinical day. J recorded four TA tapes about her written anecdotal (instructor) notes, the students' written work, her further thoughts from the previous clinical week, themes and issues from the previous week's interview, and her final evaluation sessions with the students.

Interviewing

J was interviewed twice at the beginning of the study and seven more times following preliminary analysis of each set of the TA tapes.
An additional two interviews took place during construction of her concept map, to clarify the map and to validate the study findings, bringing the total number of interviews to nine. The interviews were semi-structured, and each lasted between one and two hours; the average interview was 90 minutes in length. All interviews were taped and transcribed by me. An early transcript and analysis was reviewed by the thesis chair to assist in establishing and maintaining procedural and interpretive validity. Preparation for each interview involved systematically reviewing previous tapes, interviews, and analyses and making notes about questions that should be asked. In addition, the researcher tried to attend to what was being said/learned as interviews were taking place to develop additional questions as needed (May, 1991). The initial interviews focused on collection of demographic data (age, nursing education, clinical teaching and nursing practice experience, course work in clinical teaching/evaluation) and initial stimulation of J's thinking about clinical evaluation. Initial interviews were semi-structured to allow J to guide the interview and illuminate her own perspective on the topic (May, 1991). Exploration of J's clinical evaluation practices was guided by broad trigger questions such as "How do you evaluate students in the clinical area?" and "Tell me about an evaluation situation that stands out for you." The first interviews were mostly directed by prompts and clarifying statements such as "Tell me more about that" and "When you say ___, what do you mean?" As TA data were generated, interview questions were developed each week based on data previously collected in interviews and/or the TA statements of the previous week. Questions were also generated by situations that arose in discussions with J.
For instance, when it appeared that J was influenced by the student's attitude, I would ask, "How did your thoughts about the student's attitude factor into your evaluative decision in this case?" or "How did the emergency with student D's client affect your evaluation of her performance at the time?" The pattern of interview, analysis, interpretation, and re-interview allowed J to comment on or modify my understandings and the emerging themes. While posing questions, I attempted to reflexively observe myself and the interview dynamics (Hutchinson & Wilson, 1994). I tried to simultaneously attend to the form of questioning, what was said, my interpretations of what was said, and the interaction dynamics, including reactive effects. Researchers must also attend to the theoretical assumptions underlying their choice of questions. When research is done within the researcher's area of expertise, subtle factors, such as use of language and the researcher's prior knowledge and theoretical perspective, may have significant consequences in the process of interviewing (May, 1991). Non-specific language and open-ended questions need to be used until the participant's terms are identified and defined. Subsequent questions can then be posed using the participant's language. Reactive effects of researcher and participant on one another can influence the process and outcome of the inquiry. In a study such as this one, where repeated interviewing was required, the research could not proceed without a relationship of mutual trust between researcher and participant. Depth and scope of data were facilitated by a research relationship characterized by engagement. "Personal involvement is more than dangerous bias - it is the condition under which people come to know each other and to admit others into their lives" (Oakley, 1981, p. 58).
Rapport was assessed by the specificity and type of information shared and by indications that J was comfortable and willing to share the information (Hall & Stevens, 1991). More is said about reactivity in the section on the researcher's role.

Document Analysis

A third method of data collection involved the analysis of the final clinical evaluation documents for the students in the clinical group. Five students consented to have their evaluations included in the study. As the study progressed, it became apparent that J's written anecdotal notes (her instructor notes) were a rich source of information about her evaluative practices. These documents were added as another source of data. The term content analysis can be used with or without reference to an actual research method (Polit & Hungler, 1995). Traditional content analysis refers to a method of quantifying narrative material and involves categorizing each word or phrase in a text with labels that reflect concepts and then counting incidents of the concept (Boyle, 1994). In this study, content analysis did not proceed in the classical form. Analysis of the text of the written evaluations focused on identifying concepts, themes, and patterns in a manner similar to the analysis of the tapes and interviews. The content of the evaluation documents was compared and contrasted with the emergent analysis from the tapes and interviews in an attempt to clarify, elaborate on, and expand the description of J's evaluation practices.

Concept Map

J was instructed to create a concept map that represented her perspective on clinical evaluation. A concept map is a schematic representation of concepts and the relationships between them. Concept mapping was introduced to the educational field by Novak and Gowin (1984) as a method for getting at the thinking of students in order to improve the teaching-learning enterprise.
Concept mapping has been used as a research method, a teaching-learning strategy, a technique for curriculum planning, a method for integrating concepts and theory from the literature, and a reflective tool to facilitate personal discovery and development (Deshler, 1991; Munroe, 1988; Novak & Gowin, 1984; Paterson, 1994b). For this study, concept mapping was utilized as a method for explicating and clarifying J's evaluative perspective as a basis for analysis and critique. Concept mapping in this study was based on the process utilized by Paterson (1991) to assist six clinical teachers to formulate concept maps of their views of clinical teaching. J was asked to construct a map of the concepts that represented her view of clinical teaching. Two unstructured interviews took place to clarify the meaning of the concepts and how they were interrelated, which resulted in revisions of the map. A further purpose of the final two interviews was to discuss and validate my interpretation of both the concept map and the data analysis to date. The end result of the mapping process was a graphic representation of the basis of the CNT's evaluative practices, including influencing factors (see Appendix A).

Data Analysis

Data analysis was guided by general methods for data management and analysis as described by Huberman and Miles (1994). The overall aim of analysis is to organize, synthesize, provide structure to, and elicit meaning from the data (Polit & Hungler, 1995). General processes for data storage and retrieval, data reduction, data display, and conclusion drawing and verification (i.e., developing categories and codes, linking categories, identifying and integrating themes and patterns, diagramming) were drawn from the practical data analysis strategies of Riley (1990) and the constant comparison procedure of grounded theory (Corbin, 1986).
Within a critical, constructivist design, the constant comparison method is not utilized for theory generation as originally intended by Glaser and Strauss, but simply as a way to process data (Lincoln & Guba, 1985). A single case study of short duration is insufficient for formulating theory. Data collection and analysis occurred concurrently and in a recursive and cyclical manner. Themes, issues, and questions arising from the preliminary analysis of each set of data were compared with units of data noted in previous tapes/interviews and discussed and clarified with J in an attempt to describe what was occurring in her clinical evaluation of students and why practices were carried out in a certain way. Apparent relationships between influences and evaluation practices were reviewed and examined further with J. Understandings and questions from the interview were then used to guide the next data collection set.

Data Management

Questions of how much material to transcribe verbatim are usually addressed by considering the resources of the researcher (transcription is time consuming) and whether the data are relevant and central to the study (Riley, 1990). Sandelowski (1994) suggests the researcher ask if a transcript is necessary to achieve the research goals and, if so, what features of the interview should be transcribed and what can be ignored. In this study, the TA tapes and interviews were transcribed verbatim (except for pauses, etc.). The transcriptions attempted to preserve as much detail as possible, especially material that might prove useful for illustrating situations, concepts, themes, and/or reasoning about J's practices. A detailed log of the location of material was kept for each tape, using the counter function of the recorder. Thus, I was able to retrieve significant quotes or passages when needed.

Data Analysis Procedure

1.
The transcripts and documents were read in detail, noting how J's thoughts, actions, and interactions with students and others were described; margin notes were made about concepts, ideas, or phrases that best described the how and why of her evaluative thought and action and to record initial reactions. A summary of the transcript was written, pulling out the main points.
2. Category codes were developed to classify elements of the data. The transcripts were re-read and the text coded. Significant words and phrases were highlighted and rewritten in the margin according to categories.
3. The text of each transcript and document was compared and contrasted with that of other documents, and the similarities and differences were noted. This process continued as more data were collected in order to define, develop, and integrate the categories. This also involved a move back and forth among data sets to determine the presence, variation, and/or absence of patterns.
4. The categories were linked by searching for themes, recurring regularities, and patterns, and looking for relationships among categories, actions, and events. Several concept maps were constructed to assist in exploring, describing, and ordering the emerging findings. The thematic pieces were woven together into an integrated whole, an "integrated description".
5. Summaries of analytic and interpretive thinking (memos) were written throughout the analysis process. Memos included thoughts and questions going through my mind when reading, discovering, building, and linking categories; coding and re-coding; and theorizing. Analytic memos were kept in a diary alongside my reflexivity entries.

Rigor

My point is, rather, that if illuminating and resonant theory grounded in trustworthy data is desired, we must formulate self-corrective techniques that check the credibility of data and minimize the distorting effects of personal bias upon the logic of evidence. (Lather, 1991, p.
66). All research must be concerned with providing "...the grounds upon which findings are considered plausible or convincing and procedures are viewed as legitimate" (Huberman & Miles, 1994, p. 438). Thorne (1997) suggests that four general principles form the basis of evaluation standards for judging the theoretical, epistemological, and technical soundness of qualitative research. First, the research must demonstrate epistemological integrity, i.e., design decisions must reflect a consistent ontological, epistemological, and methodological stance. Second, theoretical claims must be consistent with the manner in which the phenomenon under study was sampled. Third, the analytic logic at all steps of the study must be made explicit. And finally, the researcher's interpretations must "fairly illustrate or reveal some truth external to his or her own bias and experience" (p. 121), i.e., the researcher must make explicit the relationship between the data and abstract concepts. Several views, based on these four principles, have been advanced regarding appropriate criteria and procedures for establishing and judging rigor in qualitative research (Hall & Stevens, 1991; Lincoln & Guba, 1985; Leininger, 1994; Sandelowski, 1986). In this study, criteria and procedures for establishing trustworthiness and authenticity were drawn from a synthesis of those discussed in Guba and Lincoln (1989b), Lather (1991), and Hall and Stevens (1991).

Trustworthiness

Sandelowski (1993) suggests that trustworthiness is a matter of persuasion, i.e., it depends on the degree to which the researcher is able to make the research practices visible and therefore auditable. Trustworthiness requires attention to issues and principles concerning the credibility, transferability, dependability, and confirmability of findings. Credibility refers to the degree to which the findings accurately represent the experience of the study participants and not the researcher's preconceived views (Sandelowski, 1986).
Transferability is concerned with issues of generalizability, which is discussed in greater detail below. Dependability describes the quality and appropriateness of the inquiry process and is maintained through careful documentation of the logic behind the methodological and analytic/interpretive decisions that were made. Confirmability refers to the findings themselves: the data, interpretations, and findings of the study are actually confirmable, i.e., the "...constructs are actually occurring rather than mere inventions of the researcher's perspective" (Lather, 1991, p. 67). Confirmability requires dependability. It is essential that the data can be tracked to their source and that the logic used to assemble the interpretations into wholes is both explicit and implicit in the narrative of the case study (Guba & Lincoln, 1989b).

Authenticity

Various criteria have been proposed that relate to the democratic and empowerment aims of critical constructivist inquiry (Guba & Lincoln, 1989b; Hall & Stevens, 1991; Lather, 1991). In this study, two criteria were used for judging authenticity: authenticity of relationship (a synthesis of related criteria from Hall and Stevens' and Lather's work) and catalytic validity (Guba & Lincoln, 1989b; Lather, 1991). Criteria to judge authentic inquiry relationships attend to mutuality, reciprocity, and negotiation of meaning and power (Lather, 1991). The quality of the researcher-participant relationship is assessed for engagement, involvement, rapport, honesty, mutuality and trust, sharing of power and control, degree of dialogue, and reflexivity (Hall & Stevens, 1991; Lather, 1991). Empowerment of the participant in the form of improved and expanded understanding of her situation/world is the basis of catalytic validity. Catalytic validity refers to the degree to which the participant has gained self-understanding and self-direction (Lather, 1991).
Procedures to Ensure Trustworthiness and Authenticity

In this study, the following procedures were used to establish trustworthiness and authenticity of the research process and findings:
1. A detailed research proposal and final report were written that describe, explain, and justify the methodology and methods.
2. Detailed documentation of data collection and analytic processes and decisions was maintained in a retrievable form, as suggested by Huberman and Miles (1994). This included:
- raw materials (tapes, personal notes, evaluation documents)
- partially processed data (transcriptions with margin notes; summaries)
- transcripts with specific codes attached
- a record of criteria used in applying coding categories
- analytic materials (memos)
- data displays
- successive drafts of analysis (re-coding), linking of categories, and progression of writing
- the researcher diary
3. Triangulation occurred across data sources and data collection procedures to determine the congruence of findings among them. Multiple sources of evidence allowed for the development of converging lines of inquiry, bringing together more than one source of data to focus on an issue (Muscari, 1994; Yin, 1994).
4. Checks took place for the most common or insidious biases that can creep into the process of drawing conclusions (Huberman & Miles, 1994). There was an active search for contrasts, comparisons, outliers, and extreme cases, and checking that descriptions, explanations, or theories about the data contained the typical and atypical elements of the data. Several attempts were made to deliberately try to discount or disprove a conclusion drawn about the data.
5. To avoid theoretical imposition, I attempted to clarify my preconceptions about clinical evaluation and consider how these might affect the research. I tried to focus on conceptualizing the participant's perspective, rather than interpreting based on my predetermined theoretical frameworks (Hutchinson & Wilson, 1994).
I dialogued weekly in a diary to test my values, beliefs, assumptions, and theoretical alliances.

6. A respectful relationship was established and maintained with J. We attempted to assess our respective influences on the interaction and the impact of feelings and reactions on data gathering and analysis (Lipson, 1991).

7. J was involved in the joint construction of meaning from tapes and interview sessions and in setting the direction for further data collection and analysis.

8. I reflected and wrote regularly in my researcher diary. This is discussed further in the section on the role of the researcher.

9. I submitted samples of work (data, transcriptions, analysis, analytic memos, diary) to my thesis chair for review, feedback, and/or validation of thinking/findings.

Generalization

When a case study is undertaken, the issue of generalizability must be addressed. The claims one may make about generalizability, and the ways one might assess it, differ depending on whether the research is aimed at idiographic interpretation or nomothetic explanation (Johnson, 1997). Nomothetic generalizations rely on probability sampling and other techniques to ensure external validity (Sandelowski, 1996). Idiographic approaches assume a unique case as the unit of analysis and are primarily concerned with interpretation and meaning (Johnson, 1997). Case studies permit idiographic generalization (Sandelowski, 1996).

Morse (1997) states that all inquiry is theory focused. Concepts and theories provide a backdrop that justifies the research and provides a context in which the results are reported. Support for one's findings is located in existing theory. In case study research, generalizability does not come from the participant's representativeness, but rather from the adequacy and explanatory power of the concepts and theory that are used to understand the data that emerge from the study (Sharp, 1998).
It is the theory or model used to explain the findings that can be generalized to other situations (Morse, 1997). Idiographic interpretation relies on the ability of the reader to recognize attributes and match images. Idiographic work is aimed at enriching understanding, enlarging insight, and capturing new possibilities. It should open up new possibilities and help the reader see things in new ways (Johnson, 1997). Thus, generalizability depends on how well the case study is conceptualized, whether it is presented in sufficient detail so that it is interpretable and can be tested in individual situations, and whether it is sufficiently compelling to convince practitioners of its usefulness (Gilgun, 1994).

Role of the Researcher

This research design relied heavily on the researcher's use of self. If the researcher's self is the primary instrument for collecting and processing data, then the qualities of that instrument must be known (Lipson, 1991). There are many possible ways for the researcher to influence the construction and interpretation of data. The researcher's values, beliefs, and expectations may create selective listening and influence what is seen and understood (Field, 1991). In addition, the researcher's response to those being studied affects her/his analysis and interpretation of the data. Reflexivity acknowledges that the researcher is a part of, rather than separate from, the data and exploits this self-awareness as a source of insight (Lipson, 1991).

Problem of the Insider Role

In this study, the role of the researcher could be described as that of an "insider" (Field, 1991). J and I knew each other well, and I was very familiar with the clinical setting in which she was evaluating the students. The study of a peer group can be beneficial in that familiarity may assist the researcher to focus on what is happening in the group and her/his own response to it.
However, when the study topic and setting are familiar, both researcher and participant may assume too much; important pieces of data can be overlooked and behavior taken for granted (Field, 1991). There is also the danger that the researcher will over-identify with the participant and adopt her/his way of thinking (Paterson, 1994a). Finally, the researcher must be conscious of perceiving events not from the perspective of a CNT, but from that of a researcher. This is especially true of novice researchers (Field, 1991).

The participants being studied may experience the research process as an evaluation or judgment of their teaching ability (Field, 1991). Hurt feelings can result, especially if the participant perceives the researcher's discussions as suggesting that things are in need of improvement. Participants may also ask the researcher for evaluative feedback or affirmation of the appropriateness of their teaching behavior (Paterson, 1991; Pugh, 1986b). It is important for the researcher to clearly state that the purpose of the study is neither to gather data for evaluation nor to establish the effectiveness of the CNT's behavior. In addition, the researcher should consider how to respond if the information gathered is not flattering (May, 1991). Extreme tact and consideration of the subject's self-esteem are required if it becomes evident that the CNT needs considerable improvement in her clinical teaching (Pugh, 1986b).

Researcher Reflexive Diary

In qualitative research, the possibility of researcher effects (reactivity) must be acknowledged and dealt with, especially in designs where reciprocity and partnership are involved (May, 1991). It is recommended that researchers devise a method for the regular, ongoing, self-conscious assessment, documentation, and analysis of the possible influence of the researcher's biases, feelings, and behaviors on the research process and outcome (Huberman & Miles, 1994).
Lipson (1991) agrees that this sort of self-assessment is necessary in order to avoid countertransference, or "writing down the researcher's own problems and preconceptions as data" (p. 86). Reactivity in the researcher-participant interactions was assessed using the analysis framework developed by Paterson (1994a). This framework provided a deliberate and thoughtful structure for analyzing intra- and interpersonal influences on the collection and interpretation of data. The framework directed me to consider five factors thought to shape interactive behavior: emotional valence; distribution of power; importance of the interaction; goal of the interaction; and effect of normative or cultural criteria. Each area was systematically assessed to identify possible reactive effects and also to direct further questioning and taping. Notations (memos) of my ongoing efforts to analyze and theorize about the data were also kept in the diary. These analytic materials were used both to develop my thinking and to track it.

Ethical Considerations

Researchers are responsible for ensuring that measures are taken to protect the participants' rights to self-determination, privacy, anonymity, and confidentiality, and to protect participants from discomfort and harm (Burns & Grove, 1997). The right to self-determination is protected by ensuring that consent to participate in the study is both informed and voluntary. This consent requires that participants be apprised of actual or potential positive and negative consequences of the research and/or publications resulting from the research. Confidentiality requires the meticulous handling of information shared by the participant. Steps must also be taken to prevent unauthorized access to both the data and the identities of the participants.
The proposal for the study was approved by both the University of British Columbia Behavioral Sciences Research Board and the University College of the Cariboo Ethics Committee for Research and Other Studies Involving Human Subjects. To ensure the participant's right to informed consent, she received a detailed written consent form that included an explanation of the purpose of the study and the research design (see Appendix B). She had an opportunity to discuss any concerns she had regarding the study. This was especially important given the nature of the research design, which required her to maintain an intensive dialogical relationship with me for the purpose of examining her evaluative practices in detail (which potentially involved the examination of her personal values, beliefs, and assumptions). The written consent form contained the following components:

1. An explanation of the purpose of the study, including a detailed description of data collection procedures, her role in the data analysis procedures, the number of tapes and interviews requested, and the amount of time required to complete tapes and interviews.

2. A discussion of the voluntary nature of consent and her right to determine what information she wished to share and/or withhold, and of the provisions that would be made for the two of us to review the consent on a regular basis and make changes as necessary to accommodate our evolving understanding of the research procedure and process. This ongoing revisiting of consent recognized the evolutionary nature of qualitative studies, which makes it difficult to know in advance exactly what kinds of questions might be asked or what potential risks might be involved in the future (Lipson, 1994).

3. Steps to ensure that any information resulting from the research was kept strictly confidential. Procedures for the management of the tapes and documents were outlined. Procedures to protect the anonymity of participants (J, students, clients, and staff) were listed.
The actual names of all involved persons did not appear on the transcripts or in the research report. All persons were given an alphabetical code; pseudonyms will be used in the research report and/or any other publications related to the study, and non-relevant case material will be distorted. J is to be involved in decisions as to what details may be published. We also discussed the difficulty of ensuring total anonymity in a single case study where participant and researcher are peers, and the potential for colleagues to be able to identify her in the written accounts of the study.

To ensure the students' right to informed consent, each of the students received an explanation of the study during their orientation to the clinical rotation and was encouraged to ask questions. The students received a detailed written consent form similar to the form for the CNT (see Appendix C).

Limitations of the Study

The major limitation of the research is the short duration of the study period. Ideally, a case study of this nature would extend over more than one clinical rotation in order to provide more data with which to further develop the picture of J's evaluation practices. A second limitation relates to the ability to generalize the findings. While a case has been made for the ability of a case study to add to the body of knowledge on a topic, the scope of generalizability is limited by the fact that the study is descriptive-exploratory in nature and only one case was studied (Morse, 1997). However, if the data "fit" the experience of other CNTs, then this limitation may not have a significant effect on the ability of the study to inform the practices of other CNTs (Johnson, 1997). The think-aloud method depends on J's ability to accurately record her thoughts in detail. Most clinical settings are busy places, and there is little "down" time for CNTs.
This may result in rushed recordings or an increase in retrospective recordings. The willingness of J to volunteer may reflect a strong interest in clinical evaluation, which could affect the research findings. The major criticism of volunteer samples is that they are "biased" by virtue of the selection process: the use of a volunteer facilitates the inclusion of a certain type of participant with a certain knowledge (Morse, 1991). For this study, the purpose and intent of this sort of sampling was to obtain a willing, informed, and articulate participant; thus, bias in the sampling was used positively as a tool to facilitate the research (Morse, 1991). J may have recorded information she wished the researcher to hear. The use of multiple methods of data collection and the co-construction of meaning were intended to uncover and counter this possibility. The researcher and participant know each other. We have worked together for approximately eleven years. Although we teach in different areas of the program, we have been involved in many philosophical discussions and debates about clinical teaching, evaluation practices, and factors impinging on our ability to be effective CNTs. Both of us have acknowledged that the study findings have undoubtedly been influenced by our prior shared knowledge.

Conclusion

In this chapter, the research design of critical case study has been detailed. The process of the case study method and the theoretical underpinning of critical constructivism were reviewed. Proposed strategies for data collection and data analysis, and the role of the researcher, have been presented. A discussion of processes to promote rigor in this study specifically addressed triangulation and construct, face, and catalytic validity. The procedures proposed to protect the human rights of those involved in the study were outlined and, finally, limitations of the study were suggested.
CHAPTER FOUR

The Findings

In this chapter, I discuss J's evaluative practices and the sources of influence on her evaluative decision making. I detail her practices for collecting and interpreting information about students' clinical practice, for arriving at judgments about the quality and safety of a student's performance, and for developing recommendations as to what action should follow. Factors influencing her evaluative decision making, such as the availability of opportunities for clinical practice, the amount of time J had with each student, and her views on nursing practice and clinical evaluation, are interspersed throughout the chapter. A significant aspect of J's evaluative practices involved balancing the different purposes of clinical evaluation; that is, teaching, ensuring safe practice, and documenting student progress. The chapter ends with a discussion of the practices J used to increase her awareness of potential sources of bias in her evaluative decision making. A schematic representation of her evaluation practices is presented in the Figure.

The Case: J

J is a CNT in a baccalaureate nursing program at a medium-sized university college in the interior of British Columbia. The nursing program is one of a consortium of several college, university-college, and university schools of nursing that implemented an interpretive-critical baccalaureate nursing curriculum in 1992.

Background in Nursing Practice and Education

After graduating from a BSN program in 1985, J worked as a medical-surgical and oncology nurse in a metropolitan cancer clinic. She then moved to her current location, where she worked in the neurological intensive care unit of the local hospital. She began teaching part time for the school of nursing in the diploma program in 1989, obtaining a full-time position in 1990. She continued to work on a casual basis at the hospital for two years and completed a post-RN critical care course.
In J's first experience as a CNT, she taught surgical nursing with students in the final two semesters of what was then a six-semester diploma program. In 1993, she developed a health science course focusing on pathophysiology for the new curriculum and has taught this course since its inception in 1994. Since that time, she has taught surgical nursing with third year students in the fall semester, and with second year students in the winter semester and the four-week May consolidated practicum experience (CPE). J began MSN studies in 1993, completing her graduate degree in 1995 with a major in adult health and a minor in education. Her coursework included roles and functions of the nurse educator, curriculum development and instruction, and designing classroom evaluation instruments. She also studied social and cultural aspects of counseling because she believed it would assist her in looking at cultural biases affecting her interactions with students. An important influence on her development as a CNT was her experience with curriculum change as the school of nursing moved from a behavioural-based diploma program to a critical-interpretive baccalaureate program. As a student, J had time to reflect on the "curriculum revolution" in the stimulating and supportive environment of graduate studies. Over the summer, she examined her teaching in light of the theory she was studying, returning to her teaching in the fall and winter with a greater understanding of her practices.

The Study Context

During the study, J taught a group of eight third year students in their second rotation of Semester V. Students in the nursing program participate in acute care experiences throughout the whole of second year (Semesters III and IV), a four-week consolidated practicum experience in May (CPE II), and two rotations in the fall semester of year 3 (Semester V). The students move through six-week rotations on medical, surgical, maternity, pediatric, and psychiatric nursing units.
In Semester VI, some students enter a "bridge-out" experience in which they do 12 weeks of preceptored clinical learning and three weeks of theory to prepare them to write their registration exams. Most students elect to continue in the degree completion stream, in which Semester VI consists of community work and then a return to acute care settings for another four-week CPE. Students then complete another semester of community work, followed by a six-week community clinical experience and a six-week preceptored clinical experience, usually in an acute care setting. Students are officially evaluated by comparing their practice to the domains of clinical practice, competencies, and quality indicators (performance indicators) outlined on the Practice Appraisal Form (PAF). The PAF contains 39 competencies categorized under five domains of practice. The domains and competencies are based on Benner's (1984) work, the curriculum philosophy, the RNABC Standards of Practice and competencies expected of a new graduate, and the CNA Code of Ethics. The PAF for Semester V is in Appendix D.

The Rotation Format

The clinical course was designed to be six weeks in length, with 3 hours of seminar and 12 hours (two days) of clinical practice per week (12 clinical days in the rotation). Because of the students' heavy reading and assignment workload, the format was changed to a condensed time frame of three eight-hour days on alternate weeks (9 clinical days). This made for a heavy clinical week, but gave the students a light week every second week. Because of a larger number of students than usual in the third year class, two clinical groups needed to utilize the unit J usually had to herself. In addition, the scheduling of the health science course was changed and J had only two days available for clinical teaching. As a result, J and the students spent two 12-hour days every other week in the clinical area (6 clinical days in total).
The Unit

J and the students were on one of the surgical units of a 260-bed regional hospital. She had taught exclusively on this unit for the past 10 years. The unit had a small number of beds that functioned unofficially as a step-down unit from the ICU. These clients tended to be more complex in terms of physiological status and nursing requirements. There had recently been a large staff turnover, which resulted in the unit being staffed by a small core of experienced surgical nurses and many new graduates and casual RNs. J reported that she thought she had a good relationship with the majority of the staff and that they worked closely with her and the students.

The Group

J worked with a group of eight students consisting of seven female students and one male student. To protect anonymity, students are referred to by letter (Students A-H) and all by the female gender. Six of the students had been together since the beginning of the program. To accommodate the scheduling needs of students and faculty, there had been minor shuffling of students between the various groups, resulting in two new students joining the six core students for this rotation. At the end of the study, J made a global assessment of the students utilizing broad criteria and a normative process. According to her, the group consisted of two strong students, three average-to-strong students, one average student, one weaker student, and one student on a learning contract (LC).

Evaluative Decision Making

In the following section, I will detail J's evaluative decision making processes. Illustrated are the thinking and decision making processes she used to collect, analyze, and interpret the various pieces of data to arrive at conclusions about students' practice. J's evaluation took place within the context of developing the students' nursing practice. As J worked alongside the students, she evaluated their performance.
Evaluation often led to strategies aimed at improving the student's practice and then to further evaluation.

Data Collection

The clinical evaluation process began for J with the new group of students. At the beginning of her relationship with the students, J knew something about each of them from having previously taught most of them and from discussions with colleagues. Aside from these sources of prior knowledge, J viewed each student as an unknown entity: a blank canvas upon which a picture of the student's practice would emerge. The picture was filled in, bit by bit, by collecting many examples of the student's practice, referred to by J as the search for the "pieces of the puzzle".

Sampling

Sampling was a central theme in all of the TAs and interviews. J believed that many rich examples of a student's practice made it more likely that the final picture was representative of the student's actual level of practice. This theme of sampling as getting the whole picture was captured repeatedly in her metaphors "pieces of the puzzle" and "snapshots of practice". Sampling determined the quantity and quality of the pieces of the puzzle.

The "Plan". The plan for the rotation was an outline of the kind of clinical opportunities that J wanted for each student over the course of the rotation. The plan reflected what she thought were the most important experiences for students, in terms of their development as nurses, and for herself, with respect to gathering the data she believed would tell her about the quality and safety of the student's nursing practice. J developed her list of essential learning experiences for students from a combination of her previous experience with students, the types of clinical experiences that were typical for the unit, and her interpretation of the clinical competencies outlined in the PAF.
For a third year student, they all have to have an experience working with complex clients with multiple needs with lots of healing initiatives going on. Because there will always be something changing in there and it requires high level thinking and organization. There is a lot of things going into that so everyone of them has to have that experience. They all need to have a team medication experience. I like them all to be in a situation where they are going to have to do some sort of teaching. I look for an experience where they have to deal with some psychosocial issues. So I try to give every student an experience where those things are going on, plus they all need experience with thoracic surgery, vascular, urology, and general surgery. (Oct 31)

The rotation plan also included an individualized component. During J's first meeting with the clinical group, before they began clinical practice, she interviewed each student. The students discussed goals they had developed jointly with their previous CNT. J attempted to design learning experiences that helped students achieve both the rotation goals and their individual goals.

Student G wanted to work on, she is terrified of mini-bags and IV meds because she's had very little experience other than her second rotation in second year. So she got, not a lot of IV meds, but she had a lot of, a ton of anti-emetics and analgesics to give. We gave all the antiemetics by mini-bag and by the end of the day she was just flying, not nervous at all. (TA Oct 26/27)

Tracking. The "Plan" was used in conjunction with J's anecdotal (instructor) notes to track student growth. These tracking processes helped J keep abreast of individual student strengths and areas to work on, which in turn guided sampling and focused the time she spent with each student. J used her plan and her instructor notes, in conjunction with "spot checks" during the clinical day, to routinely review what she knew about each student.
Tomorrow I'm going to spend more time with Student D and a few others, to get a better handle on how they're doing and I'll do a little more talking about specifics with their clients in terms of knowledge and following through on a few things related to discharge planning and some of the psychosocial issues. There really wasn't any, other than incidental teaching, there wasn't any planned teaching required for anybody today. That may change. I'll see what happens tomorrow. (TA Oct 26/27)

Clinical assignment. J assigned a clinical workload to each student. The assignment provided opportunities for the students to learn and develop their practice, while at the same time providing opportunities for J to evaluate their clinical practice. The assignments depended on the opportunities available on the unit and that week's sampling needs. J felt that the student assignment had an enormous influence on the students' learning opportunities and her ability to evaluate.

I think the difference is in the quality of the experiences they had on the floor, because Student F, her first week had four clients, not super heavy or anything, but there was lots going on, lots of assessments that she needed to do, was organized, was on top of that, and the next week I gave her that one fellow who was very complex, very challenging, she did an outstanding job with him, dressing, lots going on, teaching, and she's the one that made the assessment he could do it if we paced it at his level and took it slowly and everybody else said he wasn't capable, and the neat thing that turned out, it ended up that he was there when we came back is that he was doing his own ostomy care and she was the one who initiated it so I had some really good data with her. Her medication experience wasn't that heavy and I would have liked to see her on the second day to see what kind of change would have been in her practice. (Dec 14)

J put time and effort into selecting the weekly assignment for the students.
A common strategy in critical-interpretive curricula is student self-selection of clinical assignments. J believed that, because the assignment was such an essential sampling strategy, she could not have the students choose their own clinical assignments. The assignment could be somewhat flexible based on changes in client conditions, admissions and discharges, and student requests, as illustrated by the following anecdotal note (J called these her "Instructor Notes") that she gave to Student A, and by her thoughts concerning Student D during the final week of the rotation.

If appropriate next week, I would like you to begin taking on more of a leadership role by team leading (similar to what Student C did last week). This experience would include doing the assessments for 10 clients, charting, contacting physicians, and doing end of shift report. What are your thoughts on this idea? Lets talk about it before clinical. (Instructor Notes, Nov 9/10)

D was doing well with the four clients, wanted to work on organization, so she decided to pick up a fifth client tomorrow and that is just fine with me. (TA Nov 23/24)

Sampling decisions were made based on evaluation of a student's clinical performance, which in turn influenced the next clinical assignment and where J spent her time. For instance, if J decided that what she saw reflected good practice, or if she felt certain about her interpretation of this area of practice, she was able to let a student work more independently, perhaps alongside an RN, or be sent off the floor to work with staff in one of the ambulatory care units. These alternate assignments allowed J to spend more time with the rest of the group. J wanted to get a broad range of samples of performance for each student, which meant she had to have them involved in many situations and activities.
On a typical clinical day, J had to "keep on top" of the care of 16-22 acutely ill clients while also working with two students giving medications for a team of clients. Such assignments gave the students many opportunities to practice in various areas and provided many potential evaluative opportunities, but paradoxically they made it difficult for J to do more than supervise skills and monitor the client situations, thus decreasing the amount of time J could spend with each student.

Time. Several factors influenced how much J could take advantage of the opportunities provided by the assignments: the acuity of the unit, being tied up supervising skills, and the rotation length. The assignment influenced the amount of time J spent with each student, and which aspects of the students' practice she could evaluate. When the pace of the unit was heavy, physical care took priority.

And I need to spend way more time with Student D tomorrow. Because of the acuity of some of the other student's patients, I tended to be that way and Student D didn't see enough of me today, so I have to spend a little more time with her tomorrow. (TA Oct 26/27)

If the floor is really heavy and we are worried about just dealing with the physical stuff and the skill things that need to be done for those people, I often have less time to sit down and talk to the student about this family member and how do they contribute to this person's health and healing. (Nov 22)

The TA tapes had more comments, and J's instructor notes were longer, for the students with the complex assignments. Because she spent more time with these students, J had rich data about their practice but less information about the others. J also spent a good deal of time teaching and evaluating skill performance, an activity that took up large portions of her time.
If you are in the client's room with a big dressing and the student is slow and the dressing takes an hour, that's an hour that I don't get to see other things that are going on. And if you have a lot of those procedures, and you often do, your whole morning is gone before you get a chance to do quality things with students, other than just the supervisor role. (Nov 22)

The condensed rotation format had a major influence on her sampling practices. In terms of the assignment, J had to start students off in challenging situations earlier than she normally would in order to sample sufficiently by the end of the rotation. The students did not get an opportunity to settle in and become familiar with the area and the different expectations. Because the students were new to J and to surgical nursing, she was tied up supervising students closely in order to get a beginning evaluation of each student's capabilities, and to ensure the safety of the clients. Students who did well clinically did not seem to be disadvantaged by the sudden immersion into acute surgical nursing practice, but J believed that the shortened rotation was unfair to the average and weaker students, and that the "slow starters" were particularly disadvantaged.

Pattern. Sampling assisted J to conclude whether what was seen was typical of the student's practice or not. To establish a pattern, J made sampling decisions that would enable her to collect the data she needed in order to conclude that the student's practice was a certain way. The clinical assignment was important because particular clinical opportunities were needed for each student. In this way the sampling needs drove the assignment.

Students B, D, and G won't go to the ACU [ambulatory care unit] experience this week because I need to see more of them in relation to patient care, so the choice is between Students H, E, F, C and A. Not sending Student C because client care, team leading, needs to see more meds and patient care.
Would like to see H do more client care, has done simple procedures and dressings, but nothing too complex. Would like her to take this one client so she can do the complex dressing. E is scheduled for team meds, haven't seen any team meds, so keeping her on the floor. A, seen client care, team meds with client care last week - have seen med administration, IV, sub-Q and that kind of thing. F hasn't done that, so I guess that's my decision. F has to stay on the floor and do a team med experience and since I've seen A do a more rounded experience, I'm going to send her down to ACU. (TA Nov 23/24) J most often mentioned establishing a pattern with respect to students' practice that was below the expected level or appeared unsafe. If she was concerned about a student's practice, or felt unsure about the student's level of performance and/or safety, she decided that further opportunities were needed for the student to practice in that area so that the student could improve and she could gather more data. J seldom made an evaluative conclusion based on a student's first opportunity with an area of nursing practice. This was especially true in cases where the student's practice was unusual or unexpected. She made a mental note of the behaviour and if the student continued to perform in a similar way, J began to suspect that the initial example was typical of the student's practice. I wouldn't say it's a flag but, I know this and if something similar comes up that shows she doesn't use common sense, or there is something that seems a little out of place then I will start asking some more questions but right now it is just one incident and that may be all that it is. (Nov 7) Data Collection Practices J used several methods to collect data about students' nursing practice. Observation, questioning/discussion, and student written work were the most common methods she used.
Other sources that she frequently mentioned in the TAs and interviews were clients and family members, the staff, and other CNTs. It was what I saw personally and what we talked about, what I saw her [the student] doing and in our Kardex discussions, I asked her a bit more because of what I saw and that was when I got the really rich data she gave me so much more that I didn't see going on. And I went back and talked to that client later, to get a sense of what the client felt like, and the client had tears in her eyes when she was telling me, and told me how special G was, and she made the difference, so when you have clients affirming what you are seeing as well, you know that student has done something. (Dec 14) J considered herself a "sponge", "soaking up" observations and impressions and squeezing them out later into her anecdotal (instructor's) notes. She could not always reach an evaluative conclusion while she was working with the students, as she had to move quickly on to the next student situation. Because she was unable to process information until later that evening or the next day, the TA often described her immediate thoughts and observations, while her instructor notes were a fuller source of research data about how she interpreted the data she collected to reach conclusions about students' practice and make decisions about further teaching and evaluating strategies. J was actively engaged with her students as they cared for their clients. Many examples in the TAs were rich in detail on the context of the student's practice. J utilized both quantitative and qualitative data collection methods, and her evaluative conclusions were derived from multiple sources of data. Her conclusions about the students' practice were frequently supported by her observations around specific client situations.
While we were doing the care, he [the client] was asking lots of questions so Student A picked up on his cues that said that he was ready for teaching so she got these, they have these little pouches of ostomy supplies so that clients can practice handling equipment, snapping the bags on and off the flange, putting on the clip. He was interested in seeing the video and then he wanted his wife to see the video with him. So she arranged in the afternoon to do the teaching with him. She got the teaching sheets going, showed him the video, spent really I thought, quality time with this couple, helping them through the transition dealing with the ileostomy. (TA Oct 26/27) Observation. Observation was one of J's primary data collection practices, particularly related to evaluation of skills, assessment, and organization. She viewed these areas of practice as easier to objectify and thus, to articulate. J also recorded many instances of her observations of student interactions with clients, family members, nursing staff, and other health team members. J appeared skillful at observation, able to take in many things at once while watching students. J said that she relied on her expert knowing, as she observed "wholes" and responded to her intuition. I can often tell if they are prepared by just watching them, in terms of how they are processing the information. They all have research sheets already. They have the drug book there. One of the things I have them do when they go into the patients' rooms, I like them to say, "This is your blood pressure medication and it's going to help lower your blood pressure by helping you pee", something like that. And I can tell just by how they are interacting with the client, whether they have done their work. So for me, I don't actually have to ask the question. (Oct 31) When students had their first experience with a complex assignment, J worked closely with them.
She was aware of the potential for things to go wrong because of the complexity of the situation and because the student was new to the experience. As J guided the student through the experience, she also evaluated the student's abilities. I noticed it almost right away [that she was a capable student], she was the one first day on the floor that had that tremendously heavy assignment and I was sticking pretty close to her because I knew it was a heavy assignment with lots of challenge. Not really knowing her abilities that well and being new on the floor there were a lot of things that could potentially go wrong there. (Nov 22) J always watched students performing a skill for the first time, even if they had performed it independently in the previous rotation. She stated that skills were not "generic" and that the client situation regularly required adaptations that the student may not have faced before. And this is the piece that the students don't understand. Just because they did something well in another area, doesn't mean there isn't value in having someone else watch them do it again. Because there is so much more that goes on while that skill is happening. It is the relationship between them and the client. I can't just make a blanket statement, where the student wants a checklist where they can say, "I don't want you to watch me because it has been checked off", is short sighted. That may be true but I may know something about this experience that I know is going to be new for you so I am going to go into the client's room with you. (Dec 14) J was aware that her observation of students made them nervous and could affect their performance. One strategy she used to lessen this was to talk with the client as the procedure was being done so as not to appear to be evaluating the student. At times, she had tried other creative strategies to decrease student evaluation anxiety.
Like another student who said I made her really nervous watching skills and we needed the student to be able to practice. So one of the strategies she came up with was she taped herself doing the skills at home and then she would critique and I would critique the performance and we were able to get over the hump by doing that. (Oct 19) J did not always observe the entire procedure being performed by students. Sometimes she had the student begin without her and would check on the student at some point during the procedure. She did not say how she decided at which point to step in and observe. She also did not believe in routinely observing students without their advance knowledge because of the potential for student anxiety. However, J also recognized the value of serendipitously collected data. I never do it [observe the students' practice] without telling them I am going to do it but I might pop in when you are doing this and I pop in and see what is going on. Sometimes I happen to walk into the room and it wasn't planned. I am looking for somebody and I walk into the room and I see something. There's another chance to see what is going on. (Nov 7) There is an opportunity for Student G to do staples removal assigned on another team and she's never done staple removal but because it's so simple I just talked her through what she needs to do and off she went and I'll just pop in and see how that's going while she's doing that. (TA Nov 9/10) Discussion and Questioning. J used discussion and questioning in two ways: planned sessions via the Kardex discussion and the medication review, and incidental questioning.
J used discussion and questioning as a strategy for both teaching and evaluating: to assess the student's thinking around the client's plan of care in order to evaluate the student's knowledge and to ensure that the student was addressing the client's needs appropriately and providing adequate care, and to assist the student to make connections between the many pieces of client data and to integrate classroom theory into the situation. J highly valued Kardex discussions. They provided an opportunity for her to spend one-on-one time with a student, which enabled her to develop a picture of the student's understanding. You can do the same thing [discussion/questioning] without having the Kardex there. The way I look at it, I use it as a means of getting the student out of the hub-bub of the clinical setting. We go to a back room where it is quiet and we can spend one-on-one time and I can really focus on some of the aspects that student wants to look at in terms of their learning but also some of the things that I think are important. I like to do that because I'm not quite as available so that people aren't distracting us in the discussion because it is really easy to get torn away from that and I really value the one-on-one. (Oct 24) J commented several times on the students' valuing of Kardex discussions. J also discussed how her questioning practices had evolved as she gained experience as a CNT. As a novice CNT, she had used questioning practices that created an evaluative environment for students. To decrease anxiety surrounding the evaluative component of Kardex discussions, J adopted several strategies; for example, she involved the student in directing the discussion, starting with areas of student interest or where the student had questions. Sat down and also did a review. She had a good understanding of all her clients' surgical procedures and underlying physiology behind those, things she needed to watch for and potential complications.
She did very well in problem solving. I asked her some fairly difficult questions and she was talking and saying things like, "Let's think about it this way", or, "Think about that", and she was able to get to the right answer or pretty close to the right answers. (TA Oct 26/27) J occasionally had students tape a discussion of their client's medical condition, surgical procedure, medications, or diagnostic procedures in cases where the student's anxiety appeared to block her/his ability to directly discuss her/his knowledge or thinking with J. J was also aware of other factors, such as student fatigue, that could interfere with the student's ability to discuss client care with her. Sometimes she stopped the discussion when she could see that it was affecting the student negatively. It was late in the shift and she was tired. She had some difficulty recalling some of her pharmacology knowledge and said she was tired. She knew the area but just couldn't pull it out. I wasn't concerned because she has been working really hard and her knowledge, you can see it is coming together. I could also see evidence of her knowledge base in her care over the two days. (TA Nov 23/24) Although J did discuss social and emotional aspects of the client situation and tertiary prevention concepts such as interdisciplinary collaboration, discharge planning, and accessing community resources, her Kardex discussions were still heavily focused on pathophysiology. She felt that the nature of nurses' work on the unit required a focus on the biomedical aspects of nursing practice. He had very interesting diagnostic results so, in the back room, we put the results up on the board and talked about how the multiple myeloma would cause anemia, leukopenia, and thrombocytopenia. And when I asked her some questions about what would you see in terms of presentation with someone who has anemia, she was able to be bang on.
He would be pale, might feel weak, lethargic, might be tachycardic and short of breath, and I asked her about the low platelet counts. She knew you'd be worried about bleeding and bruising and with the low white count, she knew the risk of infection and connected it to his pneumonia, so that was quite well done. (TA Nov 9/10) J frequently stressed students' ability to use what they had learned in the health science course. Traditionally, the science courses were difficult for many of the students. All of the students had J as their second year health science teacher. She recognized that many students were initially anxious when working with her because they had first-hand experience with her strong grasp of chemistry and biology. The students were fearful that they would be expected to be as knowledgeable as J was. This fear usually dissipated once they had experience with J's open style of discussion/questioning. J evaluated students' knowledge of pharmacology when they were involved in preparing and administering all the medications for a team of clients (the team medication experience). This was referred to as a medication review. The focus of her evaluation was on classifications, why that client needed to be on a drug, important side effects, implications for teaching, related diagnostics, and drug-drug interactions. When I did her med review, it was satisfactory. A couple of things she really needs to work on. She needs to have a better understanding of the antihypertensive meds, particularly since we are working on a floor with vascular patients. The good thing was, she knew that they were antihypertensives. She knew the general side effects, but she didn't know enough about each of the drug classifications. (TA Nov 9/10) Incidental questioning was an important method for J to assess the students' progress with their care and the clients' progress. J did this throughout the day at random times.
She also questioned students and discussed procedures with them before they performed them. Written work. Students handed in a series of written assignments following each clinical week, consisting of reflective journalling, a care planning exercise called the decision making model (DMM), and research around the clients' diagnostic procedures and treatments. Although J was clear with me about how she utilized each aspect of the written work in her evaluation, she did not make this aspect of her evaluation practice clear to the students. J frequently referred to the students' "journal" in the TAs and interviews. This term most often represented DMM work but occasionally, she viewed the journal as the reflective writing piece. This distinction was important because J believed that the reflective piece of the written work should not be used as evaluative data, whereas other written work provided potentially useful information about the student's practice. Although J did not formally evaluate reflective writing, the reflective part of the journal sometimes helped her to understand the students' thinking and feeling. She stated that she knew that the journalling could influence her thinking about a student and that she needed to be reflective in order to "stay true" to her belief about its purpose. The DMM component of the students' written work provided J with information about the students' knowledge, clinical judgment, and ability to evaluate care. J said she usually did not have to consider this source of data because she had enough information about the students' thinking from the clinical area. With one student, J saw evidence of adequate knowledge in the clinical area but not in the written work. J spent extra time with the student to identify the source of this inconsistency (disorganized thinking) and to work to correct it so that J could conclude that the student's knowledge was satisfactory.
One incident highlighted the dilemma of knowing things about a student's practice from an alternate source. J identified information that suggested an area of poor practice in a student's reflective journalling and the DMM. J eventually determined that what had actually occurred was not a serious area for concern; however, it caused her to reflect that, if the case had been an example of a clinical problem, she could not have, in good conscience, utilized this information in evaluating the student's performance, given her belief that reflective journalling was not to be a source of evaluative data. Staff. The nursing staff provided a secondary source of evaluative data for J because the students worked closely with the nursing team. Sometimes she asked the staff for specific feedback about students' performance and at other times, the staff volunteered this information. It was J's usual practice to share staff feedback with the students. However, she admitted she did not weigh the feedback of all staff equally. She was more likely to utilize data about students from those staff members who were considered to be good role models, gave feedback that was congruent with J's observations, and provided a balance of positive and negative comments. Clients. J valued data about students that she received from the clients. She felt they were a "good judge" of the quality of the student's care. As with the staff, sometimes J requested feedback from the clients, and at other times, the data came to her spontaneously. This lady has quite a few fears and anxieties, and she [Student B] spent a lot of time just listening, allowing that woman to talk. There were tears. I just, really felt she did an outstanding job and actually when I went in later, both the husband and wife just praised Student B so much and the woman said that Student B had made a huge difference to her recovery. (TA Nov 9/10) Previous knowledge.
As a full-time faculty member, J already knew something about most of the students. Her usual practice was to "bracket" her personal prior knowledge, to approach each student as if they were a "clean slate". J was aware that her previous knowledge of a student could cause her to see the student's performance through the lens of her prior impression. She was aware that she had been using her prior information to negatively judge one of the students in the group. (J) That's very interesting because D tends to, if she can, get out of work. She will do it and sometimes I think she uses, I may be wrong about this, but I know D from CPE as well. She tends to say, "I am not sure, I don't know", so others will do her work for her. And this thing with the charting, she should have known. So granted, maybe she didn't, so she gets help on the Thursday. There were no excuses for not doing it on the Friday. And it was a different RN who didn't know that the RN on Thursday had given D that feedback. (Researcher) If you didn't know D from before, would you think differently? (J) If I didn't know it from personal experience, I would have known it from what was said in our team meetings. It has been a pattern with her. (Nov 7) J's beliefs around the use of previous evaluation documents in her evaluative decision making revealed a picture of a continuous evaluation process from the start to the end of the student's program. Also, a well written evaluation allows you to recognize concerns with a student's performance. I will go back and look at past evaluations and see if there are patterns there. And if you can see that there is documented patterns there and even if it's the smallest little thing but you can see the glimmer where that's the first time it's shown up. Then that is helpful to me. The fact that it was documented. Because sometimes a pattern cannot be established in your own rotation.
Sometimes it has to be in the year, between rotations, and that helps when somebody else's done that. (Oct 24) Intuition. Sometimes J sensed something was unusual about a student's practice but she could not articulate it. Her practice was to work with the student as she collected data to help her pinpoint the source of her "gut" feeling. Sometimes the data arose naturally and other times, J would actively seek it through the type of assignment she gave a student. With Student B, my interactions with her, I hadn't asked her any in-depth type of questions. It was just general questions about her assessments. What her plan was. What direction she was going to go. I was feeling uncomfortable. I kept thinking, "She's not solid. There's something that she's not saying. She's being quite superficial when she's telling me that information". And generally a student, when you ask those questions, is very forthcoming and they generally tell you too much and you have to cut them off or direct them to more what you are looking for. So she was sort of skirting over things, and I'm making it sound like it was really clear, and at the time it wasn't. I walked away thinking, "There's something not right there. I need to think more about why". (Nov 22) Students' ability to discuss their practice. J recognized that the data she collected around a student's practice was dependent on the student's ability to articulate their practice. She believed that students differed in their depth of thinking, and in their ability to enter into the discussion process with her. My relationship with her is different than it was with other students. Not that we had a poor relationship. I thought we did have a good relationship. It's just the way D relates and perhaps how she learns. She is not as verbal as the other students. So when I was having a dialogue with D, I wasn't getting the same richness of data or examples.
There is probably lots of things going on that she keeps within and doesn't share and I guess one of the things I learned from this experience is that it's really important for me as a CNT to find ways of drawing that out. Because if the relationship isn't maximized, then it's not only the teaching learning process but it also impacts evaluation. (TA Dec 11) J was aware that the CNT-student relationship influenced both the teaching and the evaluation process. In the past, J had ignored relationship problems with students, blindly hoping that they would not influence her teaching and evaluating. Once she began confronting these issues, she came to believe that the best approach was to be direct in her attempts to create helpful CNT-student relationships. Through the research discussions, she became more aware that some students would be unable to participate in the relationship in the way J envisioned. Other sources. J utilized three other sources of data in her evaluative decision making: students' charting, student presentations in post-conference, and end-of-shift taped reports. She made many comments about the students' charting, which she utilized as a source of data about the care that students had given. She used the chart to evaluate the thoroughness of the students' assessments and their ability to organize their thoughts and communicate a clear picture of the clients' status. In addition, when students presented a topic or case study to the group, J collected data about the student's knowledge base, ability to make connections, and teaching skill. She noted student performance in directing discussion in post-conference on weekly written notes and on several of the final evaluation documents. Students occasionally wanted to tape an end-of-shift report for their team.
J had them do this officially and then listen to the tape, or she would lend them her tape recorder and they would tape the report at home and hand it in with their written work for feedback. Focus of Evaluation The focus of Semester V in the curriculum was prevention, and the quality indicators on the PAF had a community focus. J's evaluative focus, however, was acute care nursing practice, especially in the clinical judgment domain, where she did not use any of the prevention quality indicators in her evaluations. Following is a discussion of J's focus of evaluation under each of the domains of practice. They are presented according to the emphasis J placed on each. Clinical Judgment Domain The clinical judgment domain was the primary focus of the TAs and the longest section on most of the final evaluations. This domain encompassed knowledge, assessment, clinical decision making, charting, psychomotor skills, and organization. J considered the aspects of practice under this domain to be central to nursing practice in her clinical area. She described it as the "centre of a flower with the other domains as petals surrounding it". An inadequate level of performance in this domain of practice was cited by J as the most frequent cause of LCs and students' clinical rotation failure. Knowledge. J emphasized evaluation of students' ability to integrate knowledge from multiple sources in order to understand the clients' conditions and their medical and nursing care. She evaluated students on their knowledge of teaching-learning theory and tertiary prevention. Biomedical knowledge was the central focus of her Kardex discussions and her evaluation of the students' written work. One student in the research group was on an LC because of inadequate knowledge. J believed this student's level of understanding to be substantially different from that of the other students in the group. One of the areas that J emphasized in this domain was pharmacology.
She expected that students knew all the medications that had been studied in the health science course. J discussed sophisticated details about the medications with respect to anatomy and pathophysiology. She pushed the students to understand medications in depth. All students received satisfactory or excellent evaluations in this area at the end of the rotation. Assessment. J always reviewed and completed students' first-time assessments of clients to help them feel less overwhelmed and anxious. She believed this practice helped demystify the equipment and enabled students to see that they did have some skill and ability. J observed the student's ability to perform with her guidance and then evaluated the student's ability to continue assessing as had been taught. She frequently used the phrase "on top of things" to refer to a student's ability to keep up with relevant assessments. She had four clients and picked up a new fellow that was admitted first thing in the morning with a lower GI bleed. He had lost a litre of blood, so he was on hourly vital signs. She quickly made up her team sheet to juggle the assessments she needed to do with this fellow. She knew what to look for in terms of vital signs. She knew to assess things like his diaphoresis, how pale he was, was he light headed. Knew to be checking and monitoring his stool. Knew that if he started to bleed again that she had to keep him in bed. That he couldn't be getting up to the bathroom. To keep him quiet to reduce the amount of bleeding, and actually he was fine all shift. But she was on top of that. (TA Nov 9/10) J evaluated students' level of assessment skill by looking for awareness of relevant assessments (they knew what to assess), ability to identify change (salience), ability to do the assessment (assessment skills and being systematic), and ability to record the findings accurately, thoroughly, and in an orderly fashion. Decision making.
J commented many times on students' decision making and problem solving ability. Her usual practice was to have students articulate their reasoning about a client-related situation. If she assessed the student's thinking as incomplete, she would help her/him to see other aspects that should be considered. She noted it as a decision made with assistance. When the student's reasoning made sense to J, she would agree with the decision and considered it an instance of autonomous decision making. She spoke with the social worker about her concerns that this man was too confused to go home. The social worker did an assessment and totally agreed with H's assessment. Should be transferred to [his local] hospital first. (TA Nov 23/24) Because students performed numerous procedures and gave countless medications, J had ample opportunity to evaluate their ability to solve the typical problems nurses encounter in this area of their practice. There were several examples of students' decision making about how to adapt procedures in atypical situations; for instance, how to complete a dressing on a neck or posterior knee incision. J regularly discussed students' ability to clarify medication orders and to problem solve how to administer medications. She picked up that the morphine sustained release can't be crushed. And questioned why the staff had been crushing it. I asked her how she was going to problem-solve getting these medications down this fellow, because, particularly one of the capsules which was very large, obviously he wasn't going to be able to swallow it. So she said, "What if I phone pharmacy to see if there are elixir forms of these?" She did that and there weren't any elixir forms. So then we discussed phoning the physician. (TA Oct 26/27) Charting. J evaluated the students' charting in terms of completeness, accuracy, and organization.
She stressed that the charting needed to convey a clear picture of the client's condition and what had taken place because the other health team members relied on this information in their clinical decision making. The other problem was the charting. It was too brief. The same problem as yesterday. Lots written about the morning assessment but a lot of key pieces of information missing like the oxygen by nasal prongs, how fast, what did the central site look like, the fact that the NG was on low suction. It was organized according to systems so it would flow nicely - there was some of that. (TA Nov 23/24) J also stressed the legal aspects of accurate and thorough charting. Because she had experience with previous students being called to testify at inquests, she was sensitive about the legal implications of what the students recorded. One of the things she needs to do, she does her morning charting, and the last entry is around 2 o'clock. She needs to put a statement about whether family is visiting, up walking, whatever, just to show she has seen and had contact with clients between 2 and 1900. (TA Nov 23/24) Skills. J thought that she assessed psychomotor skills so frequently because they constituted a large part of surgical nursing practice. Students often performed a skill for the first time and sometimes the surgical rotation was the only place they could get experience with this particular skill. J assessed students' application of principles of asepsis and wound healing, dexterity, efficiency, amount of guidance required, and ability to be client-centered. J evaluated students' performance on the team medication experience with respect to attention to safety, efficiency, and knowledge. She evaluated the students' attention to principles of safe medication administration and their efficiency on the first clinical day.
By the second day, the students were expected to have picked up on the organizational aspects, at which time J evaluated their knowledge in a medication review.

Then in the afternoon, it was like a bomb went off in that place. And she had people discharged, transferred from one unit to the next, transfers in from ICU, and the meds were just upside down. And she methodically plotted her way through, figuring what was what and what needed to be given, what needed to be given and what hadn't been given and she did it with not a lot of guidance from me. Because I was rushing around. She asked appropriate questions to the PCC [patient care coordinator] and I told Student B at the end of the day that I was very impressed with how she managed her meds. (TA Oct 26/27)

Organization. J evaluated organization of the students' writing (on their team sheet, their written assignments, and charting), performance on team medication, and ability to set priorities and complete their care within a reasonable timeframe. She did not consider organization to be an area of practice where she would allocate an LC. She believed that problems in organization were highly amenable to teaching strategies and it was an area of practice where her systematic, methodical, and logical way of approaching situations was easy to transmit to the students. At the end of the first clinical day, if students' organization was a problem, she asked the students to consider how they could improve their organization for the next day.

We talked first thing this morning and she had come up with a plan for making herself more organized at the end of the shift.
On her own she had identified that she's taking too many trips up and down the hall and that if she has a more fluid approach to end of shift duties, like going into the rooms and checking IVs, hanging new IV bags, emptying all the drains, doing washes, the HS settling, then moves to the next room, all of the way down the hall, and then sit down and do her charting, and the other key here is to have the bulk of her charting done earlier on the shift, so that at the end of the shift she's really just doing the flow sheets, you know, the end of the flow sheets, the ins and outs and maybe one or two lines in the narrative notes. (TA Nov 9/10)

Safety. Safety was evaluated both under this domain and the professional responsibility domain. J considered students who were consistently not prepared for clinical assignments, made frequent medication errors, repeatedly left side rails down on vulnerable clients, or who demonstrated an inadequate knowledge base, as "unsafe". Over the years, J had worked with several students who had failed because of an inability to meet safety standards. Safety is discussed later in this chapter in greater detail in relation to evaluation criteria.

Health and Healing Domain

J commented on this area of the students' practice many times in the TAs. It was one of the longest sections on the final evaluations. In one instance, the entry about this domain was as long as the clinical judgment section. J mainly used observation and questioning for assessing this domain; however, she also depended on students' written work to assist in "building the picture" of this area of their practice, especially around the student's ability to involve clients in decisions around their care. Describing and evaluating the mutuality of their practice with clients was the focus of one part of the DMM. In J's evaluation of this domain, she stressed attention to the emotional needs of the client and family members.
J evaluated students on their ability to assist clients in dealing with major changes occurring in their lives, such as adapting to an ostomy, amputation, or other large wounds, fears around their surgery or going home. As well, she assessed students' ability to respond appropriately when clients expressed these concerns.

Student B was very good. She acknowledged the lady's concerns and how the body image change is difficult to deal with. And she said, you know, asked the lady what kind of relationship she had with her husband and the lady said, you know, very supportive and she said, "Well, maybe right now's not the time but maybe you just want to talk to him a little bit about what your concerns are with this stoma and you might find that he will be okay with it". And the client just said, "No way. I'm just grossed out by looking at it, I don't want him to see it". And Student B just supported her in that feeling and said, "Fine, you know we are not going to push you to do that. That's up to you". (TA Nov 9/10)

Rather than use affective domain language to describe this area of practice (e.g., being respectful, attentive, or genuine), J used narratives to articulate the relational aspects of the student's practice; i.e., what she frequently interpreted to reveal the art of caring. Her conclusions about the students were framed in this way, in terms of the student's way of being.

Student F developed an understanding of the factors that made her client unique such as context, family, strengths, vulnerabilities. This was particularly evident when she came to know [client initials] and his family. She learned about the client's life struggles, strengths, social supports, physical capabilities, and endurance. Even though the client had intellectual delays, Student F never talked down to him and always included him in decisions about care when appropriate. As she developed a relationship with this client, his sense of humor began to emerge.
(Final Evaluation)

Teaching-Learning Domain

J evaluated the students' ability to apply theory from their teaching-learning course. This domain was well-represented in the research data; however, it was the one domain where in some weeks, not every student was evaluated. J regularly used data the students provided her in evaluating their teaching practice as she was seldom able to observe planned teaching and only occasionally observed incidental teaching.

Collaborative Leadership Domain

J regularly discussed aspects of the collaborative leadership domain in the TAs and evaluation documents. A key competency of this domain was questioning the hegemony of the health care setting. Students sometimes questioned a policy or the standards of practice but J identified no examples of challenging the status quo or identifying oppressive practices. Developing leadership skills was considered one of the goals of Semester V clinical learning and was reflected in examples of initiating client care, advocating for clients, working with other team members, and team leading. Two students had the opportunity to work as a team leader, at a higher level of performance than outlined in the PAF. J reserved this experience for students she perceived as "strong".

Can really see how she's used the skills of being a team leader, applied it this week. Really running the team. Talked to the team leader [the RN] a few times, as a sounding board. Initiated changes in client orders where it needed, leaving notes on the communication board for physicians, talking to pharmacy making sure the meds are up. (TA Nov 23/24)

Professional Responsibility Domain

J considered the professional responsibility domain as an important area of practice; yet she did not discuss or write about this domain as much as the others. This domain received the least amount of writing in all of the evaluation documents.
When presented with this observation, J stated that the competencies in this domain were reflected in all the other domains. She typically evaluated student preparation for clinical practice, awareness of limitations, and use of resources to ensure safe practice. She also evaluated their ability to reflect on their practice and their receptiveness to feedback.

She came to clinical well prepared and used available resources such as procedure, IV lab, and central line manuals to ensure safe nursing practice. (Final Evaluation Document)

Influences on the Focus

The focus of evaluation was guided by J's values as a nurse, the nature of the workplace, the RNABC standards and competencies, and the curriculum philosophy and clinical performance indicators.

Personal practices. J acknowledged that she was better at teaching and evaluating certain aspects of practice. She felt that students gained different things from each CNT's strengths and that hers were in knowledge integration, assessment, organization, and psychomotor skill performance. J taught and evaluated in accordance with her style of thinking and practicing nursing. She valued an inquiring mind, a strong base in the natural sciences, and a systematic, methodical, and logical approach to organizing information.

Institutions. J's decisions about what to evaluate were determined in part by the view of nursing held by the school of nursing, the work setting/employers, and the profession. These sources of influence were interconnected; the professional body approved the educational program, set standards, and outlined competencies for the new graduate, and the nursing program collected input from the workplace/employers on its graduates' competency. The type of nursing work practiced on the unit required skill in assessment, organization, psychomotor skills, interpersonal skills, and client teaching.
Nursing care of these acute clients required the RNs to have a strong biomedical knowledge base in order to practice competently. J admitted to being discouraged by the state of the health care system with respect to the high levels of acuity and rapid turnover of clients, staff shortages, and casualization of the nursing workforce. Although J felt a definite pressure from the workplace with respect to what she felt students needed to be able to do, she did not feel it dictated what she evaluated in students' performance. For example, she assessed student ability as having "one foot in both worlds"; i.e., to be able to practice from a health promoting nursing perspective while also being able to function within the reality of an illness-focused technological workplace. J felt that if students had "both feet" in the ideal and no "grounding in reality", they would not be accepted in the work environment and would be unable to influence practice. She believed that if they had "both feet" in the work environment, they would get so entrenched in the hegemony that they would not be able to practice as self-reflective, critical thinkers with a holistic perspective. J felt "forced" to focus the Semester V rotations on acute care in order to adequately prepare those students choosing to bridge out and receive an RN diploma. J "worked around" the fact that many of the performance indicators on the PAF were generally not obtainable through the type of nursing experiences available on the unit by looking for opportunities to evaluate aspects of tertiary prevention in her setting.

Decision Making: What the Data Means

This section describes the thinking and decision-making processes used by J to convert data to evaluative decisions, conclusions, and judgments. J judged the students' practice against the expected level of performance, utilizing a set of criteria and standards that she had adopted over time.
Her judgments commonly included a consideration of contextual factors, particularly in situations of practice that were below the expected level and/or involved student errors. Her evaluative decision making was also influenced by whether the intent of her thinking was evaluative or educative.

Normative Referenced Evaluation

In normative referenced evaluation, a student is considered in terms of the performance of the others in the "norm" group (Reilly & Oermann, 1992). Throughout the study J made references to how the students ranked in relation to each other. J's comments also indicated that she considered the students as having more or less knowledge, skill, or ability than the large group of other students she had taught over the years.

Satisfactory is performing at the expected level. Generally satisfactory means for the most part it is there but there are a few areas of concern or a few areas which are not where the standard needs to be. Outstanding and excellent they are far exceeding what those expectations are for a third year level students. Within those, I know what the competencies are, the expectations, sort of the bottom line, within that you have people who are above that standards, and occasionally you have people who are far above what you expect for a third year level they are actually performing in some respects like a fourth year level would be. (Nov 7)

J's expectations were based primarily on her knowledge of the typical student at this level. She had developed this knowing over time as she worked with many students from the start of second year through to the end of third year. Because of her previous experience, she understood the usual pattern of student growth through the program.

The lady in the next bed was a new admission. She had a gangrenous foot so I went over the vascular assessment with her. Had some difficulty finding pulses but that is typical of, with students the first time and the vascular patients preoperatively.
They're often very difficult but she knew how to landmark on where those pulses should be, so that was good. (TA Oct 26/27)

You demonstrated leadership skills this week that were beyond what is expected of a 3rd year student at this point in the nursing program. The phrase that comes to mind to describe what you accomplished is simply amazing! Your level of performance was the equivalent of what is seen and expected at the end of CPE 3. (Written Notes: Nov 23/24)

J frequently made global evaluations of a student's practice. She was able to differentiate between three levels of performance: strong, average, and poor or struggling.

When the student is doing well, when you are watching them you are seeing that they have a very integrated approach to their care. They are able to do a lot of things at different levels, or a lot of things at the same time. So they may be doing an assessment but I am also noticing the way they are communicating with the person. They are also gathering data. They are coming to know that person. They may be looking at IVs, or tubing, or equipment but they are also gathering information about what that person looks like and if we step out of the room afterwards and go over what was done, I can see just from the evidence that they are giving me, my own observation, that it was an integrated approach, more of a high level. Whereas I have had other students who maybe aren't struggling, what you would call your average student, and they will practice somewhat like the strong student, but not all of it. There will be pieces that are not quite as integrated, if you talk to them and say, "Oh, have you thought about this?" You can see the lights, "Oh I never thought of that", or say "How is this piece impacting this piece of this person's care?" Once you show it to them, they get the connection and they are ready to roll on their own.
A student who is doing poorly, even once you start to help them or give them guidance or tips, don't see the connections. They miss big chunks of what's going on. (Nov 22)

Criterion Referenced Evaluation

It was evident that J's evaluative practices were also criterion referenced. She regularly spoke about student practice meeting standards. The term standards referred to the RNABC Standards of Practice and the performance standards outlined in the PAF (of which practicing according to the RNABC Standards was one standard). J regularly used the wording of the performance indicator to describe her interpretation of the student's performance. Analysis of the TA tapes and written notes showed that J used a set of criteria similar to those developed by Bondy (1984). She was not consciously aware that she used this framework and she did not use the criteria with regard to determining the level of competency as Bondy intended. The research data was full of her comments on safety of the performance, assistance and cueing required, dexterity and skill, efficiency and organization, level of anxiety, application of knowledge and principles, and ability to be client-focused. Safety and amount of assistance were the two criteria that received the most attention from J.

Safety. Making a determination about student level of safety was a key decision for J. She frequently commented that the student "performed a variety of skills safely", or "prepared and administered team medications safely". She believed that one of her functions was to ensure no harm came to clients from the student's practice. Being prepared for clinical and utilizing resources to guide practice were considered important indicators of safe practice. J acknowledged that she could not observe the students in all aspects of their practice. She sampled for behaviours that indicated safe practice and when unsafe behaviour was apparent, she would supervise the student more closely.
When J concluded there was a pattern of unsafe practice, she initiated the formal evaluation process by drawing up an LC with the student. J believed that some areas, such as the safe performance of psychomotor skills, were easier to observe for safety.

When I'm watching a student do something, generally what will tell me that there is going to be a problem, is a combination of behaviours. It is generally not one thing. Generally, it is not just missing name bands. Other things are going around that like their research doesn't seem to be done. Their thinking is not very clear. Like they are not logical or very scattered, jumping from one thing to the next, and I am using meds again. They are not following the med sheet down logically. They are jumping around on the sheet. It is usually a combination of things that says there is going to be a problem here. (Nov 14)

J considered knowledge to be a determinant of safe practice. She believed that students who did not understand the basis of their nursing practice would be unable to differentiate a typical from an atypical client response. She frequently evaluated the student's ability to recognize and act on salient information by assessing the student's understanding of potential complications. J stated she had to trust that students would recognize when something was important, different, or changing. She referred to this level of performance as "picking up on things" and "being on top of things". J had to determine whether the students would come to her when they were not sure about what they were seeing. J frequently commented on students' ability to recognize their limitations, and to seek guidance when needed.

It was like a bomb hit that place. All sorts of things going on. The physicians. It was chaotic. I thought they did well with that. Asked good questions. Were working as team members. I liked the way they interacted with their clients. I haven't had a chance really, to look at knowledge.
But they are coming to me when they're not sure about something, asking good questions that shows me that they are thinking. (TA Oct 26/27)

Assistance and cueing. J frequently discussed students' practice in terms of how much assistance was provided. Level of independence was most often described with respect to skills, but also in relation to assessment, organization, charting, and clinical decision making.

many simple dressings - well done - may do independently now
staple removal and steri strips - done independently
autonomous with clinical decision making in many situations yet got help when required
packed the wound with guidance from the instructor
called the physician with assistance
is beginning to see the broader client picture and is integrating data from multiple sources (Preliminary Analysis, Instructor Notes, Oct 26/27)

Once they had received guidance and feedback on their performance, students were expected to perform with increasing independence, particularly in those areas of practice where independent practice was specified on the PAF. J recognized a level of independence that also included the quality of smooth performance and self-confidence. This was a level of performance she hoped that the students could achieve; the ideal level. She referred to it as "flying" with an assignment, or "sailing".

But she took no time at all to just fly with that assignment. She struggled in terms of the organization but she knew exactly what she needed to be assessing, she was aware of the pertinent diagnostics that she needed to be on top of. If I asked her questions about the meds, she knew that. When we looked at the assessment and how pieces were going together she was able to make those kinds of connections. The thing that I really noticed is that she had a lot of first time skills. She didn't get so focused on the skill that she forgot about the person who was there.
She was able to do those skills and be able to think about the human side of the person who was there. (Nov 22)

In certain areas, performing with assistance and cueing was the expected level of practice. J anticipated that students would require her help with complicated procedures and skills they had not performed before. In some cases, the school of nursing policy required J to supervise the student directly, as with IV push medication or TPN line changes.

Usually have three lines coming out and you do have to walk them through so they can separate the lines and make sure they are hanging the right one and they get so confused, especially if you are doing TPN line change with a regular tubing change. You've got the three lines plus the three lines you are putting in, and it gets into a tangled mess so my hands have to be in there, really, helping. And they read the manual and they can tell me the steps but it is really different when you go in there and there are lines everywhere. That's not a negative, that is typical, they need to be walked through TPN. (Nov 22)

Contextual Variables

J was aware that many factors could influence a student's clinical performance and there were many examples where she thought about these before concluding that the student's practice was inadequate. J believed that this type of thinking helped her to be fair in her evaluations.

She got her care done appropriately. Had some difficulty with charting. Once she charted in the wrong chart, a couple of her flow sheets weren't completed at the end of the shift. So I just had to sit down and go through it with her, and that may just be that she is not used to it. She has been out of the sort of, acute care setting hospital since May and [her previous unit] does things very differently. (TA Oct 26/27)

She stated she needed to be sensitive to the role that she played when students encountered situations that were beyond their ability.
During the rotation, one student was involved in a situation where a client's condition deteriorated and emergency measures were required. The student was caring for several other clients and had left a lot of the care to the last minute, which necessitated running around to get all the care completed. J recognized that her initial response was to judge the student harshly in terms of poor organization. When she took the time to consider the context, she concluded that it was the student's first time with a complex assignment and much of the problem was outside the student's control.

And that happens so often on surgical floor. That is part of the learning for them. I looked at that. She did very well on Thursday. Got all of her care done didn't have any problems. Mind you, it wasn't as heavy but did that and was actually on task up until 3 o'clock and then because she didn't anticipate then didn't do the things that could have been done earlier, and then the guy changes. It was just shot. (Dec 14)

There were several situations where J considered how contextual factors contributed to poor student performance. In these instances, she reflected on what she knew about the student and the student's usual practice, looked at what was happening in the situation, and determined whether the poor performance was typical for the student, or merely contextual.

Student E actually is having, had some difficulties with the staple removal. She really made a very simple procedure quite difficult and I think part of that, though, is the way the client was interacting with her. He's, fairly, I guess, aggressive is not the right word, just a bit forward, forceful. And he's been in and out of the hospital a lot. So he kind of knows the routine and because Student E's not particularly self-confident, I think she was starting to doubt herself.
But I went in and held, showed her how to hold the skin taut to make the incision straighter and it's easier to slip the staple remover underneath and, you know, just to take your time and be patient and once I gave her that feedback she just flew along and was doing all right. But her face was bright red and I could tell that the guy was kind of, a little, you know, aggressive with her. (TA Nov 9/10)

J had definite beliefs about the role of errors in student learning. She analyzed situations where students made mistakes to see what could be learned by the student, and herself, as a CNT. J had a situation where a student gave a series of cardiac medications to the wrong person, resulting in a drop in blood pressure that necessitated cancellation of his surgery. Her colleagues felt the student should be put on a learning contract. J did not see this situation as being any different than a previous student who had also failed to check a name band adequately and had given an unordered iron preparation to a client. In her analysis of the meaning of the error with respect to the student's overall practice, J considered both instances to be a case of failing to follow correct procedure. She did not weigh the consequence of the error (one client was harmed, the other not) in her determination. There was no other data to suggest a pattern of unsafe practice so she considered this to be a "single snapshot" of the student's practice and a learning experience for the student. The student's acceptance of responsibility in the case of an error was an important factor that J weighed in her interpretation of what the error meant in terms of the student's practice.

When I talked to that student about it, there was no acceptance really of the error. It was more like, "She's been in a lot of pain so the fact that I gave morphine SR, she is going to be comfortable". That's the kind of response I got. Rationalizing it, rather than saying, "We gave this wrong medication.
Its action is long acting. What else do we need to be worried about here?" More finding the rationale to say, "No it isn't a big deal", and I think that is an important piece when looking at mistakes. (Nov 14)

Teaching and Evaluating: "Pieces of the Pie"

J viewed evaluation as a dynamic and ongoing process that was embedded in the teaching-learning process. She believed that evaluation created opportunities for teaching and learning, and that teaching provided opportunities for further evaluation. Her interactions with students usually had both a teaching and an evaluative component. J used the analogy "piece of the pie" to illustrate this view of the relationship between teaching (formative intent) and evaluation (summative intent).

Teaching, watching, gathering all this information on that student. You're building a puzzle, putting all the little pieces in. At the end, you have a big picture of what this student is like as a nurse. As you go along doing this, watching students do various tasks, I think of it as a pie. Parts of that interaction may be formative and parts summative. It may be that one-eighth of the pie is summative where the rest of the pie is all of the formative information that you are giving that student. (Oct 19)

In the previous semester, J had precepted a returning RN student who was studying clinical teaching. J said she noticed that this student CNT focused heavily on evaluation in her interactions with the students. This caused J to reflect that, as a novice, she had been the same way. When she started out as a CNT, she stated she was "constantly evaluating" students, expecting them to perform well "right from the start". Because she did not know what student practice "looked like" she compared students to practicing RNs. J said she would become frustrated by how slow the students were and she hovered anxiously over them trying to hurry them along.
With time, she realized that slow performance was the norm and that her role was to assist students to become more skillful and efficient. There was much evidence of J working alongside the students guiding and supporting them in their learning.

First thing, wanted to be sure she was on top of the new patient's assessment. Went in and did the assessment, teaching along the way. D correctly assessed what was there, correctly identified certain things. Went over the Bup infusion and dermatome checks. Was good teaching, was doing it down one side and then the other, the way the staff do which is an ineffective way so showed her how to go across the whole dermatome, found areas where were diminished sensations. Went over all the assessment data, was solid on that. (TA Nov 23/24)

Feedback

Feedback was an important formative practice. J felt the main purpose of her feedback was student growth. She also wanted students to have a clear understanding of her view of their practice. J felt strongly that the students should know how well they were performing and be aware of any areas where they needed to improve in order to focus their learning so that students did not receive any "surprises" when they arrived at the summative evaluation stage. J gave feedback to the students verbally in the clinical area and again, in writing, in her weekly anecdotal notes (her "instructor notes").

Verbal feedback. Discussions around J's interactions with students almost always included an example of her giving students feedback about how well they were doing and how they could improve their practice.

But you can tell that she has done a lot of research. She is on top of what she needed to do. Had some problems with organization but we talked about that at the end of the shift and the changes that she could make to be more organized tomorrow. So we will see how that goes.
She just needed some tips about organizing her team sheet and how to approach that room because she had all four fellows in the one room she was making way too many trips in and out of that room, thinking about how you can go do a number of activities in that room and then out. (TA Oct 26/27)

J admitted to an allegiance to the "tell it like it is" school of feedback. She did not hedge or "beat around the bush" when she gave students feedback, preferring instead to be clear and direct. In her view, this was a caring practice.

I also think that when you are giving feedback, whether it's formative or summative that being honest with your feedback, not couching the feedback that you're giving, like if something needs to be worked on, telling them honestly and giving them the example that makes me feel that way. And also one of the things that I hear students say sometimes is that they always hear everything's wonderful, wonderful and they are never told what they need to work on. But being able to tell them what they are doing well, what their strengths are but also being able to pick up things that they do need to work on, things that will help them expand and improve and develop their practice. (TA Oct 26/27)

J believed it was important to give feedback in a timely manner; however, there were occasions where J chose to delay giving feedback. If she was feeling like she was reacting personally to the student or the student's actions, she would wait until she had time to get away from the situation, to reflect, and to analyze her response. J also withheld corrective feedback if she thought that doing so would increase a student's confidence level. For example, when reviewing the last written work from the student who had successfully completed her LC, J could see that the student had problems with organizing her thoughts. This was the only negative comment J could think of about the student's work that week, both clinically and written.
J made a decision to wait for the final evaluation meeting to discuss organization in a formative manner (i.e., it was not on the final evaluation document), in order to let the student experience a week where she did not receive any negative feedback from J.

I am going to talk about it because I do think it will help her but I felt really it was so small compared to what she had gained. And I didn't want to take anything away from her because, and it was really telling this week when I handed her journal back to her on Monday she said to me, "Is there anything in there that I am going to have to address?" and I said, "No you did a really good job this week." And you could just see the relief. (Nov 28)

Written feedback. J provided students with a copy of her anecdotal notes from each week's clinical practice. Her weekly notes to the students contained detailed accounts of the student's practice and the CNT-student teaching-learning encounters. The written notes reflected differing amounts of feedback given to the students. Students with whom J spent a lot of time received several pages of detailed notes. The student on the LC also received more detailed notes, particularly in the area of practice that was the focus of the LC. Students on the team medication experience received the shortest notes, as their experience focused evaluation around a smaller number of clinical competencies.

In her instructor notes, J communicated her thoughts about the student's practice. She organized her descriptions of their practice under the five domains of practice and frequently used the PAF wording so that the student could see where their practice fit with respect to the course expectations. J regularly stressed points about the student's practice, or from their discussions, that she felt could make a difference in their practice.

You were conscientious about sharing with [client initials] accurate and appropriate information concerning the changes to his foot.
You recognized that providing [client initials] with information empowered him to make good decisions regarding the care of his foot (e.g. keeping it elevated to reduce swelling) and allowed him to participate more fully in discussions with the physician. (Instructor Notes, Nov 9/10)

The written notes were also a forum for her to provide encouragement to the students. This was apparent in the supportive language that she used.

I'm proud of the quality of nursing care you gave clients and that your focus always remained on them and not yourself. This is no easy feat with all the demands placed on you and all the new experiences you had to contend with. Your high standards of care were evident over the two days. Keep up the excellent work. (Instructor Notes Oct 26/27)

The instructor notes were one of the ways J kept track of the student's learning needs. These notes were also a formal record of what J had communicated to the students about their practice; for example, the key discussions and learning that occurred, or areas on which students were working. On one occasion, J had discussed an aspect of Student B's practice "off-the-record". Problems arose when B repeated the behavior. Because J had treated the first incident informally, she felt she could not utilize it officially to establish a pattern, or record it on B's final evaluation. Reflecting on this incident, J said that she learned, "if it is important enough to discuss, then it should be recorded".

Formative Decision Making

J made evaluative decisions throughout the rotation. In general, most of her evaluative acts were formative in nature. Every formative encounter had an educative and an evaluative component. As she worked with a student, J decided on strategies to assist the student to correct deficiencies and improve her practice (educative intent) and she evaluated the student's practice to decide how the performance fit with the expected level (evaluative intent).
J's teaching focus was evident throughout the data. There were numerous comments on the TAs about students' performance, the teaching strategies that followed, and the re-evaluations of this aspect of the practice. Many of J's evaluative remarks were framed in terms of the student's learning, particularly those instances where the student's initial attempt required instruction and guidance, or where the student made an error or missed something. This student learning was communicated by words such as "learned to", "gained an increased awareness of", "becoming more skillful at", and had a "valuable learning experience". She also made comments about the student's ability to accept, seek, and use feedback.

Dressing change QID. Still having problems with asepsis. Took out a dressing tray. Showed what I wanted and then a return demo. The third dressing was well done. So did afternoon dressing on own. It wasn't a difficult dressing. Took three opportunities to correct minor breaks. I was a little concerned about that. (TA Nov 23/24)

Throughout the formative process, J "fit together" samples of student performance and learning to see "what the whole picture looked like". If the developing pattern of student performance was as expected, J did not actively seek further evidence in that area. In these cases, samples of behaviour were integrated at the end of the rotation in a summative evaluation. If the student performed poorly in a new learning situation, J did not make a firm judgment initially, but would note it to see if a pattern of poor performance was emerging. She attempted to obtain similar opportunities so the student could practice and improve, and also to evaluate how much growth was occurring. Sometimes, another sample of behaviour produced a different picture of the student's ability, in which case J had to consider the new data and revise her conclusion.
The revised conclusion could be that the student's practice met the course requirements, did not meet the requirements, or was inconsistent in meeting the requirements. In the latter two instances, J continued to sample in that area in order to arrive at a definitive judgment about the student's ability. Inconsistent performance was problematic in terms of making summative conclusions about the student's practice.

(S) If it turns out she is inconsistent, then what are you going to say? (J) Well, I will have to go back to the PAF. There are key sections where it says consistent or independent, look for those words and see, in the areas of concern, whether those words come up. If it says consistently applies and integrates knowledge, then she wouldn't be consistently doing it, or if it says with minimal guidance and she's not minimal guidance, she's still requiring a lot of instructor guidance. Those would be some of the key words that will help me determine that. (Oct 31)

When a student did not change her practice following feedback, J asked herself why the student did not appear to be learning from the previous formative session(s). When the lack of growth continued despite further practice and teaching strategies but the whole picture was satisfactory, the area of substandard practice was recorded on the student's final evaluation. When the area of concern was of a serious nature, i.e., it represented unsafe practice and/or was far below the expected level, J developed an LC.

Formalizing the Teaching and Evaluation Distinction: The Learning Contract

J said she was committed to student learning, but when there was evidence that learning was not occurring and it appeared that the student's practice might pose a danger to clients, J felt that "the prudent thing to do" was to formally notify the student that her/his practice was substandard.
In her dual role of teacher/mentor and evaluator/judge, J balanced several considerations in these situations: (1) attending to the students' right to adequate opportunities to develop their practice, to fair and objective evaluations, and to clear and timely notification of their status; (2) protecting current and future clients from unsafe practitioners; and (3) attending to the administrative requirements of the educational institution.

The purpose of an LC was to focus J and the student's teaching/learning efforts on the deficient area of practice, and to notify the student of the seriousness of the situation. The LC was also a formal record of the concern, the attempts made to assist the student to resolve the concern, and that the student had been apprised that her practice was below standard. The LC accentuated the evaluation process, as both J and the student knew that a summative decision would be required based on the student's practice in the remaining clinical experiences.

The evaluation was today and I feel it went relatively well but I have a nagging concern. And my concern is I still don't think Student B gets how serious her knowledge deficit is. You know, on the evaluation I used some fairly strong examples to illustrate the point of the weakness in her knowledge, and we went through the whole learning contract and she didn't seem to be registering that this was a significant step that we were taking. So I needed to say to her, "B, are you aware that if you are unsuccessful in meeting this contract that it will mean that you fail this rotation? That you will have failed clinical and cannot go on?" And there was a slight glimmer from her that that piece of information was being acknowledged. (TA Oct 20-23)

J put many hours and much effort into working with a student with an LC to identify what the source of difficulty was and to develop strategies to assist the student to practice at an adequate level.
J thought that it was possible to work in a partnership with the student and to arrive at a mutual decision at the end. She also believed that, for students to acknowledge that their practice was not meeting the standards, high levels of self-awareness and ego-strength were required. It was more typical that, when a student failed the LC, J had to make the final decision herself. J thought her decision making regarding an LC was more weighty because the students did not have an opportunity to be assessed by another CNT. Although J did consult other faculty about her thinking leading up to the LC to obtain a second opinion, she generally made the decision on her own and then reported it to the faculty group.

J stated that no matter how caring or fair she perceived she had been, evaluative decision making with an LC was a tension-filled process that caused angst for both J and the student. She said students saw the intent of the LC as strictly evaluative. Students viewed being on an LC as stigmatizing and even as punishment. She acknowledged that this feeling was not entirely unfounded. Learning contracts had long-term consequences for students in terms of being considered for awards, and in limiting their opportunities for out-of-town clinical placements, particularly the much desired international placements. J believed that students with an LC in their file were "different" from the other students, suggesting a view of the LC as evaluative. Yet her final analysis of the LC experience with Student B was clearly educative. She viewed her work with Student B as an example of CNT-student partnership in teaching-learning.

The Final Evaluation Process

J realized that the end product of the evaluative process, or the "whole picture", was sometimes accurate and comprehensive, and sometimes less so.
J was unable to reach a definitive conclusion about an area of a student's practice when she did not have enough samples, as was the case with the condensed rotation where problems often did not become apparent until the end of the rotation. When J did not have enough data to feel confident concluding something about an area of a student's practice, she would record this on the final evaluation as an area where there were insufficient opportunities for the student to learn and for her to make an evaluation.

She correctly assessed the wound and documented it, so I have no concerns about that, it's just, one of the areas I've identified is, she has to work on aseptic technique with psychomotor skills and will need more opportunity to practice though, going on her evaluation. It's unfortunate that I only had them for three weeks and picked it up in the third week, if longer time would be able to assess better, if just the result of not enough opportunities or if this is a real concern. (TA Nov 23/24)

Student Input into Evaluative Decisions

J felt that she worked mutually with the students in evaluating their practice; however, she acknowledged that her evaluative input was weighted more heavily than the students', particularly with respect to the clinical judgment domain. Student input into their evaluation occurred in several ways: self-evaluation of their practice while in the clinical area (most commonly following skill performance), discussion of their practice in their written work, their written final self-evaluations, and co-writing of their strengths and learning needs for the final evaluation document.

View of student input. J held that self-evaluation was an important part of nursing practice; however, she admitted that she may not be doing enough to help students become skillful at this process. She believed that the student's self-evaluations should be weighted in the evaluative process.
She did not think, however, they should be weighted equally with the CNT's until much later in the program, because she thought it "took awhile" for students to learn what quality nursing practice looked like and how to judge themselves against the indicators on the PAF.

A major insight occurred from J's examination of the topic of student self-evaluation in the research. Because of the shortened rotation, she could not conclude with certainty around some areas of practice for a couple of the students. This led to her realization that the student probably knew more about certain areas of their practice than she did; for instance, what went on in many of their interactions with clients and family members or what occurred with skill performances when J was not there. J then had to question why she had never looked at the students' self-evaluations when she reviewed their past records. She realized that she may not always be correct in assuming that the faculty's evaluation represented the most accurate picture of the student's practice.

J concluded that there were different factors influencing the trustworthiness of the CNT's and student's pictures. She thought a major problem with student self-evaluations was that students tended not to recognize how well they were practicing. For example, according to J, Student E had a good level of knowledge, yet at the end of a Kardex discussion, when summarizing the discussion, E would comment only on the things she did not know. J stated that Student C, who was evaluated by J as one of the strongest students in the group and who had received glowing feedback from her over the rotation in this regard, was surprised at her final evaluation session when she read J's summary of her strong performance over the rotation. J had seen a variety of self-evaluations over the years, ranging from thoughtful and detailed to "something slapped down on a piece of paper a half hour before".
She thought that the process of self-evaluation was only as valuable as the student perceived it, and she put more weight on the self-evaluations where she could "see" that the student's thought and effort had been put into it. However, she also realized that limited self-evaluations may be a natural outcome of the many competing demands on the students' time and the fact that they were required to write a self-evaluation every six weeks.

Self-evaluation in the clinical area. J's practice was to have students evaluate their skill performance before she gave her evaluation. In this way, she could assess the student's ability to self-evaluate, and it allowed her to adjust her feedback in cases where the student's view did not match J's. In the TAs and on the instructor notes, J occasionally commented on students' ability to identify areas where they had made a break in sterile technique or other examples of areas in which they could improve. There were no examples in the research of students' inability to accurately evaluate their performance in psychomotor skill practice.

Students' written work. Many of the examples J used in making her evaluative conclusions came from the students' written work. In the DMM care planning exercise, the students were required to write about and evaluate the care of one of their clients. There were also examples of student practice that J did not observe directly, especially related to the competencies of the Health and Healing and Teaching-Learning domains. In addition, J used data from the written work to evaluate the students' ability to accurately interpret and evaluate the quality of their practice. A number of these examples were recorded by J on the weekly written notes and were incorporated into the final evaluation documents. It was not known what J did when the students' written work contradicted her judgments about their nursing practice.

Student final self-evaluations.
Before coming to the final evaluation session with J, the students wrote a final summary of their practice under each of the domains and identified their strengths and learning needs. These documents varied in depth and detail. None of the five that I analyzed were as comprehensive as J's, and several had claims with no supporting evidence. The majority included a balance of strengths and areas to work on; there was only one student evaluation that was all positive. Further, in all cases the students' general evaluations of their performance matched J's quite closely.

J and the student brought their written summaries to a final evaluation session and combined them to produce the final evaluation document that was put on the student's file. The final document contained J's written evaluation, the student's self-evaluation, and a final summary that they wrote together. There was not always agreement about the student's learning needs. J and the student would discuss the issue until they came to an agreement.

And then I asked her about what I wrote in terms of getting involved more with people going through life transitions and delving a bit more into how they're coping and what their issues are. And she agreed and disagreed with that statement. She said she did talk to clients about their issues that she felt that there was always room for improvement. She perhaps wasn't always the most skilled at talking about it but she didn't see it as a big learning need of hers. So with not having a lot of data to really base that on, we decided that it wasn't going to go down as a learning need on her evaluation. (TA Dec 11)

If a student did not agree with a learning need that J had identified in the written evaluation, and J felt she had enough evidence to support her conclusion, she discussed her reasoning for the judgment with the student.
She had never had a case where the student disagreed after hearing J's rationale, but she said that if it did occur, the power to make the final judgment would ultimately rest with J and the learning need would be recorded.

Formative in Summative

An insight that J gained about her evaluation practices in the research was that her final evaluation sessions with students were also teaching sessions. She was committed to spending an hour with each student discussing their respective evaluations and deciding on areas of strength and where the student should go with their learning in the next rotations. She said that the summative evaluation led to formative evaluation that connected the student across rotations. The written final evaluations were placed on the student's file in the school of nursing, adding to the "whole picture" of the graduating student at the end of the program. The student's file was also used to communicate to other CNTs about the student's practice. J consistently documented areas of the student's practice that she thought were "still in progress". When she had identified an area that might need work, she wanted it to be followed through on in the next rotations; i.e., for the student to have opportunities to develop in that area, and for the CNT to evaluate that area more fully. She was especially careful to note areas where she had a gut feeling that the student's performance was weak but she could not substantiate it.

Bias

One of J's evaluative values was to treat students fairly. She was aware of several of her biases, such as the types of students that she preferred to work with, and how she acted to control for these in making evaluative decisions. An important influence on her evaluative ability was the relationship she had with the student. J felt she had a good ability to recognize the impact of her relationships on the student's practice.
J was aware of the possibility of a "halo" effect with certain students; i.e., she needed to moderate her tendency to reward and evaluate positively students who were hard workers, were skillful, whose personalities she liked, and who had interests similar to her own. She regularly checked herself for these sorts of instances of favoritism. J was also aware of the types of students that she did not like working with or that "pushed her buttons", such as students who did not appear to value the things she valued.

And I think you have to be careful because certainly it is a lot easier to work with someone who is trying hard and doing everything you suggested versus the person who doesn't seem to be putting the effort into it. And you do, you have to be questioning yourself all the time, whether you are treating them differently based on what that relationship is and I am certainly aware of it and have had to go home sometimes. Sometimes I choose not to speak to the student because I need to step away and be really clear about what I want to say and make sure I'm not reacting to something that they've done or not done. That I'm sticking to what the issues are and it's not a personality thing. So I am aware of it but I can't say that I handle it well every time, that's for sure. (Oct 31)

J's self-reflexive strategies included being alert to how she was experiencing her relationship with a student, i.e., how the student made her "react inside". Early in the rotation, she discussed how one student's constant requests for extensions made her feel angry, even though she understood how the student's context influenced this behavior. She also identified that she had pre-judged another student based on the student's appearance and negative comments made about this student by colleagues. J considered whether she was treating a student differently by asking, "If this was another student, how would I react?"
During the research process, J was asked several times to consider whether she was evaluating the students differently based on their perceived ability and whether she was being lenient in the case of Student B because of her desire to see her succeed. Based on the data and the reasoning she presented to explain her evaluative judgments, it was clear that J evaluated students on their performance and not generally on qualities unrelated to the performance. Also, evidence from the written notes and final evaluation documents indicated that J's evaluative comments were a reflection of the incidents as described in the TAs, and that each student was recognized for their strengths while at the same time receiving what appeared to be a fair assessment of the areas that needed to be worked on.

At the end of the rotation, she observed one of the "stronger" students make several breaks in aseptic technique. The next day, one of the students she perceived as "weaker" did the same thing. Neither student had a further opportunity to try the skill again. In considering both instances, she concluded that the hectic pace of the unit had contributed to the poor performance in both cases; that is, it was typical of this level of student not to perform well when performing a skill for the first time, under pressure. The incidents were recorded on both students' final evaluation documents with a comment that more opportunities were needed for the student to work on this area of their practice. There was a difference in J's ability to conclude what the poor performance meant in terms of each student's "whole picture". Because J had more data around this area of practice for the stronger student, she was able to conclude that the poor skill performance was not typical for her. She was unable to make this final conclusion about the weaker student's usual skill performance because of a lack of data in this area.
Conclusion

This chapter summarized the research findings around J's evaluative practices. Clinical evaluation was seen to be a complex and dynamic process that was embedded in the teaching-learning process. J utilized a number of practices for collecting data and determining its meaning with respect to the student's level of performance and the teaching and evaluating strategies that should follow. The data showed that J attempted to be accurate, objective, fair, comprehensive, and caring in her evaluative decision making, particularly when an evaluative decision could result in the student failing the course. What J chose to focus on in evaluation was influenced by her own way of practicing nursing, how nursing was practiced on the unit, the expectations of the workplace/employers and the licensing body, and the ideal view of nursing practice envisioned in the curriculum.

From the start of the rotation, J utilized several methods to collect data about the students' nursing practice. J's data collection and evaluative decision making practices were influenced by her ability to find opportunities to sample student practice, her skill at data collection, the amount of time she had with each student, the nature of her relationship with the student, and whether the intent of her thinking was teaching or evaluating. The number of clinical days she had with the students turned out to be a major determinant of her ability to compile an accurate and comprehensive picture of the students' practice. J identified that she had made many changes to her evaluative practices as a result of her development as a CNT. Her greatest gains were in developing awareness of the impact of her practices on the student's performance and in her ability to create an educative environment in the clinical area where evaluation was considered an important part of the student's clinical learning experience.
CHAPTER FIVE

Discussion, Conclusions, and Implications

In this chapter, I discuss selected findings of the study with respect to some of the concepts and theories underpinning the clinical evaluation of nursing students. The purpose of this discussion is to address the key findings in the study in an attempt to expand current understandings of CNTs' clinical evaluation practices. The chapter ends with a discussion of the implications that the findings have for nursing education and for future research.

Discussion of the Research Findings

The findings confirmed those of other researchers (e.g., Bergman & Gaitskill, 1990; Duke, 1996; Girot, 1993a; Lankshear, 1990; Paterson, 1991; Stewart, 1991) that evaluation of nursing students in the clinical setting is a complex process, complicated by several problems inherent in the clinical learning situation. Producing a comprehensive and accurate picture of a student's nursing practice requires the use of multiple sources of data, as well as precision and fairness in data collection and interpretation. To promote the accurate, comprehensive, and fair evaluation of students requires evaluative practices that can effectively address issues around data collection, decision making processes, the fact that students are learning while they are being evaluated, and the role of student input into the summative evaluation process.

Data Collection

The research findings supported several things that are already known about how CNTs collect data in order to make judgments about students' clinical practice. In order to provide a comprehensive and accurate evaluation of students' nursing practice, CNTs have to determine which aspects of nursing practice should be evaluated and then utilize appropriate methods for assessing each area (Friedman & Mennin, 1991; Girot, 1993a; Luttrell et al., 1999; Orchard, 1992).
Deciding what to evaluate requires decisions about which aspects of nursing practice are of value and what aspects of clinical practice are integral to determining competence. Several issues arose from the findings related to sampling, the clinical assignment, rotation length, data collection methods, and the focus of evaluation.

Sampling, Safe Practice, and Level of Supervision

The evaluation of students' nursing practice is based on samples of the students' practice (Chambers, 1998; Oermann & Gaberson, 1998; Orchard, 1994b). Sufficient samples are required in order for CNTs to accurately describe the students' current practice, as well as make predictions about their future capabilities (Buckingham, 2000; Hill, 1998; Orchard, 1994b). Although this is well known in nursing education, there has been little written about the sampling practices of CNTs. The study findings provided information about how a CNT makes choices about the type and amount of data to collect, how sampling proceeds in order to determine safe/unsafe practice, and how the unpredictability of the clinical assignment and the CNT's lack of time complicate sampling.

It would appear from the findings that sampling occurs both in a controlled and an uncontrolled manner. A CNT can exert control over sampling by selecting clinical experiences that will provide specific examples of the students' capabilities. J outlined several factors that guided her decision-making around sampling, such as her plan for the rotation and the practice opportunities available in the clinical setting. Developing the plan required her to understand the clinical competencies that students were expected to demonstrate and then to anticipate which clinical experiences would enable her to evaluate for each competency. As the clinical day progressed, sampling often became more haphazard because of the complex and unpredictable nature of acute care nursing.
An important finding was the way in which the students' level of performance affected sampling. Students were expected to display evidence that they were meeting the clinical competencies outlined in the PAF, to demonstrate growth over the rotation, and to provide safe and competent care. When a student's practice met these expectations, J did not sample as frequently, whereas students whose practice was questionable were investigated further. This suggests that sampling was directed in part by a decision making process regarding how much direct supervision to provide, based on J's determination of the student's ability to practice safely.

Ensuring that students provide safe care is a major responsibility of CNTs (Oermann & Gaberson, 1998; Orchard, 1994b), but, in the absence of clear policy outlining what does or does not constitute safe practice, the decision is left to the discretion of the CNT (Scanlan, Care, & Gessler, 2001). Deciding how much to supervise students involves a fine balance between promoting student independence and ensuring client safety. Over-supervising students may lead to increased student anxiety and decreased student self-confidence, while on the other hand, insufficient supervision can place clients at risk (Orchard, 1994b).

The findings reveal that over time, a CNT develops methods to help determine whether students are safe or unsafe. J appeared to have developed her ability to evaluate safe practice by initially over-supervising students. Over time, she amassed a store of patterns of the kinds of student behaviours that she thought were indicative of safe practice, such as the students' preparation for and knowledge of the client's care, the types of questions they asked, and whether or not they came to her when unsure of something.
She commented that she also looked for indications that students were nervous or unsure about how to proceed with their care, which she took as a sign that the students required increased supervision to ensure safe practice. This may be helpful to students, but it may also increase the student's anxiety. As she supervised students, she was looking for a pattern of student behaviour. When students were demonstrating safe practice, she allowed them to work independently in those areas of practice that she had already evaluated. It appeared that, in J's professional judgment, students who demonstrated indicators of safe practice when supervised would continue to practice similarly unsupervised. When students appeared unsafe, J continued to supervise them until a pattern was apparent. If the pattern indicated safe practice, then the student was supervised less frequently. When the pattern suggested unsafe practice, the student was not able to practice independently and often an LC was required. Judging students on the basis of the consistency of their practice over the rotation is important with respect to promoting a just evaluation process (Orchard, 1994b). It would be unfair to determine that a student's practice was inadequate on the basis of one sample, unless the breach of standards was severe, as in cases such as the impaired student or misappropriation of agency or client property. When students' initial practice appears unsafe, it is prudent to supervise them more closely until the practice improves and sometimes, to remove them from the clinical setting (Orchard, 1994b; Scanlan et al., 2001). Sampling and supervising less frequently in the case of students who are performing at or above the expected level could be problematic if the criteria used by the CNT to judge safety to practice are inaccurate.
Also, students performing adequately do not get the same amount of the CNT's time, which may mean that a problem area goes undetected, or that students do not get the benefit of the CNT's teaching expertise, which might help them smooth out their performance or even move to a higher level of practice.
Clinical Assignment
The selection of clinical assignments has received much attention in the literature with respect to student learning opportunities, but it is seldom discussed in detail with respect to clinical evaluation (Gaberson & Oermann, 1999; Hill, 1993). The study findings indicated that the nature of a student's clinical assignment was key in terms of the opportunities that were provided for evaluation. However, the practice of utilizing the assignment to provide specific evaluation situations was also problematic because of variations in the quality and frequency of available opportunities and the fact that the physical condition of acute care clients can change unpredictably and rapidly. J had strategies for dealing with the uncertainty in this aspect of clinical evaluation; however, sources of variability in the clinical assignments affected her ability to evaluate some of the students in her group. Strategies such as J's overall plan for the rotation seem useful for guiding selection of assignments that could provide appropriate opportunities for a comprehensive evaluation of each student. A plan such as this could direct the CNT to core competencies on which all the students must be evaluated. The essential competencies need to be broad enough to be evaluated across a wide range of clinical situations; for example, supporting clients as they deal with life adjustments necessitated by their health situation, or monitoring a client's recovery from surgery. In this way, many different assignments are able to provide the same opportunity for evaluation and the CNT is able to evaluate the most essential areas of practice no matter what the client mix.
A CNT must also be clear on the many ways that a student's competency can be expressed. This requires that a CNT be skilled in observing a student's practice and discerning important qualities and characteristics of nursing practice (Bevis, 1989). It has been suggested that clinical evaluation systems be flexible enough to allow for evaluation of the unintended and unexpected outcomes that occur regularly as part of the clinical learning process (Malek, 1988; Neary, 2001). Evaluating in an acute care setting requires CNTs to be flexible. Unanticipated events often alter the original situations that were intended to give students the chance to demonstrate specific behaviours (Buckingham, 2000). For example, unexpected changes in client status can turn an appropriate assignment into one that is beyond the student's capabilities. CNTs must be prepared to remove students from such an assignment and then determine which aspects of the student's practice can be evaluated. With certain levels of students, the change in the assignment may provide opportunities to evaluate other aspects of the student's practice. In this study, working with a changing client situation was an expectation of students. Thus, J was able to utilize situations where the client's condition changed to evaluate the students' observation and reporting skills as well as their ability to take appropriate action. Because the clinical area was new learning for third year students, she did not expect the students to manage the situation independently. They were evaluated on their ability to function with guidance from her and the nursing staff. This also speaks to the importance of selecting clinical settings that match the abilities of the student group. It would not be appropriate for beginning level students to be expected to function independently on an acute care unit. J did not utilize methods to evaluate students' nursing practice outside of the clinical setting.
One strategy for dealing with the issue of variability in clinical evaluation opportunities is to use standardized performance examinations. The school of nursing where J worked did not use this evaluative method because of the cost and time required to develop and implement valid and reliable clinical examinations. Students were evaluated on the performance of discrete psychomotor skills in the laboratory setting as they went through the program, but standardized competency exams did not exist. The faculty tended to believe that what was lost in terms of standardization was gained in the ability to evaluate students' ability to deal with multiple client variables in real life situations. Nonetheless, incorporating a standardized evaluation method into the nursing program's evaluation system would expand the CNTs' ability to describe and judge the students' nursing practice (Buckingham, 2000; Nicol & Freeth, 1998). Finally, the study raises an interesting question about the number of clients that should be assigned to students at any one time. On the one hand, having the students involved in numerous client situations provides a greater number of evaluation opportunities for the CNT, as well as exposing students to the realities of nursing practice; that is, that nurses are expected to work with several clients at once. On the other hand, the more activities that the students are involved in, the less time the CNT has to spend with any one student, and the greater the CNT's responsibilities with respect to coordinating and overseeing the students' client care.
Rotation Length
The findings indicate that the amount of time a CNT has with each student has a significant impact on clinical evaluation. Many CNTs agree that four to six weeks is not long enough to develop a comprehensive and accurate picture of the student's practice (Chambers, 1998; Gallagher, Bomba, & Anderson, 1999; Infante, 1985; Paterson, 1991).
The number of clinical weeks and the number of hours in a clinical course must provide opportunities for CNTs to see students practice with a sufficient number of clients requiring varying types of nursing care. It seems logical that with more clinical days, the CNT is able to evaluate students in many different situations and build patterns of performance over time. Having a greater number of interactions with students also allows more options with respect to students who take longer to reach the expected level (for example, the "slow starters" and borderline students). In addition, increased CNT-student contact is beginning to be recognized as an important variable in the development of the CNT-student relationship (Gaines & Baldwin, 1996; Hornak, 1997; Groening, 1999). Unfortunately, although rotation length is mentioned frequently in the literature on clinical evaluation, it is an area that has received little research attention, particularly with respect to optimal rotation length. In this research, it was found that the short rotation length, combined with variability in the students' assignments, resulted in an uneven evaluation of the students; i.e., J did not collect the same amount of data about each student. During the shortened rotation J did not have enough days with the students to find opportunities for them to practice and be evaluated across a range of the competency areas. She had to make summative conclusions about some areas of practice at the only time the practice occurred, because this was the only opportunity the student would have with that particular situation; for example, students working with the complex clients and the team medication experience. In addition, J was able to gather much more data on some students than on others. For instance, she had insufficient data on a couple of students whose clients' conditions were less complex than other students' clients.
Two of the students had light team medication experiences, meaning that J was not able to evaluate their efficiency and problem solving to the extent she would have liked. She was also unable to make a final conclusion in the one instance where the student's client became a surgical emergency on the last day of the rotation. J could not determine whether this student's overall disorganization and her poor performance in a final dressing change were the result of being rushed to complete her care to facilitate the transfer of the client to the operating room, or were representative of the student's usual way of practicing. J commented that, given more clinical days, she would be able to adjust the clinical assignment to gather samples in the areas where she was lacking. J was firmly opposed to the condensed rotation format. One of the outcomes of the research process was that she was able to utilize the research data to make a convincing argument that assisted in having the Semester V clinical rotations changed to the original format of six consecutive weeks. During the study period, three students in the third year class were on learning contracts. At the end of the semester, J stated that all three of the CNTs who had initiated the LCs (J being one of them) would not have put the student on an LC if they had worked with the student over more days (a normal six week rotation). Learning contracts were instituted because the CNTs were required to make a summative decision at the end of the first rotation based on insufficient data. The CNTs had felt that the LCs were needed because, if it turned out that the student's practice was actually below the expected level, the students would not have enough notice and chances to work on the problem area. The students and CNTs underwent unnecessary stress, and the students had an LC on their files, because of the premature decision making that the short rotation forced.
Data Collection Methods
One of the problems of clinical evaluation is that CNTs tend to utilize a limited number of methods to collect data for use in evaluative decision making (Marrow & Tatum, 1994; Morgan, 1991; Mogan & Warbinek, 1994). The findings suggest that observation and questioning are key practices for assisting a CNT to balance the dual responsibility of ensuring adequate learning and evaluative opportunities while at the same time ensuring that clients receive safe and effective nursing care. The findings also indicate that discussion and questioning, in combination with the student's written work, have the potential for accurately evaluating the students' thinking processes.
Observation. The findings confirm that, although CNTs utilize a combination of data collection methods, observation is still the most common evaluation practice in clinical settings (Gaberson & Oermann, 1999; Gomez et al., 1998; Marrow & Tatum, 1994). With respect to observation, CNTs need to be skilled in two areas: noting and describing the important aspects of what they are seeing, and interpreting observations with respect to judgments and conclusions about the student's practice. Given that observation plays a key role in the evaluation of nursing students, it is surprising that there is so little written about what CNTs actually do observe and how they can become more effective at observation. It appears that CNTs need to be aware of the potential negative impact of observation on a student's performance. J stated that she handled this problem by focusing on the client during procedures, restricting unannounced observations, or "popping in," and by factoring in student anxiety when making judgments about a student's performance. It appeared that her key strategy for reducing student anxiety was to establish observation as a teaching, rather than an evaluative, method.
Another important finding is how this experienced CNT perceived an evaluative situation in much the same manner as Benner's (1984) expert nurses perceive nursing situations. Novice CNTs may intuitively know that something is amiss with a student's practice, but they do not know how to articulate the problem and are not confident in relying on their intuitive sense of students' practice when arriving at final judgments (Duke, 1996; Paterson, 1991; Wolff, 1998). In this study, it appeared that, as an experienced CNT, J perceived student practice as a whole. She commented that her intuitive grasp of what she was seeing allowed her to zero in on what was salient in a particular instance of student practice. She was able to recognize subtle distinctions in the student situation that told her something about the student's ability without having to consciously think in terms of competencies or performance indicators. This appeared to be an important part of her decision making around a student's ability to practice competently. Another interesting finding is that this experienced CNT, like expert nurse clinicians, knew more than she could articulate. This fact became apparent when J noticed how much difficulty she was having articulating her evaluative thoughts on the weekly tapes. The TA process required her to use conscious, analytic thinking, whereas J was finding that, in the clinical area, she evaluated in a more global and integrated manner, observing many things and unconsciously combining observation and interpretation. She was not always aware of all that she was observing or of how she recognized salient aspects of a situation. CNTs must be able to articulate their observations and conclusions in order to analyze their thinking for accuracy and fairness, provide students with feedback, and meet the documentation requirements of the educational institution.
In providing students with feedback in the clinical setting, J discussed her immediate impressions in terms of specific principles and some of the general observations she had made. When she had time to reflect after the clinical days, she recorded her observations in greater detail and provided students with her interpretations with respect to the level of performance, the competencies of which the performance was reflective, and how she saw the student's performance with respect to the "bigger picture" of the student's nursing practice. Although she was aware of her use of the PAF to guide her observations and interpretations, she did not identify all of the criteria that she was using. The expert knowing of CNTs needs to be studied further to provide direction for faculty development and the education of new CNTs. Exploration of a CNT's evaluative practices could proceed in the manner Benner (1984) has suggested for nurse clinicians wishing to understand expert nursing practice. Faculty development sessions could focus on assisting CNTs to understand their recognitional skill by identifying significant evaluative situations, describing the context in detail, and exploring the meaning of their thoughts and actions. Observation of certain aspects of nursing practice is relatively straightforward; for instance, basic principles and procedural steps can be noted on a checklist, be it a formal one or one within the CNT's mind. But what about those aspects of nurses' work that are complex and often hidden from the observer? Areas of practice that are more complex require evaluation of the appropriateness of the performance within a particular context; for example, the best way to interact with a particular client in the process of learning to live with a colostomy, or how to administer oral medications to clients with dementing illness or to those with swallowing difficulties (Neary, 2001).
One of the findings in this study was that J was able to evaluate the students' manner and approach through her observation of students in their interactions with clients and family members, and discussions with the students around this area of nursing practice.
Evaluating student knowledge base and thinking processes. Observation is often incomplete and interpretation may be incorrect if the CNT does not discuss her thoughts with students to obtain their perceptions and examine the thinking underpinning their actions (Oermann, 1997; Reed & Proctor, 1993). CNTs need methods whereby they can understand and evaluate a student's thinking process. In the study, discussion, questioning, and written work were used as the primary methods to evaluate the students' knowledge base and clinical decision making. Evaluating clinical decision making and problem solving requires that CNTs understand how nurses make clinical decisions, and that they use evaluative methods that enable them to discern the student's thinking, reasoning, and knowledge. In order to evaluate the information and thinking processes students use, it has been recommended that CNTs find ways to help students articulate their decisions, the reasoning underlying each decision, the different options and perspectives they have considered, and the theory base informing their decision making (Oermann, 1997). J reported that she used the decision making model of her school of nursing curriculum as one guide for evaluating clinical decision making. She also looked for the students' ability to identify relevant information to consider in their decisions and the logic of their reasoning around each clinical problem. To help students articulate their thinking, CNTs also need an understanding of effective questioning practices, including how and when to use various types of questions (higher order or lower level), and how to create an atmosphere of inquiry rather than interrogation (Wink, 1993).
Research indicates that CNTs do not use questioning effectively (Sellappah, Hussey, Blackmore, & McMurray, 1998). CNTs ask predominantly low level, factual questions to ensure students have the requisite knowledge and to prevent errors in client care. Ineffective questioning practices are frequently cited as an unhelpful evaluative strategy that is CNT centered; creates anxiety, fear, and self-doubt in students; can negatively affect their clinical performance; and motivates them to try to please the CNT rather than develop their thinking in the service of quality nursing practice (Flagler et al., 1988; Loving, 1993; Wilson, 1994). J felt she was conscious of her questioning skill and the potential impact it had on students. She stated that she had developed her questioning practices as a result of her early experiences as a CNT, where she had utilized all of the unhelpful practices typical of a novice CNT. J commented that, as her philosophy of questioning changed, she began to use the term "discussion", rather than "questioning", to reflect that the intent of these one-on-one sessions was primarily to help students develop their clinical thinking and integrate the many pieces of knowledge that they had. She appeared to use an educative approach in questioning student knowledge and preparation before client care or new procedures, to help her decide how much supervision the student required. Unfortunately, Kardex discussions and medication reviews were time intensive, and J's ability to hold one-on-one discussions was greatly influenced by the acuity of the unit. She did not have the opportunity to evaluate every student equally with respect to these discussions, partially as a result of the shortened rotation.
In addition, one of the insights that she gained from the research process was that some students are not able to engage in discussion to the same extent as others and that she needed to attend to ways to facilitate dialogue with the more taciturn students in order to obtain a clearer picture of their thinking. Although the literature abounds with discussion of the use of written work in the evaluation of nursing students, research on the use of written work in clinical evaluation is limited (Richardson & Maltby, 1995; Wong et al., 1995). In this study, several points were raised about using written work in the evaluative process. Written work can be used to identify the source of cognitive difficulty with certain students; for example, Student B's written assignments assisted J in understanding that the cause of her knowledge deficit was primarily disorganized thinking. Another point is that written work should be used with caution in evaluating students' thinking and knowledge. Students differ in their ability to express their thoughts in their written work. If the "brief writers" are in a group of "good writers", the CNT may be unfavorably biased. Also, what appears on the surface to be a lack of knowledge may really reflect lack of time or effort on the student's part. CNTs need to be clear with their students on how they are using written work in the evaluative process. It is important for students to know that their care planning work and other such theoretical assignments contribute data for evaluative decision making. Also, CNTs who do not believe that reflective writing should be used in evaluation need to make this distinction clear to students, as well as have practices for bracketing evaluative thinking when reviewing and responding to journals, particularly when finding information in a journal that has implications for the student's evaluation, as happened with J and Student B.
This is analogous to the situation when jurors hear evidence that is ruled out of order by the judge. The jurors are instructed not to consider the evidence in their decision making. The CNT, like the jurors, must be able to bracket their knowing of the information and be prepared to show that the information did not influence their decisions.
Focus of Evaluation
Some sources suggest that CNTs may be evaluating the socialization process of becoming nurses rather than the clinical competence of the students (Campbell, 1995; Hill, 1998). The source of this problem appears to be the differing expectations of various stakeholders. Most official evaluation forms and the competencies required of new graduates reflect nursing as practiced in ideal settings, not the understaffed, under-resourced practice situations of the current health care system (Campbell, 1995). Many employers expect graduates to be able to step right into these health care environments and practice with minimal resources to assist them in adapting to the realities of the practice setting (Bent, 1993; Tracy, Marino, Richo, & Daly, 2000). This discrepancy between the ideal and the real can influence the expectations that CNTs have for what students should be able to do (Campbell, 1995; Orchard, 1992). CNTs need to ask themselves if the students are being evaluated on their ability to cope with unrealistic workloads and inadequate staffing levels. If CNTs evaluate students' ability to function effectively under working conditions that militate against competency, they are effectively concealing defects in the organization and/or financing of the health care system (Campbell, 1995). The research findings raise a question of whether students are evaluated differently depending on the CNT's sense of self as a clinician and her/his familiarity with the clinical setting.
It would seem that a danger of teaching in the same clinical setting for many years is that a CNT may internalize a view that nursing as practiced on the unit is the norm. When the CNT is able to function competently and confidently on the unit because (s)he is familiar with the nursing required on the unit, is the expectation that students be able to perform similarly? CNTs must examine whether they are evaluating nursing practice as outlined in the PAF, or merely reproducing the workplace. This would not be a problem if the nursing practice were exemplary, but, in the current health care environment, it is likely that the CNT is evaluating student ability to handle unrealistic workloads and practice competently in unsafe conditions. In summary, the research findings suggest that CNTs utilize several practices in the evaluation of the students' nursing practice. The quality of the evaluation depends on the CNT's ability to obtain sufficient samples of a student's practice. Sampling, in turn, depends on the opportunities, both planned and unplanned, that are available in the clinical setting and the amount of time that a CNT can spend with each student. It is also apparent that the quality of evaluative data may not depend as much on the type of method used (i.e., observation versus written work) as on the CNT's level of skill with the technique. It appears that accurate, comprehensive, and fair clinical evaluations are best achieved by utilizing multiple methods, a blend of observational, oral, and written techniques, that enable CNTs to evaluate both thought and action (Reed & Proctor, 1993). Finally, CNTs need to be aware of what drives the focus of their evaluation to ensure that their expectations of nursing practice are realistic and appropriate.
Deciding What the Data Means
How do CNTs ensure their interpretations and judgements about a student's practice are accurate and that students are treated fairly and justly throughout the evaluative process?
The consensus in the literature is that fair and objective evaluation depends on the evaluation system and policies of the school of nursing, as well as the individual practices of the CNT (Krichbaum et al., 1994; Orchard, 1992, 1994b; Scanlan et al., 2001). The evaluation system should contain objective and formally recorded course expectations, including the evaluation standards and criteria that will be used to judge the students' clinical practice (Orchard, 1994b). The school of nursing should have written policies and procedures to guide all steps of the clinical evaluation process, with particular attention to the rights and duties of students and CNTs, and the fair and just handling of unsafe and failing students (Scanlan et al., 2001; Orchard, 1994a, 1994b). The school of nursing is also obligated to ensure that the clinical settings that are utilized provide ample opportunities for students to achieve each course objective. CNTs require practices for collecting data and for interpreting and applying the standards and criteria consistently in each student's evaluation; the interpretations and judgements must include a consideration of the contextual variables that may have influenced the student's performance; and the CNT needs to communicate to students both the interpretations and judgements, and the evaluative reasoning that was used. The CNT must also be aware of the personal expectations (s)he holds for student clinical practice and have the ability and means to recognize and deal effectively with these and other potential sources of bias (Oermann & Gaberson, 1998; Orchard, 1992).
Outcome-Based Evaluation Systems
Outcome-based evaluation systems have attempted to provide for objective and fair clinical evaluation through standardizing the evaluation and documentation of students' clinical performance (Krichbaum et al., 1994; Luttrell et al., 1999; Woolley et al., 1998).
In an outcome-based system, evaluation is directed by specific competency statements, that is, detailed clinical behaviors that are to be demonstrated by students. In addition, standardized performance examinations are utilized as part of the summative evaluation process. CNTs and students have reported that outcome-based evaluation systems made expectations of performance levels clear from the onset and reduced the perceived subjectivity of clinical evaluations. The CNTs are therefore better able to accurately describe and classify the strengths and limitations of a student's performance, give students specific diagnostic feedback, and provide students with a tool they can use to self-evaluate their performance (Krichbaum et al., 1994; Luttrell et al., 1999; Woolley et al., 1998). One of the problems with outcome-based evaluation systems is that there is a limited range of nursing practice behaviors that can be objectified. Some of the clinical behaviors described in the literature appeared to be simplistic, somewhat vague, and even infantile, such as "maintaining personal appearance according to program policies, reporting to the instructor and assigned staff when arriving at and leaving the clinical area, and cooperating with others on the health team" (Woolley et al., 1998, p. 363). In addition, observable behaviors do not assist the CNT in evaluating the influence of the uniqueness, complexity, and constraints of the clinical setting on the students' nursing practice (Buckingham, 2000). Outcome-based systems also do not address the fact that, in the clinical setting, CNTs still have to interpret and judge the students' practice in relation to the performance statements. Instead, because these systems emphasize objectivity, the CNT's subjectivity may not be openly addressed.
Qualitative Evaluation Systems
In contrast to the amount of literature that is available on quantitative evaluation systems, little is known about evaluation systems that encourage the use of qualitative evaluation practices. The findings provide some insights into evaluative practices that are based in description and interpretation. The curriculum philosophy of J's school of nursing directed CNTs and students to utilize a combination of objective and subjective methods to evaluate the student's nursing practice. Objective methods included clearly outlining clinical expectations for the rotation in the course manual and the PAF, as well as in discussions between the CNT and the students at the onset of and throughout the clinical rotation. Subjective methods included the sharing of each party's perceptions of the student's practice through verbal and written dialogue, in the clinical area, in J's office, and in the student's written work and J's instructor notes.
Expectations, Standards and Criteria
It appeared that the validity of the evaluation process depended largely on J's ability to understand and interpret the PAF, and to judge the student's practice against the performance indicators. Broad statements of achievement such as those used by J's nursing program require more thought and interpretation on the part of the CNT, in comparison to specific behavioral statements, while also allowing greater flexibility in the variety of clinical experiences that can be evaluated (Buckingham, 2000; Hill, 1998). This can cause problems for novice CNTs in the nursing program. Novices tend to be rule driven, needing increased direction around what specific behaviors they should be looking for when determining the adequacy of a student's clinical practice (Scanlan et al., 2001; Wolff, 1998). In contrast, J had internalized the course expectations and performance indicators.
She had a broad understanding of what the students' clinical practice should look like and was able to judge the appropriateness of a student's practice in relation to each client situation. Regardless of expertise, CNTs must be clear on their expectations and the standards and criteria being used to judge student performance (Orchard, 1992, 1994b). For J, some of the clinical expectations were spelled out in the course manual, such as the number of clients and the level of client acuity that students were expected to work with. It was not clear from the findings whether J communicated her personal expectations about their performance to the students. For example, she confided to the researcher that she did not expect smooth performance at first; that she had definite ideas about the areas of practice where she expected that students required guidance and those areas where they should be working independently; and that she expected students to improve their practice when given feedback and further opportunities. It appeared that a barrier to the fair evaluation of the students was the fact that a large number of the performance indicators on the PAF were not used in the evaluation of the students. This raises the question: how did the students know what their practice was supposed to look like? Because the unofficial intent of the clinical course was to teach and evaluate students on acute care nursing practice, I suspect that J utilized a blend of the Semester V indicators and those from the previous acute care course. This lack of fit between the prevention focus of the semester and the acute care focus of the clinical rotation was well known to both faculty and students.
To promote the fair evaluation of Semester V students, the school of nursing needs to develop performance indicators that more accurately describe the clinical practice that is being expected of the students, or assign the students to clinical settings that will allow them to develop the knowledge and skills of preventative nursing practice. Of interest was J's unconscious use of Bondy's (1984) criteria in evaluating a student's performance. When judging a student's practice, she evaluated for principles of safety and accuracy, understanding of the theoretical basis of the practice, degree of confidence, ability to be client-centered, efficiency and organization, and type and amount of assistance or cues needed to perform the behavior. Some of these criteria were explicitly stated or implied in the performance indicators but others were a part of her own knowing of what constitutes quality nursing practice. J relied heavily on the standard of the "usual student". Because she had seen many students perform in numerous situations, her knowledge of the usual student level of performance allowed her to determine expectations for a student's level of ability. J's global determinations of the "strong", "average", and "struggling" student, and the "generally satisfactory", "satisfactory", and "outstanding" levels of performance were similarly based in her past experiences with students. With some effort, she was able to articulate the practice behaviors that characterized these various levels of student practice. Hill (1998) contends that this sort of norm referencing forms the basis of all criterion-referenced evaluation systems because CNTs are in fact judging students against others: other students, the nurses, and the CNT.
However, the standard of the usual student has been criticized by proponents of outcome-based evaluation systems: "Individual faculty often have operated in isolation, relying on experience and instinct and basing evaluation on some personalized perceptions of what the 'average' student performance is like" (Krichbaum et al., 1998, p. 397). One can appreciate the dangers associated with the subjective nature of the "usual student" concept, yet a similar standard is used in the legal system in evaluating negligent practice and by professional disciplinary bodies in determining incompetent practice. In both cases, the nurse's practice is judged against the prudent and reasonable nurse of the same education and knowledge (Mason v. Registered Nurses' Association of British Columbia, 1979). To determine if a nurse has been negligent or incompetent, the court or the disciplinary panel considers evidence on what the "usual nurse" would do in a similar situation. The use of the "usual student" as a standard appears appropriate whether the practice being evaluated has resulted in harm to a client (negligence) or demonstrates a pattern of carelessness that does not get corrected, despite feedback and opportunity (incompetent practice). To utilize the "usual student" as a standard requires CNTs to have enough experience to determine what the standard looks like. This is an area in which novice CNTs are disadvantaged (Orchard, 1992; Duke, 1996; Scanlan et al., 2001; Wolff, 1998). It is also not known how CNTs communicate their perceptions of the "usual student" to each other.

Contextual Variables

A student's clinical performance is affected by many variables, some of which are outside the control of the student; for example, clinical resources, unexpected changes in client status, and staff attitudes.
As a result, it is important that CNTs are able to consider contextual variables when judging a student's clinical practice (Buckingham, 2000; Hill, 1998; Orchard, 1994b). J considered variables such as the student's past clinical experiences, the complexity of the clinical situation, the student's level of fatigue, and pressure from the staff. However, she did not seem to use any sort of framework to assist her in being consistent in considering influences on the student's performance. Bevis and Watson (1989) outline a framework that has the potential to guide CNTs in determining how the complexity of a clinical situation may be factored into their interpretation of a student's practice. These authors outline six factors that influence the complexity of a learning situation: number of variables involved, amount of structure provided to students, students' degree of familiarity, students' characteristics, degree of intensity of a situation, and level of theory required to practice effectively. The first five appear relevant to the evaluative process. CNTs can assess the number of variables the student has to contend with in a clinical situation. This is particularly important when making evaluative decisions about a student's practice in situations that started out to be within the student's capability but became unmanageable because of changes in the client's condition. The degree of structure in the clinical situation is the second variable that can be considered. Students can be expected to perform more adequately in straightforward clinical situations such as those that follow the usual pattern that students have experience with or situations that follow the textbook pattern. The student's degree of familiarity with a clinical situation is another variable that may help CNTs decide the adequacy of a student's performance.
A CNT can reasonably expect a better level of performance when students have had previous experiences with a particular client situation. A consideration of student characteristics is important when dealing with students from different cultures and students who are shy or lack confidence. A problem with the student's performance such as an inability to be assertive with clients and/or other health team members may reflect cultural or maturational differences and needs to be factored into the evaluative decision making process. Finally, the degree of intensity of the clinical situation can greatly impact the student's ability to perform. Rapid changes in a client's condition and pressure from the staff or CNT to perform quickly can negatively impact the student's clinical practice. CNTs working in systems that acknowledge the subjectivity inherent in the clinical evaluation process need practices that enable them to deal with the fact that a student's performance can be affected by the CNT-student relationship and the CNT's evaluative style (Groening, 1999; Hornak, 1997; Paterson & Groening, 1996). J stated she was aware that her subjective responses to students were shaped by many variables such as student capability, personality, and ability to be open with her. She spoke of the importance of regularly reflecting on her interactions with the students to monitor the influence of her personal values, beliefs, likes, and dislikes on her perceptions of and responses to the students. She also claimed to be aware that, like all CNTs, she had "blind spots" and was not immune to subjective and biased interpretations.

Teaching and Evaluating

The findings have the potential to broaden our understanding of the relationship between teaching and evaluating in clinical courses. In the evaluation literature, the teaching function of a CNT is typically represented within the formative evaluation process.
Formative evaluation provides information to both the student and the CNT on what learning is taking place and what is required in order to improve the student's nursing practice (Reilly & Oermann, 1992). The evaluation function of a CNT is most often discussed with respect to summative evaluation. In the summative evaluation process, CNTs are required to make a judgment as to whether the student's practice meets professional standards and to determine whether a student has met the academic requirements of a clinical course. In nursing education, teaching and evaluating have been conceptualized as separate functions (Infante et al., 1989; Reilly & Oermann, 1992). As a result of this view, CNTs are admonished to separate evaluation from teaching (Gaberson & Oermann, 1999; Infante et al., 1989; Orchard, 1994b). Separating evaluation from teaching has also been considered important because of the impact of evaluation practices on students' experiences of clinical learning and their development as nurses (Flagler et al., 1988; Loving, 1993; Wilson, 1994). The evaluative practices of a CNT create a learning environment that is experienced by students in predictable ways. When students perceive the CNT is watching and evaluating their every move, they experience clinical learning as restrictive and punitive, whereas a perception that the CNT is there to support and guide them as they learn results in students viewing clinical learning as a growth enhancing experience (Diekelmann, 1992; Flagler et al., 1988; Loving, 1993; Wilson, 1994). The research findings suggest that a CNT's teaching and evaluating roles may not be as incompatible as they have been conceptualized. To date, discussions about separating these two functions have tended to oversimplify the relationship between teaching and evaluating.
Clinical evaluation is not a discrete set of acts that occurs after learning has taken place; rather, it is a complex and dynamic process that is continuous throughout the rotation. The findings demonstrated that J's evaluation and teaching practices were often inseparable, each set of practices intertwined with and dependent upon the other. J stated that when she interacted with a student, she did so with both an evaluative and a teaching intent. As she taught students, she formed judgments and drew conclusions about the student's performance and learning. As illustrated through her "pieces of the pie" analogy, how much of her thinking was evaluative and how much educative depended on the context. Her teaching intent was primary whenever students were learning a new aspect of nursing practice, but as students received feedback and further opportunities to engage in an area of nursing practice, the evaluative piece became greater in emphasis. Similarly, when engaging students in discussions in the clinical area, J balanced teaching and evaluating depending on whether the student was expected to have a grasp of the necessary knowledge, or whether it was new learning or a new application of the knowledge. Establishing a pattern of behavior involved a movement between evaluative and educative thinking. Pattern development started with an evaluative mind, judging the student's level of performance and ability to learn from previous experiences, then moved into a consideration of teaching strategies that would help the student improve, followed by further evaluation. Given that a major source of evaluative data is the student's actual performance in the clinical setting, it does not seem likely that a CNT would work with students and not make some sort of evaluative conclusion. Nor would doing so be economical.
Because students are involved in a limited number of clinical experiences during any given clinical rotation, CNTs need to maximize whatever evaluative opportunities are available to them. J also appeared to utilize the PAF for both evaluation and teaching purposes: as a measure of the expected level of student practice (evaluative use) and as a framework for discussing the students' practice with them (educative use). This view of evaluation as teaching was influenced largely by the evaluation system that had been adopted by the nursing program. The PAF was designed to be used as a teaching tool where the CNT assisted students to learn what quality nursing practice entailed through the cyclic process of practice and reflection on practice. J's role was to model this analytic process through discussing instances of the students' clinical practice with them with reference to the performance indicators on the PAF. Discussions of this sort were limited in the clinical setting because reflection requires time. The evaluative dialogues between J and the students appeared to take place primarily in her weekly instructor notes and the student's written work. One of the dangers of teaching students to understand and critique their nursing practice by referring to the PAF is that CNTs may actually be teaching students to produce their learning and nursing practice as textual reality; i.e., a written description of their practice is substituted for what actually took place in the clinical setting (Campbell, 1995). Evaluation as "accounting" is problematic because a satisfactory evaluation depends partially on the students' ability to create the appearance that their practice fit the PAF version of nursing practice, or as Reed and Proctor (1993) put it, demonstrate their ability to "articulate practice in the currently ideologically acceptable way" (p. 181).
Campbell also warns that, when students are motivated by a need to document their practice in terms of the PAF, they learn to stretch their practice experiences to fit the form, rather than critically assess how their practice differed from the ideal, and why. Viewing their practice in terms of the ideal may divert the students' attention away from a critique of the shortcomings of the health care system that contribute to the less-than-ideal nursing practice that students often witness. Also, when students learn to fulfill established criteria, they do not question the practice of the profession as it is; nor do they transform practice to become what it should be. One means to ensure that students are critiquing the hegemony is to include this aspect of nursing practice as a performance indicator on the evaluation tool. Although, in J's curriculum, a critical perspective was considered a highly valued part of nursing practice, there was no evidence in the findings that the students were evaluated on their ability to recognize how the dominant sources of power impacted nurses' abilities to practice adequately, let alone ideally. There was evidence of J and the students trying to make experiences count as congruent with performance categories by stretching the students' clinical practice to fit the prevention concepts from the student's nursing theory course and the prevention indicators on the PAF. This latter practice was conscious, as both J and the students were aware that the PAF focus was not really attainable in the clinical setting. The experience of "forcing the fit" did make the students aware of how minimal a focus prevention has in acute care nursing as practiced on their unit. The notion of separating teaching from evaluation does highlight an important evaluative point; i.e., that summative evaluation occurs at the end of the rotation.
It is a reminder to CNTs that students are entitled to sufficient opportunities and assistance before being judged on their ability to achieve course expectations (Oermann & Gaberson, 1998; Orchard, 1994b). It also requires CNTs to be aware of tendencies they may have to make premature judgments about students. J thought that she was careful not to judge students on the basis of a single example of their practice in cases where the student's practice was below the expected level. Her usual practice was to evaluate students on the consistency of their performance and growth over the rotation, putting all the "pieces of the puzzle" together at the end. Nevertheless, there were occasions where her final evaluations included incidents that had occurred early in the rotation. An interesting finding from the study was J's view of a continuum of evaluation, cutting across the various rotations and centered in the student's file. This finding raises some important points. The first is that teaching and evaluating, or formative and summative evaluation, can be seen as seamless; "there is a formative element in any summative evaluation" (Schoenhofer & Coffman, 1994, p. 149). When the students came to J, they worked together to find opportunities for the student to work on goals that had arisen from the previous clinical experience. Likewise, during the summative evaluation session, J and the student identified learning needs to be addressed in subsequent rotations. Rather than treating summative evaluation as an isolated incident and a product, the summative process can be used to provide a source of continuity in the evaluation process, helping to counter the fragmented experience of rotating through so many different clinical settings (Scanlan et al., 2001). Viewing the student's file as representing an evaluation continuum raises questions about the purpose of evaluation documents.
Are the summative evaluation documents merely administrative entities, the formal documentation of student achievement, or are the final evaluations "living" documents that assist CNTs and students to promote continuity of teaching and evaluation as students progress through the program? Scanlan et al. (2001) believe that the information from students' previous evaluations is important for the identification of the student's problem areas so that the CNT and student can begin early to find ways to help the student to be successful or to establish that the student's practice is indeed inadequate. This is particularly important in the case of short rotations where a delay in identifying learning needs may result in insufficient opportunities for further teaching and evaluating. Considering the summative evaluation document as a "moment in time" also highlights the fact that the summative evaluation time frame set by the nursing program may disadvantage certain students. "Slow starters" and students who miss clinical time because of illness may be disadvantaged by short rotations and program policies that do not allow students to continue on in clinical courses between semesters. It is commonly argued that the use of previous evaluation documents may negatively bias CNTs against a student and that students may appeal the evaluation on the basis that a CNT's prior knowledge resulted in unfair treatment (Scanlan et al., 2001). Little has been written about the student's view on CNTs' access to previous evaluation information, and what is known is contradictory. Some students are skeptical of CNTs reviewing their evaluation documents (Neary, 2001), while others think it is important for their current CNT to know something about how they did in their previous clinical placement (Nylund & Lindholm, 1999). J indicated there was a lack of consensus among her colleagues about the use of the student file in evaluation and a lack of policy outlining acceptable practice.
In J's school of nursing, the student's file was considered to contain the "whole picture" of the student's nursing practice throughout the program. Many faculty routinely reviewed their students' previous evaluations. In addition, the file was used in determining awards and opportunities for out-of-town placements (deliberations in which students who had successfully met the requirements of an LC were discriminated against). The file was also used at the end of the program to write a final performance summary that was sent (with the student's permission) to prospective employers.

Partnership in Evaluation

A significant outcome of the study is the insight into how the concept of partnership is enacted by a CNT with respect to evaluation. CNT-student partnerships are a central feature of critical-interpretive curricula. CNTs attempt to create relationships where students are equal partners in the planning, implementation, and evaluation of the students' learning (Bevis & Watson, 1989; Gaines & Baldwin, 1996). Theoretically, a partnership view of evaluation shifts some of the evaluative power and influence from the CNT to the student, and some schools of nursing have expressed a commitment to utilizing evaluation systems that actively involve students in both the formative and summative evaluation process (Tracy et al., 2000). Until recently, the concept of partnership seems to have been uncritically accepted by many CNTs, resulting in a mass jump onto the "partnership bandwagon". Paterson (1998) is one of the few to openly question whether the relationship between CNTs and students can truly be one of equal partners, and whether the current partnerships are as just, open, and mutual as they purport to be. She points out that the CNT-student relationship is an inherently imbalanced one. CNTs exercise socially legitimated powers by virtue of their knowledge, expertise, and role as teacher.
The joint determination of student success in meeting clinical requirements involves a sharing of responsibility and power, yet many CNTs believe that there cannot be a sharing of the CNT's accountability to the public, the profession, and the educational institution for evaluative decisions. There is a dearth of literature on how partnership relationships extend into evaluation; however, some graduate students have begun to explore this area. Groening (1999) found that, although the ten CNTs in her study were committed to minimizing power differentials between students and themselves, the majority felt that the evaluative role of the CNT precluded egalitarian relationships. These CNTs discussed how some of the realities of clinical teaching, such as ensuring patient safety, working with the struggling students, and promoting and accounting for student learning, conflicted with the curricular ideals of partnership. Groening concluded that equality in CNT-student relationships was a myth. Likewise, in Hornak's (1997) study of five CNTs' experiences of partnership, she found that there were attempts made to share power but that different levels of power were held by partners at different times. One of these situations of power imbalance was summative evaluation, where the CNTs in her study felt they held the balance of power by virtue of their knowledge, experience, and professional responsibility for evaluation. Throughout the current study, it was apparent that J believed in a partnership model of CNT-student relationships. She exhibited many of the essential qualities of a partnership: she involved the students in setting personal goals for the rotation, discussed their expectations of her as CNT, encouraged students to give her feedback, worked to establish and maintain trusting, honest, and open communication, promoted a mutual valuing of each other, and attempted to connect with the students while at the same time maintaining boundaries (Hornak, 1997).
An important finding was that J's students appeared to wield some evaluative power in both formative and summative decision making. J utilized two practices for sharing power in summative evaluation: the first was incorporating student evaluative input from throughout the rotation into her written summative evaluation documents, and the second was negotiating with the students in co-writing the final summary of student strengths and learning needs. These findings differ from Groening's (1999) research, where she found a general lack of student input into the evaluative process. In some of the nursing programs represented in her study, self-evaluations were not required and the students' self-evaluations were not always included in the final evaluation documents. The students' contributions were viewed by Groening as "token". In a way, J did present contradictory practices with respect to valuing student evaluative input. On the one hand, she actively sought her students' views on their practice, encouraged them to write a thorough self-evaluative document, and to decide with her on the final summary of strengths and learning needs. Yet she also paid little attention to a student's self-evaluations when reviewing a student's file, only considering what was written from the CNT's view. Although J did several things to work mutually with the students in evaluating their practice, she was aware that evaluative power was not shared equally with them. In general, she believed that a true evaluative partnership was not possible with the level of students she worked with because of the knowledge differential between herself and the students. As was the case with the CNTs in Groening's (1999) and Hornak's (1997) studies, J found that sharing power was easier with students who were doing well.
Students who were practicing at the expected level could work mutually with J in making summative decisions because there was no need for either partner to exert power; it was easy to negotiate agreement when the grade was a passing one. When dealing with evaluation of struggling students who could not recognize or face that their practice was substandard, J's dual professional responsibilities, to protect current and future clients from unsafe nursing practice and to accurately assess and account for student academic progress, obligated her to take control of the evaluation. Although she had had several positive experiences where she was able to work mutually with students who had insight into their lack of progress, this was not the usual case. When students were unable to accurately see, interpret, or judge their practice, her view was necessarily privileged. Contrary to the practices of some of the CNTs in Groening's (1999) and Hornak's (1997) studies, J worked hard to enact mutuality in her relationships with struggling students. Whereas some of the CNTs in Groening's research evaluated strong and struggling students differently (strong students were evaluated so they could know what to work on, whereas struggling students were evaluated to justify the CNT's beliefs about the student, and to justify failing the student), J evaluated both strong and struggling students with the same intent (to identify areas where they could improve their practice). One difference in her evaluation with strong and struggling students was that, with the struggling student, the area to work on was formally identified in an LC. This forced her and the student to focus on it on a weekly basis. Even though J expected the students to engage in self-evaluation, she thought she needed to do more to help them develop skill in this process. One way could be by taking a backseat role rather than being the principal evaluator of the student's practice.
By providing the students with such detailed and comprehensive evaluations of their practice, she was inadvertently taking evaluation opportunities away from them. It was evident, though, that it was much quicker for her to provide an evaluation in the clinical area than it was to sit together, describe a clinical experience in detail, determine what aspects of practice were reflected in the description, and judge the practice by comparing it against standards of quality care. It was not clear whether students saw self-evaluation as a growth-generating process, or merely another paper exercise that needed to be done to complete the rotation. CNTs may have to face the fact that power cannot always be shared equally with students, especially in the evaluative decision making process. This does not mean that power cannot be shared at all. Power can be shared through the use of student input into the evaluative process. Paterson (1998) recommends something similar to the model of a "limited partnership", where students learn the skills of the profession and, when these are mastered, are granted legitimate access to full and equal partnership. CNTs need to work with students to help them develop the knowledge, skill, and confidence needed to perceive and interpret their nursing practice.

Conclusions

There are limitations to the results of this research. Because the data were collected and interpreted through the lens of the theoretical perspective I hold on clinical evaluation, they are most likely heavily coloured by my worldview. The data collection period was short and there was only one participant. As the researcher and participant were from the same school of nursing, this greatly influenced the sense we made as we tried to co-create the findings. We have undoubtedly missed many important things in our inability to notice the taken-for-granted assumptions and beliefs that we share as members of the same school.
With this in mind, the following points reflect my conclusions about the findings of this study.

1. The evaluation of nursing students in the clinical area is a complex process complicated by several problems inherent in the clinical learning experience. To ensure that the evaluation of students is accurate, comprehensive, fair, and just, a CNT must utilize evaluative practices that address the following issues: nursing practice is complex and some of it is invisible or tacit; the student's performance is affected by multiple variables in the clinical setting, the CNT-student relationship, and the CNT's evaluative style; evaluation of clinical practice is inherently subjective; the evaluation of students is based on samples of their practice; and the CNT is both the students' evaluator and teacher.

2. Becoming skillful at clinical evaluation requires study, experience, reflection, and collaboration with peers. A CNT may become skillful at clinical evaluation through a combination of the formal study of the theory and concepts of clinical evaluation and actual experience evaluating students in clinical courses.

3. The evaluative practices of an experienced CNT reflect a broad experiential base. Therefore, CNTs need to regularly reflect on their practices to identify what it is they are actually doing and to ensure that their actions reflect current evaluative theory and are consistent with their evaluative philosophy and that of their school of nursing. Being self-reflexive is one way to develop effective evaluation practice but is insufficient as a sole route to improvement. Use of a faculty development process based on tenets of critical inquiry has the potential to promote the CNT's understanding of her/his evaluative practices and the sources of influence on these practices.

4.
Observation and questioning are important evaluative practices that help CNTs balance their dual responsibility of ensuring adequate evaluative and teaching opportunities while at the same time ensuring that clients receive safe and effective nursing care. CNTs can develop skill in these data collection methods so that students experience CNT supervision and questioning as primarily educative rather than evaluative events.

5. CNTs need time with students in order to form evaluative conclusions that are representative of the student's actual nursing practice. Time is required to allow students to adjust to the new clinical setting and the evaluative style of the CNT, as well as to provide an increased number of clinical opportunities for the CNT to sample the students' nursing practice. Time is also required to build trusting relationships with the students. A trusting relationship facilitates the students' ability to discuss their practice with the CNT, thereby providing an opportunity to understand the students' perspective on their practice.

6. The CNT's view of how students should practice is reflected in the aspects of nursing practice that are the focus of evaluation. The focus of evaluation also reflects the CNT's view of what is of value in nursing practice which, in turn, is influenced by how the CNT practices nursing, how nurses in the clinical setting practice, the view of nursing articulated in the curriculum philosophy, and the expectations that the workplace/employers and the professional regulatory body have for the nursing practice of new graduates.

7. Teaching and evaluating are not incompatible. Most CNT-student interactions in the clinical setting have an evaluative and a teaching component. CNTs need to be aware of which component is prominent in any situation and to ensure that neither aspect overshadows the other.

8.
Dealing with the struggling and failing student is a stressful process for both CNT and student, regardless of how fair and helpful the CNT is or how self-aware the student is. CNTs need to be aware of the negative consequences of an LC on the student's current and future experiences in the school of nursing and have practices in place to ensure that the decision-making process is fair and just. It is equally important that the negative aspects of LCs not deter a CNT from dealing definitively with inadequate or unsafe student practice.

9. CNTs disagree about, and students are not always aware of, the use of students' files in the evaluative process. Information recorded in previous evaluations can play an important role in providing evaluative continuity across clinical rotations, and can be particularly helpful in the early identification of student learning needs, giving both CNT and student more time to address the area of difficulty. However, the use of previous evaluation information may also negatively bias a CNT and contribute to an unfair evaluation of the student.

10. Student input in the summative evaluation process is both possible and desirable. The degree of student influence in evaluation should increase as students develop the knowledge and skill required to self-evaluate their nursing practice. CNTs must be able to teach students how to self-evaluate, and the school of nursing must support student input through the development of clear guidelines and policy around student input into the evaluative process.

Implications for Nursing Education and Research

The findings of this study carry several implications for nursing education with respect to the structure of clinical education, evaluative procedures and policies of schools of nursing, the evaluative practices of individual CNTs, faculty development, and the preparation of new CNTs.
In addition, the findings point to gaps in what is currently known about the clinical evaluation of nursing students. I have suggested several ways in which nursing research may help to fill these gaps.

Nursing Education

From the findings it seems clear that a major influence on the clinical evaluation of nursing students is the amount of time that the CNT and student have together in the clinical setting. The issue of rotation length is heavily influenced by a perceived need to give students experience in all the subspecialty areas of nursing practice. Many CNTs believe they must provide students with experiences in all areas, including medical, surgical, maternal-child, and mental health nursing practice. In addition, baccalaureate curricula expect that graduates will be able to practice in both acute care and community settings. The attempt to provide students with learning opportunities in these different practice contexts results in students having to rotate through each area. Given that there are only so many weeks in a clinical year and a limited number of clinical years in the program, the rotations end up being short.

I believe that CNTs need to challenge the belief that nursing programs preparatory to professional registration need to prepare graduates to work in each of these subspecialties of nursing practice. Times have changed since the days when the "generalist" focus of nursing programs prepared nurses who could function in each clinical area (if, in fact, this was ever truly possible). Today's clinical settings have become increasingly complex, requiring nurses who practice in each area to have specialized knowledge and skill that can only be developed through sufficient clinical experience. As it currently stands, students generally do not receive enough time in a short clinical rotation to develop expertise in that area of practice, and CNTs are having to evaluate students based on insufficient information.
I believe it is unrealistic to expect students to be skillful across all contexts of practice and that it is possible for students to develop the core competencies required for nursing practice by spending a longer amount of time in a smaller number of clinical settings. Faculty need to review how clinical courses are structured in their school of nursing with the intent of creating longer clinical rotations. In addition, faculty, schools of nursing, and the professional association should reconsider the expectation that graduates need to be prepared to practice in all subspecialties of nursing.

A second implication of the findings for nursing education is that a CNT's familiarity with the clinical setting could be beneficial to the evaluation of students. Being assigned to the same clinical area has several advantages with respect to the CNT's ability to evaluate the students' practice. Familiarity with the type of nursing situations students routinely practice within enables the CNT to identify the aspects of nursing practice that can best be evaluated in the typical client scenarios. Repeated experience with students' practice in the setting also contributes to the development of realistic expectations for student performance. Being assigned to teach on the same clinical unit allows CNT and staff to become familiar with each other's routines and the expectations that each has of the other and of the students, maximizing the potential to develop an optimal clinical learning environment. The findings suggested that the staff are a useful source of evaluative data. When CNTs and staff work together on a regular basis they have an opportunity to agree upon a suitable evaluative role for the staff and to discuss evaluative concepts and theory to develop the staff's evaluative abilities.
Thus, I believe that nursing programs should commit to the consistent assignment of CNTs to clinical areas and that CNTs study what is known about building and maintaining supportive CNT-staff relationships in order to maximize the role of staff in the evaluation of students. CNTs who work in one clinical setting for an extended period should also commit to examining how their expectations for student practice have been shaped by the way nursing is practiced on the unit and critically question whether they are evaluating students based on the status quo or on a view of quality nursing practice.

The findings suggest that CNTs spend an inordinate amount of time evaluating psychomotor skill performance. This is not to suggest that evaluation of this area of nursing practice should be downplayed. Indeed, as the findings showed, when supervising students performing skills, CNTs are able to evaluate many more things than just the dexterity of the student. However, when the CNT is spending time with one student, there is less time available to work with other students, particularly to evaluate other aspects of nursing practice such as clinical decision making, client teaching ability, and student-client interpersonal interactions. One implication of the findings is that CNTs need to take greater advantage of the potential of the skills laboratory for evaluating the mastery of psychomotor skill performance. Standardized performance examinations have been well researched and provide one means of ensuring that students have developed an adequate level of performance. Students may also videotape their skill performance for self-evaluation, or for evaluation by a peer or the lab teacher. Nursing programs should investigate ways to provide instructional support to CNTs working in clinical areas where psychomotor skill performance is a major aspect of nursing practice.
Support from clinical teaching assistants (CTAs) enables the CNT to focus on other areas of clinical evaluation. Ideally, CTAs would receive professional development resources to help them develop knowledge and skill in this aspect of clinical evaluation. The CTA would then be better able to contribute accurate and detailed input to the students' evaluations.

Findings around the use of students' previous evaluation documents have implications for both individual CNTs and the school of nursing. All CNTs need to be aware of how students' previous evaluation documents are used as part of their evaluative decision making, and of the arguments for and against such use. CNTs need to explore this issue as a faculty group and attempt to reach agreement around the proper purpose of summative evaluation documents. Faculty also need to come together to draft formal procedures and policy around access to the student's file. In addition, it is recommended that students be made aware of how their files are used other than as a place to store their evaluation documents.

A related issue is the discrimination, in faculty decision making around awards and out-of-town clinical placements, against students who have successfully completed an LC. The findings suggest that although some CNTs claim to view LCs as a teaching-learning process, the presence of an LC in the student's file is treated in a disciplinary manner; that is, as long as the LC remains in the student's file, the student is judged as less capable than students without an LC, even when the student goes on to perform admirably in the following rotations. The implication this finding has for nursing education is that faculty need to examine the contradictory beliefs they may be holding about LCs. Are they hiding a belief that LCs are evaluative behind the ideologically correct rhetoric of LCs as a teaching-learning process?
If the successful completion of an LC is truly a positive event, then CNTs responsible for developing criteria for awards and out-of-town placement opportunities should remove mention of LCs from their criteria. Students who have successfully completed an LC should be judged on the basis of their current clinical performance and not on the basis that they were once on an LC.

From the findings, it is also clear that CNTs need to be aware of their evaluative practices and the influence these have on the students' nursing practice and experience of clinical learning. The clinical evaluation of nursing students is too important, in terms of the student's development, the care of current and future clients, and the CNT's professional and legal responsibilities, to be left unexamined. This is particularly true of experienced CNTs whose practices have become so integrated and tacit that they may not be fully conscious of what they are actually thinking and doing when evaluating students. There is a wealth of information available about many aspects of clinical evaluation, and it is recommended that CNTs regularly undertake a study of the relevant literature.

The findings also suggest that CNTs would benefit from a peer development program whereby colleagues commit to a critical inquiry process focused on identifying each other's evaluative practices and the sources of influence on those practices. Because this is a time-consuming project, it needs to be rewarded or it will be replaced by other priorities. I believe that schools of nursing could encourage faculty participation in peer development by adopting an expanded view of scholarship wherein the disciplined examination of one's evaluative practices is accepted as evidence of the scholarship of teaching (Shoffner, Davis, & Bowen, 1994).
I also believe that nursing faculties should commit to a yearly meeting devoted to the examination and resolution of the evaluative issues that exist within the school.

Another implication of the findings for nursing education is the need for CNTs to openly discuss the evaluative process with students. Areas to discuss should include the CNT's evaluative philosophy, including how evaluation and teaching are related; the many possible sources of evaluative data; the expectations, standards, and criteria that will be used to judge the students' practice; how contextual variables are factored into evaluative decisions, including how student errors are evaluated; and the students' role in the clinical evaluation process. CNTs should explore students' understandings, expectations, and issues around clinical evaluation. It is also clear from the findings that CNTs need to develop skill at teaching students how to evaluate their own practice.

Finally, the findings point to the need to support novice CNTs in developing skill in evaluating students in the clinical setting. There is a small but significant pool of knowledge available about the difficulties faced by novice CNTs as they learn the art and science of clinical evaluation. In the past, new CNTs have had to develop evaluative skill through trial and error (Paterson, 1991; Wolff, 1998). As a result, many novice CNTs have suffered needless frustration, anxiety, and doubt, and innumerable students have been subjected to inconsistent and arbitrary evaluative experiences (Duke, 1996; Lankshear, 1990; Orchard, 1994b). The clinical evaluation of nursing students can be improved by ensuring that CNTs are adequately prepared for their evaluative role through a combination of formal education in clinical evaluation and opportunities to be mentored by experienced CNTs.
It is not enough to provide new CNTs with an overview of the curriculum and then expect them to be able to deal with the complexity of the evaluative process. Schools of nursing owe it to their new CNTs and to students to provide an intensive orientation to the curriculum with an emphasis on practices, procedures, policies, and issues related to clinical evaluation. Deans and other nursing department heads also need to support their newest faculty members in accessing graduate education in clinical evaluation. An important implication arising from the findings is the need for a mentorship program in which the novice CNT is paired with an experienced CNT in the clinical area, with the intent being the intensive study of clinical evaluation as it is actually enacted.

Nursing Education Research

It seems clear from this study that more research is needed on the influence of rotation length on the clinical evaluation process. CNTs intuitively know that the longer they are with students, the more opportunities there are to examine the students' practice, and the findings provided evidence that six clinical days was an insufficient length of time within which to evaluate students. However, there is still too little empirical evidence to support the relationship between rotation length and outcomes in terms of more accurate and comprehensive clinical evaluations. Restructuring nursing programs in order to provide longer clinical rotations is a time-consuming and costly undertaking that should not proceed without research findings that provide guidance as to how rotation length affects evaluation. Also, there is a beginning body of knowledge being developed around the importance of longer rotations with respect to building helpful and supportive CNT-student relationships in which students feel safer discussing their practice with the CNT. But is there a point at which the rotation is too long?
What is the effect of longer rotations on the evaluation of students when the CNT-student relationship is not an open, trusting one?

The sampling practices of CNTs are a second area that requires further study. Much like the research process, the clinical evaluation of nursing students requires CNTs to make choices about the type and amount of data to collect in order to develop an accurate and comprehensive understanding of the students' nursing practice. It is important to learn more about how CNTs make these decisions. For instance, how do CNTs decide which samples are most predictive of the students' ability to provide safe and effective nursing care, and how many samples are required to establish that the students' practice as sampled is reflective of their actual competency? What guidance can be provided to CNTs in selecting clinical experiences that will provide the most appropriate sampling opportunities? It may also be important to investigate how student self-selection of clinical experiences affects sampling. It is likely that CNTs who believe that student involvement in selecting their own client assignment supports the students' development as self-directed learners, and is an important strategy for building CNT-student partnerships, utilize different sampling techniques than CNTs who do not believe in student self-selection of assignments. A related topic that would benefit from further investigation is the effect of the client assignment on clinical evaluation. It is currently not known how many clients should be assigned to a clinical group of students in order to provide an adequate number of evaluative opportunities while at the same time ensuring that the CNT is able to spend enough time with each student for sufficient sampling to take place.

There is a large body of literature concerned with the subjective nature of the observation process.
Much of this research concerns practices such as rating scales and performance examinations, which attempt to circumvent subjectivity by standardizing the observation process. However, there has been little or no research on what CNTs actually see and think when they are observing student practice in the clinical area, particularly with respect to the observation practices of experienced CNTs. It would be fruitful to investigate the observation practices of novice and experienced CNTs to examine how these practices differ and to further our understanding of the observational process. If skillful observation is found to be developmental in nature, then it may be possible to identify ways to assist new CNTs to develop as skillful observers of student nursing practice. Experienced CNTs could also utilize theory around the observational process as a framework for examining and improving their observational skills.

How CNTs evaluate clinical decision making remains largely under-examined. To date, most of the literature consists of discussions of strategies that the author(s) are currently using, with few, if any, empirical studies on the validity and effectiveness of the various evaluation practices. The subject of clinical decision making is complex and poses difficulties because of the context-dependent nature of these thinking processes. It would be useful to study how CNTs define clinical decision making and what exactly they look for when evaluating these aspects of nursing practice. A comparison of what CNTs are evaluating with existing conceptualizations of clinical decision making could provide insight into the validity and utility of both the CNTs' views and the current conceptualizations in guiding the evaluation of students' clinical thinking processes.

A related area that requires further study is how qualitative practices such as dialogue are used in clinical evaluation.
Dialogue has been identified as one way to explicate and understand the thinking processes students use in their nursing practice. It has also been discussed in the nursing literature with respect to its potential to promote student involvement in the evaluative process. As discussed in chapter two, there has been some research on the use of dialogue journals in evaluating critical thinking and the affective areas of nursing practice, although there is still much to be learned about journals as an evaluative tool. The findings suggest that it is equally important to learn how CNTs can effectively use discussion and questioning in evaluating students' nursing practice, primarily because of the frequency with which discussion and questioning are used by CNTs. Areas to investigate include how CNTs engage students in discussions and question students in a way that does not engender student anxiety, which discussion and questioning techniques are most effective at accurately evaluating students' thinking, and how characteristics of the student (such as level of maturity or taciturnity) and the CNT-student relationship influence the use of dialogue as an evaluative practice.

There is a gap in the nursing education literature regarding student input into the evaluative process. Many claims have been put forth that student input improves the accuracy and comprehensiveness of the final evaluation, but for the most part these have not been tested empirically. To date, the literature on self-evaluation suggests that the process is poorly understood by students, that students are viewed as neither capable of participating in the summative evaluative process nor to be encouraged to do so, and that CNTs do not know how to teach students to self-evaluate their practice. Does students' input merely reflect the feedback they have received from the CNT in the first place, and do CNTs invite student input only to make decisions based on their own impressions?
If so, then student input is purely perfunctory: a process to satisfy the curriculum requirement for partnership in evaluation. These questions need to be investigated. If student self-evaluation is truly valued, then CNTs need to know how to develop students' ability to evaluate their practice, and thus research is necessary to develop a theory base on teaching students to self-evaluate. The claim that student self-evaluation promotes the professional development of students also needs to be examined; there are far too few studies that relate evaluative practices to outcome measures such as success on registration examinations, employer evaluation of graduate performance, and student self-report of confidence level and participation in professional activities upon graduation.

Finally, the findings suggest that the process of becoming skillful at clinical evaluation is worthy of investigation. What is it that skillful clinical evaluators do to promote an effective and fair evaluation process? What faculty development strategies would best assist CNTs in examining and developing their own evaluative practices? And how can novice CNTs be assisted through their difficult early years as evaluators?

Conclusion

The purpose of this research was to examine the evaluative practices of a CNT teaching within a critical-interpretive curriculum and to gain an understanding of these practices and the influences on them. A secondary purpose was to promote the participant's understanding of her practices and the influencing variables. A critical single case study method was used to examine the research questions. Data were collected through the use of tape recordings using a modified think-aloud technique, weekly semi-structured interviews, examination of the participant's weekly anecdotal (instructor) notes and student evaluation documents, and the construction and examination of a concept map of her evaluative practices and influencing variables.
Data collection and analysis occurred concurrently in a recursive and cyclical manner. Themes, issues, and questions that arose from the preliminary analysis of each set of data were compared to units of data from previous tapes and interviews, and were discussed and clarified with the participant each week. Understandings and questions from each week's interview were used to guide the next data collection set. At the end of the data collection period the data were analyzed further and the findings were discussed and validated with the participant.

I believe this research has contributed to a better understanding of what actually occurs in clinical evaluation in acute care settings. Many of the findings supported what is already known about clinical evaluation, whereas other findings added new insights into the clinical evaluation process. Clinical evaluation emerged as a complex and dynamic process that is embedded in the teaching-learning process and is influenced by many variables. To deal with the complexity of clinical evaluation the participant utilized a number of practices to ensure the accurate, comprehensive, and fair evaluation of the students. As a result of the research process, the participant gained several insights into her evaluative practices that enabled her to make changes to her future practices. She was also able to use the understandings she had gained to influence the move within her school of nursing to end the use of condensed clinical rotations. Finally, the findings raised several questions about the evaluative practices of CNTs which have implications for both nursing education and further nursing research.

REFERENCES

Abbott, S. D., Carswell, R., McGuire, M., & Best, M. (1988). Self-evaluation and its relationship to clinical evaluation. Journal of Nursing Education, 27, 219-224.
Anderson, J. M. (1991). Reflexivity in fieldwork: Toward a feminist epistemology. Image: Journal of Nursing Scholarship, 23, 115-118.
Andrusyszyn, M. A. (1989). Clinical evaluation of the affective domain. Nurse Education Today, 9, 75-81.
Arthur, H. (1995). Student self-evaluations: How useful? How valid? International Journal of Nursing Studies, 32, 271-276.
Bartels, J. E. (1998). Developing reflective learners - student self-assessment as learning. Journal of Professional Nursing, 14, 135.
Benner, P. (1984). From novice to expert: Excellence and power in clinical nursing practice. Menlo Park, CA: Addison-Wesley.
Benner, P. (1982). Issues in competency-based testing. Nursing Outlook, 30, 303-309.
Bergman, K., & Gaitskill, T. (1990). Faculty and student perceptions of effective clinical teachers: An extension study. Journal of Professional Nursing, 6(1), 33-44.
Best, M., Carswell, R. B., & Abbott, S. D. (1990). Self-evaluation for nursing students. Nursing Outlook, 38(4), 172-177.
Bevis, E. O. (1989). Accessing learning: Determining worth or developing excellence. From a behaviorist toward an interpretive-criticism model. In E. O. Bevis & J. Watson, Toward a caring curriculum: A new pedagogy for nursing (pp. 261-303). New York: National League for Nursing.
Bevis, E. O., & Watson, J. (1989). Toward a caring curriculum: A new pedagogy for nursing. New York: National League for Nursing.
Bent, K. N. (1993). Perspectives on critical and feminist theory in developing nursing praxis. Journal of Professional Nursing, 9, 296-303.
Blomquist, K. B. (1985). Evaluation of students: Intuition is important. Nurse Educator, 10(6), 8-11.
Bogdan, R. C., & Biklen, S. K. (1992). Qualitative research for education: An introduction to theory and methods (2nd ed.). Boston: Allyn and Bacon.
Bondy, K. N. (1984). Clinical evaluation of student performance: The effects of criteria on accuracy and reliability. Research in Nursing and Health, 7, 25-33.
Bondy, K. N. (1983). Criterion-referenced definitions for rating scales in clinical evaluation. Journal of Nursing Education, 22(9), 376-382.
Bower, D.
, Line, L., & Denega, D. (1988). Evaluation instruments in nursing. New York: NLN.
Boyle, J. S. (1994). Styles of ethnography. In J. M. Morse (Ed.), Critical issues in qualitative research methods (pp. 159-185). Thousand Oaks, CA: Sage.
Bricker-Jenkins, M. (1997). Hidden treasures: Unlocking strengths in the public social services. In D. Saleebey (Ed.), The strengths perspective in social work practice (pp. 133-150). New York: Longman.
Brozenec, S., Marshall, J. R., Thomas, C., & Walsh, M. (1987). Evaluating borderline students. Journal of Nursing Education, 26, 42-44.
Brown, S. T. (1981). Faculty and student perceptions of effective clinical instructors. Journal of Nursing Education, 20(9), 18-23.
Brown, H. N., & Sorrell, J. M. (1993). Use of clinical journals to enhance critical thinking. Nurse Educator, 18(5), 16-19.
Buckingham, S. (2000). Clinical competency: The right assessment tools? Journal of Child Health Care, 4(1), 19-22.
Burnard, P. (1988a). The journal as an assessment and evaluation tool in nurse education. Nurse Education Today, 8, 105-107.
Burnard, P. (1988b). Self-evaluation methods in nurse education. Nurse Education Today, 8, 229-233.
Burns, N., & Grove, S. K. (1997). The practice of nursing research: Conduct, critique, & utilization. Philadelphia: W. B. Saunders.
Burrows, D. E. (1995). The nurse teacher's role in the promotion of reflective practice. Nurse Education Today, 15, 346-350.
Callister, L. C. (1993). The use of student journals in nursing education: Making meaning out of clinical experience. Journal of Nursing Education, 32, 185-186.
Cameron, B. L., & Mitchell, A. M. (1993). Reflective peer journals: Developing authentic nurses. Journal of Advanced Nursing, 18, 290-297.
Campbell, M. L. (1995). Teaching accountability: What counts as nursing education? In M. Campbell & A. Manicom (Eds.), Knowledge, experience and ruling relations: Studies in the social organization of knowledge (pp. 221-233).
Toronto: University of Toronto Press.
Chambers, M. A. (1998). Some issues in the assessment of clinical practice: A review of the literature. Journal of Clinical Nursing, 7, 201-208.
Clifford, C. (1994). Assessment of clinical practice and the role of the nurse teacher. Nurse Education Today, 14, 272-279.
Coates, V. E., & Chambers, M. (1992). Evaluation of tools to assess clinical competence. Nurse Education Today, 12, 122-129.
Cohen, G. S., Blumberg, P., Ryan, N. C., & Sullivan, P. L. (1993). Do final grades reflect written qualitative evaluations of student performance? Teaching and Learning in Medicine, 5(1), 10-15.
Cooper, H. (1982). Scientific guidelines for conducting integrative research reviews. Review of Educational Research, 52, 291-302.
Corbin, J. (1986). Qualitative data analysis for grounded theory. In W. C. Chenitz & J. M. Swanson (Eds.), From practice to grounded theory: Qualitative research in nursing (pp. 91-101). Menlo Park, CA: Addison-Wesley.
Curl, E. D., & Koerner, D. K. (1991). Evaluating students' esthetic knowing. Nurse Educator, 16(6), 23-27.
Dale, A. E. (1995). A research study exploring the patient's view of quality of life using the case study method. Journal of Advanced Nursing, 22, 1128-1134.
Davies, E. (1995). Reflective practice: A focus for caring. Journal of Nursing Education, 34, 167-174.
Dawson, K. P. (1992). Attitude and assessment in nursing education. Journal of Advanced Nursing, 17, 473-479.
De Vore, C. (1993). Evaluation and nursing education: Where to now? Nursing Praxis in New Zealand, 8(1), 16-24.
Dauphinee, W. D. (1995). Assessing clinical performance: Where do we stand and what might we expect? JAMA, 274, 741-743.
Deshler, D. (1991). Conceptual mapping: Drawing charts of the mind. In J. Mezirow and Associates (Eds.), Fostering critical reflection in adulthood: A guide to transformative and emancipatory learning (pp. 336-353). San Francisco: Jossey-Bass.
Diekelmann, N. (1992).
Learning-as-testing: A Heideggerian hermeneutical analysis of the lived experience of students and teachers in nursing. Advances in Nursing Science, 14(3), 72-83.
Diekelmann, N. (1988). Curriculum revolution: A theoretical and philosophical mandate for change. In Curriculum revolution: Mandate for change (pp. 137-157). New York: NLN Press.
Donoghue, J., & Pelletier, S. D. (1991). An empirical analysis of a clinical assessment tool. Nurse Education Today, 11, 354-362.
Duke, M. (1996). Clinical evaluation - difficulties experienced by sessional clinical teachers of nursing: A qualitative study. Journal of Advanced Nursing, 23, 408-414.
Field, P. A. (1991). Doing fieldwork in your own culture. In J. M. Morse (Ed.), Qualitative nursing research: A contemporary dialogue (pp. 91-104). Newbury Park, CA: Sage.
Flagler, S., Loper-Powers, S., & Spitzer, A. (1988). Clinical teaching is more than evaluation alone. Journal of Nursing Education, 27, 342-348.
Fong, C. M., & McCauley, G. T. (1993). Measuring the nursing, teaching, and interpersonal effectiveness of clinical instructors. Journal of Nursing Education, 32, 325-328.
Fonteyn, M. E., & Fisher, A. (1995). Use of think aloud method to study nurses' reasoning and decision making in clinical practice settings. Journal of Neuroscience Nursing, 27, 124-128.
Fonteyn, M. E., Kuipers, B., & Grobe, S. J. (1993). A description of think aloud method and protocol analysis. Qualitative Health Research, 3, 430-441.
Friedman, M., & Mennin, S. P. (1991). Rethinking critical issues in performance assessment. Academic Medicine, 66(7), 390-395.
Gaberson, K. B., & Oermann, M. H. (1999). Clinical teaching strategies in nursing. New York, NY: Springer.
Gaines, S., & Baldwin, D. (1996). Guiding dialogue in the transformation of teacher-student relationships. Nursing Outlook, 44(3), 124-128.
Gallagher, P., Bomba, C., & Anderson, B. (1999). Continuity of clinical instruction: The effect on freshman nursing students.
Nurse Educator, 24(4), 6-7. Gilgun, J. F. (1994). A case for case studies in social work research. Social Work. 39, 371-380. Girot, E. A . (1993a). Assessment of competence in clinical practice: A phenomenological approach. Journal of Advanced Nursing. 18. 114-119. Girot, E. A . (1993b). Assessment of competence in clinical practice: A review of the literature. Nurse Education Today. 13. 83-90. Goldenberg, D. (1994). Critiquing as a method of evaluation in the classroom. Nurse Educator. 19(4). 18-22. Gomez, D. A . , Lobodzinski, S., & Hartwell West, C. D. (1998). Evaluating clinical performance. In D. M . Billings, & J. A. Halstead (Eds.). Teaching in nursing: A guide for faculty (pp. 407-422). Philadelphia: W. B. Saunders. Green, A . J. (1994). Issues in the application of self-assessment for the Diploma of Higher Education/Registered Nurse mental health course. Nurse Education Today. 14. 292-298. Guba E. G. & Lincoln Y . S. (1989a) Competing paradigms in qualitative research. In N . K Denzin, & Y . S. Lincoln (Eds.). Handbook of qualitative research (pp. 105-117). Thousand Oaks, C A : Sage. Guba E. G. & Lincoln Y . S. (1989b) Fourth generation evaluation. Sage, Newbury Park, C A . Habermas, J. (1979). Knowledge and human interests. (J. Shapiro, Trans.). Boston: Beacon. Hall J. M . & Stevens P. E. (1991) Rigor in feminist research. Advances in Nursing Science. 13(3). 16-29. Harper A . C , Roy W. B., Norman G. R., Rand C. A. & Feightner. (1983) Difficulties in skills evaluation. Medical Education. 17, 24-27. Hay, J. A . (1995). Investigating the development of self-evaluation skills in a problem-based tutorial course. Academic Medicine. 70. 733-735. Hedin, B. A . (1989). Expert clinical teaching. In Curriculum revolution: Reconceptualizing nursing education (pp. 71-89). New York: N L N Press. 216 Hepworth, S. (1991). The assessment of student nurses. Nurse Education Today. 11, 46-52. Hill , J. (1993). 
Perceptions of factors affecting student-patient matching in clinical experiences. Journal of Nursing Education. 32. 133-134. Hill , P. F. (1998). Assessing the competence of student nurses. Journal of Child Health Care. 2(1). 25-29. Hornak, M . L. (1997). Clinical nursing teachers' descriptions of the experience of partnerships with students. M S N Thesis. UBC. Huberman, A. M . , & Miles, M . B. (1994). Data management and analysis methods. In N . K Denzin & Y . S Lincoln. Handbook of qualitative research (pp. 428-444). Thousand Oaks: Sage. Hunt D. D. (1992). Functional and dysfunctional characteristics of the prevailing model of clinical evaluation systems in North American medical schools. Academic Medicine. 67, 254-259. Hutchinson, S., & Wilson, H . (1994). Research and therapeutic interviews: A poststructuralist perspective. In J. M . Morse (Ed.). Critical issues in qualitative research methods (pp. 300-315). Thousand Oaks: Sage. Hyrkas, K. (1997). Can action research be applied in developing clinical teahcing? Journal of Advanced Nursing. 25. 801-808. Infante, M . S. (1985). The clinical laboratory in nursing education (2nd ed.). New York: John Wiley & Sons. Infante, M . S., Forbes, E. J., Houldin, A . D. , & Naylor, M . D. (1989). A clinical teaching project: Examination of a clinical teaching model. Journal of Professional Nursing. 5, 132-139. Jackson, R. (1987). Approaching clinical teaching and evaluation through the written word: A humanistic approach. Journal of Nursing Education. 26. 384-385. Jensen, G. M . , & Saylor, C. (1994). Portfolios and professional development in the health professions. Evaluation & The Health Professions. 17, 344-357. Johnson, J. L. (1997). Generalizability in qualitative research: Excavating the discourse. In J. M . Morse (Ed.). Completing a qualitative project: Details and dialogue (pp. 191-208). Thousand Oaks: Sage. 217 Karuhije, H . F. (1997). Classroom and clinical teaching in nursing: Delineating differences. 
Nursing Forum 32(2), 5-12. Karuhije, H . F. (1986). Educational preparation for clinical teaching: Perceptions of the nurse educator. Journal of Nursing Education. 25, 137-144. Kemmis, S. & McTaggart, R. (1988). The action research planner (3rd ed.). Victoria, Austr: Deakin University Press. Kleehamer, K. , Hart, A . L. , & Keck, J. F. (1990). Nursing students' perceptions of anxiety-producing situations in the clinical setting. Journal of Nursing Education. 29. 183-187. Kirschling, J. M . , Fields, J., Imle, M . , Mowery, M . , Tanner, C. A . , Perrin, N . , & Stewart, B. J. (1995). Evaluating teaching effectiveness. Journal of Nursing Education, 34, 401-410. Knox, J. E., & Mogan, J. (1985). Important clinical teacher behaviors as perceived by nursing faculty, students and graduates. Journal of Advanced Nursing. 10. 25-30. Kobert, L . J. (1995). In our own voice: Journaling as a teachmg/learning technique for nurses. Journal of Nursing Education. 34, 140-142. Krichbaum, K. (1994). Clinical teaching effectiveness described in relation to learning outcomes of baccalaureate nursing students. Journal of Nursing Education. 33, 306-316. Krichbaum, K. , Rowan, M . , Duckett, L. , Ryden, M . , & Savik, K. (1994). The clinical evaluation tool: A measure of the quality of clinical performance of baccalaureate nursing students. Journal of Nursing Education, 33, 395-404. Landeen, J., Byrne, C , & Brown, B. (1995). Exploring the lived experience of psychiatric nursing students through self-reflective journals. Journal of Advanced Nursing, 21, 878-885. Landeen, J., Byrne, C , & Brown, B. (1992). Journal keeping as an educational strategy in teaching psychiatric nursing. Journal of Advanced Nursing. 17, 347-355. Lankshear A . (1990) Failure to fail: The teacher's dilemma. Nursing Standard. 4(20), 35-37. Lather, P. (1991). Getting smart: Feminist research and pedagogy with/in the postmodern. New York: Routledge. 218 Leininger, M . (1994). 
Evaluation criteria and critique of qualitative research studies. In J. M . Morse (Ed.). Critical issues in qualitative research methods (pp. 95-115). Thousand Oaks, C A : Sage. Leino-Kipli, H . (1992). Self-evaluation in nursing students in Finland. Nurse Education Today. 12. 424-430. Lenburg, C. B. (1991). Assessing the goals of nursing education: Issues and approaches to evaluation outcomes. In M . Garbin (Ed.). Assessing educational outcomes: Third National Conference on Measurement and Evaluation in Nursing (pp. 25-52). New York: N L N Press. Lenburg, C. B., & Mitchell, C. A . (1991). Assessment of outcomes: The design and use of real and simulation nursing performance examinations. Nursing & Health Care. 12(2). 68-74. Lewis, L. L. (1995). One year in the life of a woman with premenstrual syndrome: A case study. Nursing Research, 44. 111-116. Lincoln, Y . , & Guba, E. (1985). Naturalistic inquiry. Beverly Hills, C A : Sage. Lipson, J. G. (1994). Ethical issues in ethnography. In J. M . Morse (Ed.). Critical issues in qualitative research methods (pp. 333-355). Thousand Oaks: Sage. Lipson, J. G. (1991). The use of self in ethnographic research. In J. M . Morse. (Ed.). Qualitative nursing research: A contemporary dialogue (pp. 73-89). Newbury Park: Sage. Loving, G. L. (1993). Competency validation and cognitive flexibility: A theoretical model grounded in nursing education. Journal of Nursing Education. 32. 415-421. Luttrell, M . F., Lenburg, C. B., Scherubel, J. C , Jacob, S. R., & Koch, R. W. (1999). Competency outcomes for learning and performance assessment. Nursing and Heath Care Perspectives. 20. 134-141. MacRae H . M . , V u N . V . , Graham, B., Word-Sims M . , Colliver J. A . & Robbs R. S. (1995) Comparing checklists and databases with physician's ratings as measures of student history and physical-examination skills. Academic Medicine. 70. 313-317. Mahara, M . S. (1998). A perspective on clinical evaluation in nursing education. 
Journal of Advanced Nursing. 28. 1339-1346. Malek, C. J. (1988). Clinical evaluation: Challenging tradition. Nurse Educator. 13(6), 34-37. 219 Marrow, C. E., & Tatum, S. (1994). Student supervision: Myth or reality. Journal of Advanced Nursing. 19. 1247-1255. Mason v. Registered Nurses' Association of British Columbia, 13 B.C.L.R. , 218 (1997). May, K. A . (1991). Interview techniques in qualitative research: Concerns and challenges. In J. M . Morse. (Ed.). Qualitative nursing research: A contemporary dialogue (pp. 188-201). Newbury Park: Sage. McGaghie W. C. (1991) Professional competence evaluation. Educational Researcher. 20(1). 3-9. McGuire, C. H . (1988). Evaluation of student and practitioner competence. In Handbook of health professions education (pp. 256-293). San Francisco: Jossey-Bass. McKnight J., Rideout E., Brown B., Ciliska D., Patton D., Rankin J. & Woodward C. (1987) The objective structured clinical examination: An alternative approach to assessing student clinical performance. Journal of Nursing Education. 26. 39-41. McTaggart, R., & Garbutcheon Singh, M . (1986). New directions in action research-Curriculum Perspectives, 6(2). 42-46. Meier, P., & Pugh, E. J. (1986). The case study: A viable approach to clinical research. Research in Nursing and Health, 9, 195-202. Mitchell, M . (1994). The views of students and teachers on the use of portfolios as a learning and assessment tool in midwifery education. Nurse Education Today. 14. 38-43. Mogan, J., & Knox, J. E. (1987). Characteristics of "best" and "worst" clinical teachers as perceived by university faculty and students. Journal of Advanced Nursing. 12. 331-337. Mogan, J., & Warbinek, E. (1994). Teaching behaviours of clinical instructors: A n audit instrument. Journal of Advanced Nursing, 20. 160-166. Morgan, S. A . (1991). Teaching activities of clinical instructors during the direct care period: A qualitative investigation. Journal of Advanced Nursing. 16. 1238-1246. Morse, J. M . (1997). 
Considering theory derived from qualitative research. In J. M . Morse (Ed.). Completing a qualitative project: Details and dialogue (pp. 163-189). Thousand Oaks: Sage. Morse, J. M . (1991). Strategies for sampling. In J, M . Morse (Ed.). Qualitative nursing research: A contemporary dialogue (pp. 127-145). Newbury Park, C A : Sage. 220 Mozingo, J., Thomas, S., & Brooks, E. (1995). Factors associated with perceived competency levels of graduating seniors in a baccalaureate nursing program. Journal of Nursing Education. 34. 115-122. Munroe, H . (1988). Modes of operation in clinical supervision: How clinical supervisors perceive themselves. British Journal of Occupational Therapy. 51 (10). 338-343. Muscari, M . E. (1994). Means, motive, and opportunity: Case study research as praxis. Journal of Pediatric Health Care. 8(5). 221-226. Neary, M . (2001). Responsive assessment: Assessing student nurses' clinical competence. Nurse Education Today. 21. 3-17. Nehring, V . (1990). Nursing clinical teacher effectiveness inventory: A replication study of the characteristics of "best" and "worst" clinical teachers as perceived by nursing faculty and students. Journal of Advanced Nursing. 15. 934-940. Nicol, M . , & Freeth, D. (1998). Assessment of clinical skills: A new approach to an old problem. Nurse Education Today. 18. 601-609. Norman G. R., Van der Vleuten C. P. M . & De Graaff E. (1991) Pitfalls in the pursuit of objectivity: Issues of validity, efficiency and acceptability. Medical Education 25, 119-126. Novak, J. D. , & Gowin, D. B. (1984). Learning how to learn. New York: Cambridge University Press. Nylund, L. , & Lindholm, L. (1999). The important of ethics in the clinical supervision of nursing students. Nursing Ethics. 6. 278-286. Oakley, A . (1981). Interviewing women: A contradiction in terms. In H . Robert (Ed.). Doing feminist research (pp. 30-61). London: Routledge and Kegan Paul. Oermann, M . H . (1997). Evaluating critical thinking in nursing practice. 
Nurse Educator. 22(5). 25-28. Oermann, M . H . (1996). Research on teaching in the clinical setting. In K. R. Stevens (Ed.). Review of research in nursing education (Vol. VII) (pp. 91-126). New York: N L N . Oermann, M . H . , & Gaberson, K. B. (1998). Evaluation and testing in nursing education. New York, N Y : Springer. O'Neill, A . , & McCall, J. M . (1996). Objectively assessing nursing practices: A curricular development. Nurse Education Today. 16. 121-126. 221 Orchard, C. (1994a). Management of clinical failure in Canadian nursing programs. Western Journal of Nursing Research, 16, 317-331. Orchard, C. (1994b). The nurse educator and the nursing student: A review of the issue of clinical evaluation procedures. Journal of Nursing Education. 33(6). 245-257. Orchard, C. (1992). Factors that interfere with clinical judgments of students' performance. Journal of Nursing Education, 31(7), 309-313. O'Shea, H . , & Parson, M . (1979). Clinical instruction: Effective and ineffective teacher behavior. Nursing Outlook, 27, 411-419. Packer, J. L. (1994). Education for clinical practice: An alternate approach. Journal of Nursing Education. 33. 411-416. Pagana, K. D. (1988). Stresses and threats reported by baccalaureate students in relation to an initial clinical experience. Journal of Nursing Education. 27. 418-424. Paterson, B. L . (1998). Partnership in nursing education: A vision or a fantasy? Nursing Outlook. 46. 284-289. Paterson, B. L. (1995). Developing and maintaining reflection in clinical journals. Nurse Education Today. 15. 211-220. Paterson B. L. (1994a). A framework to identify reactivity in qualitative research. Western Journal of Nursing Research,16. 301-316. Paterson B. L . (1994b). The view from within: Perspectives of clinical teaching. International Journal of Nursing Studies. 31. 349-360. Paterson B. L. (1991). The juggling act: A n ethnographic analysis of clinical teaching in nursing education. PhD. Dissertation. University of Manitoba. Paterson B. 
& Groening M . (1996) Teacher-induced countertransference in clinical teaching. Journal of Advanced Nursing. 23. 1121-1126. Paterson, B. L. , Thorne, S., Crawford, J., & Tarko, M . (1999). Living with diabetes as a transformational experience. Qualitative Health Research. 9. 786-802. Patterson, E. (1996). The analysis and application of peer assessment in nurse education, like beauty, is in the eye of the beholder. Nurse Education Today, 16(1), 49-55. Pavlish, C. (1987). A model for clinical performance evaluation. Journal of Nursing Education. 26. 338-339. 222 Pierson, W. (1998). Reflection and nursing education. Journal of Advanced Nursing. 27, 165-170. Polit, D. F., & Hungler, B. P. (1995). Nursing research: Principles and methods (5th ed.). Philadelphia: Lippincott. Pugh, E. J. (1988). Soliciting student input to improve clinical teaching. Nurse Educator. 13(5). 28-33. Pugh, E. J. (1986a). Research on clinical teaching. In W. L . Holzemer (Ed.), Review of research in nursing education (Vol. I), (pp. 73-92). New York: N L N . Pugh, E. J. (1986b). Use of behavioural observation to augment quantitative data when studying clinical teaching. Journal of Nursing Education. 25. 341-343. Radwin, L. E. (1995). Conceptualizations of decision making in nursing: Analytic models and "knowing the patient". Nursing Diagnosis. 6. 16-22. Reason, P. (1994). Three approaches to participatory inquiry. In N . K Denzin, & Y . S. Lincoln (Eds.). Handbook of qualitative research (pp. 324-339). Thousand Oaks, C A : Sage. Reed, J., & Proctor, S. (1993). Assessment of reflective practice. In J. Reed, & S. Proctor (Eds.). Nursing education: A practice-based approach (pp. 173-182). Sand Diego, C A : Singular Pub. Group. Reed, S. (1992). Canadian competence. Nursing Times. 88(3"). 57-59. Reeve, M . M . (1994). Development of an instrument to measure effectiveness of clinical instructors. Journal of Nursing Education. 33. 15-20. Reilly, D. E., & Oermann, M . H . (1992). 
Clinical teaching in nursing education (2nd ed.). New York: N L N . Richardson, G. , & Maltby, H . (1995). Reflection-on-practice: Enhancing student learning. Journal of Advanced Nursing. 22. 235-242. Rideout, E. M . (1994). "letting go": Rationale and strategies for student-centred approaches to clinical teaching. Nurse Education Today. 14. 146-151. Riley, J. (1990). Getting the most from your data: A handbook of practical ideas on how to analyze qualitative data. Technical and Educational Services Ltd. Roberts J. & Brown B. (1990) Testing the OSCE: A reliable measurement of clinical nursing skills. The Canadian Journal of Nursing Research. 22(1"). 51-59. 223 Ross M . , Carroll G. , Knight J., Chamberlain M , Fothergill-Bourbonnais F. & Linton J. (1988). Using the OSCE to measure clinical skills performance in nursing. Journal of Advanced Nursing. 13, 45-56. Sandelowski M . (1996). One is the liveliest number: The case orientation of qualitative research. Research in Nursing & Health, 19, 525-529. Sandelowski M . (1993) Rigor or rigor mortis: The problem of rigor in qualitative research revisited. Advances in Nursing Science. 16(2), 1-8. Sandelowski M . (1986) The problem of rigor in qualitative research. Advances in Nursing Science. 8(3). 27-37. Scanlan, J. M . , Care, W. D., & Gessler, S. (2001). Dealing with the unsafe student in clinical practice. Nurse Educator. 26, 23-27. Schoenhofer, S., & Coffman, S. (1994). Prizing, valuing, and growing in a caring-based program. In A . Boykin (Ed.), Living a caring-based program, (pp. 127-165). New York: N L N . Schwandt, T. A . (1994). Constructivist, interpretivist approaches to human inquiry. In N . K Denzin, & Y . S. Lincoln (Eds.). Handbook of qualitative research (pp. 118-137). Thousand Oaks, C A : Sage. Scriven M . (1991) Evaluation thesaurus (4th ed.). Sage, Newbury Park CA. Sedlak, C. A . (1992). Use of clinical logs by beginning nursing students and faculty to identify learning needs. 
Journal of Nursing Education. 31. 24-28. Sellappah, S. Hussey, T., Blackmore, A. M , & McMurray, A . (1998). The use of questioning strategies by clinical teachers. Journal of Advanced Nursing, 28, 142-148. Sharp, K. (1998). The case for case studies in nursing research: The problem of generalization. Journal of Advanced Nursing. 27, 785-789. Shoffher, D. H . , Davis, M . W., & Bowen, S. M . (1994). A model for clinical teaching as a scholarly endeavor. Image: Journal of Nursing Scholarship, 3, 181-184. Short, J. P. (1993). The importance of strong evaluation standards and procedures in training residents. Academic Medicine, 68, 522-525. Sieh, S., & Bell, S. K. (1994). Perceptions of effective clinical teachers in associate degree programs. Journal of Nursing Education. 33. 389-394. 224 Stake, R. (1994). Case studies. In N . K Denzin, & Y . S. Lincoln (Eds.). Handbook of qualitative research (pp. 236-247). Thousand Oaks, CA: Sage. Stewart, R. (1991). Instructor's perceptions of subjectivity in clinical evaluation of nursing students. M.Ed. Thesis. University of Alberta. Stokes, L. (1998). Teaching in the clinical setting. In D. M . Billings, & J. A . Halstead (Eds.). Teaching in nursing: A guide for faculty (pp. 281-297). Philadelphia: W. B. Saunders. Streubert, H . J., & Carpenter, D. R. (1995). Qualitative research in nursing: Advancing the humanistic imperative. Philadelphia: Lippincott. Tanner, C. (1988). Curriculum revolution: The practice mandate. In Curriculum revolution: Mandate for change. New York: N L N . Theis, E. C. (1988). Nursing students' perspectives of unethical teaching behaviors. Journal of Nursing Education. 26. 150-154. Thorne, S. (1997). The art (and science) of critiquing qualitative research. In J. M . Morse (Ed.). Completing a qualitative project: Details and dialogue (pp. 117-132). Thousand Oaks: Sage. Tracy, S. M . , Marino, G. J., Richo, K. M . , & Daly, E. M . (2000). 
The clinical achievement portfolio: An outcome-based assessment project in nursing education. Nurse Educator. 25. 241-246. Van der Vleuten C. P. M . , Norman G. R. & De Graaff E. (1991) Pitfalls in the pursuit of objectivity: Issues of reliability. Medical Education. 25. 110-118. Wellard, S. J., & Bethune, E. (1996). Reflective journal writing in nurse education: Whose interests does it serve? Journal of Advanced Nursing. 24, 1077-1082. Wenzel, L . S., Briggs, K . L . , & Puryear, B. L. (1998). Portfolio: Authentic assessment in the age of the curriculum revolution. Journal of Nursing Education, 37, 208-212. While, A . E. (1991). The problem of clinical evaluation: A review. Nurse Education Today. 11. 448-453. Whitely, S. (1992). Evaluation of nursing education programmes -Theory and practice. International Journal of Nursing Studies, 29(3), 315-323. Wiles, L. L . , & Bishop, J. F. (2001). Clinical performance appraisal: Renewing graded clinical experiences. Journal of Nursing Education. 40. 37-39. 225 Wilson, M . E. (1994). Nursing student perspective of learning in a clinical setting. Journal of Nursing Education, 33, 81-86. Windsor, A . (1987). Nursing students' perceptions of clinical experience. Journal of Nursing Education, 4, 150-154. Wink, D. M . (1993). Using questioning as a teaching strategy. Nurse Educator, 15(5), 11-15. Wolff, A . C. (1998). The process of maturing as a competent clinical teacher. M S N Thesis. U B C . Wong, F. K. Y . , Kember, D. , Chung, L. Y . F., & Yan, L. (1995). Assessing the level of student reflection from reflective journals. Journal of Advanced nursing, 22, 48-57. Wong, J., & Wong, S. (1987). Towards effective clinical teaching in nursing. Journal of Advanced Nursing, 12, 505-513. Wood V . (1986) Clinical evaluation of student nurses: Syllabus needs for nursing instructors. Nurse Education Today. 6. 208-214. Wood, V . (1982). Evaluation of student nurse clinical performance: A continuing problem. 
Appendix A: Concept Map

CNT
• Relationship <-> Mutuality
• Ways of knowing
• Experience (novice to expert) - familiarity of area, familiarity of program
• Values/Beliefs - learning, evaluation, role, students, nursing, self-awareness, reflection, bias
• Learning plan

Student
• Learning goals
• Involvement/Preparedness
• Personality
• Ways of knowing/learning

Environment
• Client acuity
• Nature of the floor staff
• Stability vs. flux of ward

Data Collection / Data Analysis
• Patterns
• Salience of information
• Clinical impressions

Formative Evaluation (Teaching-Learning)
• Learning issues
• Strategies
• Evaluation - T/L process / student progress

Agency - School / Hospital
• PAF/ELE curriculum philosophy
• Appeal - learning contract
• Clinical rotations - length / format
• Student handbook - policy
• Student number

Professional Association
• RNABC - Standards
• CNA
• Ethics
• Public
• Safety
• Competencies

In order to participate in this study you must be able, interested, and willing to engage with the researcher in weekly discussions to identify and articulate your practices and to reflect on the values, beliefs, and assumptions underpinning your thoughts and actions. You would need to be interested in reflecting on and uncovering the basis of your clinical practices and be willing to work closely with the researcher in analyzing and interpreting your practices.
Participation in this research will involve a contribution of your time and effort. The study will occur over a 13-week teaching semester and will involve recording and analyzing data concerning your clinical evaluation practices as you teach a group of students on your clinical ward. As data are gathered, the researcher and you will work together to form tentative interpretations. These interpretations will then be used to focus successive sampling decisions and data generation. It is anticipated that your participation will average 3-5 hours/week over the 13-week period, to a maximum of 65 hours.

You will be asked to record your thoughts and feelings using a modified think-aloud (TA) technique. TA is a data-gathering method in which participants verbalize their thoughts during the performance of a cognitive task. You will be provided with a voice-activated portable tape recorder and instructed to carry it with you at all times for one clinical rotation. You will be required to make recordings whenever evaluative thoughts, feelings, and questions occur during the course of clinical teaching, immediately following the clinical day, as well as at random and unspecified times in between clinical teaching sessions (e.g., driving to the office, during dinner, at bedtime).

Participation in this study requires that you be interviewed at least 9 and not more than 13 times: at the beginning of the study, following preliminary analysis of each set of the TA tapes, following analysis of written evaluation documents, and during and after construction of a concept map. All interviews will be semi-structured, and each will last approximately one hour. All interviews will be tape recorded and selectively transcribed for analysis. You may refuse to answer any question in the interviews or ask that the tape recorder be turned off at any time. You may also decide that a portion of your weekly tapings be omitted from analysis.
The initial interview will focus on the collection of demographic data (age, nursing education, clinical teaching and nursing practice experience, course work in clinical teaching/evaluation) and an initial discussion of your thinking around clinical evaluation. You will be required to participate in at least six post-tape interviews that will take place at a location and time convenient to you. Each week's tape will be selectively analyzed, and themes, issues, and questions arising from each set of data will be discussed and clarified with you in an attempt to describe what is occurring in your clinical evaluation of students and why practices were carried out in a certain way. Apparent relationships between influences and evaluation practices will be reviewed and examined further with you. As data collection and analysis proceed, the researcher and you may decide on specific areas that you could address in your upcoming clinical teaching situations and tapings.

With the written consent of the students, the researcher will analyze the midterm and final clinical evaluation documents for the students in your clinical group, as well as three to four midterm and final evaluation documents of former students selected by you because the documents reflect a range of evaluation situations (skilled/less skilled students and clear/less clear decisions).

At the end of the clinical rotation you will be asked to construct a concept map that represents your view of clinical evaluation. The map should include the important concepts, and the connections between them, that have emerged from the tapes, interviews, and document analysis. A further interview will be conducted with you to clarify the meanings of the concepts and how they are related.

Anticipated Benefits and Risks of Participating in this Study: If you agree to participate in this study you will have the opportunity to contribute to the body of knowledge of clinical evaluation in nursing education.
As critical research, the study should also enrich your understanding of your clinical evaluation practices, enabling you to make informed decisions about your future evaluation practice. There is a risk of psychological or emotional distress resulting from the acts of introspection and self-disclosure, and from the intense probing of your thoughts and practices by a colleague. It is conceivable that you may experience the research process as an evaluation or judgment of your teaching ability.

Confidentiality: All information that you provide will be kept confidential. Your tapes and interview documents will be identified only by code number and kept in a locked filing cabinet in the researcher's office. Tapes will be transcribed by the researcher, and data will be viewed only by yourself, the researcher, and the researcher's thesis committee members. Tapes, interview and analysis documents, student evaluation documents, and any other notes written by the researcher will be stored in a locked compartment to which only the researcher has access. Data records on the researcher's hard drive cannot be accessed without the researcher's password. Your signed consent form will be kept separate from the data sources. There are no plans for future use of the raw data beyond this proposed study. Tapes and documents will be retained by the co-investigator in a locked filing cabinet and destroyed 7 years following the completion of the study.

To protect your anonymity and that of your students and any patients, clinical staff, or teaching colleagues discussed in the course of this study, the actual names of all persons will not appear on transcripts or research reports: all persons will be given a numerical code, and pseudonyms will be used in the research report and/or any other publications related to the study.
You are being invited to participate in this research because your clinical evaluation documents are an important source of data about your CNT's evaluation values, beliefs, and practices. Your participation in the study is voluntary. You are under no obligation to participate in the study.

Study Procedures: Your clinical evaluation documents will contribute to the research findings. The content of your evaluation documents will be compared and contrasted with the emergent analysis from the other sources of data in an attempt to clarify, elaborate on, and expand the description of your CNT's evaluation practices. Participation in this study requires that you provide permission for the researcher to study your midterm and final evaluation documents from NURS xxx.

Anticipated Benefits and Risks of Participating in this Study: There will be no direct benefits for you in participating in the study; however, you will have the opportunity to contribute to the body of knowledge of clinical evaluation in nursing education. There are no known risks to participating in this study. None of your time is being requested.

Confidentiality: All information that you provide will be kept confidential. Data will be viewed only by your CNT, the researcher, and the researcher's thesis committee members. For students in the CNT's current clinical group, all discussions of patterns and themes will be kept general so that your CNT will not be aware of which students in her/his current clinical group consented to include their evaluation documents. All attempts will be made to keep the discussion of data from former students general; however, it may occasionally be necessary for the CNT to know the identity of a past student in order for the CNT to recall the experience of working with her/him. Your evaluation documents, and any other notes written by the researcher in relation to your documents, will be stored in a locked compartment to which only the researcher has access.
Data records on the researcher's hard drive cannot be accessed without the researcher's password. There are no plans for future use of the raw data beyond this proposed study. Tapes and documents will be retained by the co-investigator in a locked filing cabinet and destroyed 7 years following the completion of the study. It is likely that there will be publications and/or presentations as a result of this research. To protect your anonymity, your name will not appear on any of the research reports. You will be given a numerical code, and pseudonyms will be used in the research report and/or any other publications related to the study. Your name will not be used on written notes nor in the final report of the study.

Appendix D: Practice Appraisal Form - Semester V

Domains of Nursing Practice and Related Competencies

Health and Healing Domain - Competencies
1.1 Creating a climate for and establishing a commitment to health and healing
1.2 Providing comfort measures
1.3 Preserving personhood
1.4 Presencing: being with the client
1.5 Maximizing the client's participation and control
1.6 Facilitating understanding through communication
1.7 Guiding and supporting clients through transition
1.8 Providing emotional support

Health and Healing Domain - Quality Indicators
*Client(s) in this semester refers to individuals, families, and groups.
• evolves an understanding of factors necessary to maintain the wholeness and uniqueness of persons in relation to illness and accident prevention.
This understanding will include a sensitivity to culture, context, group dynamics, and community.
• recognizes individual and collective needs for prevention, including the developmental stages of the individual, family, group, or community
• commits to the promotion of health and healing while working on preventive initiatives
• considers clients to be individuals, groups, families, and/or communities
• continues to evolve an understanding of own caring practices in relation to working on preventive initiatives
• creates opportunities for clients to guide interactions
• facilitates opportunities for effective communication with and between clients
• identifies opportunities to develop and initiate preventive programs
• identifies preventive support systems and resources available for clients
• recognizes preventive aspects related to health challenges
• with clients, facilitates the mobilization of community resources
• provides opportunities for clients to participate in preventive health practices
• demonstrates caring as the moral imperative to act ethically and justly

Teaching-Learning Domain Competencies
2.1 Timing: capturing the client's readiness to learn
2.2 Participating with clients to integrate health and healing processes into their lives
2.3 Eliciting and understanding the client's interpretation of health and healing experiences
2.4 Providing clients with interpretation of their health and healing issues
2.5 Providing relevant information for clients to make informed decisions
2.6 Facilitating client-directed change
2.7 Evaluating learning outcomes

Teaching/Learning Domain - Quality Indicators
*Client(s) in this semester refers to individuals, families, and groups.
• recognizes and responds to cues that show clients' readiness to learn preventive initiatives
• recognizes patterns of learning needs of clients
• understands and uses teaching/learning theoretical perspectives in working with clients
• understands own teaching/learning practices in relation to different theoretical perspectives
• creates opportunities to facilitate clients' understanding of and participation in preventive health and healing practices
• evolves an understanding of the nurse as co-learner in health education
• mutually develops a health education plan with clients
• identifies the multitude of factors that may influence the client's ability to learn preventive initiatives
• offers accurate and appropriate information (e.g., epidemiological and theoretical data)
• broadens and shares knowledge of preventive health and healing initiatives available to clients
• shares knowledge and takes an active role as co-learner with peers
• appreciates the importance of health education within the context of clients' experiences
• monitors and evaluates the effectiveness of health education planning with clients
• monitors and evaluates with clients the effectiveness of health education programs
• develops an understanding of the role of the nurse as teacher
• recognizes the significance of personal and professional growth and of life-long learning

Clinical Judgment Domain Competencies
3.1 Assessing the client's potential for health and healing
3.2 Making clinical decisions in relation to the client's experience and understanding of health and healing
3.3 Detecting, reporting, and documenting changes in client health and healing experiences
3.4 Anticipating health and healing issues
3.5 Anticipating change prior to confirming signs
3.6 Recognizing patterns of client response to similar situations
3.7 Assessing the client's response to various health and healing initiatives
3.8 Adapting practice to reflect an understanding of the client's experience of health and healing
3.9 Performing skillfully in situations that are changing
3.10 Setting priorities to meet multiple client needs and requests

Clinical Judgment Domain - Quality Indicators
*Client(s) in this semester refers to individuals, families, and groups.
• refines health assessment to include preventive initiatives and the promotion of health
• begins to recognize patterns of behavioral and other risk factors for target populations
• begins to identify "at risk" factors for clients using epidemiological data
• begins to identify needs for program development for "at risk" populations
• begins to identify barriers to effective prevention programs
• begins to assess and facilitate clients' access to preventive programs
• anticipates potential change in clients' conditions and takes appropriate action
• broadens reporting abilities that reflect increasing observational skills of individuals, groups, families, and communities in relation to prevention and to the promotion of health
• recognizes the multidisciplinary dimension of prevention programs and reports to appropriate multidisciplinary team members
• quickly recognizes the signs and symptoms that signal an emergency and initiates appropriate action
• increasingly able to integrate and relate nursing, epidemiological, pharmacological, biological, and social science knowledge to a client, with particular emphasis on prevention
• increasingly able to determine, prioritize, organize, and evaluate nursing care, including preventive measures
• appreciates the importance of rapid and planned preventive responses to community crises
• continues to build the nursing knowledge and skills required for the promotion of health and healing initiatives (e.g., medication administration, IV push, wound care, health education)
• in collaboration with clients, demonstrates increasingly appropriate clinical judgments
• demonstrates increasingly autonomous clinical judgments while working in collaboration with clients

Professional Responsibility Domain Competencies
4.1 Monitoring and ensuring the quality of health care practice
4.2 Performing responsibly and in congruence with knowing the client as a person within his/her own context
4.3 Critically examining the quality of own caring practices
4.4 Critically examining one's overall standards of practice
4.5 Monitoring the health care environment for physical and psychological safety
4.6 Advocating for clients regarding safe health/healing practices
4.7 Practicing according to the CNA's Code of Ethics and the RNABC's Standards of Practice
4.8 Practicing within the legal requirements for nursing
4.9 Ensuring that practice reflects continuing currency in nursing
4.10 Participating in the evolution of the nursing profession

Professional Responsibility Domain - Quality Indicators
*Client(s) in this semester refers to individuals, families, and groups.
• broadens scope of practice to include a focus on prevention
• practices according to the CNA Code of Ethics and the RNABC Standards of Nursing Practice
• functions in accordance with agency, college, and university/college policies, procedures, and guidelines
• critically examines and analyzes own nursing practice
• applies principles of safety at all times for self and others
• increases awareness of the legal and ethical implications of practice, including various preventive programs
• recognizes the rights of clients to essential health care as determined by the Canada Health Act
• when appropriate, acts as an advocate for clients
• accepts responsibility for being current with nursing knowledge and skills, including a developing focus on prevention
• promotes quality care
• responds constructively to instructions and suggestions from experienced staff, faculty, clients, and colleagues
• recognizes own limitations and seeks guidance from appropriate resources
• demonstrates honesty, integrity, and confidentiality
• engages in activities that advance the nursing profession

Collaborative-Leadership Domain Competencies
5.1 Taking a leadership role in health and healing practices
5.2 Coordination of, and involvement in, multidisciplinary teams
5.3 Perceiving the hegemony and creating a vision for change
5.4 Engaging in the political process to facilitate the counter-hegemony

Collaborative-Leadership Domain - Quality Indicators
*Client(s) in this semester refers to individuals, families, and groups.
• embodies a way of being that facilitates client empowerment
• actively participates in the multidisciplinary team and begins to articulate the nurse's preventive role
• begins to recognize and critique the role of public policy in influencing health promotion and preventive programs
• actively participates in a collegial manner with peers
• communicates relevant information to appropriate health team members
• critically examines health and healing policies and practices from a health promotion/prevention perspective
• begins to critically examine unquestioned preventive nursing practices and considers alternatives
• continues to challenge the taken-for-granted practices in health care
• begins to recognize the active leadership role nurses can take in influencing change
• offers suggestions to staff, clients, faculty, and colleagues related to nursing practice
• in collaboration with peers, begins to take action on nursing issues
