BRITISH COLUMBIA PRINCIPALS AND THE EVALUATION OF TEACHING

by

WILLIAM EDGAR

B.A., The University of Leicester, U.K., 1981
P.G.C.E., North Staffordshire Polytechnic, U.K., 1983

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE FACULTY OF EDUCATION (Centre for the Study of Curriculum and Instruction)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
June 1996
© William Edgar, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Abstract

The purpose of this study was to investigate the views of British Columbia principals with regard to the formal evaluation of teaching. Four major concepts were addressed: a) the purpose of evaluation; b) the process of evaluation; c) the need for further principal training in evaluation; and d) obstacles to carrying out evaluation. The sex of principals and years of experience as a principal were identified for further analysis because these variables are absent in the literature on formal evaluation. The data consisted of relevant clauses from all 75 British Columbia school district collective agreements and responses to a survey sent to the members of the British Columbia Principals' and Vice-Principals' Association. The achieved sample is 188 principals.

The findings of this study show that the conduct of formal evaluation is a responsibility willingly accepted by principals and that it is a function they consider they carry out well. Collective agreements say little about the purpose of evaluation. The majority of principals believe the most important purpose of evaluation is teacher growth and development. Female principals indicate a stronger orientation towards teacher growth and development than males, but this difference may also be related to principals' different experience levels.

Relatively few evaluations are carried out and only a very small proportion result in "less than satisfactory" reports. Evaluations leading to "satisfactory" and "less than satisfactory" reports are characterised in very different terms by principals. Anecdotal responses support the assertion made in the literature that principals believe they already know who their 'weak' teachers are before conducting an evaluation. British Columbia principals consider time to be the primary obstacle to carrying out formal evaluation. Evaluation cycles and site management responsibilities are perceived as the major time consumers. Neither size of staff nor percentage of teaching time was identified as a significant time barrier by the respondents. Principals do not label themselves as under-trained for the responsibility of formal evaluator of teaching. Moreover, master's specialty and previous training are linked neither to further training needs nor to how well principals believe they do evaluation.
Three policy recommendations emerge from this study: (1) to re-assess the role of the principal as evaluator in the light of principals' wider responsibilities; (2) to consider extending the role of formal evaluator to educators other than school-based administrators; and (3) to re-assess the value of formal evaluation as currently practised.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Figures and Tables
Acknowledgements
Chapter I: Introduction
Chapter II: A Review of the Literature
    The Role of the Principal
    Purpose of Evaluation
    Process of Evaluation
    Competence of Evaluator
    Obstacles to Evaluation
    Summary
Chapter III: Research Design and Methodology
    The Framework for the Study
        Purpose
        Process
        Training
        Obstacles
        Sex of Principal
        Years of Experience as Principal
    Sources of Data
    Data Collection Procedures
    Data Analysis and Presentation
    Design Limitations
    Summary
Chapter IV: British Columbia School District Collective Agreements
    The Process
    The Four Phases of a Formal Evaluation of Teaching
    Formal Evaluation Cycles
    Evaluation Criteria
    The Evaluator and the Evaluatee
    The Initiation of a Formal Evaluation
    Responsibility for Conducting a Formal Evaluation of Teaching
    The Right of Appeal
    Teacher Entitlement to Professional Development
    Summary
Chapter V: Respondents' Backgrounds, Assignments, and Their Role as Evaluators of Teaching
    Biographical Information
    Current Assignment
    The Principal as a Formal Evaluator of Teaching
        Should Principals Do Evaluation? What is the Purpose? and How Well is Evaluation Done?
        In-service Training, Obstacles, and The Four Phases of Evaluation
        Number of Evaluations and "Less Than Satisfactory" Reports
    Additional Comments Made by Respondents
    Summary
Chapter VI: Sex of Principal and Years of Experience as Principal
    Sex of Principal
    Years of Experience as Principal
    Summary
Chapter VII: Purpose, Training, and Obstacles
    Evaluation Purpose
    The Need for Further Training
    Obstacles to Evaluation
    Summary
Chapter VIII: Discussion, Conclusions, and Recommendations
    Discussion
        Purpose
        Process
        Training
        Obstacles
    Conclusion
    Key Findings
    Recommendations
        Policy
        Research
References
Appendices:
    A: Questionnaire
    B: Evaluation Phases in the Collective Agreements
    C: Permissible Data in Evaluation Final Report
    D: Evaluation Criteria and Cycles
    E: School District Numbers, Names and Sizes
    F: Sample Evaluation Article From a British Columbia School District Collective Agreement
    G: Summary of Response Frequencies

List of Figures

3.1 Comparative Distribution of British Columbia Principals and Questionnaire Respondents

List of Tables

3.1 Respondents, British Columbia Principals' and Vice-Principals' Association Principals, and All British Columbia Public School Principals by Sex, Age, School Type, and Staff Size
3.2 Respondents and All British Columbia Principals by School District Size
3.3 Respondents and British Columbia Principals by Criteria and Cycles
3.4 Collective Agreement Wording for Evaluation Cycles
4.1 Collective Agreement Wording on Evaluation Cycles
5.1 Respondent Biographical Data
5.2 Teaching Load, School Type, and Staff Size
5.3 Evaluation Purpose and Quality
5.4 Evaluation Training Attendance Since September 1988
5.5 Evaluation Training Points Since September 1988
5.6 First, Second, and Third Most Important Obstacles to the Conduct of the Formal Evaluation of Teaching
5.7 Factors Present in Evaluations Leading to "Satisfactory" and "Less Than Satisfactory" Reports
5.8 Evaluations Conducted and "Less Than Satisfactory" Reports Written Since September 1988
5.9 Anecdotal Responses
6.1 Sex of Principal by Evaluation Quality, Age, and Master's Specialty
6.2 Sex of Principal by School District Size, School Type, Staff Size, and Teaching Load
6.3 Sex of Principal by Evaluation Training Since September 1988
6.4 Sex of Principal by Evaluation Training Points Since September 1988
6.5 Principal Experience by Evaluation Quality, Age, and Master's Specialty
6.6 Principal Experience by School District Size, School Type, Staff Size, and Teaching Load
6.7 Principal Experience by Evaluation Training Since September 1988
6.8 Principal Experience by Evaluation Training Points Since September 1988
7.1 Sex of Principal by Evaluation Purpose and "Less Than Satisfactory" Reports
7.2 Principal Experience by Evaluation Purpose and "Less Than Satisfactory" Reports
7.3 Principals Categorised on the Basis of Evaluation Criteria by Evaluation Purpose and "Less Than Satisfactory" Reports
7.4 Sex of Principal by Years of Experience as Principal
7.5 Sex of Principal and Need for Further Training in Evaluation
7.6 Principal Experience and Need for Further Training in Evaluations Leading to a "Satisfactory" Report
7.7 Principal Experience and Need for Further Training in Evaluations Leading to a "Less Than Satisfactory" Report
7.8 Time Obstacle Statements
7.9 Time as an Obstacle and Sex of Principal
7.10 Time as an Obstacle and Principal Experience
7.11 Time as an Obstacle and Principals Categorised on the Basis of Evaluation Cycles

Acknowledgements

There are a number of individuals and organisations I should like to thank for the help they have given me in the production of this thesis and the completion of my MA in Education at UBC. Firstly, without the interest and practical support offered to me by the executive officers and staff of the British Columbia Principals' and Vice-Principals' Association, this thesis would not have been possible. I am also indebted to the 11 people who piloted the questionnaire and the 267 members of the Association who participated in the study, and thank them for the kind words and suggestions made in the course of doing so. Thanks are also due to the staff at the School Finance and Data Management Branch at the BC Ministry of Education, who met all my requests for information promptly and carefully. My final thanks, for those 'officially' involved in this work, go to Arleigh Reichl, in Education Computer Studies, who assisted me greatly with the statistical analysis; my Thesis Committee, Don Fisher, Frank Echols, and Graham Kelsey; and the external reader, Dan Brown.
I should also like to extend some special thanks to friends, both in Vancouver and elsewhere, and in particular to the Lythgoe Family. I am very grateful to June, Len, Shannon and my 'buddy on the block', Garnet, for their friendship and for having contributed enormously to making the past two years a pleasurable period for me. My Canadian cousins, the Oliver Family, also helped me to settle in and made the transition from life in Great Britain to life in Canada relatively straightforward. Last, but by no means least, I give a very special thank you to my Mum, Eva Mary Edgar, and my Uncle and Aunt, David and Nola Edgar, for their moral and practical support in this venture on another continent. Without them the whole enterprise would have been very much more difficult.

CHAPTER I: Introduction

In North America the role of formal evaluator of teaching is generally carried out by the school principal, although superintendents, their assistants, district principals and, in some cases, school vice-principals also perform this role. In British Columbia, formal evaluation is governed by statute and the provisions laid down in the school district collective agreements drawn up between the local boards of school trustees and the local teacher unions affiliated to the British Columbia Teachers' Federation (BCTF). Clause 59 of the Teaching Profession Act 1987 (Province of British Columbia, 1987a), Section 121 (1), reads: "A person appointed as a principal or vice-principal in a public school shall, subject to this Act and the regulations:...(c) evaluate teachers under his supervision and report to the board as to his evaluation."

The current personnel procedures in most British Columbia school districts date back to 1988, following the passing of Bills 19, as the Industrial Relations Reform Act (Province of British Columbia, 1987), and 20, as the Teaching Profession Act (Province of British Columbia, 1987a), in 1987. An important feature of Bill 19 was to change the former Labour Code of British Columbia "to give more weight to the interests of employers" (Kelsey, Lupini, & Clinton, 1995, p.6). Bill 20 provided teachers with the option of remaining an 'association' outside the provisions of the new Industrial Relations Reform Act but without the right to strike, or of becoming a 'union' within the provisions of the Act and with the right to strike. Teacher associations in all seventy-five school districts in the Province voted to become unions and each subsequently voted to be affiliated to the BCTF. A major consequence of the Teaching Profession Act was the clear distinction drawn between teachers, who were now members of district unions, and administrators, who were disallowed union membership. This quasi 'union/management' distinction, together with the responsibility for evaluation resting mainly with school administrators, highlights the importance of the principal in any study of formal evaluation.

My interest in this subject dates back to the mid to late 1980s when teacher appraisal received increasing attention in England and Wales following the introduction of a series of major government educational initiatives. These initiatives addressed the public examination system, the curriculum, and school governance. A process of formal teacher appraisal was intended to address the effectiveness of teaching. The resulting government regulations (HMSO, 1991) and accompanying circular set out the principles which were to be followed by Local Education Authorities (LEAs).
In turn, the LEAs were to formulate teacher appraisal guidelines for individual schools to follow when they developed their own 'institution specific' appraisal process. Some important features of these principles and guidelines were the emphases on a) professional development; b) career planning; and, c) a supportive, non-threatening process.

In this context, as a senior middle manager in a school with 1,650 pupils on roll and 90 staff, I became involved in helping to construct an appraisal system that would meet government and LEA requirements and that would be viewed favourably by school staff. Meeting the second of these two criteria was particularly important because the perceptions of, and attitudes towards, teacher appraisal by teachers heavily influence how well the goals of appraisal can be satisfactorily attained (Darling-Hammond, 1986; Sergiovanni, 1977, 1991). Primarily, the goals were to maintain high standards of teaching where they existed, and improve standards where necessary.

I was able to form a view of appraisal not only in my capacity as a member of the School Appraisal Committee but also as the Humanities Co-ordinator and Head of the Social Science Department. This position gave me a dual perspective as one who appraised the staff within my department and as one who, in turn, was appraised by the head and deputy head teachers. This led me to the conclusion that many teachers perceive appraisal as merely a middle and senior management device for identifying poor teachers. As Schonberger (1986) states with reference to Reavis (1978): "national surveys of teachers have tended to show teachers as distrustful of the supervisory process as traditionally practiced." Furthermore, "Teachers have come to regard supervision with anxiety, fear, suspicion, and resistance" (p.249). However, I also took the view that "teachers, like any other group of professionals, must accept the need to be appraised. But it must be organised so that the vast majority who do a good job are encouraged" (Edgar, 1991, p.24). In other words, teacher attitudes are important but so is the quality and orientation of the evaluation.

It is important to distinguish between informal and formal evaluation. The former is implicitly recognised as occurring in day-to-day professional interaction. The latter explicitly identifies the process to be followed and, of crucial importance, the outcome is recorded. Therefore, while the first is often 'taken as read', the second tends to be associated with categorisation as good or bad, and with filed information which can be used and referred to at some time in the future. The formal evaluation of teaching, governed in British Columbia by Provincial statute and school district collective agreements, is the subject of interest in this thesis.

The effectiveness of formal evaluation, as stated above, is linked to teacher perceptions and attitudes but also depends heavily on the evaluator. Examining formal evaluation from the perspective of the evaluator, within an ends-means framework, highlights four important concepts: a) the reason or purpose principals have for conducting evaluations (other than their contractual obligations); b) the evaluation process principals have to work with; c) the level of professional preparation and training of principals; and, d) the obstacles which may prevent principals from fulfilling this role to their desired standard. These four concepts are, of course, highly interrelated.
While purpose very clearly relates to the ends of formal evaluation, the process can also convey purposes which may be quite different from those formally stated. The extent to which a particular factor in the evaluation process may be seen as an obstacle is likely to depend, in part, on the purpose the evaluator has in mind and how far the evaluator has been trained for the role.

Purpose assumes particular importance because of the widespread teacher distrust referred to earlier. Purpose also has a clear political dimension. What happens in public schools is legitimately part of the public political arena. School boards, amongst other bodies, are held to be publicly and politically accountable for the perceived standard of education. School principals now occupy the ground which lies between the public (in the form of parents and the school board) and professional educator colleagues who are trying to provide a service to that public. Therefore, while principals have a role as educational leaders and instructional managers and may well wish to promote professional growth and development, they also have a responsibility to ensure accountability for the quality of the service provided to the public. This, in turn, may lead to a sense of being 'caught' between two apparently contradictory philosophies.

The complexity of the environment in which principals now operate highlights the need to know about the quality of professional preparation principals receive. A greater understanding of the perceptions of principals as evaluators may also lead to a more specific understanding of the impediments to carrying out the evaluator role. Teaching and learning are the raison d'être for schools and formal evaluation is the prescribed means to assess the "classroom situation". The success or failure of evaluation depends heavily on the objectives principals have for evaluating, their level of competence in evaluating, and how far they are able to carry out evaluation unhindered. As Sergiovanni (1991) asserts: "The nature and characteristics of evaluation knowledge are determined by the way in which the supervisor understands them... To understand an evaluation, therefore, one must understand the evaluator" (p.293). Therefore, the purpose of this study is to elicit the views of British Columbia principals with regard to the formal evaluation of teaching in relation to the four concepts of purpose, process, training, and obstacles.

CHAPTER II: A Review of the Literature

The literature on the formal evaluation of teaching is extensive but can be organised around four key concepts. These are a) the purpose of evaluation; b) the process of evaluation; c) the competence of the evaluator; and, d) the obstacles to conducting evaluation. These concepts are not mutually exclusive but do provide useful foci for four of the sections of this chapter. The concept of purpose relates to the apparently competing needs of the school, as an educational organisation within a wider political context, and the needs of teachers. The discussion about process examines the relative positions of principals and teachers and how far the purposes of evaluation are met. Competence considers the professional expertise of principals as evaluators, including training, and "obstacles" addresses how time may hinder a principal's capacity to satisfactorily carry out the formal evaluation of teaching. The first section of this chapter makes reference to the wider role of the principal.
The evaluation of teaching is only one part of a much broader set of principal responsibilities. In particular, this opening section includes the issues of instructional leadership and principals' workload. It also addresses the impact of principals' experience and the sex of principals on educational administration. The experience principals have as principals is examined because, due to the complexity of the role, 'mastery' of the role of principal is likely to require practice. The sex of principals has been addressed because a section of the literature on educational administration draws a distinction between the professional behaviour of male and female educational managers. The chapter concludes with a summary.

The Role of the Principal

In this decade, much has been written about the leadership role of the principal (Rossow, 1990; Sergiovanni, 1991; Sharp & Walter, 1994; Sybouts & Wendel, 1994; Ubben & Hughes, 1992; et al.). This literature often refers to the 'effective-schools research' conducted in the 1970s and 1980s and places considerable emphasis on the importance of the principal in bringing about school success.

The extent and complexity of the role of principal is considerable. Sharp and Walter (1994) illustrate this well as they systematically work through the role of the American principal 'as school manager', from "school finance", through the "school facility", "public relations", "personnel role", "school law", "food services", "student discipline", and "pupil transportation", to "principal as master schedule maker" (p.vii). They conclude in their introduction that "The principal, whether elementary or secondary, is the single most important person to a school's success" (p.1).

Beck and Murphy (1993) present a series of metaphors that have been associated with the principalship since the 1920s. Their list of metaphors for the 1990s is: Principal as leader; as servant; as organisational architect; as social architect; as educator; as moral agent; and, as person in the community. Beck and Murphy consider the generic role of the manager in the post-industrial era and quote Gerding and Serenhuijseur, cited in Beare (1989, p.19), when they suggest that the 'new manager' will be "a customized version of Indiana Jones: proactive; entrepreneurial; communicating in various languages; able to inspire, motivate and persuade subordinates, superiors, colleagues and outside constituents." (p.190). However, Blumberg and Greenfield (1986) remind us that principals are people and therefore are bound to range in effectiveness. They assert that "very few, if any, can possibly live up to the 'White Knight' image that we hold so dear" (p.232).

A limitation of the literature on the role of the principal is that it treats the occupants of this office as a relatively homogeneous group, save for a distinction based on 'effective' and 'less effective' practice. Alder et al. (1993, p.4) refer to this tendency and, in particular, the accusation made by Shakeshaft (1987), that the literature on school management is "androcentric" because it largely fails to distinguish between male and female principals. Shakeshaft (1989) is also quoted in Pigford and Tonnsen (1993, p.2) as asserting that "the absence of accurate data on women administrators is by design and is evidence of a 'conspiracy of silence'".
While the literature does not refer directly to a distinction between men and women principals in the realm of formal evaluation, a relatively small but interesting literature speaks to the issue of gender differences in school administration generally. A section of this literature addresses the kind of data that Shakeshaft suggests is suppressed. For example, Gross and Trask (1976) and Blumberg and Greenfield (1986) highlight the longer periods women spend as classroom teachers before being promoted into educational administration and that most often women achieve principalships in elementary rather than secondary schools. The literature suggests that once women obtain a principalship they adopt a more collegial and caring approach to the function of school leadership (Alder, Laney & Packer, 1993; Ozga, 1993; Regan & Brooks, 1995; Tibbetts, 1980).

Tibbetts (1980), citing Clement, et al. (1977), Grambs (1976), and Gross & Trask (1976) cited in Grambs (1976), makes specific reference to the performance of teachers when she suggests "Data indicate that, on the average, the caliber of performance of... teachers in schools administered by women is found to be of a higher quality than in schools managed by men" (p.176). She goes on to assert, citing Fishel and Pottker (1975), Frasher and Frasher (1979), Grobman and Hines (1956), and Gross and Trask (1964) in Meskin (1974), that "Women principals induce more professional performances and productive behavior from teachers who consequently use more desirable practices, resulting in higher ratings for teacher performance in schools with women principals" (p.177).

Ozga (1993) relates school management to the literature on leadership and motivation generally when asserting "leadership is typically authoritarian, charismatic or entrepreneurial; motivation is typically competitive, and linked to success defined as winning, as beating down the opposition" (p.10). She continues:

    The beginnings of research on women's management and leadership styles suggest that there are differences from this conventional model (Neville 1988). Women's leadership style is less hierarchical and more democratic. Women, for example, run more closely knit schools than do men, and communicate better with teachers. (p.11)

Expanding on the description that Ozga provides, Regan and Brooks (1995) make reference to five "Feminist Attributes" (p.25), one of which is courage. While they acknowledge the caring and collaborative attributes as well, Regan and Brooks provide an example of what they mean by courage when they suggest that women "exercise courage in support of the organization. They take the high road and encourage everyone in the organization to achieve the high road with them" (p.30).

Ozga (1993), citing Ball (1987), identifies two styles of management termed 'managerial' and 'interpersonal' and suggests that the first of these tends to be exhibited more by men and the second more by women (p.31). Ozga (1993), referring to the 'managerial' style, quotes Ball (1987, p.97) when stating "in theory at least, the roles and responsibilities of staff are relatively fixed and publicly recorded" (p.31). The 'interpersonal' style on the other hand is characterised by a reliance on personal relationships and face-to-face contact to fulfil the role. These differences in style may have consequences for the way formal evaluation is conducted because they are linked to the way principals interact with teachers.
For example, these styles may result in rather different approaches to evaluation if male principals are more concerned with bureaucratic functions and less concerned with the human context than are female principals. In other words, a more 'managerial' approach may place greater importance on the needs of the organisation, whereas an 'interpersonal' style may give priority to the needs of the employee.

A second way of distinguishing between principals is on the basis of experience as a principal. However, the literature has very little to say about the role of administrative experience in determining principal behaviours. Principal experience is referred to from time to time (Blumberg & Greenfield, 1986; Rossow, 1990; Ubben & Hughes, 1992; Webster, 1994) but almost always in passing or in terms that take its positive 'developmental' effect for granted. For example, Blumberg and Greenfield (1986) refer to the possibility "of better-prepared 'rookie' principals who, through practicing their craft, can become more skilled and more effective over time" (p.239). Rossow (1990) asserts that "The principal's previous experiences will influence his decisions and activities" (p.42), but is unable to give more than an intuitive rationale for doing so. Morris et al. (1984) refer extensively to "Principaling and its effect on the principal" (p.181), but do not identify any research that seeks to identify differences in the behaviours of principals with different levels of experience. Webster (1994) provides a final example when he states that "It is possible for experienced principals to list...an infinite number of specific skills required in the principalship" (p.41), but at no point does he address the issue of experience in detail.

A possible exception to this truncated or non-existent reference to the effects of experience may be provided by Sergiovanni (1991). He refers to Hogben's (1981) work based on Freidson's (1972) examination of the medical profession. According to Sergiovanni, Hogben identifies four major differences between medical professionals and medical researchers and theoreticians: "Professionals aim at action, not at knowledge...professionals need to believe in what they are doing as they practice...professionals [rely] on their own firsthand experiences...the practitioner is very prone to emphasize the idea of indeterminancy or uncertainty" (1991, p.291/292, author's emphases). Sergiovanni maintains that these differences can also be applied to the teaching profession. Even though this issue is raised in order to promote the need for principals as evaluators to accommodate to the "clinical mind" of teachers, it also identifies characteristics which may apply to principals themselves. One characteristic above is that professionals "aim at action" and:

    in this process... seek "useful" rather than "ideal" knowledge...By taking action, they seek to make sense of the problems they face and to create knowledge in use. They rely heavily on informed intuition to fill in the gaps between what is known and unknown. (p.291, author's emphases)

This clearly highlights the role of experience and Sergiovanni suggests that the 'creation of knowledge in use' is necessary because existing theory is only helpful in addressing a small minority of the problems professional educators face.
Indeed, another characteristic ascribed to professionals is a heavy reliance on their own firsthand experiences, and Sergiovanni asserts that "They trust their own accumulated experiences in making decisions about practice [sic] than they do abstract principles" (p.292). These characteristics not only highlight the importance of experience to the behaviours of principals, they also raise a question about how far educators are amenable to 'external' training.

Purpose of Evaluation

The purpose of evaluation has significance because the stated objectives are likely to have an effect on the perceptions of those who are being evaluated (Airasian, 1993). For example, the stated objectives will provide some indication of the degree to which the evaluatee is to be judged, categorised, and given constructive feedback. The purposes presented in the literature tend to vary slightly but Harris and Monk (1992) capture the essence of most sources when they quote a 1988 Education Research Service report that stated "teacher evaluation systems...must serve three major purposes: (1) to ensure that all teachers are at least minimally competent; (2) to improve further the performance of competent teachers; and (3) to identify and recognize the performance of outstanding teachers" (p.152).

Poster and Poster (1993) identify two purposes as those of 'performance review' and 'staff development review', which they define as follows:

    Performance review (or appraisal) focuses on the setting of achievable, often relatively short-term goals. The review gives feedback: on task clarification through consideration of the employees' understanding of their objectives set against those of the organisation; and on training needs as indicated either by shortcomings in performance or by the demonstration of potential for higher levels of performance. Staff development review (or appraisal) focuses on improving the ability of employees to perform their present or prospective roles, through the identification of personal development needs and the provision of subsequent training or self-development opportunities. In sum, the former is concerned with the task, the latter with the individual. (p.1, authors' emphases)

While the distinction between these two purposes is defined as concerns over task and over the individual, a somewhat more subtle but related difference is raised which places emphasis on the needs of the organisation or on the needs of the employee. In referring to a distinction between 'bureaucratic' and 'professional' evaluation, Housego (1989) states that the former "is meant to serve the needs of the organization for monitoring how adequate the teacher's performance is", while the latter "is meant to help teachers meet their needs for support and guidance relevant to improving classroom practice" (p.197). Poster and Poster suggest that this is a false dichotomy because the efficient working of an organisation "depends both on the delivery system and on those who deliver it" (p.1-2). However, the teachers may well define evaluation as generally seeking to promote the interests of the organisation at the expense of the individual:

    Instead of encouraging teachers to take control of their own striving and growth, the externally controlled educational objectives, teaching materials, assignments, and schedules have produced a feeling of dependence, insecurity, powerlessness, and subservience among teachers (Schonberger, 1986, p.249).
In other words, while it is important to understand the intended purposes of evaluation, there may well be a difference between what is stated and what is perceived. A number of sources make reference to this difference (Allston, Rymhs & Shultz, 1993; Christensen, 1986; Darling-Hammond, 1986; Peterson, 1986; Schonberger, 1986) and they generally present negative attitudes on the part of teachers towards current practice. These negative attitudes have been linked primarily to the fact that evaluation is viewed as judgmental, particularly with regard to how far teachers are fulfilling their contractual obligations or performing to a satisfactory standard (Black, 1993). These teacher perceptions raise a question about the role of the evaluator, whose approach will be influenced by the purposes he or she has for formally evaluating.

The literature suggests that the somewhat managerial and judgmental approach taken by administrators towards evaluation needs to change (Haefele, 1992; Rooney, 1993; Starratt, 1993; Storey & Housego, 1980; Wood, 1992; et al.) and advocates a more 'collegial' form of evaluation. The most negative views are directed towards the controlling function ascribed by some teachers to the evaluation process. Evaluation can be characterised by the evaluated as a means for the senior management in schools to demonstrate their power and ultimate control over the 'ordinary' teacher, rather than attempting to improve the quality of classroom practice. Referring to studies of instruments used by administrators in the American public school system, Peterson points out that:

    these instruments included...compliance with policies; personal attributes such as appearance, health, attendance and judgment [sic]; extracurricular duties such as record keeping...; and finally a few items on the teaching process...The distressing discovery of this study is that as little as 5% of the items on one instrument in the sample focused on teaching. (1985, p.40)

Furthermore, Schonberger (1986) highlights what he describes as "pseudo-scientific management practices favored by administrators in the interest of increasing control, accountability, and efficiency" (p.249). He continues by citing Withall and Wood (1979), who assert that a number of factors have led to feelings of fear and anxiety in relation to evaluation, one being "the manner in which supervisors have tended to project an image of superiority and omniscience in identifying the strengths and weaknesses of a teacher's performance" (cited in Schonberger 1986, p.249).

Process of Evaluation

The evaluation literature draws a distinction between formative and summative evaluation:

    In evaluating a teacher's performance, summative evaluation suggests a statement of worth. A judgment is made about the quality of one's teaching...Formative evaluation is concerned less with judging and rating the teacher than with providing information that helps improve teacher performance. (Sergiovanni, 1977, p.372, in Schonberger, 1986, p.249)

Sergiovanni, though, is once again attaching importance to much more than simply the stated intentions of evaluation and looks at how that purpose is transmitted through the process of evaluation. The evaluation of teaching tends to be summative rather than formative and tends not to be viewed by teachers in a positive developmental way.
Evaluation can also be seen, by both teachers and principals alike, as a relatively inconsequential 'chore' that has to be performed periodically but which produces little of any real value. According to Darling-Hammond (1986), "Teacher evaluation can be utterly unimportant. In many school districts it is a perfunctory bureaucratic requirement that yields little help for teachers and little information on which a school district can base decisions" (p.531).

Some of the literature highlights the importance of the position or status of evaluatees in relation to their evaluators and concludes that peer evaluation would reduce negative attitudes towards the process. For example, Darling-Hammond (1986) identifies the need for non-threatening procedures as one of the principal justifications for employing peer evaluation. Her studies seem to indicate that such systems can produce higher levels of more positive attitudes amongst evaluatees. She also draws attention to the differences amongst teachers and, when referring to evaluation designs, asserts:

    Elements that are intended to heighten reliability tend to reduce the ability of the system to help individual teachers improve, since the uniformity of criteria and their application...necessarily reduce the flexibility that would be needed to make evaluation useful to individual teachers with individual needs. (p.546)

Some research has highlighted the heterogeneous nature of teachers and investigated the origins of phenomena such as powerlessness (Darnell, 1993; Lusty, 1991). Darnell investigated the attitudes of teachers towards the Texas Teacher Appraisal System in relation to the self-concept of teachers and their position on the 'Career Ladder'. She found general discontent on the part of teachers towards the process and suggests that status within the school may have some influence in determining such attitudes:

    Not holding the highest status on the Career Ladder could tend to make Career Ladder II teachers feel less adequate than their peers on Career Ladder III. Overall attitudes for the appraiser are positive, but teachers on Career Ladder II indicate a less positive attitude toward the appraiser than do Career Ladder I or III teachers. (1993, Abstract)

The potential significance of the status variable is supported by Lusty who states that "teachers' opinions on teacher appraisal are closely related to their position and status within the school" (Abstract). Christensen (1986) refers to three different orientations for working with teachers: "directive, collaborative and nondirective", and concludes that these orientations have consequences for the evaluation of teaching: "Research has found that different types of teachers need different types of supervision... The supervision, therefore, must be oriented to the teacher" (p.23). Sergiovanni (1991), linking the process of evaluation to the purpose once more, asserts that "No supervisory system based on a single purpose can succeed over time" (p.284).

Antosz (1990), in her study of teacher evaluation provisions in selected British Columbia school district collective agreements, concludes that:

    few British Columbia school districts have teacher evaluation systems that promote teacher growth and instructional improvement. Therefore, the majority fall short of the literature's recommended teacher evaluation practices.
    All school districts studied have summative teacher evaluation systems. (p.116)

This finding echoes the concern that the evaluation process is one dimensional and the purpose transmitted to teachers is one of accountability. It is also likely that many principals who feel strongly orientated towards the growth and development purpose of evaluation find themselves working within a system geared more to accountability.

Competence of Evaluator

Haefele (1992), with reference to what he calls the current "deficit model" of teacher evaluation, focuses on the role of the principal in the process and characterises this role as the "deficient evaluator" (p.337). According to Haefele, recent research (Darling-Hammond, Wise, & Pease, 1983; Huddle, 1985; Lower, 1987; Medley, Coker, & Soar, 1987) indicates that "In general, evaluations performed by principals have been found to be poor and imprecise" (p.338). Furthermore, Haefele (1992) cites Scriven (1987) who has highlighted "the questionable ability of the principal to evaluate teachers of subject areas foreign to the principal's background" (in Haefele, p.338). Haefele, referring to other sources (Bridges, 1986; Cangelosi, 1991; Lower, 1987; VanScriver, 1990), concludes that "principals do not receive much, if any, rigorous training in the rating of teaching performance and other evaluation related skills" (p.338).

Bailey (1984) asserts that "The evaluation of teachers requires incredible amounts of skill and time. Therefore, unfortunately, many administrators find teacher evaluation to be a highly frustrating endeavour" (p.19). Townsend (1987), referring to Hunter (1985), warns that school systems which do not provide adequate evaluation training to school administrators "can expect to encounter serious difficulties" (p.26). However, one research study (Page & Page, 1985) suggests that principals rate very highly the preparation they receive for "observation of instruction" and "evaluation of teachers". At the same time, principals in this study rated both these activities as very time consuming and the evaluation of teachers as "difficult".

Wood (1992), from the perspective of naturalistic inquiry and drawing on sources in Everhart (1988) and Guba and Lincoln (1981), suggests the deficiency in many of the evaluations conducted by principals is that observations are not considered in context and the procedures adopted by many school districts have "underemphasized the role of the principal as the 'instrument' of evaluation" (p.52). The importance of context is also supported by Storey and Housego (1980) and Housego (1989).

Under the heading of "Seeing is Believing...Or Is It?", Wood (1992) asserts that "Administrators and others tend to see what they are prepared to see, and what they already believe" and thus "Indeed, believing is seeing" (p.53, author's emphasis). Therefore, due to the increasing complexity of the role of principal and the kinds of pressure on time that Haefele and others refer to above, principals have developed the ability to operate on, what Wood calls, "automatic pilot" (p.56).

The issue of pre-judgment is important for making decisions about who to select for evaluation. A preconception of poor teacher performance would probably lead a principal to select a teacher for evaluation. However, if such a preconception exists, the outcome of the evaluation is somewhat pre-determined and perhaps flawed as a result.

A study by Morrow et al.
(1985) may provide a useful guide to the kinds of indicators or 'criteria' principals employ when arriving at a general assessment of teaching competence. The purpose of the study was to survey the perceptions of principals as to the level of difficulty experienced by their staff with regard to ten "common instructional problems". The ten problems were identified from "An extensive literature review" (p.387) which included Adams & Martray (1980), Adams (1982), Bartholomew (1974, 1976), and Cruickshank (1974). The top five concerns for principals in all levels of school were: "Motivation, getting students interested"; "Providing for individual differences"; "Discipline, classroom control"; "Organizing and managing the classroom"; and "Testing, grading and promotion of students". Therefore, the ways in which these kinds of issues 'come to the attention' of principals and the conceptions of teaching performance they create are of interest.

However, this is based on an assumption, identified by Storey and Housego (1980), of "identifiability". They suggest that this means "regardless of the approach used, the personnel being supervised, [and] the supervisor,...act as if desired outcomes and indicators of effective practice were known and identifiable." (p.2, authors' emphasis). They go on to raise the question "What are the criteria of effectiveness?" (p.3, my emphasis) and subsequently to conclude:

    Assessment of any kind is, by definition, based on certain criteria. The explicitness of these criteria will vary among organisations. The state of knowledge in the field is likely to be among the most significant of factors affecting the clarity, universality and acceptance of a given set of effectiveness criteria. (p.3)

The formulation of evaluation criteria is a joint responsibility of the school district and principal in conjunction with teacher associations. These criteria are then communicated through the district collective agreements. However, where the collective agreement fails to state criteria or make them explicit, an onus is placed on the principal to codify criteria and, in turn, communicate them to the teacher. This requires that the principal is able to codify such criteria and, as Storey and Housego imply, this may not always be the case.

Bailey (1984) asserts that "Many evaluators lack a systematic and orderly way of diagnosing and analyzing classroom teaching methods". He goes on to promote the use of "classroom teaching style classification systems...based on the assumption that... teaching styles are not equal" in order that "the evaluator can identify, classify, and evaluate classroom teaching styles based on their intended purpose" (p.19).

A final dimension to the consideration of evaluator competence is provided by Bolton (1980) when he describes twelve "resistances to evaluation by evaluators" (p.27, author's emphasis). These 'resistances' emerged from a series of unstructured conversations with educational administrators over a period of years.
They include a) uncertainty about criteria, and interpretation of data; b) fear of an unpleasant reaction which would prevent a relationship conducive to facilitating improvement; c) failure to see evaluation as linked to the purposes of the evaluator; d) inability to organise time for adequate observations; e) fear of being held to a commitment to an objective which may take 'additional' time; f) lack of support from higher levels of the organisation; and, g) lack of conviction that evaluation will provide as much payoff as time spent on other activities. Bolton provides a varied set of reasons as to why principals may be resistant to the role of evaluator and which span the three concepts addressed so far in this chapter. These reasons also emphasise the importance of the evaluator to the effectiveness of evaluation because they illustrate how far the evaluator is able to interpret and influence the process.

Obstacles to Evaluation

Haefele (1992) suggests that lack of time for principals to conduct evaluations leads them to hesitate in giving critical reports because the sample of observations they can carry out is insufficiently broad (citing research from Andrews & Barnes, 1990; Bridges, 1986; Kauchak, Peterson, & Driscoll, 1985; Langlois & Colarusso, 1988; Lower, 1987; Stodolsky, 1988).

In relation to "managing time", Pigford and Tonnsen (1993) suggest that some principals may have "trouble in distinguishing between what is urgent and what is important" (p.42). They go on to say:

    Posner quotes Hummel [1967] as he reminds us of the difference: "We live in constant tension between the urgent and the important. The problem is that the important tasks rarely must be done today, or even this week. The urgent task calls for instant action."

Everard and Morris (1990) talk about the "critical distinction" between the urgent and the important and that "we must not be lured into the trap of being caught up in the urgent to the exclusion of the important" (p.123). They suggest that important tasks need to be thought of in terms of the 'long-term' and 'short-term' and that time must be allocated to the important and kept as carefully as any appointment with a parent or district official.

Smith and Andrews (1989) highlighted a further distinction in terms of time management when they found that principals spent "less time than they thought they should on improving instruction and more time on maintaining the school" (p.27). Employing definitions for 'average principals' and 'strong instructional leaders' they also found that 'average principals' did not "implement their values on a day-to-day basis as they allocate[d] time among the various tasks that must be performed" which "has lead [sic] observers of principals' management practices to conclude that many principals are 'building managers' rather than 'instructional leaders', and they should spend less time on building management and more on improving instruction" (p.29). However, they go on to say that their data suggest "principals who are strong instructional leaders do not divert time away from building management functions in favor of instructional leadership functions". Finally, in this regard, Smith and Andrews conclude:
These data also suggest that it is a false dichotomy to draw the distinction between being a strong building manager and a strong instructional leader, (p.29) Sergiovanni (1991) links time and purpose when he refers to the "80/20 quality rule". He asserts that "When more than 20 percent of the principal's time and money is expended in evaluation for quality control or less than 80 percent of the principal's time and money is spent in professional improvement, quality schooling suffers" (p.285). Bolton (1980), in looking at the evaluation of administrators, describes the environment of principals as 'problematical' and asserts that an obstacle to the administrator is "the heavy demand on time made by routine clerical and administrative duties" (p.5). He goes on to suggest, citing Estosito et al. (1975, p.63), that when administrators spend a good deal of their time on these kinds of routine activities or others which do not bring them into contact with teachers "the actual activities are 31 incongruent with the major role perceived by clients - a helping relationship to them". Bolton concludes that this "becomes an obstacle to harmonious relationships with others" (p.5). This analysis of the time-consuming nature of principal responsibilities is helpful in an understanding of the dynamics of the principal-teacher relationship which emerges from the consideration of purpose and process. As a consequence, Bolton also highlights the need to look at the evaluator role of the principal in a wider context. Finally, Beck and Murphy (1993) provide some historical perspective to the issue of time as an obstacle when they reveal that it is not a new phenomenon. Citing the 1954 Yearbook of the Department of Elementary School Principals entitled "Time for the Job", they quote from the preface which notes that "many principals have indicated grave concern about 'lack of time for the job'" (in Beck & Murphy, p.56) and explain that the document goes on to suggest ways in which different practices by principals might help to remedy the problem. Summary Much of the literature reveals that negative attitudes exist in relation to evaluation and, moreover, that educational administrators should seek to develop procedures that will achieve more positive reactions. The research 32 identifies general explanations of these negative attitudes, such as a sense of powerlessness, and distrust about the real motives behind evaluation. However, the purposes; from the point of view of the evaluator, do not receive much attention. What does emerge from the literature is a dichotomy of purpose between the growth and development of the teacher and accountability for the quality of teaching. This, in turn, reinforces the image of the principal caught between two different philosophies. Poster and Poster (1993) describe this dichotomy as false and, in highlighting the importance of the delivery system and those who deliver it, could easily be referring to the evaluation process and the evaluators. Sergiovanni (1991) draws attention to the implied purpose of evaluation as transmitted through the summative processes employed. Even though the message received by teachers from summative evaluation may well be that organisational needs are being served rather than the needs of teachers, it raises the question as to whether or not summative evaluation can be formative. Put another way, can the principal provide a formative experience for the teacher while at the same time producing a summative report? 
If the answer is positive, many principals may be working with a summative process but with formative purposes in mind.

The literature presented in this chapter would suggest that formative purposes are not at the forefront for many principals and, indeed, that their approach needs to be less managerial and more collegial. This is qualified by literature which suggests that this managerial approach is more a feature of male than female principalship. This literature describes women principals as more competent, as more caring, and, very importantly, as better communicators. This lends considerable importance to the dimension of gender in a study of formal evaluation because evaluation emerges from the literature as a form of communication which involves high levels of interpersonal skills.

The literature regarding the ability and competence of school administrators in their role as formal evaluators of teaching is not encouraging. This literature suggests considerable room for improvement and Haefele (1992) even employs the term 'deficient evaluator'. Lack of adequate professional preparation or lack of time, due to volume of work, are two major reasons presented for principals being unable to perform the function of evaluator satisfactorily. However, little research exists on the views of evaluators themselves about their level of competence in the formal evaluation of teaching. Page and Page (1985) suggest that principals feel well prepared for the role of evaluator. Bolton, on the other hand, presents a list of 'resistances' to evaluation and Darling-Hammond (1986) asserts that many principals view evaluation as "utterly unimportant".

The literature appears to assume that with greater experience in the role of principal the incumbent will develop greater expertise. However, the consideration of issues such as competence is not specifically related to length of experience and, indeed, little direct reference is made to principal experience in the literature.

Finally, the general role of principal has been widely considered in the literature and what emerges is a vivid picture of a demanding job in a complex environment. This picture enables an examination of principal views about the formal evaluation of teaching to take place within a more informed context.

CHAPTER III: Research Design and Methodology

This chapter is divided into six sections: the framework for the study; sources of data; data collection procedures; data analysis and presentation; design limitations; and, summary. The framework for the study sets out the guiding questions and identifies the concepts and variables which are given prominence. The sources of data section describes the subjects included in the study and explains the involvement of the British Columbia Principals' and Vice-Principals' Association (BCPVPA). The third section describes the two data collection procedures employed in the study: a content analysis of the seventy-five British Columbia school district collective agreements; and a questionnaire sent to all the members of the BCPVPA. The data analysis and presentation section explains the methods of analysis used for the questionnaire returns and how the data from these returns have been organised. The penultimate section, design limitations, devotes most attention to the consideration of the generalisability of the data. The chapter concludes with a summary.
36 The Framework for the Study The evaluation literature described in Chapter II and the concepts which guide this study give rise to a number of questions which provide the starting point for the collection of data. Each of these questions provides the basis for a part of this section of the chapter: purpose, process, training, and obstacles. The two final parts of this section take account of the two variables of sex and years of experience. Purpose What do principals believe to be the most important purpose of the formal evaluation of teaching? The answer to this question would probably include both teacher growth and accountability and indeed Sergiovanni (1991) asserts that both are necessary if the evaluation process is to have a chance of success. However, it would be informative to know which of these two is the more highly regarded by principals and, therefore, it would be necessary to ask principals to choose either teacher growth or accountability. Other data are also pertinent to the concept of purpose. For example, knowing whether principals carry out the role of evaluator simply because it is part of their contractual obligations or because they believe it to be a role they should carry out. The extent to which principals see the purpose of formal evaluation as a quality control measure may also be transmitted through the proportion of 'critical' reports they write and therefore information about this would be useful. Process With what formal evaluation processes are principals in British Columbia working? Determining the evaluation processes employed in the Province can be accomplished through the school district collective agreements thus providing an answer to the question above. This would identify the summative/formative nature of evaluation, as well as factors such as the existence of stated criteria, and whether or not the principal has discretion over choosing teachers to be evaluated. Training What training have principals received in formal evaluation and what are their training needs? This question relates to the issue of professional preparation. It would be informative to know whether principals have received specific training for the role of evaluator and how far they feel they need further training. 38 Obstacles What do principals consider to be the most important obstacles to their carrying out the formal evaluation of teaching? The hindrances to conducting formal evaluation are quite well documented in the literature. It would therefore be interesting to compare the views of British Columbia principals with this literature, particularly with regard to time. A more general question, such as the one above, would avoid the risk of leading principals into giving time as an obstacle. It would thus be interesting to see if the obstacle of time arises 'naturally'. This question would also be useful in identifying other factors which principals consider impede their conduct of evaluation. Sex of Principal Do differences exist between male and female principals with regard to their views on the formal evaluation of teaching? The literature suggests that this is an important factor in educational administration but this literature does not address the specific issue of formal evaluation. Therefore, this question would be a way of distinguishing between principals and provide information hitherto unavailable. 
Years of Experience as Principal

Do differences exist between principals with different levels of experience with regard to their views on the formal evaluation of teaching? This question provides a second way in which to distinguish between principals. The literature is sparse on administrative experience and the behaviours of principals. The particular complexities of the role of evaluator would suggest that different levels of experience may lead to different levels of competence in carrying out that role. In order to have an accurate measure of experience, the number of evaluations carried out by principals and the nature of those evaluations is necessary, in addition to the number of years of tenure.

Therefore, the study will identify important information about the views of principals as evaluators. This information falls into three broad categories of process, purpose, and obstacles. The study will also draw a distinction between principals based on sex and years of experience as a principal, while also taking account of the nature of the evaluation process with which principals are working.

Sources of Data

I contacted the British Columbia Principals' and Vice-Principals' Association (BCPVPA) to elicit their support for a questionnaire-based study and for access to their library of British Columbia school district collective agreements. Access to the collective agreements was immediately forthcoming and, following a number of meetings I had with executive officers of the BCPVPA to discuss the purpose of the final study, the Association agreed to organise the distribution of a questionnaire to all its members throughout the Province.

Only principals were included in this study because it is they who conduct most evaluations. Even when evaluation is delegated to vice-principals, anecdotal information suggests that they are given the 'straightforward' evaluations to conduct and therefore will not have the same breadth of experience as principals. Therefore, the target population of the survey was the 1,179¹ principal members of the BCPVPA who, in turn, represent 76.9 percent of all the school principals in British Columbia. However, all 2,430 members of the BCPVPA were sent a questionnaire because the Association was interested in obtaining data relating to vice-principals as well as principals. In addition, the logistical difficulties involved for the BCPVPA in posting questionnaires to only some of their members were regarded as too great.

¹ This is the most accurate figure that could be obtained from the BCPVPA data files.

In order to be able to link data gathered from the collective agreements to data gathered from the questionnaire, it was necessary to ask respondents the number of their school district. While this raised some concern about confidentiality, it was considered fundamental to a meaningful set of results and was therefore included in the section relating to an administrator's current assignment. It is impossible to say whether or not this question had an effect on the return rate.

Data Collection Procedures

The study included two major data gathering procedures: first, a detailed study of the sections relating to the formal evaluation of teaching in the seventy-five British Columbia school district collective agreements and, second, the distribution of a questionnaire to all BCPVPA members.
A content analysis of all seventy-five British Columbia school district collective agreements was carried out at the offices of the BCPVPA in order to extract clauses relevant to the formal evaluation of teaching. Initially, five main questions guided the reading of these collective agreements: a) How is the formal evaluation of teaching carried out? b) Who initiates a formal evaluation of teaching? c) Who is responsible for conducting evaluation? d) Are there stated evaluation criteria? e) Is there a stated evaluation cycle?

Common themes emerged from the processes specified for the formal evaluation of teaching. These included four phases (described in more detail in Chapter IV, p.58), the right of appeal against the process or outcome of an evaluation, the opportunity for the teacher to receive remedial help, and whether or not an explicit distinction was made between formative and summative evaluation. With regard to formal evaluation criteria and cycles, criteria and cycles are stated in quite different ways in different collective agreements and the frequencies of cycles are by no means always clear cut. For this reason it became necessary to keep a detailed record of what each contained rather than simply whether or not criteria or a cycle existed.

With information from the collective agreements about the context within which the formal evaluation of teaching takes place in British Columbia, and informed by the review of literature, it was possible to begin formulating a set of questions around the key themes of purpose, time and training. An important consideration in formulating two of the questions was to provide a time frame. These relate to the period since September 1988, to take account of the legal change in relationship between teachers and school-based administrators which occurred as a result of the Teaching Profession Act 1987 (see p.1).

A draft questionnaire was piloted with eleven school-based administrators who had either recently retired, were on the executive committee of the BCPVPA, or were currently on study leave or seconded to other duties. The piloting resulted in some modest but important changes to the questionnaire and the final version (Appendix A) was then printed and distributed by the BCPVPA. A total of 2,430 questionnaires were distributed and, as is the normal practice for BCPVPA initiated or supported surveys, a return paid envelope was not provided. The questionnaires were posted on or around February 9th, 1996 with a return deadline of February 26th, 1996. A reminder was issued to BCPVPA members via their regional executive officers in mid-March. However, this resulted in only an additional three questionnaires being received by the final deadline of Friday, April 5th, 1996, bringing the achieved sample to 188, or 15.9 percent of principal members of the BCPVPA.

Data Analysis and Presentation

The data from the returned questionnaires were coded and entered onto a data file. For some variables, data were grouped or reformulated in order to facilitate analysis. Data from question number 7 (see Appendix A.2), which asked for the number of years as a principal, were merged into four experience categories (1-5, 6-10, 11-15, and 16+). The data from question 16 (see Appendix A.3), which asked for information about past training in the formal evaluation of teaching, were reformulated. This reformulation aggregates the different durations of training a respondent could have attended, by using a formula to produce a total number of training 'points'. This formula assigned one point for each course of "one day or less", three points for each course of "between two days and one week", five points for each course of "more than one week but less than one full term", and ten points for each course of "one full university or college term".
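To make this scoring concrete, the short sketch below shows how attendance counts of this kind could be converted into training points. It is purely illustrative: the study performed this reformulation on its coded data file rather than with code of this kind, and the category labels used here are hypothetical rather than the questionnaire's own wording.

```python
# Illustrative sketch only: converting question 16 attendance counts into
# training 'points' using the weights described above. The category names
# are hypothetical labels, not the questionnaire's wording.
POINT_WEIGHTS = {
    "one_day_or_less": 1,
    "two_days_to_one_week": 3,
    "over_one_week_under_one_term": 5,
    "one_full_term": 10,
}

def training_points(courses_attended):
    """Total training points for one respondent, given the number of courses
    attended in each duration category since September 1988."""
    return sum(POINT_WEIGHTS[category] * count
               for category, count in courses_attended.items())

# A respondent with two one-day workshops and one term-length course
# would score 2*1 + 1*10 = 12 points.
example = {"one_day_or_less": 2, "two_days_to_one_week": 0,
           "over_one_week_under_one_term": 0, "one_full_term": 1}
print(training_points(example))  # 12
```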
The rationale for this formula was based on an estimated value for the amount of time spent engaged in the training, rather than its quality, since quality was impossible to gauge.

The statistical analysis was carried out using the "SPSS Windows 6.0" program. Firstly, this analysis included the production of frequency summaries for all questions apart from question 21 (see Appendix A.6). Secondly, cross-tabulations and chi-square analysis were used in order to break down the respondents into different constituent groups and to determine the level of significance of the resulting data. The analysis of question 21 was carried out manually by identifying themes in the anecdotal responses and then grouping them into broader categories (see p.84). This approach was also adopted for question 18 about obstacles, in addition to the coded data on the computer file, in order to identify the reasons principals have for stating time as a major obstacle.

The presentation of the questionnaire data is initially in the form of frequency summaries in Chapter V, followed by a more detailed breakdown of the data in Chapters VI and VII. This breakdown involves categorising respondents in two different ways to produce profiles based on respondent sex and number of years' experience as a principal. Reference to statistical significance is included in Chapters VI and VII in order to indicate the probability that differences found in these profiles occurred by chance rather than reflecting differences likely to be found in the whole British Columbia principal population. Findings are described as statistically significant at the p = .05 level. Where the level of significance is p < .05 this is shown in parentheses, but no probability value is given for analyses producing a significance level of p ≥ .05.
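As an illustration of the kind of cross-tabulation and chi-square test of independence referred to above, the sketch below tests whether two respondent groupings are independent. It is illustrative only: the study's analysis was done in SPSS, and the cell counts shown here are invented rather than taken from the survey data.

```python
# Illustrative sketch only; the study itself used SPSS for its analysis.
# Cross-tabulating respondent sex against stated evaluation purpose and
# testing independence with a chi-square test. The counts are made up.
from scipy.stats import chi2_contingency

crosstab = [[70, 55],   # male:   growth/development, accountability
            [34, 17]]   # female: growth/development, accountability

chi2, p, dof, expected = chi2_contingency(crosstab)

# Mirroring the reporting convention described above: report the value
# only when the result is significant at the .05 level.
if p < 0.05:
    print(f"significant: chi-square = {chi2:.2f}, p = {p:.3f}")
else:
    print("not significant at the p = .05 level")
```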
Design Limitations

It is important to note that while school-based principals carry a considerable part of the responsibility of formally evaluating teaching, this responsibility is not entirely theirs. Therefore, this study cannot possibly address the whole formal evaluation scene but only that which relates to the principal's role as evaluator. Furthermore, the subjects invited to participate in the study, as already highlighted, constitute 75.9 percent of the British Columbia principal population. This 'selection' may have some bearing on the validity of the final conclusions drawn but this cannot be judged accurately because data relating to the sub-population of approximately 24.1 percent are not available.

The total number of respondents to the questionnaire is 267², of which 188 are school-based principals (referred to hereafter as principals), 70 are vice-principals and the remaining 9 are district principals. The 188 principals represent 15.9 percent of the total BCPVPA principal membership (1,179) and, therefore, this constitutes a major limitation to this study. From a methodological standpoint, it presents difficulties for statistical analysis, where cell sizes may be too small to draw conclusions with confidence and, thus, generalisability of the findings is a concern.

² This level of response compares well with that of other surveys with BCPVPA members. For example, in January 1996, the month before the distribution of this questionnaire, a BCPVPA survey asked for reactions to the Ministry of Education's "Default Plan" on the amalgamation of British Columbia school districts. This survey on amalgamation elicited 194 responses in total. It is clear from this information that principals find it difficult to respond to surveys of this kind, among the many other 'paper exercises' they are asked or required to perform.

Establishing how far generalisability has been undermined is difficult with regard to certain biographical information. For example, data on years of experience as principal, post-graduate education and specialty, and the percentage of principal teaching time, were unavailable for either BCPVPA members or British Columbia principals as a whole. However, the School Finance and Data Management Branch of the Finance and Administration Department at the British Columbia Ministry of Education was able to provide some information which pertained specifically to principals. These Ministry and BCPVPA data indicate that the participants in this study are similar to the BCPVPA principal membership or British Columbia public school principals as a whole. Table 3.1 shows the percentages of respondents, BCPVPA principals, and British Columbia principals as a whole with regard to the variables of sex, age, school type, and staff size.

Table 3.1
Respondents, BCPVPA Principals, and All British Columbia Public School Principals by Sex, Age, School Type, and Staff Size

                      Respondents         BCPVPA members      BC principals
Variable              %        n          %        n          %        n
Sex
  Male                72.7     136        74.5     878        73.4     1125
  Female              27.3     51         25.5     301        26.6     408
Age
  44 or less          21.3     40         ..       ..         23.7     363
  45 to 49            34.6     65         ..       ..         32.8     503
  50 to 54            25.5     48         ..       ..         29.3     449
  55 or over          18.6     35         ..       ..         14.2     218
School type
  Elementary          71.8     135        78.9     930        ..       ..
  Secondary           22.3     42         20.1     237        ..       ..
  Both                 5.9     11          1.0     12a        ..       ..
Staff size
  1 to 9               8.1     15         ..       ..         28.1     470b
  10 to 19            29.7     55         ..       ..         35.6     596
  20 to 29            32.4     60         ..       ..         21.3     356
  30 or more          29.7     55         ..       ..         15.0     253

a This represents the best available information.
b These figures relate to numbers of schools rather than principals.

The above table shows that the percentages of male and female respondents match very closely those for BCPVPA principals and British Columbia principals as a whole. The BCPVPA does not maintain data about the age of its members, but information from the British Columbia Ministry of Education (Report 2059 - 1995/1996 School Year - Age Distribution of Educators by Position Within the School) shows that the age distribution of the respondents is similar to the population of all British Columbia principals. Table 3.1 shows that the largest discrepancy in the four age categories is in the group "55 years or over". However, the pattern of distribution is the same over the four age groups and the differences are relatively small.

Information about the types of school that principals in the BCPVPA administer was more difficult to ascertain because school descriptions do not always make it clear whether the student intake is elementary grades only (K-7), secondary grades only (8-12), or both elementary and secondary grades. However, an approximation was calculated from BCPVPA files which indicates (as Table 3.1 shows) that once again the respondents quite closely match the population as a whole. However, the information which the Ministry of Education made available on teaching staff sizes shows less correspondence between the questionnaire respondents and the whole population of British Columbia principals.
Indeed, as Table 3.1 shows, there are wide disparities between the distribution of respondents and all British Columbia principals, which is particularly the case for principals with staffs of "1 to 9" and "30 or more".

Returns were received from 56 (74.7%) of the 75 British Columbia school districts. It is possible to compare the overall distribution of respondents in these school districts with the overall distribution of all principals in British Columbia. Figure 3.1 shows the respondents as a percentage of all respondents and their distribution across the 75 British Columbia school districts. This figure enables a comparison to be made with the distribution of British Columbia principals as a whole across all school districts. For example, seven percent of British Columbia principals are based in school district 36, while ten percent of the respondents to the survey are based there. Figure 3.1 indicates that, generally speaking, where there are concentrations of respondents from particular districts, a corresponding concentration exists among British Columbia principals as a whole. However, of the 19 school districts not represented in the responses to the questionnaire, one, school district 22, employs 20 principals. A further discrepancy is the over-representation of respondents from school districts 7, 60, and 75 (medium sized - see Footnote 3) and 25, 36, and 43 (large).

How far these discrepancies may interfere with generalisability can be measured against British Columbia Ministry of Education guidelines for the categorisation of school districts as "small", "medium" or "large"³. Table 3.2 reveals a very close similarity between the percentages of respondents and all British Columbia principals employed in the three sizes of school district referred to above. This comparison suggests that the modest discrepancies highlighted in Figure 3.1 do not pose a serious threat to generalisability.

Table 3.2
Respondents and All British Columbia Principals by School District Size

                    Respondents           British Columbia principals
District size       %         n           %         n
Large               51.4      95          51.7      764
Medium              33.0      61          36.2      535
Small               15.7      29          12.2      180
Total               100.1     185         100.1     1479a

a This figure is different from the Ministry of Education total of 1533 given in Table 3.1. It is a count from the 1994/1995 Public and Independent Schools Book (Province of British Columbia, 1995) and represents the best available information.

³ This categorisation is based on student enrolments, so that school districts with 1 to 2,999 students are categorised as "small", those with 3,000 to 14,999 are classified as "medium", and those with 15,000 or more student enrolments are defined as "large" (Cherington, 1989, in Antosz, 1990, p.67 - see Appendix E).

Data from the collective agreements (Chapter IV) show that many of the provisions for evaluation are very similar. However, a distinction is possible with regard to evaluation criteria and evaluation cycles (indicating the stated frequency, if any, for evaluations). Table 3.3 shows that respondents are closely representative of all British Columbia principals in districts with stated criteria, but less so for principals in districts with evaluation cycles.
Table 3.3
Respondents and British Columbia Principals by Criteria and Cycles

                    Respondents           British Columbia principals
Variable            %         n           %         n
Criteria
  No criteria       33.0      61          34.3      507
  Criteria          67.0      124         65.7      972
Cycle
  No cycle          39.5      73          53.1      785
  Cycle             60.5      112         46.9      694

Furthermore, a comparison can be made on the basis of the way evaluation cycles are described. Table 3.4 shows such a comparison between respondents and all British Columbia principals and reveals fairly marked differences. However, this is to be expected as a result of the over-representation of respondents from districts with evaluation cycles.

Table 3.4
Collective Agreement Wording for Evaluation Cycles

Evaluation cycle             Collective             Respondents          British Columbia
phraseology                  agreementsa                                 principals
                             n        %             n        %           n        %
"Every"                      14       18.7          31       16.8        190      12.8
"At least every"             15       20.0          41       22.2        278      18.8
"Not more than one in"        7        9.3          40       21.6        226      15.3
Total                        36       48.0          112      64.4        694      46.9

a Total including collective agreements without a cycle = 75.

Taking the limitations into account, the study does provide an opportunity to present and examine the views of principals on formal evaluation, identify areas where further investigation would be helpful, and arrive at a number of conclusions and policy recommendations.

Summary

Three themes have been identified: Purpose, time, and training. In addition to these themes the two variables of principal sex and years of experience are highlighted as possible factors in determining principal views. Two data collection procedures are used: a content analysis of the clauses relating to formal evaluation in the British Columbia school district collective agreements; and a questionnaire. The subjects invited to participate in the study were the 1,179 principal members of the BCPVPA, of whom 188 (15.9%) took part. The response rate raises a concern about the generalisability of the findings. However, in relation to a number of variables including sex, age, school type, district size, and provision of evaluation criteria in collective agreements, the respondents are representative of British Columbia principals as a whole. This is less so for staff size and provision of evaluation cycles in collective agreements. Staff sizes of "1 to 9" are under-represented among the respondents and staff sizes of "30 or more" are over-represented. Principals in districts without an evaluation cycle are under-represented among the respondents and principals in districts with an evaluation cycle are over-represented.

CHAPTER IV
British Columbia School District Collective Agreements

All seventy-five British Columbia school district collective agreements contain provision, in some form or another, for the evaluation (though in a very few cases not formal evaluation) of teaching. The most current versions of these collective agreements were drawn up in July 1992, to be reviewed in 1994 or 1995, but remain in effect at the time of this study. The first part of Chapter IV gives an overview of the content analysis of these collective agreements with regard to the evaluation process. The second part of the chapter relates to the roles and responsibilities of both the evaluator and evaluatee. The chapter concludes with a summary; a sample article from a British Columbia school district collective agreement can be found in Appendix F.

The Process

The content analysis of the collective agreements reveals that sixty-six districts have a very similar evaluation process which incorporates four phases (see Appendix B).
A further two are distinguished only by the fact that they make provision for a shortened process for "highly competent" teachers. Only two of the remaining seven employ a process which (albeit having phases which are common in some form to other districts) is substantially different from the rest. Other features of the process that emerge from the content analysis are a) the provisions, or otherwise, for a regular cycle of formal evaluations (that is, a fixed period of time within which a teacher's classroom situation must be formally evaluated); and, b) whether or not there are stated evaluation criteria. Most collective agreements require that the conclusion to the final report contain reference to either the term "satisfactory" or "less than satisfactory" (a very few allow for graded comments such as "excellent", "very good" and so on). The term used indicates the evaluator's summative view of the adequacy or otherwise of the teacher's "classroom situation". If a teacher receives three consecutive "less than satisfactory" reports in a period normally between twelve and twenty-four months, the teacher is liable to dismissal. Reference to the terms formative and summative are not made in the collective agreements, but those which share the features described above are certainly, by implication, summative and more orientated towards accountability than growth and development. 58 The Four Phases of a Formal Evaluation of Teaching The phases that emerge from the collective agreements can be described as a) pre-evaluation conference(s); b) classroom observations; c) post-observation conferences; and, d) final report conference and writing. The pre-evaluation conference (or conferences, since in a very few collective agreements provision is made for two such conferences) takes place in order for the participants to talk through the purpose, criteria, and timetable for the coming formal evaluation. Generally, the second phase, that of the classroom observations, includes between three and six classroom visits, which in most cases are recommended to be for the duration of the whole lesson. The majority of collective agreements stipulate that a) a post-observation conference should take place within a limited time after the observed lesson; and b) the teacher should be provided with an anecdotal statement by the evaluator. If weaknesses were observed the teacher must be apprised of them and given the opportunity to remedy them before the next classroom observation. The fourth phase, writing the final report, requires the teacher to be given an opportunity to read a draft report and comment upon it, before the final report is written and filed at the school board. Seventeen collective agreements specifically disallow the inclusion of references 59 to anything other than the data generated from the formal evaluation classroom observations. A further twelve state that classroom observation data should be those used "primarily", "generally", or "normally". However, in fourteen collective agreements the evaluator is explicitly entitled to include aspects of the teacher's work in the school beyond what was observed in the classroom visits. In these school districts, principals can include reference to the teacher's "general contribution", "general performance", "other factual information", "other pertinent information", "other information" or "multiple sources of data". 
In the remaining thirty-two collective agreements no indication was given regarding sources of data to be used in the final report (see Appendix C).

Formal Evaluation Cycles

Thirty-six (48%) collective agreements contained terms for an evaluation cycle (see Appendix D). Of the 39 school districts which make no provision for such a cycle, eight provide for what might be termed 'automatic' evaluation in certain cases. These are cases where a teacher is new to the profession, or to the district, or has assumed a significantly different assignment. In such cases, the teacher must be evaluated in his/her first year (or in one case the second year for teachers new to the profession).

For those districts, referred to above, that do state a formal evaluation cycle, the frequency of evaluations varies considerably at the extremes (from two-year to ten-year intervals) but the vast majority (34 of 36) fall somewhere in the range of every three to five years. However, the phraseology used to stipulate the frequency of these cycles is not the same and can be categorised into three types, each of which conveys a somewhat different expectation and perhaps, therefore, a different level of responsibility for the evaluator. These phrases include the provision that a formal evaluation of teaching will be conducted a) "every" stated number of years; b) "at least every" stated number of years; and c) "not more than one in" a stated number of years. This linguistic context is further complicated by the fact that, in some cases, the expectation of the frequency of evaluations is couched in qualified terms. For example, in eight collective agreements that state a frequency (whether it be categories a, b, or c above), qualifications are employed which include "usually", "normally", "unless otherwise agreed", "where practicable" and "it is expected".

Table 4.1 shows firstly, reading from left to right, the total number of school district collective agreements with each of the three forms of wording (a - c above), followed, secondly, by the number that have this wording in 'unqualified' and unambiguous terms. The third column indicates the number of collective agreements that have each of the three forms of words but in 'qualified' terms. Finally, the fourth column provides the number of British Columbia principals whose assignment is in school districts with these three variations of cycle provision.

Table 4.1
Collective Agreement Wording on Evaluation Cycles

Wording                      Total          Number with     Number with    Number of
                             collective     unqualified     qualified      British Columbia
                             agreements     wording         wording        principals
"Every"                      14             11              3              190
"At least every"             15             12              3              278
"Not more than one in"        7              5              2              226
Total                        36             28              8              694

Thus, Table 4.1 shows that the stipulation "every" is given in fourteen collective agreements, eleven of which state this in unqualified terms. The phrase "at least every" is included in fifteen collective agreements, twelve of which are without qualification. Seven agreements use "not more than one in", of which five are unqualified. The significance of this language springs from the consequences it is likely to have for the frequency of evaluations. The stipulation "every" allows no room for manoeuvre on the part of principals and teachers alike, and "at least every" can clearly mean evaluations take place more often than the stated time period. However, "not more than" provides, by the strict letter of the language, unlimited scope for the frequency of evaluations.
For example, one evaluation every five years and one every fifteen years both adhere to a cycle of not more than one evaluation every four years (or three, or six and so on) because no evaluation in a four year period is not more than one. While the interpretation of such a stipulation may not be as radical as this example suggests, the wording of collective agreement provisions and the qualifications they may contain, clearly have potential importance for the frequency of evaluation. Evaluation Criteria Stated criteria were found to be present in 48 (64%) collective agreements (see Appendix D). However, when they are present in a collective agreement or referred to as part of some other school district policy or document, they can vary markedly in their specificity. Some provide a great deal of detail as to exactly what the teacher should be able to demonstrate and the evaluator observe for, while others simply list a set of headings which allows for considerably more interpretation by the parties involved. For example, thirty-two school districts state their evaluation criteria in some detail as either articles in, or appendices to, the collective agreement, or as part of school district policy 63 documents. A further seven districts do state their evaluation criteria in the collective agreements but only as a brief outline or set of general headings covering areas to be commented upon. Such headings include "classroom management" or "instructional strategies" but do not enter into any detail as to exactly how an evaluation of "satisfactory" or "less than satisfactory" might be arrived at. The remaining nine collective agreements contain references including "as the Evaluation Committee recommends" or "to be modified at the school level" and in these cases it was difficult to ascertain the degree of specificity employed. The Evaluator and the Evaluatee The content analysis of the collective agreements also included taking note of stipulations with regard to: a) the initiation of an evaluation, if not activated by a regular cycle; b) the personnel responsible for conducting a formal evaluation of teaching; c) the right of a teacher, whose teaching has been the subject of an evaluation, to lodge an appeal against the process and/or outcome; and d) the entitlements a teacher has to professional development opportunities following a "less than satisfactory" report or an indication of weaknesses in a "satisfactory" report. 64 The Initiation of a Formal Evaluation The picture painted by school district collective agreements across the Province is quite a complex one. However, as a general rule, a formal evaluation can be initiated individually or by some combination of the teacher, the school-based administrative officer, or by the school board, through the district superintendent, assistant superintendent or some other competent board official. This complex picture is incomplete because in 24 collective agreements it is unclear who is able to initiate an evaluation, other than in 6 which provide for an evaluation cycle. Of the 51 collective agreements that do make some specific statement in this regard, 24 make reference to the school-based administrative officer (which in all cases would mean the principal, even if the subsequent evaluation were carried out by a vice-principal). Forty-three give the right of initiating an evaluation to the teacher, although, in a few cases, the agreement of the administrative officer or school board is also required. 
In four cases the Minister for Education and the British Columbia College of Teachers are also mentioned in addition to the above parties. 65 Responsibility for Conducting a Formal Evaluation of Teaching The collective agreements place most of the responsibility for formal evaluation on the school-based administrative officer. While this responsibility is shared with superintendents, their assistants and in some cases district principals and directors of personnel, these latter officers are generally reserved for evaluations where a teacher has already received one "less than satisfactory" report. Of the 75 school district collective agreements studied, 68 specifically refer to the administrative officer or principal as having this evaluation responsibility. In the other seven, no reference of any kind was made to the administrative officer in the section relating to the ' evaluation of teaching. The Right of Appeal This is universally present in all collective agreements under the section entitled "Grievance Procedure". This usually involves a series of stages (in most cases four), each successive one of which is only reached if agreement has not been possible at the previous stage. Finally, there is provision for arbitration should agreement prove to be impossible through the grievance procedure. 66 Teacher Entitlement to Professional Development The large majority of collective agreements give a teacher who has received, a first or second "less than satisfactory" report, an entitlement to professional development opportunities. These generally consist of up to one year's unpaid leave to undertake further training and/or the offer of a "plan of assistance" which is to be drawn up by the school principal or district superintendent and agreed with the teacher concerned. Summary Generally, the formal evaluation of teaching consists of four phases including a pre-evaluation phase, between three and six classroom observations, post-observation conferences, and a final report conference. Approximately half of the school districts in British Columbia have evaluation cycles but these vary in length and in the strictness of wording. The other half have no stated frequency of evaluation. Nearly two thirds of the school districts state evaluation criteria in some form. The initiation of evaluation, if not by a cycle, is a right which lies predominantly with the teacher and principal, but can also be exercised by district and Ministry personnel. However, conduct of formal evaluation is, in very large part, the responsibility of the school 67 principal. If a teacher's "classroom situation" is considered to be deficient, assistance is generally-available. Should the teacher be dissatisfied with the process or outcome of the evaluation there is also provision made for an appeal procedure. CHAPTER V Respondents' Backgrounds, Assignments, and Their Role as Evaluators of Teaching This chapter reports the results obtained from the questionnaire returns. It describes the overall summary of response frequencies based on total valid responses (i.e. missing cases are not included) to each question and, therefore, not all total numbers of respondents equal 188. If the number of missing cases is considered high and cannot be accounted for because a question was "not applicable" to a large number of respondents, this fact is brought to the attention of the reader. A complete summary of response frequencies is provided in Appendix G. 
The chapter is organised into the three main headings that appeared on the questionnaire. It therefore includes biographical information on the respondents, followed by data relating to their current assignment and, finally, responses with regard to their responsibilities as formal evaluators of teaching. A summary concludes the chapter.

Biographical Information

Table 5.1 shows that approximately three quarters of the respondents were male and a quarter were female. Four broad age categories were identified: 44 years or fewer; 45 to 49; 50 to 54; and 55 years or more. The majority of respondents are in the middle two age categories with around one fifth in each of the other two.

Table 5.1
Respondent Biographical Data

                                  Respondents
Variable                          %         n
Sex
  Male                            72.7      136
  Female                          27.3      51
Age
  44 or fewer                     21.3      40
  45 to 49                        34.6      65
  50 to 54                        25.5      48
  55 or over                      18.6      35
Master's specialty
  Administration                  65.7      111a
  Curriculum                      14.8      25
  Other                           19.5      33
Experience as a principal
  1 to 5 years                    31.0      58
  6 to 10                         31.0      58
  11 to 15                        12.8      24
  16 or more                      25.1      47

a There were 19 missing cases in the returns for this question.

The overwhelming majority of respondents, 92.6 percent (n=174), have a master's degree or are currently working on one. Of these, two thirds have a master's in Educational Administration, while just 14.8 percent (n=25) have their master's in Curriculum. The remaining fifth have a master's in an area they described as "other" (these included combined Educational Administration and Curriculum [n=15]; Counselling or Educational Psychology or Special Education [n=10]; subject discipline [n=5]; and Supervision of Instruction/Teaching Practice [n=3]). Only 5.4 percent (n=10) have a doctoral degree or are currently working on one.

The level of principal experience is also categorised into four groups: One to five years; six to ten years; eleven to fifteen years; and sixteen years or more. Somewhat less than a third of the respondents fall into each of the first two groups, with a much smaller proportion in the "11 to 15 years" group, while a quarter of respondents have 16 or more years of experience as a principal.

Current Assignment

Even though only principals have been included in the results from this survey, over one half have teaching responsibilities to some degree. Table 5.2 shows that nearly half of the respondents indicated their assignment is full-time administration. The rest are fairly evenly spread across the teaching load categories of 1 to 19 percent; 20 to 39 percent; and 40 percent or more. Elementary principals constitute by far the largest group of respondents, while secondary principals accounted for around a quarter of the responses, and principals from schools which enrol both elementary and secondary grades constituted 5.9 percent (n=11).
Table 5.2
Teaching Load, School Type, and Staff Size

                                  Respondents
Variable                          %         n
Percentage teaching
  Zero                            44.9      83
  1 to 19                         17.3      32
  20 to 39                        22.2      41
  40 or more                      15.7      29
Type of school
  Elementary grades               71.8      135
  Secondary grades                22.3      42
  Both                             5.9      11
Teaching staff
  1 to 9                           8.1      15
  10 to 19                        29.7      55
  20 to 29                        32.4      60
  30 or more                      29.7      55

The respondents included principals from 56 of the 75 British Columbia school districts. The overall distribution of responses from those school districts is shown in Figure 3.1 (p.51). The distribution of responses based on school district size is shown in Table 3.2 (p.52).

The Principal as Formal Evaluator of Teaching

Should Principals Do Evaluation? What is the Purpose? and How Well is Evaluation Done?

Overwhelmingly, principals expressed the view that the formal evaluation of teaching should be one of their responsibilities, with 96.8 percent (n=181) saying "Yes" to this question. The remainder indicated that they were "not sure". When asked about what they considered to be the most important purpose of formal teacher evaluation, a much greater difference of opinion emerged. However, it is important to note that a small number of respondents (six) made it clear that they found it impossible to choose between the two main options: a) teacher growth and development; and b) accountability for the quality of teaching (respondents were not given the option of choosing both, see p.36). As Table 5.3 shows, of those who could make this choice, the majority opted for "teacher growth and development", while a substantial minority selected "accountability for the quality of teaching". Just 3.3 percent (n=6) indicated some other purpose, which included improving communication between administrators and teachers, and providing an opportunity to celebrate excellence in teaching.

Table 5.3
Evaluation Purpose and Quality

                                  Respondents
Variable                          %         n
Evaluation purpose
  Growth and development          57.1      104
  Accountability                  39.6      72
  Other                            3.3      6
Evaluation done
  Poorly                           6.5      12
  Adequately                      32.6      60
  Well/Very well                  60.9      112

When asked about how well they did the formal evaluation of teaching, none of the principals who responded defined their execution of formal evaluation as very poor. A small proportion, though, expressed the view that they carried out this responsibility poorly. The response of "adequately" was given by a third of principals, but a large majority said they did the formal evaluation of teaching either well or very well.

In-service Training, Obstacles, and The Four Phases of Evaluation

Results from the question about formal evaluation training show (Table 5.4) that, for each of the four categories described in the questionnaire (a. one day or less; b. between two days and one week; c. more than one week but less than one full term; d. one full university or college term), generally at least half of the respondents indicated no attendance since September 1988. Most training is of the "one day or less" or "between two days and one week" variety. Courses of more than one week have been attended in much smaller numbers, while a fifth have undertaken courses of one full university or college term since September 1988.
Table 5.4
Evaluation Training Attendance Since September 1988

                        Respondents (percentages)
                    1 day        2 days to     More than 1 week,    1 full
                    or less      1 week        less than 1 term     term
Attendances         n=188        n=188         n=188                n=188
None                47.8         51.6          87.2                 80.9
One                 14.4         17.0           6.4                 16.5
Two                 11.7         17.0           3.2                  2.1
Three or more       26.1         14.4           3.2                   .5
Total               100.0        100.0         100.0                100.0

In order to have some quantifiable means of describing total training per respondent, the training points formula, explained in Chapter III (p.44), was applied to the data in Table 5.4. This produces an average number of points per respondent of 8.7, ranging from 0 to 50 at the extremes. Table 5.5 illustrates the distribution of training points across five groupings of "1 to 2"; "3 to 4"; "5 to 9"; "10 or more"; and "None". When grouped in this way it can be seen that only 2.7 percent (n=5) of respondents have received no training in the formal evaluation of teaching since September 1988. Nearly one third have 1 to 4 points, while just over a third have "10 or more" (which corresponds, in the formula referred to above, to a university or college term course).

Table 5.5
Evaluation Training Points Since September 1988

                             Respondents
Training pointsa        n         Percentage     Cumulative %
None                    5         2.7            2.7
1 to 2                  30        15.9           18.6
3 to 4                  25        13.3           31.9
5 to 9                  60        31.9           63.8
10 or more              68        36.2           100.0

a See page 44 for the formula used to calculate these points.

Despite the somewhat limited training over the past eight years in the formal evaluation of teaching, there is no corresponding sense of this being a problem to the principals who responded to the survey. For example, training was mentioned only 9 (2.5%) times (out of a total of 362 references) as one of the three most important obstacles to carrying out the formal evaluation of teaching. In addition, the majority of principals did not believe they required more training for any of the four phases of the formal evaluation of teaching, other than the "post-observation conferences" and "report writing" phases of an evaluation leading to a "less than satisfactory" report.

The question which asked respondents to list, in rank order, the three most important obstacles to the formal evaluation of teaching produced sixteen different types of obstacle (including 'other') and 362 individual respondent references (an additional four respondents said there were no obstacles). Time is by far the most prominent of the sixteen types of obstacle cited and accounts for two thirds of all first obstacles. It was also the only obstacle to be cited more than once by the same respondent. These multiple references to time were presumably made to emphasise the importance of time. However, in Table 5.6, time is only counted once per respondent who referred to it, even if that respondent mentioned it more than once⁴.

⁴ This is explained in detail in Chapter VII (p.122), as part of the consideration of time as an obstacle to formal evaluation.

Table 5.6 is divided into five columns. The first column shows the five main categories into which obstacle references could be placed. These categories are "Time", "Process", "Individuality", "Political context", and "Principal competence". Each of these categories is made up of one or more types of obstacle. The category title is shown first, as are the data relating to that category. Thus, the first line of data represents the aggregate data for all the types of obstacle at a particular level of importance within that category. For example, in the "Individuality" category the aggregated percentage for "Teacher non-acceptance", "Stress", and "Purpose not agreed" in the "First" column is 4.8 percent.
The next four columns of Table 5.6 show the data relating to the five categories of obstacle referred to above. The first of these columns, labelled "First", identifies the percentage of respondents who cited a most important obstacle (n=184) in each of the five obstacle categories. For example, 12.0 percent of respondents who cited a most important obstacle cited "collective agreement", which forms part of the "Process" category. The next column gives the percentages for second most important obstacles (n=116) and the next shows percentages for the third most important obstacles (n=66). The last column combines all first, second and third most important obstacles (n=362), without including those respondents (n=4) who said there were no obstacles to the evaluation of teaching.

Table 5.6
First, Second, and Third Most Important Obstacles to the Conduct of the Formal Evaluation of Teaching

                                 Level of importance
                             First      Second     Third      Combined
                             n=184      n=116      n=66       n=362a
Obstacle                     %          %          %          %
Time                         65.2       20.7       12.1       42.0
Process                      18.0       39.7       31.8       27.6
  Collective agreement       12.0       19.8       18.2       15.7
  Process                     2.2       14.7       12.1        8.0
  Criteria                    2.2        3.4        1.5        2.5
  Lack of cycle               1.1         .9         —          .8
  Cycle                        .5         .9         —          .6
Individuality                 4.8       13.8       24.3       11.4
  Teacher non-acceptance      3.8        9.5       16.7        8.0
  Stress                       .5        4.3        6.1        2.8
  Purpose not agreed           .5         —         1.5         .6
Political context             6.5        9.5       16.6        9.7
  Union                       3.8        4.3        9.1        5.2
  District                    2.7        5.2        4.5        3.9
  Ministry                     —          —         3.0         .6
Principal competence          2.1        8.5        4.5        4.7
  Training                    1.1        3.4        4.5        2.5
  Subject knowledge            —         3.4         —         1.1
  Lack of experience           .5        1.7         —          .8
  Principal biases             .5         —          —          .3
Other                         1.1        7.8       10.6        5.0
None                          2.2         —          —          —
Total                        99.9      100.0       99.9      100.4

a This figure does not include respondents who said "none".

The process category accounts for just over a quarter of combined obstacle references, the two most prominent parts of which are the collective agreement and the 'process'. Teacher non-acceptance of the process accounts for quite a large proportion of the other obstacles, as does the political context. Principal competence, though, does not feature strongly as an obstacle.

The importance of time was further borne out by responses to the question regarding the four phases of formal evaluation (Table 5.7, p.81). In relation to an evaluation leading to a "satisfactory" report, while two thirds felt that the pre-evaluation conference was "time-consuming", this rose to three quarters or more for the post-observation conference and classroom observations, and an overwhelming 94.1 percent (n=176) for the writing of the final report. For evaluations leading to a "less than satisfactory" report the same pattern emerges but with even higher percentages. Four fifths or more regarded the pre-evaluation conference, post-observation conference, and classroom observations as "time-consuming", with an almost unanimous 98.8 percent (n=85) expressing this view about the final report writing phase.
However, the picture is very different when examining evaluations leading to a "less than satisfactory" report. The pre-evaluation conference is considered stressful by over half the principals, rising to nearly two thirds for the classroom observations, and over 90 percent for the post-observation conference and final report writing phase. 81 Table 5.7 Factors Present in Evaluations Leading to "Satisfactory" and "Less Than Satisfactory" Reports Percentage of respondents agreeing on presence of factor Stress Complexity Time- Need for consuming training SRa LTSRb SR LTSR SR LTSR SR LTSR Phase n=185 n=86 n=185 n=86 n=187 n=86 n=182 n=82 Pre-evaluation conference(s) 7.0 54.7 26.5 66.3 62.0 79.1 20.8 35.4 Classroom observations 7.0 62.8 51.6 76.7 84.0 89.5 34.1 43.9 Post-observation conferences 25.9 90.7 55.7 91.9 73.1 87.2 35.6 57.3 Writing the final report 41.1 91.9 75.7 94.2 94.1 98.8 37.2 59.0 aEvaluation leading to a "satisfactory" report. ^Evaluation leading to a "less than satisfactory" report. Complexity is also a relatively unimportant factor when compared to time, although a little over half or more of principals agree that all the phases of an evaluation leading to a "satisfactory" report, apart from the pre-evaluation conference, are complex. As with the other factors, the final report writing phase receives most agreement with three quarters believing it to be complex. 82 "Less than satisfactory" reports are viewed as more complex, with sizeable, if not substantial, majorities taking this view about all four phases, the most striking being the final report phase. Number of Evaluations and "Less Than Satisfactory" Reports The numbers of formal evaluations of teaching carried out by principals since September 1988 varied considerably but could be classified into four main groups: 1 to 9 evaluations; 10 to 19; 20 to 29; and 30 or more. Table 5.8 shows that a third of principals have done 10 to 19 evaluations, with just over a fifth of principals falling into each of the other three categories. When asked if they had written a "less than satisfactory" report in this period, nearly two thirds said they had not. A further quarter have written only one and just 13.5 percent (n=25) have written two or more "less than satisfactory" reports since September 1988. 83 Table 5.8 Evaluations Conducted and "Less Than Satisfactory" Reports Written Since September 1988 Respondents Variable Q. ~o n Evaluations conducted 1 to 9 22.8 41 10 to 19 32.8 59 20 to 29 21.1 38 30 or more 23.3 42 "Less than satisfactory" reports written None 61.6 114 One 24.9 46 Two or more 13.5 25 When the data for evaluations conducted and "less than satisfactory" reports written are aggregated, it results in a total of 110 "less than satisfactory" reports out of a total of 3,832 evaluations conducted since September 1988. This means that a "less than satisfactory" report is written, on average, once in every 34.8 evaluations or, put another way, 2.9 percent of all evaluations result in a "less than satisfactory" report. The total years of principalship which all respondents have between them is 84 1,2365. Therefore an average of 3.1 evaluations have been written per year of principalship over all respondents in the period since September 1988. Additional Comments Made by Respondents The final question on the questionnaire, number 21, asked respondents if there was anything they wished to add with regard to formal evaluation. 
Of the 188 principals that responded to the survey, 116 (61.7%) chose to take advantage of this opportunity. These anecdotal data range in length from one or two sentences to several paragraphs. The first column of Table 5.9 shows, the five broad themes which emerged from the analysis of these data: "Evaluation purpose", "Inadequate process", "Ability to evaluate", "Evaluator attitudes", and "Political context". Within these five broad themes are particular types of reference. For example, the theme of "Inadequate process" is made up of three types of reference. The second column of Table 5.9 shows the number of times each type of reference was made. Because no principal made the same type of reference more than once, the number of respondents and the number of references are equal. The numbers in bold type are the aggregate references for that -•Total years of principalship are based on exact years of experience given in answer to question 7 on the questionnaire. 85 Table 5.9 Anecdotal Responses Thematic categories Number of No. As a % of References3 all references 1. Evaluation purpose 91 35.5 a. Reserved for poor teachers 16 6.3 b. Growth and development 37 14.6 c. Accountability 18 7.0 d. LTSRb ineffectiveness 20 7.8 2. Inadequate process 60 23.4 a. Generally unsatisfactory 40 15.6 b. Need for peer evaluation 8 3.1 c. Reference to cycle/criteria 12 4.7 3. Ability to evaluate 43 16.8 a. Time factor 27 10.5 b. Competence/resolve 16 6.3 4. Evaluator attitudes 41 16.0 a. Important leadership role 19 7.4 b. Positive experience 8 3.0 c. Stressful activity 9 3.5 d. Promotes administrator/ teacher understanding 5 2.0 5. Political context 14 5.5 a. Union/District hindrance 14 5.5 Other 6 2.7 Total 256 99.9 aDoes not include 'other' and is equal to the number of people who made such a reference. "Less than satisfactory" report. 86 theme. For example, the three types of reference under "Inadequate process" total 60 individual references. The third column shows the number of references as a percentage of all references made. For example, of the total of 256 individual references made in response to question 21, 60 can be categorised under the theme of "Inadequate process", which represents 23.4 percent of all references. The largest number of references (35.5%) relate to the purpose of evaluation. Within this category, apart from "Growth and development" and "Accountability" highlighted in question 14 (see appendix A.3), 16 respondents (6.3%) suggested formal evaluation of teaching should be reserved for use with poor teachers or those about which the principal already had cause for concern. Respondent 047 provides a fairly typical example when saying "We should re think the system. The formal evaluation should be reserved for only the 'less than satisfactory' teachers." Some respondents express the view that where "less than satisfactory" reports are written they fail to achieve very much. There were 20 (7.8%) such references of which the following is representative: "[Evaluation] must be focussed on growth - but must be an effective tool in dismissal when that becomes necessary. I have never heard that teacher evaluations resulting in 'less than satisfactory' reports 87 have been an effective tool in dismissing staff" (respondent 122, author's emphases). 
A further illustration of the difficulties some principals associate with "less than satisfactory" reports is provided by respondent 146: "Less than satisfactory" evaluations are more stressful because there is all the fallout - denial, accusation, union grievance, etc...More "less than satisfactory" reports need to be written, I believe, but the hassles scare admin, off. They are intimidated and don't feel they can call a spade a spade." The next largest category is "Inadequate process" which accounts for 60 (23.4%) references. The majority (n=40) expressed a general dissatisfaction with the process and also, at times, admitted to a sense of isolation or powerlessness which was echoed to an extent in all the other major categories. The following two extracts give a flavour of the responses in this regard. The first is given by respondent 006 who said: "The area of reporting on the 'marginal teacher' is the most difficult of all. The data is harder to gather, the teacher is often immune to professional growth options and the evaluator is unsure which direction to go." The second extract comes from respondent 007 who asserts: "The formal evaluation process as it presently exists in B.C. is outdated, stressful, time-consuming, but most importantly (in most cases) a totally irrelevant exercise...A.O.'s are in the embarrassing position of trying to legitimize an activity (in its present form) that we all know is 'hoop jumping'" 88 In "Ability to evaluate", time is referred to on 27 (10.5%) occasions, while the competence and resolve of principals to undertake the role of formal evaluator is mentioned 16 times (6.3%). Competence was referred to in a number of different ways including both positive and negative statements about training, doubts about the validity of the results an evaluator had produced, lack of sufficient subject knowledge, and simply whether or not the evaluator was doing a good enough job. Two examples include respondent 136 who, after explaining that she had only given "excellent" or "very good" ratings, went on to say "I know that the teachers I rate as Excellent deserve it but I wonder if I am right to give so many such a high rating." A second respondent, 187, stated formal evaluation was not an area of concern for her after saying "I feel very well trained by my university courses, District inservice and mentoring programs, working with my principal when I was a VP and Supervising Skills Workshops." Within "Evaluator attitudes", mentioned 41 times (16.0%), the largest sub-category was 'Important leadership role' with 19 (7.4%) references. These generally testify to the belief that the principal is an instructional manager and the role of formal evaluator is central to the whole raison d'etre of schools and public education. For example, respondent 122 said "This process can greatly help teachers 89 - but...[it is] less valuable than it ought to be. Solution? = Increased admin time - principal focus on instructional leadership not building management.", while respondent 073 asserted that "Instructional leadership is 'the' most important aspect of our job." Finally, the 'political' context within which the formal evaluation process must take place was referred to. All of these references (n=14), with additional comments about the school board in three of them, were directed at the hindrance of the British Columbia Teachers' Federation. 
An indication of the feelings expressed is given in the comment made by respondent 185 who said "I find it frustrating that the Union protects those individuals that clearly tarnish the reputation of the profession and injure the children we are charged to teach." Summary Around three quarters of respondents to the questionnaire are male and a quarter female. The majority of respondents are between the ages of forty-five and fifty-four and most have a master's degree in Educational Administration. The majority of respondents have between one and ten years of experience as a principal. Over half the principals in this study have a teaching assignment and they predominantly administer elementary schools with just 90 under a quarter running secondary schools. Nearly two thirds have staffs of twenty teachers or more. Overwhelmingly, principals in this study believe they should do formal evaluation and the majority consider the most important purpose of evaluation to be teacher growth and development. They also consider this to be a role they carry out well. The vast majority of respondents have had some recent training in the formal evaluation of teaching but few have had extensive training. Only one fifth have undertaken university or college courses with a component on evaluation, since September 1988. Lack of training does not feature prominently amongst the obstacles to evaluation. By far the most important obstacle is time. Nearly two thirds of principals express this view. The process is also highlighted in different forms but to a lesser extent than time. Time is also the most cited factor in the four phases of a formal evaluation. In the responses about these four phases, the perceptions of principals in relation to evaluations leading to a "satisfactory" report are quite different from those leading to a "less than satisfactory" report. The latter are considered to be more time-consuming, more stressful and more complex. Few evaluations are written, 3.1 per year of principalship, and just 2.9 percent result in "less than satisfactory" reports. Finally, the general data described in this chapter provide the basis for a more detailed description in the following two chapters. In Chapter VII the three concepts of purpose, need for further training, and obstacles give the structure for organising the presentation of findings. Prior to consideration of these three concepts however, introductory data are provided in Chapter VI in order to draw a 'profile' of each of two respondent groups based on sex, and experience as a principal. These profiles give a brief description of how well each group considers they do the formal evaluation of teaching, followed by other data gathered from the questionnaire about age, master's specialty, district size, type of school, staff size, percentage of teaching, and amount of evaluation training received. 92 CHAPTER VI Sex of Principal and Years of Experience as Principal In this chapter respondents are categorised on the basis of two variables which emerged from the literature: Sex of principal; and experience as a school principal. These variables highlight some interesting differences among principals but are also intended to provide foci to the description of data in Chapter VII, with regard to the concepts of purpose, training, and obstacles. Sex of principal was chosen because gender differences in educational administration are claimed in the literature and because of the very human interactive nature of the formal evaluation process. 
Experience presented itself as an interesting variable since it is rarely mentioned in the literature. It might be expected though, that a manager with greater experience would be more practised in the conduct of potentially difficult tasks, such as formal evaluation, than their less experienced colleagues. The following data therefore present two 'profiles' which include how well principals consider they carry out formal evaluation, age, master's specialty, school district size, type of school, staff size, teaching load, and training undergone. 93 Sex of Principal Table 6.1 shows that little difference exists in the self-evaluation by male and female respondents as to how well they do formal evaluation. However, male principals are more likely to describe themselves as poor evaluators and female principals are more likely describe themselves as doing formal evaluation either well or very well. Table 6.1 Sex of Principal by Evaluation Quality, Age, and Master's Specialty Percentage of respondents Variable Male Female n=134 n=49 Evaluation done Poorly 8.2 2.0 Adequately 32.8 30.6 Well/Very well 59.0 67.3 n=136 n=51 Age 44 or less 18.4 29.4 45 to 49 36.8 29.4 50 to 54 25.0 25.5 55 or over 19.9 15.7 n=122 n=47 Master's specialty Administration 67.2 61.7 Curriculum 13.1 19.1 Other3 19.7 19.1 aSee page 69 for a list of the areas covered by these degrees. 94 The major difference in age distribution for male and female respondents occurs in the "44 years or less" category which has a considerably larger proportion of females than males, while the proportions in the other age categories are somewhat closer. Male principals are more likely to hold a master's degree in "Educational Administration", while degrees in "Curriculum" are more likely to be held by females. Proportions are similar for degrees in "other" fields. Table 6.2 shows that a much larger percentage of female principals work in large school districts than do males (p<.05), while a larger percentage of males work in medium and small districts, although the difference is not statistically significant. A statistically significant difference (p<.05) exists between the proportions of male and female principals in elementary and secondary schools. Elementary schools are more likely to be administered by women, while principals of secondary schools are more likely to be men. In the case of schools that enrol both elementary and secondary students, the percentages are the same. 95 Table 6.2 Sex of Principal by School District Size, School Type, Staff Size, and Teaching Load Percentage of respondents Variable Male Female n=131 n=51 School district size Large 45.9 64.7 Medium 36.1 25.5 Small 18.0 9.8 n=136 n=51 Type of school* Elementary grades 66.9 84.3 Secondary grades 27.2 9.8 Both 5.9 5.9 n=135 n=49 Teaching staff 1 to 9 8.9 6.1 10 to 19 28.1 34.7 20 to 29 31.9 34.7 30 or more 31.1 24.5 n=134 n=50 Percentage teaching Zero 42.5 50.0 1 to 19 19.4 12.0 20 to 39 25.4 14.0 40 or more 12.7 24.0 *£ < .05. Staff sizes for male and female respondents are similar. However, a clear percentage difference exists in the teaching time male and female principals have as a part of their assignment. A larger percentage of female respondents have a 100 percent administration assignment. 96 but this is also true for teaching time of 40 percent or more. In the two intervening categories of "1 to 19 percent" and "20 to 39 percent" teaching time, men are represented in markedly larger proportions than are women. 
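As an illustration of how column-percentage tables of this kind can be produced from respondent-level records, the following minimal sketch uses the pandas library; the variable names and the handful of records are invented for illustration and are not the study's actual coding scheme or data.

    import pandas as pd

    # Invented example records; column names and values are illustrative only.
    respondents = pd.DataFrame({
        "sex":         ["Male", "Female", "Male", "Female", "Male", "Female"],
        "school_type": ["Elementary", "Elementary", "Secondary",
                        "Elementary", "Secondary", "Both"],
    })

    # Column percentages of the kind reported for each group:
    # each sex group sums to 100 percent down its own column.
    table = pd.crosstab(respondents["school_type"], respondents["sex"],
                        normalize="columns") * 100
    print(table.round(1))

Normalising within columns, rather than over the whole table, is what allows the male and female distributions to be compared directly even though the two groups differ in size.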
Table 6.3 below shows training received by male and female respondents since September 1988.

Table 6.3
Sex of Principal by Evaluation Training Since September 1988
(Percentage of respondents)

Training duration and
number of attendances                       Male       Female
                                            n=136      n=51
1 day or less*
  None                                      47.8       47.1
  One                                       10.3       25.5
  Two                                       13.2        7.8
  Three or more                             28.7       19.6
2 days/1 week
  None                                      48.6       60.7
  One                                       17.6       15.7
  Two                                       19.1       11.8
  Three or more                             14.7       11.8
More than 1 week/less than 1 term
  None                                      88.2       84.3
  One or more                               11.8       15.7
1 term
  None                                      81.6       78.4
  One or more                               18.4       21.6

*p < .05.

A great similarity exists between male and female respondents with regard to training received, and the only statistical difference occurs with courses of one day or less (p<.05): women are represented in larger percentages for "one attendance" and men are represented in larger percentages for "two attendances" and "three or more attendances". The average training points (see p.44) for males and females are almost identical at 8.7 and 8.6 respectively. Table 6.4 provides a description of the distribution of training points among male and female principals. At the lower end of the points scale the percentage of females is noticeably larger than for males. For example, over one fifth of female principals have "1 to 2" training points (which equates directly to 1 to 2 days), whereas this applies to only 14 percent (n=19) of male principals. At the "3 to 4" points level, there are somewhat over one third of females and one quarter of males.

Table 6.4
Sex of Principal by Evaluation Training Points Since September 1988

                         Male(a) n=136            Female(b) n=51
Training points          %         Cum. %         %         Cum. %
None                      2.9        2.9           2.0        2.0
1 to 2                   14.0       16.9          21.5       23.5
3 to 4                   12.5       29.4          15.7       39.2
5 to 9                   35.3       64.7          21.6       60.8
10 or more               35.3      100.0          39.2      100.0

(a) Average number of points = 8.7
(b) Average number of points = 8.6

Years of Experience as Principal

Respondents were categorised into four groups based on their number of years of experience as a principal. Table 6.5 shows no clear pattern with regard to how well these experience groups consider they carry out the formal evaluation of teaching. However, quite a large proportion of principals with 11 to 15 years of experience say they carry out evaluation poorly, while a relatively small proportion with 16 years or more experience say this. The age distribution of principals when grouped by years of experience follows the predictable pattern that younger principals tend to have less experience (p<.05). With regard to master's degree specialty, there is a marked difference between principals with 1 to 10 years of experience and those with 11 or more years of experience. The more experienced principals have an administration specialty in higher percentages than their less experienced counterparts, with a corresponding difference in curriculum specialty. Master's degree specialties in "other" fields are held by approximately one fifth of principals in all the experience categories, apart from "11 to 15 years".
Table 6.5 Principal Experience by Evaluation Quality, Age, and Master's Specialty Percentage of respondents Variable 1-5 years 6-10 years 11-15 years 16+ years n=57 n=56 n=24 n=46 Evaluation done Poorly 5.3 7.1 16.7 2.2 Adequately 33.3 32.1 25.0 34.8 Well/Very well 61.4 60.7 58.3 63.0 n=58 n=58 n=24 n=49 Age* 44 or less 39.7 25.9 8.3 — 45 to 49 43.1 41.4 37.5 14.9 50 to 54 17.2 19.0 33.3 38.3 55 or over — 13.8 20.8 46.8 n=56 n=54 n=20 n=39 Master's specialty Administration 60.7 57.4 85.0 74.4 Curriculum 17.9 20.4 5.0 7.7 Other 21.4 22.2 10.0 17.9 *p < .05. Table 6.6 presents the data on school district size, type of school, teaching staff, and teaching load. The data regarding experience and school district size reveal no obvious pattern other than declining percentages in each experience group from large to small districts. Quite wide differences exist in the percentages of each experience group that work in each size of district. 100 Differences do exist in the types of school administered by principals categorised by experience but they are not significant. The percentage of "16+ years" principals that administer elementary schools is larger than the other three groups, while they are represented is much smaller proportions in secondary schools. Table 6.6 Principal Experience by School District Size, School Type, Staff Size, and Teaching Load Percentage of Respondents Variable 1 -5 years 6-10 years 11-15 years 16+ years School district size n=58 n=56 n=24 n=46 Large 55.2 58.9 37.5 43.5 Medium 32.8 28.6 33.3 39.1 Small 12.1 12.5 29.2 17.4 Type of school n=58 n=58 n=24 n=47 Elementary grades 70.7 67.2 66.7 80.9 Secondary grades 24. 1 29.3 25.0 10.6 Both 5.2 3.4 8.3 8.5 Teaching staff n=55 n=58 n=24 n=47 1 to 9 14.5 3.4 8.3 6.4 10 to 19 36.4 31.0 20.8 25.5 20 to 29 27.3 29.3 37.5 40.4 30 or more 21.8 36.2 33.3 27.7 Percentage teaching n=57 n=57 n=24 n=46 Zero 38.6 45.6 45.8 50.0 1 to 19 17.5 21.1 12.5 15.2 20 to 39 17.5 22.8 29.2 23.9 40 or more 26.3 10.5 12.5 10.9 101 No clear pattern emerges for staff sizes. However, principals with 1 to 5 years of experience are more heavily represented in schools with staffs between 1 and 19, while they represent the lowest percentages for schools with staffs of 20 or more. Absence of pattern is certainly not the case with regard to the teaching responsibilities of principals within these experience groups. With greater experience comes a greater likelihood of an assignment which consists of 100 percent administration, although principals with 11 or more years of experience are still represented in sizeable proportions in the "20 to 39 percent" teaching load category. Principals with 1 to 5 years experience account for the lowest percentage among experience groups with full administration assignments and the highest percentage with assignments carrying a teaching load of "40 percent or more". Table 6.7 shows the predictable finding that "16+ years" principals have attended university or college courses in very small percentages since September 1988 (p<.05). For courses of "one day or less" and "two days to one week", the percentages for one or more attendances increases as experience increases. Also, much larger percentages of principals with 16 or more years experience have attended three or more courses of "one day or less". However, training points averages are very similar. 
These 102 averages are 8.6 for principals with 1 to 5 years experience, 8.7 for both the "6 to 10 years" and "11 to 15 years" experience groups, and 8.8 for those principals with 16 years experience or more. Table 6.7 Principal Experience by Evaluation Training Since September 1988 Training duration and number of attendances Percentage of respondents 1-5 years 6-10 years 11-15 years 16+ years n=58 n=58 n=24 n=47 1 day or less None One Two Three or more 2 days/1 week None One Two Three or more More than 1 week/ less than 1 term None One or more 1 term* None One or more 58.6 12.1 13.8 15.5 58.7 15.5 10.3 15.5 86.2 13.8 72.4 27.6 48.3 15.5 10.3 25.9 55. 1 12.1 20.7 12.1 89.7 10.3 77.6 22.4 41 .7 20.8 16.7 20.8 45.5 29.2 20.8 4.2 83. 16. 75.0 25.0 36.1 12.8 8.5 42.6 42.7 19.1 19.1 19.1 87.2 12.8 97.9 2.1 *£ < .05. 103 While average training points may be very similar, Table 6.8 shows that the distribution of points is not. For example, for "5 to 9" points, there is a fairly steady increase in the percentage of respondents as experience increases. Also, the largest percentage without any training in formal evaluation, at 6.9 percent (n=4), is in the "1-5 years" category. Table 6.8 Principal Experience by Evaluation Training Points Since September 1988 1-5 years3 6-10 years13 10-15 yearsc 16 + yearsd Training points n = 58 n = 58 n=24 n =47 % Cum. % o. "6 Cum. % % Cum.% o ~o Cum. % None 6.9 6.9 1.7 1.7 0.0 0.0 0.0 0.0 1 to 2 13.8 20.7 15.5 17.2 20.8 20.8 17.0 17.0 3 to 4 12.1 32.8 12.1 29.3 12.5 33.3 17.0 34.0 5 to 9 25.8 58.6 32.8 62.1 33.4 66.7 36.2 70.2 10 + 41.4 100.0 37.9 100.0 33.3 100.0 29.8 100.0 aAverage number of points = 8.6 ^Average number of points = 8.7 cAverage number of points = 8.7 ^Average number of points = 8.8 Summary Amongst principals categorised by sex, a statistical difference exists with type of school administered where women are also over represented in elementary schools while for males this is true in secondary schools. Females are represented to a disproportionately greater extent in large 104 districts, while this is true for males in medium and small districts. Teaching load data shows females represented in higher proportions among principals with a 100 percent administration assignment but also for assignments with a teaching load of 40 percent or more. The profiles for male and female principals with regard to how well they consider they do evaluation, age, master's specialty, numbers of teaching staff, and training attendance are all similar. Principals with 11 or more years of experience are represented in larger percentages among respondents with a master's degree in Educational Administration, in medium and small school districts, and larger staff sizes. Principals in the "16+ years" category are also more heavily represented in elementary schools. With regard to teaching load, the pattern emerges of more experienced principals having larger administration assignments than their less experienced colleagues. Age distribution and university or college attendance since September 1988 are significantly different but this is to be expected because the older principals are more likely to have attended before this date. No pattern was identified for how well principals with different levels of experience consider they do evaluation. 105 CHAPTER VII Purpose, Training, and Obstacles Three concepts have driven this research from its inception through to the analysis. The first concept is 'most important purpose' of formal evaluation. 
As part of purpose, a further element, whether or not "less than satisfactory" reports have been written, is also examined. These data provide information about the product of evaluation and therefore may cast further light on the purposes principals have in mind when they formally evaluate teaching. The second concept is the 'need for further training'. The third concept is 'obstacles to carrying out formal evaluation'. Examination of these concepts provides a clearer understanding of why principals evaluate and how far they have the preparation and opportunity to evaluate competently. 'Second tier' variables are also selected, where this is considered appropriate, in addition to the variables of sex and years of experience described in Chapter VI. Thus, since the existence or otherwise of evaluation criteria may have a bearing on evaluation purpose, this variable is included in the consideration of purpose. Similarly, the existence or otherwise of an evaluation cycle, the size of staff, and the ratio of administration and teaching, may have some determining effect on the amount of time required 106 or available for evaluation. Therefore, these variables are included in the consideration of time as an obstacle. Thirdly, the variables of master's specialty and training already received may influence the additional training principals believe they need. Therefore, these variables are considered with regard to the theme of further training required. The inclusion of the two distinctions: a) between principals in districts with and without evaluation criteria; and b) between principals in districts with and without evaluation cycles; emerges from Chapter IV (see p.59-63). For the purposes of analysis, in this and the following chapter, an assumption is made that collective agreements which stipulate a frequency of one formal evaluation "at least every" stated number of years, are unlikely to produce more than one evaluation per member of teaching staff in that period of time. Therefore, this category has been amalgamated with that of "every" stated number of years. This produces three evaluation cycle types: a) "no cycle"; b) "every/at least"; and c) "not more than". Hence, principals classified by cycle provision are referred to in the following text as "no cycle", "every/at least", or "not more than" principals, and those classified by criteria provisions are referred to as "no criteria" and "criteria" principals. 107 This chapter generally involves a bivariate analysis but on occasions employs multivariate analysis in order to provide a more sophisticated form of data upon which to base explanations. Finally, the consideration of the the findings in this chapter, as with Chapters IV, V, and VI, is left until the discussion in Chapter VIII, where all the data gathered in this study is drawn together. Evaluation Purpose Both the literature and the data presented in Chapter V highlighted a dichotomy of purpose between teacher growth and development and accountability for the quality of teaching. This dichotomy was called into question by Poster and Poster (1993) and Sergiovanni (1991) as well as a small number of respondents to the survey who said they were unable to make a choice between these two purposes. However, Table 5.9 (p.85) showed that the largest proportion of anecdotal responses to the survey could be defined under "purpose" and that within this category principals continued to distinguish between growth and accountability. 
The following anecdotal responses illustrate the mixture of views which principals have with regard to the issue of purpose. These views ranged from one principal who stated that "In this district a teacher can get a satisfactory report in the first year and never be evaluated 108 again. It's time for a re-focus of purpose." (respondent 047), to another who, after referring to the purpose of evaluation as personal growth for the teacher, went on: "Samuel Johnson said 'The applause of a single human being is of great consequence'" (respondent 101), suggesting that an important role of the principal in evaluation is to encourage. The views that these two examples represent found, on occasions, expression in other comments such as that from respondent 072 who said: "I firmly believe in the more formative, growth oriented philosophy. However, we remain the 'gatekeepers' at this point. I'm not entirely convinced that both roles are compatible." Finally, this dichotomy, reinforced by other data from question 14 (about the purpose of evaluation, see Appendix A.3), was encapsulated by a fourth respondent (138) who asserted: "Question 14, above, gets to the heart of the current dilemma." Table 7.1 shows a statistically significant difference (p<.05) between male and female principals in relation to the data about the most important purpose of the formal evaluation of teaching. While just over half the male respondents to the questionnaire indicated that the most important purpose was teacher growth and development, nearly three quarters of the female respondents chose this option. A corresponding difference exists for the option of accountability for the quality of teaching. Table 7.1 Sex of Principal by Evaluation Purpose and "Less Than Satisfactory" Reports Percentage of respondents Variable Male Female Evaluation purpose Growth and development Accountability Other n=132 52.3 45.5 2.3 n=49 71.4 22.4 6.1 "Less than satisfactory" reports written None One Two or more n=133 61.7 21.8 16.5 n=51 62.7 31.4 5.9 A further possible indication of purpose is the propensity to write "less than satisfactory" reports. The above data reveal that a similar proportion of male and female principals have never written such a report. However, a somewhat larger percentage of women than men have written one "less than satisfactory" report since September 1988. The position is reversed for two or more "less than satisfactory" reports, where male respondents are in the majority. However, the larger percentage of male principals 110 who have written multiple "less than satisfactory" reports can be accounted for on the grounds of experience. Most females have been principals for ten years or less and this experience group is responsible for fewer multiple "less than satisfactory" reports. When "less than satisfactory" reports written are cross tabulated against the most important purpose for males and females, no significant relationship between male and female respondents is found. If the number of "less than satisfactory" reports as a proportion of all evaluations written is compared between males and females, it shows that 2.8 percent (87 of 3064) of the reports written by men have been "less than satisfactory", while this figure is 3.0 percent (23 of 768) for women. These data are placed in a more meaningful context when the frequency of evaluations per year of principalship since September 1988 is calculated. 
These data indicate that male principals have conducted 3.3 evaluations per year (3064 in 940 principal years) in the above period compared to 2.6 evaluations (768 in 296 principal years) by female principals. Therefore, women principals conduct fewer evaluations per year than men, of which a slightly larger proportion result in "less than satisfactory" reports than men. When principals are categorised by experience they divide quite noticeably into two 'sub-groups' of 1 to 10 Ill years and 11 years or more, with regard to the most important purpose of formal evaluation. Table 7.2 shows around two thirds of principals with 1 to 10 years experience say the most important purpose is teacher growth and development, while around half of the more experienced principals take this view. This is accompanied by a corresponding difference in responses indicating the most important purpose is the accountability for the quality of teaching. A prominent (though somewhat predictable) fact to emerge from the data about "less than satisfactory" reports is that nearly three quarters of the principals with 1 to 5 years of experience have never written such a report. However, they are represented in similar proportions to the other three experience groups for principals having written one "less than satisfactory" report. The "11 to 15 years" group is also interesting because half of all these principals have written a "less than satisfactory" report, and nearly a third have written two or more. This indicates a greater tendency to have written more than one "less than satisfactory" report than the other three experience groups. However, when all experience groups and "less than satisfactory" reports are cross tabulated there is no statistical significance. 112 Table 7.2 Principal Experience by Evaluation Purpose and "Less Than Satisfactory" Reports Percentage of respondents Variable 1-5 years 6-10 years 11-15 years 16+ years n=55 n=56 n=24 n=46 Evaluation purpose Growth and development 63.6 62.5 50.0 47.8 Accountability 32.7 33.9 50.0 47.8 Other 3.6 3.6 -- 4.3 n=58 n=58 n=24 n=44 "Less than satisfactory" reports written None 72.4 58.6 50.0 59.1 One 24.2 24.2 20.8 27.3 Two or more 3.4 17.2 29.2 13.6 When "less than satisfactory" reports are calculated as a proportion of total reports written, the "11 to 15 years" group has the highest percentage at 3.6 percent (24 of 673), followed by "6 to 10 years" at 3.1 percent (38 of 1238), "1 to 5 years" at 2.6 percent (18 of 693), and finally "16+ years" at 2.4 percent (30 of 1228). Principals with 1 to 5 years experience have the highest average number of evaluations per year of principalship since September 1988, at 4.4 (693 in 158 principal years). Principals with 11 to 15 years experience have written 3.1 (673 in 216 principal years), those with 16 or more years have written 2.9 (1228 in 423 principal years) and the "6 to 10 years" group have 113 the lowest average at 2.8 evaluations per year (1238 in 439 principal years) since September 1988. Table 7.3 shows that principals in districts with criteria are more likely to opt for accountability for the quality of teaching than are principals in districts without criteria. A correspondingly higher percentage of principals in districts without criteria express the view that teacher growth and development is the most important purpose. Table 7.3 also shows a statistically significant difference (p<.05) between principals categorised by evaluation criteria that have written two or more "less than satisfactory" reports. 
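The per-year and proportion figures reported above can be restated as explicit quotients; for example, for male and female principals (all figures are those already given in the text):

    \frac{3064 \text{ evaluations}}{940 \text{ principal-years}} \approx 3.3 \text{ per year (males)}, \qquad
    \frac{768}{296} \approx 2.6 \text{ per year (females)}

    \frac{87}{3064} \approx 2.8\% \text{ "less than satisfactory" (males)}, \qquad
    \frac{23}{768} \approx 3.0\% \text{ (females)}

The same construction underlies the rates quoted for each experience group.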
One fifth of principals in districts without criteria have done so compared to a tenth of the principals in districts with criteria. A corresponding difference exists in the writing of no "less than satisfactory" reports. Examining these "less than satisfactory" reports as a proportion of all evaluations written, reveals that 4.2 percent (49 of 1163) of reports written by "no criteria" principals are "less than satisfactory" compared to only 2.2 percent (57 of 2585) of "criteria" principals. However, no statistically significant relationship emerges when criteria and "less than satisfactory" reports are cross tabulated with purpose. Finally, "no criteria" principals have written an average of 2.9 evaluations per year (1163 in 408 114 principal years) compared to 3.2 (2585 in 806 principal years) for "criteria" principals. Therefore, "no criteria" principals conduct fewer evaluations of which a greater proportion result in "less than satisfactory" reports. Table 7.3 Principals Categorised on the Basis of Evaluation Criteria by Evaluation Purpose and "Less Than Satisfactory" Reports Percentage of respondents Variable No Criteria Criteria n=60 n=119 Evaluation purpose Growth and development 66.7 53.8 Accountability 30.0 43.7 Other 3.3 2.5 n=61 n=121 "Less than satisfactory" reports written* None 50.8 66.9 One 27.9 24.0 Two or more 21.3 9.1 *2 < .05, Female principals have much less experience than male principals overall (see Table 7.4) and this difference is statistically significant (p<.05). Therefore, the sex of principals and evaluation purpose were cross tabulated against years of experience as a principal. This provides a control for experience and, when done, the statistical difference that exists between sex and purpose disappears. 115 However, statistical significance remains in the "1 to 5 years" experience group (p<.05) with men opting for growth and development in significantly smaller proportions than women. Table 7.4 Sex of Principal by Years of Experience as Principal Percentage of respondents 1-5 years 6-10 years 11-15 years 16+ years Respondent sex* n=58 n=58 n=24 n=47 Male 26.5 27.2 14.7 31.6 Female 43.1 41.2 7.8 7.8 *£ < .05. From the data presented in Chapter VI a significant difference also exists in the type of school male and female principals administer (p<.05). This manifested itself in terms of males being over represented in secondary schools while females are over represented in elementary schools. However, when school type is cross tabulated with respondent sex and purpose there is no statistically significant relationship. 116 The Need For Further Training Findings on the need for further training should be viewed against a backdrop of general comfort on the part of principals about their level of competence in formal evaluation. Training received is hardly referred to either as an obstacle to formal evaluation or as a comment, positive or negative, in the final anecdotal section of the survey. Two examples of anecdotal responses were given in Chapter V (p.88). A third respondent (075) recognises the ever changing nature of the role of the school principal and the continual need to upgrade knowledge and skills: It is extremely important for principals to be current on curriculum and teaching strategies. In this respect, we all need "more training" throughout our career. I have taken available workshops on legal aspects of report writing, but will need more as things change and evolve. 
A comparison of male and female principals with regard to their need for further training (Table 7.5) shows very little difference. The exceptions are the pre-evaluation phase of an evaluation leading to a "satisfactory" report and the post-observation and report writing phases of a formal evaluation leading to a "less than satisfactory" report. Female principals cite the need for training for the pre-evaluation phase of a "satisfactory" evaluation less than their male colleagues. In the post-observation phase of a "less than satisfactory" evaluation, half the male principals indicate a need for more training, whereas this applies to two thirds of female principals. For the report writing phase, half the males express a need for more training compared to three quarters of female principals.

Table 7.5
Sex of Principal and Need for Further Training in Evaluation
(Percentage agreeing on need for further training)

                                "Satisfactory" report      "Less than satisfactory" report
                                Male        Female         Male        Female
Evaluation phase                n=133       n=48           n=58        n=23
Pre-evaluation conference(s)    23.1        14.6           34.5        39.1
Classroom observations          36.8        27.1           43.1        47.8
Post-observation conferences    34.4        39.6           53.4        65.2
Writing the final report        36.6        38.8           52.5        73.9

Table 7.6 presents the data on training need for different experience groups in formal evaluations leading to a "satisfactory" report. This table shows that principals with 1 to 5 years of experience indicate a need for further training in all four phases of formal evaluation in much larger percentages (p<.05), and that the percentage of principals who feel they need further training tends to decrease as years of experience increase.

Table 7.6
Principal Experience and Need for Further Training in Evaluations Leading to a "Satisfactory" Report
(Percentage agreeing on need for further training)

                                1-5 years   6-10 years   11-15 years   16+ years
Evaluation phase                n=55        n=56         n=24          n=47
Pre-evaluation conference(s)    30.9        14.3         12.5          21.3
Classroom observations          45.5        26.8         29.2          32.6
Post-observation conferences    47.3        31.5         37.5          26.1
Writing the final report        50.0        33.9         30.4          29.8

In the case of evaluations leading to a "less than satisfactory" report, the division lies between principals with 1 to 10 years of experience and those with 11 or more years of experience. By dividing principals in this way, it emerges from Table 7.7 that respondents with 1 to 10 years of experience say they need more training than those with 11 years or more. However, none of these differences are statistically significant, apart from the pre-evaluation phase for the "1 to 5 years" principals (p<.05). As with evaluations leading to a "satisfactory" report, a pattern of decreasing need for training emerges with increasing numbers of years of experience.

Table 7.7
Principal Experience and Need for Further Training in Evaluations Leading to a "Less Than Satisfactory" Report
(Percentage agreeing on need for further training)

                                1-5 years   6-10 years   11-15 years   16+ years
Evaluation phase                n=17        n=23         n=16          n=27
Pre-evaluation conference(s)    62.5        26.1         33.3          29.6
Classroom observations          62.5        52.2         26.7          37.0
Post-observation conferences    64.7        69.6         50.0          44.4
Writing the final report        70.6        73.9         40.0          48.1

Given that master's specialty and previous training in formal evaluation might be expected to have some bearing on the extent to which principals feel the need for further training, these two variables were first cross tabulated with training needs across all principals. This showed no relationship or statistical significance between master's degree and the need for training. Furthermore, principals with 10 training points or more are no less likely to say they need further training than their colleagues with fewer training points. No statistically significant relationship could be found between these two variables.
This showed no relationship or statistical significance between master's degree and the need for training. Furthermore, principals with 10 training points or more are no less likely to say they need further training than their colleagues with fewer training points. No statistically significant relationship could be found between these two variables. 120 When the variables of sex and experience were each included in a cross tabulation with master's specialty and training need, it did produce occasional instances of statistical significance. However, cell sizes were generally less than five. While training points cross referenced with training need and each of the variables of sex and experience was subject to similar drawbacks with regard to the size of cells, it produced a rather more definable pattern. For the observation, post-observation and final report phases of a "less than satisfactory" report, there is a statistically significant relationship between female principals with less than 10 training points and their greater need for further training (p<.05). The same pattern is then repeated for principals with one to five years experience (p<.05). Obstacles to Evaluation By far the most important obstacle to emerge from the survey results was time. Anecdotal responses provide an interesting lead into this finding. The majority testify to the multitude of tasks principals have to do and the different roles they are expected to perform. These responses often convey a feeling of insufficient time to devote to what principals feel are the most important aspects of their responsibilities, one being the evaluation 121 of teaching. The comment made by respondent 073 is fairly representative: Instructional Leadership is "the" most important aspect of our job. However, until this is recognized by government and by School Boards in actions as well as rhetoric we will never have the necessary time to do this part well. Eroding administration time in schools actually erodes the quality of education significantly more than does the raising of class size, (author's emphasis) A comment from respondent 093 even goes so far as suggesting the current responsibilities of principals may need to be separated between principals, who would maintain their function as educational managers, and other administrators who would take on the more bureaucratic responsibilities: The increase in decentralization from district level to site based management works against formal evaluations being made an administrative priority due to length of time. If current administrators are expected to continue doing formal evaluations, then other people need to perform the managerial tasks - people not presently in the system perhaps. A final comment presents the lack of time and its incumbent pressures in their starkest form when respondent 061, after listing the three main obstacles to the carrying out of the formal evaluation of teaching as "TIME", "TIME", "TIME", went on to say in response to question 21: "It is obvious I believe time to be the most significant factor preventing good quality assessment." These anecdotal data provide support for the data relating to the most important obstacles to conducting formal evaluation. Time is given by two thirds of all If 122 respondents who cited a most important obstacle. Across all first, second and third obstacles, "time" was cited by 152 respondents or 42.0 percent of all individual references made. 
In eight cases principals wrote "time", "time", and "time" as their three most important obstacles. However, in each case these were recorded only as a first obstacle and thus constitute one individual reference rather than three. (This is because the main point of interest was the number of principals referring to an obstacle rather than the number of references to that obstacle.) In a further seven cases all three obstacles can be defined as time, while in another 24 cases two of the obstacles can be defined as time. In each of these 39 cases, time is therefore treated as one individual reference. However, when counted separately, they bring the total references to time to 206 and, as shown later, all these references are analysed for what they say about why time is an obstacle. The thirty-nine principals who made multiple references to time constitute a sub-group which represents 25.7 percent of all the respondents who referred to time as an obstacle. However, when an analysis is carried out to discern whether or not these respondents are clustered in particular groups, for example less experienced principals, no pattern emerges. For principals grouped by sex, experience, and evaluation cycle requirements, the proportion of respondents who made multiple time references is generally between 20 and 25 percent.

Among the 206 references identified above, 79 were one word statements, 94 gave some elaboration or explanation, and 32 were defined as time obstacles without explicitly stating the word "time". Therefore, 126 statements (representing the views of 113 respondents, or 74.3 percent of those who gave time as an obstacle) are more elaborative and, as such, provide a rich source of explanation for why time is considered such an important obstacle. Table 7.8 identifies the two themes (excluding "Other") which emerged from these statements. Both quite evidently have to do with pressure of work, but it is possible to distinguish between a "Workload" category and a "Process" category. The workload category is sub-divided into four types of statement, while the process category is sub-divided into two. The first two columns of the table show a) the numbers of statements made in the above two categories; and b) the percentages these numbers represent out of the total number of statements made. The second two columns show a) the number of respondents who made these types of statement; and b) the percentage these respondents represent of the total who made time statements.

Table 7.8
Time Obstacle Statements

Nature of time pressure                               Number of            Number of
                                                      statements     %     respondents     %
1. Workload                                               74        58.7       64         56.6
   a. Other priorities, demands and interruptions         46        36.5       37         32.7
   b. Teaching commitments                                11         8.7       11          9.7
   c. Increased administrative responsibilities
      in recent years                                      9         7.1        8          7.1
   d. Excessive number of evaluations in one year          8         6.3        8          7.1
2. Process                                                43        34.1       40         35.4
   a. Ability to effectively carry out the process        29        23.0       28         24.8
   b. Observations/Conferencing                           14        11.1       12         10.6
Other                                                      9         7.1        9          8.0
Total                                                    126       100.0      113        100.0

Table 7.8 shows that excessive workload is generally considered an important obstacle by principals. The evaluation process also features prominently. Almost all the statements about time could be placed into one of these two categories.
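The counts underlying Table 7.8 can be restated as a simple tally, using only the figures already reported above:

    94 \text{ elaborated statements} + 32 \text{ implicit references to time} = 126 \text{ statements}

    \frac{113 \text{ respondents making these statements}}{152 \text{ respondents citing time as an obstacle}} \approx 74.3\%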
The following two statements highlight the unpredictable nature of the principal's role and the feeling that there may always be something to unsettle previously made plans: "Priorities - while this component of admin is 125 important, the urgent needs often displace others -*• 'The tyranny of the urgent1" (respondent 008). "Crisis - both parent and student that take precedence and have to be dealt with 'right now'" (respondent 154). A third respondent (090) draws attention to the pressure of teaching commitments when saying "Lack of time - I teach .5 and have many, many responsibilities besides evaluation of staff", while another expressed the shortcomings in the number of evaluations possible: "Time! I should be doing several evaluations a year but can only manage one." (respondent 136). Respondent 142 provides an all embracing example of what many others said in part: 1. other responsibilities - meetings, paperwork, curricular updates, special ed, budgets, behavioural involvements, etc. 2. unexpected interruptions - parents, district staff, telephone. 3. time commitments - school wide events, performing arts, special projects, assemblies... A further example illustrates the perception of the pressures imposed by the process: "Time - to develop goals for evaluation process, to observe/collect data, to debrief, revise goals, observe/debrief, revise, write, revise, rewrite!" (088). Finally, the assertion made by respondent (035) presents a bold statement about a key role of the principal and the need to address the obstacle of time if this role is to be carried out effectively: "TIME! - if the 126 principal, as school leader, is charged with the responsibility of assisting teachers to set goals for professional growth, then more admin, release time is needed." No clear difference between male and female respondents emerges with regard to time (Table 7.9). The data relating to time as a percentage of all obstacles referred to in question 18 of the questionnaire (see Appendix A.3), shows that male principals cited "time" 110 times out of a total of 266 references to obstacles, or 41.4 percent of all references. This is virtually identical to the 41 references by female principals out of a total of 97, or 42.3 percent of all references. Table 7.9 Time as an Obstacle and Sex of Principal Sex of "Time" as most "Time" as a proportion of principal important obstacle all 'obstacle references' Q, "O n a % n b Male 63.2 86 136 41.4 110 266 Female 66.7 34 51 42.3 41 97 a Total number of first obstacles cited of all types. b Total number of first, second, and third obstacles cited of all types. Table 7.10 shows that as numbers of years of experience increase, principals give time as the most important obstacle to conducting formal evaluation in decreasing 127 percentages. Of principals with 1 to 5 years of experience, nearly three quarters put time as their most important obstacle, while just over half with 16 or more years of experience took this view. Table 7.10 Time as an Obstacle and Principal Experience Years of experience "Time" as most "Time" as a proportion of as a principal important obstacle all 'obstacle references' Q, "6 n a % n b 1 to 5 years 74.1 43 58 43.6 51 117 6 to 10 years 63.8 37 58 40.9 45 110 11 to 15 years 58.3 14 24 42.6 20 47 16 years or more 55.3 26 47 39.3 35 89 a Total number of first obstacles cited of all types. b Total number of first , second, and third obstacles cited of all types. 
A similar trend can be observed when looking at time as a percentage of total 'obstacle references' made by each of the experience groups. Once again, it is the more experienced principals that make fewer references to time as an obstacle than their less experienced colleagues, albeit by a fairly narrow margin. When evaluation cycle provision is examined in relation to time (Table 7.11), just over half the "no cycle" principals give time as their most important obstacle. However, for the "every/at least" principals this figure is 128 nearly three quarters. The "not more than" group also cite time as their most important obstacle to a greater extent than do the "no cycle" principals and these differences are statistically significant (p<.05). When time as a percentage of total 'obstacle references' is used as an indicator, the same pattern emerges and, indeed, for the "every/at least" principals time amounts to nearly half of all their 'obstacle references'. Table 7.11 Time as an Obstacle and Principals Categorised on the Basis of Evaluation Cycles Collective agreement "Time" as the most "Time" as a proportion of provision important obstacle* all 'obstacle references' % n a O, o n b "No cycle" 54.8 40 73 38.2 55 144 "Every/At least" 73.6 53 72 46.3 63 136 "Not more than" 65.0 26 40 39.5 32 81 a Total number of first obstacles cited of all types. b Total number of first, second, and third obstacles cited of all types. *£ < .05. Furthermore, "no cycle" principals have conducted an average of 3.1 evaluations per year of principalship (1576 in 502 principal years) since September 1988, compared to 2.9 (1422 in 491 principal years) for "every/at least" and 3.4 (750 in 221 principal years) for "not more than" 129 principals. These data reveal that "no cycle" and "every/at least" principals conduct a similar number of evaluations per year. Finally, 3.6 percent (56 of 1576) of the reports written by "no cycle" principals have been "less than satisfactory", compared to 2.4 percent (34 of 1422) of the "every/at least" principals and 2.1 percent (16 of 750) of the "not more than" principals. A statistically significant relationship exists between teaching load and type of school (p<.05). Elementary principals are far more likely to have teaching assignments of 20 to 39 percent and 40 percent or more, than secondary principals. Even so, when time as an obstacle for all respondents is cross referenced against a) the percentage of teaching principals do; b) their type of school; and c) their staff sizes; no statistically significant relationship emerges. However, when the percentage of teaching and size of staff were controlled for in three way cross tabulations with cycle provision and time, statistically significant relationships were found for "zero" teaching (p<.05) and staffs of "20 to 29" (p<.05) and "30 or more" (p<.05). These data show that principals evaluating to a cycle with a 100 percent administration assignment or staffs of "20 to 29" or "30 or more", cite time as their most important obstacle significantly more than their "no cycle" colleagues with the same administration assignment and size of staff. 130 A statistically significant relationship exists between district size and cycle provision (p<.05). This relationship takes the form of large districts having disproportionately fewer (p<.05) evaluation cycles phrased as "every/at least", while medium districts have disproportionately more (p<.05). 
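The three-way comparisons described in the preceding paragraph rest on chi-square tests of contingency tables computed within each level of a control variable. The following minimal sketch shows the general form of such a test, assuming the SciPy library is available; the counts, stratum labels, and groupings are invented for illustration and are not the study's data.

    from scipy.stats import chi2_contingency

    # Invented counts for illustration only. Rows: "no cycle" vs. "with cycle"
    # principals; columns: time cited as the most important obstacle (yes / no),
    # tabulated separately within each staff-size stratum.
    strata = {
        "staff of 20 to 29":   [[12, 14], [26, 8]],
        "staff of 30 or more": [[10, 12], [24, 7]],
    }

    for label, observed in strata.items():
        chi2, p, dof, _ = chi2_contingency(observed)
        print(f"{label}: chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

Running the test within each stratum, rather than on the pooled table, is what allows a relationship between cycle provision and the time obstacle to be examined with teaching load or staff size held constant.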
However, when district size is cross tabulated with cycle provision and time, the relationship for medium districts completely disappears. While "no cycle" principals from large districts still under represent time as their most important obstacle and "every/at least" principals from large districts over represent time, the relationship is not statistically significant. A significant difference also exists between cycle provision and experience (p<.05). Because there is a greater tendency for less experienced principals to cite time as their most important obstacle, cycle provision, experience and time were cross tabulated. However, this produces no statistically significant results. Summary A clear difference exists with regard to the purpose ascribed to formal evaluation by male and female principals and this difference is statistically significant. A much higher percentage of females than males defined teacher 131 growth and development as the most important purpose, although this is also the view of the majority of male principals. Correspondingly, a much higher percentage of males than females, considered the most important purpose to be accountability for the quality of teaching. In addition, female principals have conducted slightly fewer evaluations per year and written slightly more "less than satisfactory" reports as a percentage of all reports written, than their male counterparts. Among principals categorised by experience a pattern also exists with regard to their views about purpose but it is not statistically significant. Principals with more than ten years experience assign greater importance to accountability and less to teacher growth and development than do their less experienced colleagues. Furthermore, principals in the "11 to 15 years" experience category are, to a statistically significant extent, far more likely to have written multiple "less than satisfactory" reports and have the highest percentage of "less than satisfactory" reports as a percentage of all reports. This is against the backdrop of conducting more evaluations per year than their colleagues, apart from the "1-5 years" experience group. "No criteria" principals cite teacher growth and development as the most important purpose of evaluation more often than "criteria" principals, although this is not 132 statistically significant. "No criteria" principals are also more likely to have written multiple "less than satisfactory" reports and this is statistically significant. This was further borne out by the substantially higher percentage of evaluations carried out by "no criteria" principals that lead to "less than satisfactory" reports, while they conduct fewer evaluations overall. "No cycle" principals are much more likely to have written multiple "less than satisfactory" reports and this relationship is statistically significant. Furthermore, "no cycle" principals write more evaluations per year than their "every/at least" colleagues, though not as many as the "not more than" principals. With regard to respondent sex and time as an obstacle, no marked difference is identified between male and female principals. However, time emerged in percentage terms, as a decreasing obstacle as principal years of experience increased. A particularly marked difference exists between the "1 to 5 years" group and the rest. 
While "no cycle" principals acknowledge time as an important obstacle, they did not express this view to the same extent as their "with cycle" colleagues and this difference is statistically significant. Female principals indicated a greater need for training in all the phases of an evaluation leading to a "less than 133 satisfactory" report but less need than male principals for training in the pre-evaluation phase of an evaluation leading to a "satisfactory" report. A similar trend emerges with regard to experience, where, generally, as experience increases the need for training decreases for evaluations leading to both "satisfactory" and "less than satisfactory" reports. In the case of the "1 to 5 years" experience group, this difference is statistically significant for satisfactory reports. 134 CHAPTER VIII Discussion, Conclusions, and Recommendations This chapter is divided into three sections: a) discussion; b) conclusions; and c) recommendations. The first section is sub-divided into four parts, under the headings of process, purpose, training, and obstacles and seeks to explain the findings from the study. The second section draws together the main findings of the study and concludes with a list of key findings. The third section in this chapter presents recommendations for further research and suggests possible solutions to weaknesses or shortcomings which emerged from the study. Discussion This section seeks to identify explanations for the findings presented in Chapters IV, V, and VII. As far as possible, explanations are sought by relating the findings from the study to the literature presented in Chapter II. However, at times the literature suggests only partial explanations. In these cases intuitive explanations are offered based on the evidence available. The section is divided into four parts: purpose, process, training, and, obstacles. However, the distinctions between these four parts are necessarily blurred because of their degree of inter-relationship. 135 Purpose The purpose of formal evaluation was a matter of considerable concern to many respondents. The different purposes highlighted in the literature (Harris & Monk, 1992; Housego, 1989; Poster & Poster, 1993) are clearly reconstructed in the survey responses. The majority of principals believe the most important purpose of formal evaluation is teacher growth and development. However, a sizeable minority of principals consider the primary purpose of evaluation to be accountability for the quality of teaching. This finds further expression in the anecdotal responses and, therefore, even though it is likely that the large majority of principals would say both of these purposes are important, the above difference of view appears to be a real one. The explanation of this difference of view may be provided by the literature which describes the complexity of the principal role and the different stakeholders to whom the principal is accountable (Rossow, 1991; Sharp & Walter, 1994; Sybouts & Wendel, 1994). How far the principal is influenced by the 'competing' needs of the various stakeholders in the education system will depend largely on the principal's own personal values and beliefs. The existence of different values and beliefs amongst principals supports the need to distinguish between them when 136 attempting to explain their professional views and behaviour. 
With regard to evaluation purpose, the distinction drawn between male and female principals, and between principals with different lengths of experience, does produce some interesting findings. The statistically significant difference identified between male and female principals with regard to purpose disappeared when principal experience was included in the cross tabulation. However, the data suggest gender is a factor in the determination of views about the purpose of evaluation, although it seems equally likely that experience has some influence. Trying to establish whether or not there is a gender or experience effect is problematic because the vast majority of female principals have ten years of experience or less, and this experience group tends to opt for growth and development in greater percentages than their more experienced colleagues. Therefore, because the less experienced principals are younger and have a more recent university post-graduate education, the factors of age and greater exposure to 'newer' philosophies pertaining to growth and development may be at work. However, support for a gender explanation is provided by the differences which exist between male and female principals at all experience levels and, indeed, male principals in the "1 to 5 years" experience group cite growth and development less than their female counterparts to a statistically significant degree.

Two important questions are raised here. First: "Why should greater experience have any association with a greater orientation towards accountability rather than growth and development?" Second: "Why should female principals be any more inclined to see evaluation as a process of growth and development than male principals?"

The literature on formal evaluation of teaching provides little assistance with the first question and so it is 'intuitive' logic that leads to the rather familiar explanation that with more experience comes more cynicism. This straightforward explanation is made all the more appealing when taking into account the views expressed by respondents about the nature of the process. If the fairly negative attitudes expressed are representative, it is very likely that principals would develop a degree of "battle weariness" over time. However, this explanation itself is based on the assumption that the pursuit of growth and development is somehow more idealistic than that of accountability. This assumption may very well be false, given that some principals wrote with passion about their belief in ridding the teaching profession of those teachers who they feel bring harm to the educational well-being of pupils.

Another explanation may lie in what might be called a "culture of accountability". This was epitomised by the existence of school inspectors who, in the past, were responsible for ensuring the competence of teaching. Anecdotal evidence suggests that it is only in relatively recent times that notions of growth and development have become more widely accepted. Thus, more experienced principals may have had their views about evaluation shaped in a rather different culture to that which exists today.

An answer to the second question is certainly offered by the literature. Shakeshaft (1987), Alder et al. (1993), Regan and Brooks (1995) and Ozga (1993), amongst others, have suggested that women adopt a more collegial style of school management and have a more caring approach to staff within the school than do men.
If this is the case, it may provide an explanation for the differences observed between men and women with regard to the purpose of formal evaluation, since this more caring disposition is likely to be better suited to the purpose of growth and development than to accountability. However, the literature also speaks of the longer periods of time that women principals have tended to spend as classroom teachers before they enter the field of school administration (Gross & Trask, 1976; Blumberg & Greenfield, 1986). This may lead to a greater affinity with the lot of the classroom teacher and a more 'established memory' of the classroom context than some male principals who 'rose through the ranks' more quickly. Furthermore, the literature (Blumberg & Greenfield, 1986) and questionnaire data show that women are predominantly principals of elementary schools. The questionnaire data also show that elementary principals are significantly more likely to have a 40 percent or more teaching load as part of their assignment. This current, day-to-day exposure to the reality of the classroom would only serve to reinforce any greater understanding these female principals have of the position of the classroom teacher.

If a greater understanding of the position of the classroom teacher does exist among the generality of women principals than among the generality of men, this does not, in itself, mean that women principals would be less likely to opt for accountability. Indeed, such an understanding may lead to less tolerance of those whose teaching is not of a satisfactory standard. This highlights the marginally greater tendency for women principals to write "less than satisfactory" reports than men, which, at first glance, would seem to be somewhat at odds with the notion of growth and development. However, the literature also refers to the capacity of women in administration to have a more principled stance which results in a more courageous form of leadership (Regan & Brooks, 1995). Bolton's (1980) 'evaluator resistances', one of which is fear of an unpleasant reaction which would prevent a relationship conducive to facilitating improvement, may also be pertinent here. If female principals are more practised and more confident at the interpersonal style of management, this is likely also to have taught them ways of disagreeing while maintaining a working relationship. This, in turn, may lead to less fear of the consequences of a "less than satisfactory" report than for some male principals who are less practised and less skilled at the art of conflict resolution.

Process

It is evident from the school district collective agreements that most of the responsibility for conducting evaluation lies with the school principal. Furthermore, British Columbia principals clearly believe this is a responsibility they should carry out and, to some extent, an important part of their wider role as instructional managers or educational leaders. The evaluation process is summative in nature and collective agreements rarely make specific reference to the purpose of formal evaluation. The final report is required to conclude with either a) a statement indicating that the teacher's 'classroom situation' is "satisfactory" or "less than satisfactory"; or, in a few school districts, b) a statement of competence level, for example, "excellent", "very good", and so on.
The study clearly shows that the final report writing stage is problematic for principals and this is all the more true for reports concluding with a "less than satisfactory" recommendation. This summative process exists despite the wealth of literature (Darling-Hammond et al., 1983; Darling-Hammond, 1986; Sergiovanni, 1977; and others) which describes the negative effects such processes have on both the evaluatee and the evaluator. More specifically, the findings in Antosz's (1990) study of British Columbia evaluation processes, that most are summative and fail to take account of the evaluation literature, appear to be as valid today as they were six years ago.

This evidence suggests that there are reasons for the existence of a summative process, and these reasons can probably be explained best by the literature which identifies the different needs of the organisation and of the individual (Housego, 1989; and others). Clearly, school boards have to be able to meet the requirements of the Teaching Profession Act 1987 (Province of British Columbia, 1987) and this involves an evaluation report of some kind. However, what appears to be happening is the production of summative reports by many principals who believe growth and development to be the most important purpose. While the data from this study do not provide a clear answer, the anecdotal responses suggest that some principals are trying to provide a formative experience within a summative process (see p.32).

A distinction has to be drawn between evaluations leading to "satisfactory" reports and those leading to "less than satisfactory" ones. All the data regarding factors present in the four stages of a formal evaluation show "less than satisfactory" reports to be associated with much greater stress and complexity, as well as a greater requirement of time and need for further training. Furthermore, since September 1988, close to two thirds of principals have never written a "less than satisfactory" report and a further quarter have written only one. The average number of evaluations conducted per year of principalship in this period was 3.1 and the number of "less than satisfactory" reports written was one per 34.8 evaluations, which equates to 2.9 percent. These data support the assertion made by Haefele (1992) that few "less than satisfactory" reports are written.

There are a number of possible explanations for this phenomenon and Haefele suggests that part of the reason is lack of time to conduct enough observations upon which to base a "less than satisfactory" report. Bolton (1980), though, refers to a set of resistances on the part of evaluators. Some of these resistances may explain a principal's disinclination to write a "less than satisfactory" report and may also account for the relatively small number of total evaluations conducted: for example, uncertainty about criteria and the interpretation of data; fear of an unpleasant reaction; inability to organise time for adequate observations; lack of support at higher levels of the organisation; and a lack of conviction that evaluation will provide much "payoff". The data from the study provide other possible explanations. For the majority of principals the most important purpose of evaluation is teacher growth and development and many believe the current process to be inadequate and time-consuming.
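As a rough check on the rates reported above (a back-of-the-envelope calculation, not part of the study's analysis):

\[
\frac{1}{34.8} \approx 0.029 = 2.9\%,
\qquad
\frac{34.8\ \text{evaluations per report}}{3.1\ \text{evaluations per year}} \approx 11\ \text{years per report}.
\]

In other words, a principal evaluating at the average reported rate would expect to write roughly one "less than satisfactory" report per decade of principalship.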
A number of anecdotal responses revealed the difficulties involved in proceeding with a "less than satisfactory" report and the feeling that such reports rarely effect real change or improvement.

The distinction made earlier between evaluations leading to "satisfactory" and "less than satisfactory" reports was based on the much greater levels of stress, complexity and time-consumption associated with the latter. A rationale was presented in Chapter VII for including "less than satisfactory" reports written and evaluations conducted as part of the consideration of purpose, since these reports are the product of the evaluation process. However, this study does not support the claim that principals who take a different view about the most important purpose of formal evaluation also have a tendency to produce different proportions of "less than satisfactory" reports. In other words, a principal who is more orientated towards teacher growth and development seems no less likely to produce "less than satisfactory" reports than a principal who perceives formal evaluation more in terms of accountability for the quality of teaching. However, it is possible that principals with different views about the most important purpose of evaluation write relatively few "less than satisfactory" reports for different reasons. Perhaps growth-orientated principals do regard the writing of a summative "less than satisfactory" recommendation as being at odds with the concept of growth and development. On the other hand, principals who are more inclined to hold teachers accountable for the quality of their teaching may be reluctant to use a process which they feel is inadequate in meeting this objective. Of course, these data are just as likely to show that principals consider the general standard of teaching to be high but that, on the rare occasions when it is necessary, principals orientated towards growth and development and those orientated towards accountability are both prepared to write "less than satisfactory" reports.

Anecdotal responses show that at least some principals already have a notion of who their weak teachers are before the evaluation is carried out, because they were able to suggest that the formal evaluation process should be reserved for such teachers. This supports the contention made by Wood (1992), Housego (1989) and others that principals have preconceptions about the "classroom situation" of the teachers on their staffs. It is impossible to say, from the findings of this study, how well founded these preconceptions are, but they are clearly a factor in understanding how principals approach their responsibilities as evaluators of teaching.

The existence of these preconceptions may explain why principals who are not governed by an evaluation cycle produce a greater proportion of "less than satisfactory" reports. If principals are 'freed' from the requirement to evaluate all teachers on a cyclical basis, including the most competent, they may be more inclined to focus their time and attention on the teachers they believe to be less than competent. This, in turn, would be likely to lead to the writing of a greater proportion of "less than satisfactory" reports than by principals who are obliged to evaluate all teachers on a cyclical basis. Also, principals in districts without criteria write more "less than satisfactory" reports than principals in districts with criteria. The explanation for this is rather speculative, but the reason may once again be related to greater freedom.
In this case, the absence of stated criteria allows the principal an opportunity to 'tailor' the evaluation to his or her own objectives. In this situation, any preconceptions the principal may have about the teaching of a member of staff would be more likely to manifest themselves in the evaluation, because the principal would be more able to look for the things he or she wanted to see.

Training

Principals generally believe they do formal evaluation well, there is no strong indication from the survey results that they feel inadequately trained, and they generally express little need for further training. However, this overall picture is qualified by the fact that large percentages of respondents expressed a need for further training in relation to evaluations leading to a "less than satisfactory" report. It would seem that this need is linked to the greater complexity of such evaluations, as illustrated by other data from the survey. For example, several anecdotal responses attested to the increased difficulties involved in evaluations leading to "less than satisfactory" reports, as highlighted in the previous section on "process". The report writing phase for both "less than satisfactory" and "satisfactory" evaluations is also characterised by a greater need for training. The explanation for this is largely intuitive but seems likely to be associated with the act of recording final summative recommendations which may then have to be defended.

Little evidence exists of a link between prior training in formal evaluation and the needs expressed for further training. The exception to this general finding is among principals with ten or fewer training points who are either a) female; or b) in the "1 to 5 years" experience category. These principals express a significantly greater need for training in most phases of evaluation. The explanation for both of these groups may be the same, given that many of the female principals are also in the "1 to 5 years" experience category. A principal with less experience, and in particular less experience of evaluating, will be more likely to seek evaluation training than a more experienced principal who feels well versed in the role of evaluator. It is important here to emphasise that a principal's perceived need for more training does not necessarily imply a lack of confidence on the part of that principal. Indeed, such a principal might be very confident and competent, but simply wish to fill the gaps they consider exist in their knowledge as a result of limited experience. By the same token, a sense of not requiring further training does not necessarily mean an individual is well trained.

The fact that no link can be shown between an expressed need for further training and either master's degree specialty or, more particularly, prior training in formal evaluation is somewhat difficult to explain by reference to the literature. However, Sergiovanni's exploration of Hogben's work on the "clinical mind" and the teaching profession may provide some clues. Principals, like teachers, may be more inclined to rely on their own experience than on the ideas generated by educational theoreticians and researchers. In other words, they may believe they learn more by doing than by taking courses. This may be all the more likely given the intensely personal character of evaluation and the knowledge that no two evaluations are going to be the same.
This experiential explanation is given greater substance by the pattern in the survey data, already referred to, of decreased need for training with increased experience. Another important factor in this 'lack of training need' phenomenon may be the nature of the training itself. However, information relating to the content of evaluation training does not form part of the data gathered by this study.

Obstacles

Time featured very heavily amongst the obstacles cited by respondents to the survey. The evaluation process described in most collective agreements testifies, to a greater or lesser extent, to the resources of time this aspect of personnel management is likely to consume if done conscientiously. The planning involved in the pre-evaluation phase; the stipulation that classroom observations should be for full lessons and take place on at least three occasions; the need, in most cases, for the production of a full anecdotal statement at each post-observation conference; and, lastly, the writing of a final report, amount to a considerable quantity of work. This time pressure on principals, which emerges from the collective agreements and also the literature (Haefele, 1992; Pigford & Tonnsen, 1993; Smith & Andrews, 1989; Bolton, 1980; and others), is borne out and reinforced by the questionnaire returns. Time is by far the most important obstacle cited, the factor most often identified in the four phases of the evaluation process, and is all the more present for evaluations leading to a "less than satisfactory" report. Those principals who elaborated on time as an obstacle substantiated the impression from the collective agreements of a process which imposes considerable demands on time. In addition, respondents highlighted the general demands placed on them in their role as principal, and together these two sources of data reveal a clear perception held by principals of excessive workload and insufficient time to meet all their professional priorities.

Therefore, the explanation for time being considered an obstacle seems clear enough, although, curiously, no direct relationship was found between the percentage of administration time available and time as an obstacle. This was also true for staff sizes. A possible explanation for this finding is that because so many respondents cite time as an obstacle, these data are bound to include principals with a wide range of assignments. Also, a larger administration assignment will not necessarily mean more time available for evaluation, where principals have numerous tasks 'bidding' for the 'additional' time.

A statistically significant relationship does exist between evaluation cycles and time. Principals who do not have to evaluate on a regular cycle are much less likely to cite time as the most important obstacle than their colleagues. A straightforward explanation of this finding would be that the "no cycle" principals do not have the pressure of a certain number of evaluations to conduct in a certain period of time. This explanation is given modest support in the statements made by respondents in relation to time as an obstacle. Eight respondents referred to evaluation cycles as specifically contributing to time pressures. While this represents only 12.7 percent of all "every/at least" respondents, it provides some evidence of time pressure imposed by evaluation cycles. The true extent of the contribution evaluation cycles make to a perception of time pressure may also be hidden by the more general references to excessive workload.
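Findings of this kind rest on cross tabulations tested with a chi-square test of independence. The sketch below is not part of the study: every count in it is invented purely for illustration, and it assumes the Python library scipy is available rather than the software actually used for the analysis. It simply shows the form such a test takes, and why layering a third variable such as district size can remove statistical significance even where a pattern remains visible in the percentages.

```python
# Illustrative only: all counts below are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

# 2 x 2 cross tabulation:
#   rows    = cycle provision ("every/at least" vs. "no cycle")
#   columns = time cited as the most important obstacle (yes vs. no)
observed = [[40, 23],   # principals with an evaluation cycle (hypothetical)
            [18, 32]]   # principals with no evaluation cycle (hypothetical)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"overall: chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # significant at p < .05

# Layering by district size repeats the test within each sub-group. These two
# tables are separate illustrations (they are not meant to add up to the table
# above): the large-district table keeps the direction of the overall pattern
# but, with fewer cases, falls just short of significance, while the
# medium-district table shows no pattern at all.
layers = {
    "large districts":  [[20, 10], [6, 12]],   # hypothetical
    "medium districts": [[16, 17], [15, 16]],  # hypothetical
}
for label, table in layers.items():
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{label}: chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```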
However, the evidence for a link between evaluation cycles and time is not conclusive. For example, the data from this study show that principals who evaluate on a regular cycle do not conduct any more evaluations per year than "no cycle" principals. In other words, the quantity of evaluations conducted is no greater for principals who are required to evaluate on a regular cycle. However, this particular finding may illustrate only that, even with the same quantity of work, an activity which is required to be carried out is associated with greater pressure than an activity which involves some element of choice. A further setback to establishing a 'cycle-time' relationship is the loss of statistical significance when district size was incorporated into the equation with cycle provision and time. However, a pattern could still be observed in terms of "no cycle" principals from large districts citing time less often than "every/at least" principals from large districts. Also, the fact that significantly fewer large districts have evaluation cycles may itself suggest a belief, on the part of the authors of collective agreements in these districts, that employing them would place too great a time pressure on their principals. This may be particularly true given that school size, and thus staff size, tend to be larger in larger districts. Although the survey data show no link between time as an obstacle and either staff size or administration time as a proportion of a principal's assignment, when these are coupled with cycle provision a significant relationship does emerge. Principals who do not teach, or who have small staff sizes, and who have no evaluation cycle, cite time as an obstacle significantly less than principals in the same position who do have an evaluation cycle. Therefore, the importance of an evaluation cycle is maintained, but its effect is compounded by teaching responsibilities and large staffs. In short, there is evidence to suggest that, in school districts where evaluation cycles exist, this places a greater time pressure on principals than in school districts where there is no evaluation cycle.

The pattern that emerges of time being a less important obstacle as experience increases can be explained on both an intuitive level and with reference to Sergiovanni's (1991) consideration of experience. Put simply, it might be expected that as principals become more familiar and more practised in their role, they would feel less 'overwhelmed' by the range of tasks to be done and thus be less prone to see time as an obstacle. They do indeed 'create knowledge in use'. However, it remains a point of interest that differences between experience groups are not clearer.

Time management features quite prominently in the literature (Hummel, 1967; Smith & Andrews, 1989; Sergiovanni, 1991; and others). How far a principal is able to make the most effective use of time is likely to influence his or her perception of time pressure. Of course, time management requires the time manager to have a definition of what effective use of time means. This definition requires decisions to be made about which aspects of the principal's responsibilities are assigned differing degrees of priority. It is at this stage in any consideration of time management that the issue of 'important' and 'urgent' arises, because priority does not necessarily equate with importance.
Everard and Morris (1990) highlight the distinction between important and urgent matters with regard to establishing priorities and offer a means by which principals can avoid being swept along by continual crisis management. They suggest planned time for the important issues, on both a short- and a long-term basis. However, a number of the anecdotal responses in the study describe the difficulty of planning such time for evaluation: an activity which most principals acknowledge as important. The corollary of such comments, though, is that if certain tasks are to be put to one side or completed in a less rigorous way than principals might like, it is more acceptable to leave tasks such as the formal evaluation of teaching. If this analysis is correct, such an attitude must be based on some perception that principals have formed about the 'external' value of formal evaluation. Given that formal evaluation of teaching is one of a principal's contractual obligations, this perception of the value of evaluation must in part be based on the attitude of the school board. In other words, principals must have formed an understanding that the consequences of not evaluating are less severe than those for not doing something else.

Conclusion

School district collective agreements provide little assistance in specifically determining the purpose of formal evaluation. However, there is evidence that the way the process is outlined in most of them implicitly favours an orientation towards accountability. Therefore, the clauses relating to evaluation in British Columbia school district collective agreements present an austere view of personnel review, apart from a very few districts where professional growth plans are in place and the purpose of the process is clearly stated as growth and development. Principals, working within the confines of these collective agreements, clearly view the final report writing stage of an evaluation as problematic, as indeed they do the entire process of an evaluation leading to a "less than satisfactory" report. The study shows that few evaluations are conducted and only a very small percentage of these lead to "less than satisfactory" reports.

Many principals are likely to believe that both teacher growth and development and accountability for the quality of teaching are important. However, the vast majority of respondents in this study were able to distinguish one as more important than the other when asked to do so. The majority of principals believe the most important purpose to be teacher growth and development, which is a particularly important finding given the summative nature of the evaluation process in most school districts. A further distinction, with regard to purpose, can be drawn between male and female principals and between those with different levels of experience. Women are more likely to opt for growth and development than men; principals with more than ten years of experience are more likely to opt for accountability than their less experienced colleagues.

Time emerged as the single most important obstacle to the conduct of evaluation. This perception is born of the belief that principals are being asked to perform too many functions, resulting in an inability to perform some, such as formal evaluation, as well as they would like. However, this view of evaluation is qualified by the belief of many principals that they still carry out evaluation well.
This study shows that, generally, British Columbia principals do not consider that they need further training in formal evaluations leading to "satisfactory" reports. However, for final report writing and for evaluations leading to "less than satisfactory" reports, a greater need for training is expressed. The amount of time principals have spent in training varies considerably but, while it is impossible to comment on the quality of training, it is clear that on-going training in formal evaluation is available and is undertaken by principals.

Returning to the questions set out in the framework for the study in Chapter III, the following answers can be given:
a) The most important purpose of formal evaluation for the majority of principals is teacher growth and development.
b) The evaluation process is largely summative and geared more towards accountability for the quality of teaching.
c) Principals have received modest amounts of training in formal evaluation and the need for further training is limited to report writing and evaluations leading to "less than satisfactory" reports.
d) The most important obstacle to carrying out formal evaluation is lack of time.
e) More similarity than difference exists between the views of men and women principals on formal evaluation.
f) More similarity than difference exists between the views of experienced and less experienced principals on formal evaluation.

The objective of the study was to elicit the views of British Columbia principals about the formal evaluation of teaching. While it has achieved this objective, what emerges is an interesting view of the principal's role generally. The issue of time, which is bound up in this general view of the role of principal, implies much about the level of priority principals are willing or able to assign to the evaluation of the primary function of schools: teaching and learning.

Key Findings

1. The formal evaluation process in the vast majority of school districts is implicitly geared to accountability for the quality of teaching.
2. The majority of principals consider the most important purpose of formal evaluation to be teacher growth and development.
3. Principals place time as the most important obstacle to carrying out formal evaluation. Evaluation cycles appear to magnify the problem, but there is no direct link between staff size or teaching load and time as an obstacle. Clearly, principals perceive workload as a major contributing factor to time pressures.
4. Post-graduate degree and previous training have no bearing on the extent to which principals feel in need of further training in formal evaluation.
5. Principals who are not required to evaluate on a regular cycle, and those who are not bound by stated district evaluation criteria, write "less than satisfactory" reports more often than principals who do have to meet these requirements.
6. Few evaluations are conducted and only a very small percentage result in "less than satisfactory" reports.
7. Principals with less than six years' experience express a greater need for further training in formal evaluation than their more experienced colleagues.

Recommendations

Policy

It may be time for the Ministry of Education and individual school boards to re-assess their expectations of school principals. This re-assessment should focus on the balance between the bureaucratic responsibilities of principals and their role as educational leaders and instructional managers.
If this balance has swung too much in the direction of bureaucratic functions, the result may be a less than fully effective employment of the expertise and background in education that principals possess. This recommendation does not preclude principals also looking again at the priorities they set for themselves and examining their time management strategies. If roles and assignments are to be examined, it would be helpful to consider the opportunities available for introducing additional evaluators. Coming from an education system (and a slightly different school culture) where it is entirely acceptable for heads of department to 'evaluate' their departmental colleagues, and given the time pressures that principals speak of, spreading the workload of evaluation seems worthy of exploration. Finally, a re-assessment of the value of formal evaluation, as currently practised, would be timely. If sizeable numbers of principals are questioning the value of the process and even greater numbers attest to the difficulty of carrying out the role of evaluator, a concern is raised as to how effective formal evaluation can be in these circumstances.

Research

Many of the findings from this study are far from conclusive. More detailed investigation of the evaluation practices of British Columbia principals would be very interesting, especially with regard to the interaction of informal and formal methods. Further research into the gender difference in purpose identified in this study is necessary. Finally, it would also be interesting to see further study on the specific nature of the evaluation training offered by school districts, its take-up by principals, and the extent to which university and college master's degree programmes see this aspect of the principal's role as fundamental by making it required study.

References

Adams, R. D. (1982). Teacher development: A look at changes in teacher perception across time. Paper presented at the annual conference of the American Educational Research Association, New York. (ERIC DRS No. ED 214 926). Adams, R. D. & Martray, C. (1980). Correlates of teacher perceived problems. Paper presented at the annual conference of the Mid South Educational Research Association, New Orleans. (ERIC DRS No. ED 195 567). Airasian, P. W. (1993). Teacher assessment: Some issues for principals. NASSP Bulletin, 77, (Oct), 55-65. Alder, S., Laney, J., & Packer, M. (1993). Managing women: Feminism and power in educational management. Buckingham, UK: Open University Press. Allston, D., Rymhs, R., & Shultz, L. (1993). Effective teaching does make a difference. Alberta Journal of Educational Research, 39, (2), 191-203. Andrews, T., & Barnes, S. (1990). Assessment of teaching. In R. Houston (Ed.), Handbook of research on teacher education. New York: Macmillan. Antosz, J. L. A. (1990). Teacher evaluation in British Columbia as described in recent collective agreements: A comparative analysis of reported practice as recommended in the literature (Magistral major paper, University of British Columbia, 1990). Bailey, G. D. (1984). An evaluator's guide to diagnosing and analyzing teaching styles. NASSP Bulletin, 68, (469), 19-25. Ball, S. (1987). The micropolitics of the school: Towards a theory of school organisation. London: Methuen. Bartholomew, B. R. (1974). Teachers' instructional problems, 1974: NEA survey. Today's Education, 63, (3), 78-80. Beare, H. (1989, September 25). Educational administration in the 1990s.
Paper presented at the national conference of the Australian Council for Educational Administration, University of New England, Armidale, New South Wales, Australia. 162 Beck, L. G. & Murphy, J. (1993). Understanding the principalship: Metaphorical themes, 1920s - 1990s. New York: Teachers College Press. Black, S. (1993). How teachers are reshaping evaluation procedures. Educational Leadership, 51, (Oct), 38-42. Blumberg, A. & Greenfield, W. (1986). The effective principal (2nd ed.). Newton, MA: Allyn and Bacon. Bolton, D. L. (1980). Evaluating administrative personnel in school systems. New York: Teachers College Press. Bridges, E. (1986). The incompetent teacher. Philadelphia, PA: Falmer Press. Cangelosi, J. (1991). Evaluating classroom instruction. New York: Longman. Cherington, E. (1989). Information provided on the telephone from the Public Relations Department, B.C. Ministry of Education. Christensen, E. E. (1986). Teacher evaluation - who needs it? Roeper Review, 9, (1), 19-23. Clement, J. P., DiBella, C. M., Eckstrom, R. B., & Tobias, S. (1977). No room at the top? American Education, 13, (5), 20-23, 26. Cruickshank, D. R. (1974). Perceived problems of secondary school teachers. Journal of Educational Research, 68, (4), 154-159. Darling-Hammond, L. (1986). A proposal for evaluation in the teaching profession. Elementary School Journal, 86, (4), 531-551. Darling-Hammond, L., Wise, A., fit Pease, S. (1983). Teacher evaluation in the organizational context: A review of the literature. Review of Educational Research, 53, 285-328. Darnell, M. A. M. (1993). Self-concept and attitudes of selected Texas public school teachers resulting from the Texas teacher appraisal system (Doctoral dissertation, East Texas State University, 1994). Dissertation Abstracts International, 54/08, 2815. 163 Edgar, W. (1991, December 6). Exam results are never referred to. Times Educational Supplement, 3936, 24. Estosito, J. P., Smith, G. E. & Burbank, H. J. (1975). A delineation of the supervisory role. Education, 96, (Fall 1975), 63-67. Everard, K. B. & Morris, G. (1990). Effective school management. London: Paul Chapman. Everhart, R. B. (1988). Fieldwork methodology in educational administration. In Boyan, N. J. (Ed.), Handbook of research on educational administration. New York: Longman. Fishel, A. & Pottker, J. (1975). Performance of women principals: A review of behavioral and attitudinal studies. Journal of the National Association of Women Deans, Administrators, and Counselors, 38, (3), 110-117. Frasher, J. M. & Frasher, R. S. (1979). Educational administration: A feminine profession. Educational Administration Quarterly, 15, (2), 1-13. Freidson, E. (1972). Profession of medicine: A study of the sociology of applied knowledge. New York: Dodd Mead. Grambs, J. D. (1976). Women and administration: Confrontation or accommodation? Theory Into Practice, 15, (4), 293-300. Grobman, H. & Hines, V. A. (1956). What makes a good principal? NASSP Bulletin, 40, (223), 5-16. Gross, N. & Trask, A. E. (1964). Men and women as elementary school principals. (Final Report No. 2, Cooperative Research Project No. 853, Graduate School of Education, Harvard University, 1964). Gross, N. & Trask, A. E. (1976). The sex factor and the management of schools. New York: John Wiley and Sons. Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation. San Francisco: Jossey-Bass. Haefele, D. L. (1992). Evaluating teachers: An alternative model. Journal of Personnel Evaluation in Education, 5, 335-345. 164 Harris, B. M. & Monk, J. M. 
(1992). Personnel administration in education (3rd edit.). Toronto: Allyn and Bacon. Her Majesty's Stationery Office, (1991). The Education (School Teacher Appraisal) Regulations 1991. London: HMSO. Housego, I. (1989). Principals' evaluation of teachers in a British Columbia school district: Are the reports professional or bureaucratic documents? The Alberta Journal of Educational Research, 35, (3), 196-216. Hogben, D. (1981). The clinical mind: Some implications for educational research and teaching training. South Pacific Journal of Education, 10, (1). Huddle, G. (1985). Teacher evaluation - How important for effective schools? Eight messages from research. NASSP Bulletin, 69, (3), 58-63. Hummel, C. (1967). Tyranny of the urgent. Chicago, IL: Intervarsity Press. Hunter, M. (1985). What's wrong with Madeline Hunter? Educational Leadership, 42, (5), 57-60. Kauchak, D., Peterson, K., & Driscoll, A. (1985). An interview study of teachers' attitudes toward teacher evaluation practices. Journal of Research and Development in Education, 19, 32-37. Kelsey, G., Lupini, D., & Clinton, A. (1995). The effects of legislative change on the work of British Columbia school superintendents. Report presented to the annual meeting of the British Columbia School Superintendents' Association, Richmond, British Columbia. Langlois, D., & Colarusso, M. (1988). Improving teacher evaluation. The Education Digest, 54, (11), 13-15. Lower, M. (1987). A study of teachers' and principals' perceptions and attitudes toward the evaluation of teachers. Unpublished doctoral dissertation. The Ohio State University, Columbus, OH. Lusty, M. G. F. (1991). Teacher appraisal: Teachers' perceptions of an LEA teacher appraisal scheme and its implementation [London, teacher perceptions](Doctoral dissertation. Open University, 1993). Dissertation Abstracts International, 54/02, 354. Medley, D., Coker, H., fit Soar, R. (1984). Measurement based evaluation of teacher performance: An empirical approach. New York: Longman. Meskin, J. D. (1974). The performance of women school administrators - A review of the literature. Administrator's Notebook, 23, (1), 1-4. Morris, V. C, Crowson, R. L., Porter-Gehrie, C. fit Hurwitz, E. (1984). Principals in action: The reality of managing schools. Columbus, OH: Charles E. Merrill. Morrow, J. E. et al (1985). Improving teacher effectiveness: Perceptions of principals. Education, 105, (4), 385-390. Neville, M. (1988). Assessing and teaching language: Literacy and oracy in schools. Basingstoke, UK: MacMillan Education. Ozga, J. (1993). Women in educational management. Buckingham, UK: Open University Press. Page, J. A. St Page, F. M. (1985). Principals' perceptions of their role and the perceived effectiveness of their academic preparation. College Student Journal, 19, (1), 2-16. Peterson, D. (1986). Developing teacher evaluation systems with potential for increasing student performance. Educational Research Quarterly, 10, (2), 39-46. Pigford, A. B. fit Tonnsen, S. (1993). Women in school leadership: Survival and advancement guidebook. Lancaster, PA: Technomic Publishing. Poster, C, St Poster, D. (1993). Teacher appraisal: Training and implementation. London: Routledge. Province of British Columbia (1987). The Industrial Relations Act 1987. Victoria: Ministry of Education. Province of British Columbia (1987a). The Teaching Profession Act 1987. Victoria: Ministry of Education. Province of British Columbia (1995). 1994/1995 Public and Independent Schools Book. Victoria: Ministry of Education. 
166 Reavis, C. A. (1978). Teacher improvement through clinical supervision. Phi Delta Kappan, 584, Regan, H. B. & Brooks, G. H. (1995). Out of women's experience: creating relational leadership. Thousand Oaks, CA: Corwin Press. Rooney, J. (1993). Teacher evaluation: No more "super"vision. Educational Leadership, 51, (Oct), 43-44. Rossow, L. F. (1990). The principalship: Dimensions in instructional leadership. Englewood Cliffs, NJ: Prentice-Hall. Schonberger, V. L. (1986). The effective supervision of professional colleagues: Self-direction and professional growth. High School Journal, 69, (4), 248-254. Scriven, M. (1987). Validity in personnel evaluation. Journal of Personnel Evaluation in Education, 1, 9-23. Sergiovanni, T. J. (1977). Clinical supervision: A review of the research. Journal of Research and Development in Education, 9, 21. Sergiovanni, T. J. (1991). The principalship: A reflective practice perspective (2nd ed.). Needham Heights, MA: Allyn and Bacon. Shakeshaft, C. (1987). Women in educational administration. Newbury Park, CA: Sage. Shakeshaft, C. (1989). Women in educational administration. Newbury Park, CA: Sage. Sharp, W. L. & Walter, J. K. (1994). The principal as school manager. Lancaster, PA: Technomic. Smith, W. F. & Andrews, R. L.(1989). Instructional leadership: How principals make a difference. Alexandria, VA: Association for Supervision and Curriculum Development. Starratt, R. J. (1993). A modest proposal: Replace supervision with super-vision. The School Administrator, 50, (4), 35. Stodolsky, S. (1988). The subject matters. Chicago: University of Chicago Press. Storey, V. J. & Housego, I. (1980). Personnel supervision: A descriptive framework. The Canadian Administrator, 19, (6), 1-4. Sybouts, W. & Wendel, F. C. (1994). The training and development of school principals: A handbook. Westport, CT: Greenwood Press. Tibbetts, S. (1980). The woman principal: Superior to the male? In Far rant, P". A. (Ed.), Strategies and attitudes: Women in educational administration. Washington, DC: National Association of Women Deans, Administrators, and Counselors. Townsend, D. (1987). Components of a model of teacher evaluation. Education Canada, 27, (2), 24-30. Ubben, G. C. & Hughes, L. W. (1992). The principal: Creative leadership for effective schools. Needham Heights, MA: Allyn and Bacon. VanScriver, J. (1990). Teacher dismissals. Phi Delta Kappan, 72, 318-319. Webster, Sr., W. G. (1994). Learner-centered principalship: The principal as teacher of teachers. Westport, CT: Praeger. Withall, J. & Wood, F. (1979). Taking the threat out of classroom observation and feedback. Journal of Teacher Education, 30, (1), 55. Wood, C. J. (1992). Toward more effective teacher evaluation: Lessons from naturalistic inquiry. NASSP Bulletin, 76, (Mar), 52-59. Appendix A2 169 6. If you answered "Yes" or "In progress" to 5 above, what is your specialization in? • Educational Administration • Curriculum • Other (please specify) 7. How many years of experience (include the present year as one) do you have as: a) Principal? b) Vice Principal? PART B: CURRENT SCHOOL INFORMATION 8. What is your current Administrative Officer assignment? • Principal • Vice Principal • District Principal 9. What percentage of your official appointment is allocated to each of the following? a) Administration % b) Teaching % c) District % 10. Which of the following best describes your present school? 
• School enrolling only elementary grades (any grades from K-7) • School enrolling only secondary grades (any grades from 8-12) • School enrolling both elementary and secondary grades • I do not have a school assignment 11. What is the number of your school district? 12. How many teachers, including the principal, do you have on staff? (please report headcount and not FTE) PART C: ADMINISTRATIVE OFFICER AS A FORMAL EVALUATOR OF TEACHING This part of the questionnaire is about the formal evaluation of teaching. Formal evaluation of teaching means the evaluation process which takes place according to the provisions of the district collective agreement and/or legislation. This process results in the writing of a final report concluding that a teacher's classroom situation is either "satisfactory" or "less than satisfactory". 13. The formal evaluation of teaching is part of your responsibilities. Do you think it should be? • Yes • No • Not sure Appendix A3 170 14. What do you consider to be the most important purpose of the formal evaluation of teaching? (please check one response) • Teacher growth and development • Accountability for the quality of teaching • Other (please specify) 15. How well do you carry out the formal evaluation of teaching? Please check the most appropriate description below: Very Poorly Poorly Adequately Well Very Well • • • • • 16. Please indicate the duration and number of any in-service workshops, seminars, university courses (or components thereof), etc. that addressed the formal evaluation of teaching and which you have attended since September 1988: • One day or less Number attended • Between two days and one week Number attended • More than one week but less than one full term Number attended • One full university/college term Number attended 17. Please state, as accurately as possible, the total number of formal evaluations of teaching you have carried out since September 1988, and the number of those that resulted in "satisfactory" reports and "less than satisfactory" reports: Number of Formal Teaching Evaluations Number of "Satis factory" Reports Number of "Less than satisfactory" Reports 18. Please list, in rank order, what you consider to be the main obstacles (up to a maximum of 3) to your carrying out the formal evaluation of teaching, with # 1 being the greatest obstacle: 1. 2. 3. Appendix A4 171 19. This question deals with your views on different aspects of the formal evaluation of teaching and asks you to consider evaluations that result in a "satisfactory" report. Almost all collective agreements identify four phases in the formal teaching evaluation process. These are a) the pre-evaluation conference(s); b) classroom observations; c) post-observation conferences; and d) writing the final report. The four parts to this question each give a series of statements relating to these phases. You are asked to indicate your level of agreement with the statements. a) Pre-evaluation conference(s) (to discuss purpose, criteria, time-frame, etc.) For me, I consider this phase: Strongly Disagree Disagree Agree Strongly Agree I) Stressful Jty -Complex I) Time-consuming IV) I need more training b) Classroom observations For me, I consider this phase: Strongly Disagree Disagree Agree Strongly -Agree".' 
I) Stressful ' 1} Complex' : - " -l).TJrne-consurnlng IV) 1 need more training c) Post-observation conferences For me, 1 consider this phase: Strongly Disagree Disagree Agree Strongly Agree I) Stressful H) Complex I) Time-consuming IV) 1 need more training d) Writing the final report (including any discussions/feedback on draft report, etc.) For me, 1 consider this phase: Strongly Disagree Disagree Agree Strongly Agree'-'-' I) Stressful I) Complex 1) Time-consuming -IV) 1 need more training Appendix A5 172 20. N.B. Please ignore this question if you have never written a "less than satisfactory" report. This question deals with your views on different aspects of the formal evaluation of teaching and asks you to consider evaluations that result in a "less than satisfactory" report. The four parts to this question each give a series of statements relating to a different phase of formal evaluation. You are asked to indicate your level of agreement with the statements. a) Pre-evaluation conference(s) (to discuss purpose, criteria, time-frame, etc.) For me, 1 consider this phase: Strongly Disagree Disagree Agree Strongly . Agree I) Stressful 11) Complex I) Time-consuming IV) 1 need more training b) Classroom observations For me, 1 consider this phase: Strongly Disagree Disagree Agree -Strongly . .Agree I) Stressful J) Complex I) Time-consuming IV) 1 need more training c) Post-observation conferences For me, 1 consider this phase: Strongly Disagree Disagree Agree Strongly Agree I) Stressful S) Complex I) Time-consuming IV) 1 need more training d) Writing the final report (including any discussions/feedback on draft report etc.) For me, 1 consider this phase: Strongly Disagree Disagree Agree Strongly. Agree I) Stressful -I)' Complex . IH) Time-consuming IV) 1 need more training Appendix A6 173 21. If there are any additional points you would like to make regarding the formal evaluation of teaching please do so in the space below. Thank you for taking the time to participate in this project. Appendix B 174 EVALUATION PHASES IN THE COLLECTIVE AGREEMENTS The table below shows the requirement for certain phases in the formal evaluation of teaching, as contained in the seventy-five British Columbia school district collective agreements (see key for headings and symbols). 
School District Pre Evl Ob Pst Ob Fin Rep School District Pre Svl 0b Pst 0b Fin Rep School District Pre Evl 0b Pst 0b Fin Rep 1 i i ^1 31 X 1 X X 59 i ^1 i 2 i i ^1 i L 32 i i \| o 60 i i ^1 i o 3 i i •i i o 33 2 i i o 61 i i i i 4 i i 34 i i ^ o 62 i i i i 7 i i i <| o 35 i i i o 63NB i i i i 9 i i i i 36 i i i i 64 i i i i o 10 i i i i ° 37 i i i XT 65 i i i o X 11 i i i i 38 i i i 66NB i i ^1 12 i i i i o 39 i i i i o 68 i i i] o 13 i i i i o 40NB i i •i 69 i i i o i o 14 i i i i o 41NB i i i L 70 i i i 15 i i i i o 42 i i i i o 71 i i i o 16 i i i 43 i i i o 72 i i i o 17 i i i i o 44 i o i i ° 75 i i ^1 18 i i i 45 i i i i o 76 2 i i i o 19 i i i i 46 i i i j o 77 i i i o 21 i i i i o 47 i * i • i 80 i i i i 22 i i i ° 48 i i i i 81 X i X i o 23 i i i \| o 49 i i i o 84 i i i i 24 i i i o i 50 i i i \| o 85 i i i i 26 i i i i o 52 i i i i 86 i i X 27 i i \| o 54 i ) i 87 i i i o 28 i i i i o 55 i i i vJ O 88 i i i i o 29 i i 56 i i i ° 89 i i i v| O 30 i i i VJ O 57 i i i 1 92 i i i o KEY Headings: Pre Evl = Pre-evaluation conference Ob = Classroom observations Pst 0b = Post observation Conference Fin Rep = Final report conference Symbols: = This phase is stated in the collective agreement x = This phase is not stated in the collective agreement o  The opportunity for such a meeting must be made available * = "Process should be agreed" t  Second meeting is available to discuss process if necessary 0 = More than one, if necessary T  But "parties should try to agree on the report" L = For teachers who receive a "less than satisfactory" report NB =40: Provision for 'peer evaluation' 41: Four step 'professional growth plan' model 63,66: Provision for a 'short' report for excellent teachers Appendix C 175 PERMISSABLE DATA IN EVALUATION FINAL REPORT SOURCE OF DATA COLLECTIVE AGREEMENT SCHOOL DISTRICT NUMBER Classroom observation data only 1, 4, 10, 12, 18, 28, 32, 36, 37, 44, 52, 54, 60, 63, 75, 80, 88 TOTAL =17 Classroom observation data: • Primarily • Principally •Generally • Normally •Not necessarily 13, 14, 15, 17, 19, 30, 42 89 50, 61 77 40 TOTAL 12 Classroom observation data plus: •General performance -General contribution/ work of the teacher •Other pertinent/factual information/material •Other information •Observation of other required duties •Work directly related to teacher's assignment •Multiple sources of data •Not specified 65, 68, 71, 72 46, 64 49, 85 87, 92 66 21 24 70 TOTAL =14 Not stated 2, 3, 7, 9, 11, 22, 23, 26, 27, 29, 31, 33, 34, 35, 38, 39, 41, 43, 45, 47, 48, 55, 56, 57, 59, 62, 69, 76, 81, 84, 86 TOTAL = 32 Appendix D 176 EVALUATION CRITERIA AND CYCLES The table below shows the provision of evaluation cycles and criteria as contained in the seventy-five British Columbia school district collective agreements. School Dist. Cycle Crit eria. School Dist. Cycle Crit eria. School Dist. Cycle Crit eria. 
1 i i 31 X X 59 X X 2 i i 32 X 60 i i 3 X X 33 i 61 X X 4 i X 34 X X 62 X ^1 7 i X 35 X i 63 i 9 i i 36 i i 64 i i 10 i X 37 i i 65 i i 11 X i 38 i 66 i i 12 i X 39 X i 68 i 13 X i 40 X i 69 i i 14 X i 41 X X 70 i i 15 X i 42 i X 71 i i 16 X X 43 X X 72 X i 17 X X 44 X X 75 i i 18 i X 45 X X 76 i 19 X i 46 X i 77 X X 21 X X 47 X i 80 i i 22 i i 48 X i 81 X X 23 X 49 i i 84 i i 24 X X 50 i i 85 i i 26 X i 52 i X 86 X i 27 i i 54 i X 87 X i 28 X X 55 X i 88 i i 29 X X 56 X i 89 X X 30 X X 57 i i 92 i i KEY i = Stated x = Not stated Appendix E 177 SCHOOL DISTRICT NUMBERS, NAMES, AND SIZES SMALL (0-2,999)* MEDIUM (3,000-14,999)* LARGE (15,000+)* 03 -- Kimberley 01 - Fernie 23 -- Central Okanagan 04 -- Windermere 02 - Cranbrook 24 • - Kamioops 09 • - Castlegar 07 - Nelson 34 • - Abbotsford 10 • - Arrow Lakes 11 - Trail 35 -- Langley 12 • - Grand Forks 15 - Penticton 36 -- Surrey 13 • - Kettle Valley 22 - Vernon 37 • - Delta 14 -- Southern Okanagan 27 - Cariboo-Chilcotin 38 -- Richmond 16 • - Keremeos 28 - Quesnel 39 • - Vancouver 17 -- Princeton 33 - Chilliwack 41 -- Burnaby 18 -- Golden 40 - New Westminster 43 • - Coquitlam 19 • - Revelstoke 42 - Maple Ridge 44 • - North Vancouver 21 • - Armstrong-Spallumcheen 45 - West Vancouver 57 -- Prince George 26 • - North Thomson 46 - Sunshine Coast 61 -- Greater Victoria 29 -- Lillooet 47 - Powell River 68 • - Nanaimo 30 -- South Cariboo 48 - Howe Sound 31 -- Merritt 52 - Prince Rupert 32 -- Hope 54 - Bulkley Valley 49 -- Central Coast 56 - Nechako 50 -- Queen Charlotte 59 - Peace River South 55 -- Burns Lake 60 - Peace River North 64 -- Gulf Islands 62 - Sooke 66 -- Lake Cowichan 63 - Saanich 76 -- Agassiz-Harrison 65 - Cowichan 77 -- Summerland 69 - Qualicum 80 -- Kitimat 70 - Alberni 81 -- Fort Nelson 71 - Courtenay 84 -- Vancouver Island West 72 - Campbell River 85 -- Vancouver Island North 75 - Mission 86 -- Creston-Kaslo 88 - Terrace 87 -- Stikine 89 - Shuswap 92 -- Nisga'a *Student enrolments (individual school district assignations to district size are based on 1995 enrolments). Appendix F 178 SAMPLE EVALUATION ARTICLE FROM A BRITISH COLUMBIA SCHOOL DISTRICT COLLECTIVE AGREEMENT Article 5 Evaluation Of Teaching 5.1 Both the [local] Teachers' Association and the Board of School Trustees believe that students are best served when a high quality of classroom instruction and teaching performance is provided and maintained, and adequate assistance for teaching performance is provided. 5.2 All formal reports on the work of a teacher shall be in writing. 5.3 Before commencing observations, the evaluator shall meet with the teacher, discuss the purposes of the evaluation, the approximate time span and schedule of observations, and review the criteria to be applied in the evaluation and report writing process. 5.4 Not less than three (3) nor more than six (6) formal classroom observations which reflect the teacher's assignment, shall be conducted in completing the report process unless otherwise mutually agreed. 5.5 Periods chosen for observation shall be during normal periods of the school year and the teacher shall have the opportunity to select at least one third of the times. a) The evaluator shall provide the teacher with a written anecdotal statement at the end of each lesson observed. 5.6 Reports shall be prepared only by evaluators authorised under the School Act. 
5.7 The report shall reflect only the teaching and learning situation within the teacher's responsibility, unless other aspects of the teacher's work are requested to be recognised by the teacher concerned.

5.8 Any written report that is satisfactory and that identifies weaknesses shall include constructive suggestions for improvements. In this case, a teacher may request a plan of assistance from the employer.

5.9 Except in the case of a final, less than satisfactory report, the employer, in consultation with the teacher, shall develop a plan of assistance. At this meeting the teacher has the right to be accompanied by a member of the association.

5.10 Except under extraordinary circumstances where a plan of assistance is underway, formal evaluation will be postponed until the plan of assistance is completed.

5.11 The teacher shall be given a draft copy of a report at least forty-eight (48) hours prior to preparation of the final copy. He/she shall have the opportunity of meeting with the evaluator, in the company of a member of the association, to discuss the draft.

5.12 The final report shall be filed in the teacher's personnel file. A copy shall be given to the teacher at the time of filing.

5.13 The teacher shall have the right to submit to the evaluator (within one week of receiving the final report) a written commentary on the report, which shall be filed with all copies of the report.

Appendix G
SUMMARY OF RESPONSE FREQUENCIES

The following summary of response frequencies is organised in the same order as the questions on the questionnaire. Two abbreviations are used in the later items: SR = an evaluation leading to a "satisfactory" report; LTSR = an evaluation leading to a "less than satisfactory" report. In the summaries below, frequencies are reported for each response category, with valid percentages (missing cases excluded) in parentheses where shown.

Respondent sex (valid cases 187, missing 1)
Male 136 (72.7%); Female 51 (27.3%)

Respondent age (valid cases 188, missing 0)
30-34 years 2 (1.1%); 35-39 years 5 (2.7%); 40-44 years 33 (17.6%); 45-49 years 65 (34.6%); 50-54 years 48 (25.5%); 55-59 years 32 (17.0%); 60-65 years 3 (1.6%)

Master's specialisation (valid cases 169, missing 19)
Administration 111 (65.7%); Curriculum 25 (14.8%); Other 33 (19.5%)

Doctoral specialisation (valid cases 9, missing 179)
Administration 7 (77.8%); Other 2 (22.2%)
Experience as a principal, in years (valid cases 187, missing 1)
1 year 16; 2 years 12; 3 years 11; 4 years 10; 5 years 9; 6 years 15; 7 years 14; 8 years 10; 9 years 8; 10 years 11; 11 years 3; 12 years 6; 13 years 4; 14 years 7; 15 years 4; 16 years 8; 17 years 7; 18 years 8; 19 years 4; 20 years 8; 21 years 6; 22 years 2; 25 years 1; 26 years 1; 27 years 1; 35 years 1

Teaching load as a percentage of assignment (valid cases 185, missing 3)
0% 83; 3% 1; 5% 6; 8% 1; 10% 12; 12% 5; 13% 1; 15% 2; 16% 2; 17% 2; 20% 27; 22% 1; 24% 1; 25% 1; 30% 10; 37% 1; 40% 8; 50% 12; 60% 4; 65% 1; 70% 3; 80% 1

Type of school (valid cases 188, missing 0)
Elementary 135 (71.8%); Secondary 42 (22.3%); Both 11 (5.9%)

Staff size (valid cases 185, missing 3)
[Table: distribution of staff sizes reported by respondents, ranging from 2 to 99 teachers.]

Should evaluation be done by principals? (valid cases 187, missing 1)
Yes 181 (96.8%); Not sure 6 (3.2%)

Purpose of evaluation (valid cases 182, missing 6)
Growth and development 104 (57.1%); Accountability 72 (39.6%); Other 6 (3.3%)

How well do you do evaluation? (valid cases 184, missing 4)
Poorly 12 (6.5%); Adequately 60 (32.6%); Well 81 (44.0%); Very well 31 (16.8%)
Number of one-day courses taken since September 1988 (valid cases 188, missing 0)
0 courses 90 (48%); 1 course 27 (14%); 2 courses 22 (12%); 3 courses 11 (6%); 4 courses 10 (5%); 5 courses 15 (8%); 6 courses 6 (3%); 7 courses 1 (1%); 8 courses 1 (1%); 9 courses 1 (1%); 10 courses 3 (2%); 12 courses 1 (1%)

Number of two-day courses taken since September 1988 (valid cases 188, missing 0)
0 courses 97 (52%); 1 course 32 (17%); 2 courses 32 (17%); 3 courses 14 (7%); 4 courses 6 (3%); 5 courses 4 (2%); 6 courses 3 (2%)

Number of one-week courses taken since September 1988 (valid cases 188, missing 0)
0 courses 164 (87%); 1 course 12 (6%); 2 courses 6 (3%); 3 courses 2 (1%); 4 courses 1 (1%); 5 courses 1 (1%); 10 courses 2 (1%)

Number of one-term courses taken since September 1988 (valid cases 188, missing 0)
0 courses 152 (81%); 1 course 31 (16%); 2 courses 4 (2%); 3 courses 1 (1%)

Number of training 'points' accumulated since September 1988
[Table: distribution of training points, ranging from 0 to 50.]

Number of evaluations conducted since September 1988 (valid cases 184, missing 4)
[Table: distribution of the number of evaluations conducted, ranging from 0 to 99.]

Number of "less than satisfactory" reports written since September 1988 (valid cases 185, missing 3)
0 reports 114 (62%); 1 report 46 (25%); 2 reports 16 (9%); 3 reports 6 (3%); 4 reports 1 (1%); 5 reports 1 (1%); 6 reports 1 (1%)

Most important obstacle to conducting formal evaluation (valid cases 184, missing 4)
None 4 (2.2%); Time 120 (65.2%); Union 7 (3.8%); Criteria 4 (2.2%); Collective agreement 22 (12.0%); Process 4 (2.2%); Teacher acceptance 7 (3.8%); Lack of cycle 2 (1.1%); Stress 1 (0.5%); District expectations 5 (2.7%); Cycle 1 (0.5%); Training 2 (1.1%); Lack of experience 1 (0.5%); Unagreed on purpose 1 (0.5%); My own biases 1 (0.5%); Other 2 (1.1%)

Second most important obstacle to conducting formal evaluation (valid cases 116, missing 72)
Time 24 (20.7%); Union 5 (4.3%); Criteria 4 (3.4%); Collective agreement 23 (19.8%); Process 17 (14.7%); Teacher acceptance 11 (9.5%); Subject knowledge 4 (3.4%); Lack of cycle 1 (0.9%); Stress 5 (4.3%); District expectations 6 (5.2%); Cycle 1 (0.9%); Training 4 (3.4%); Lack of experience 2 (1.7%); Other 9 (7.8%)
Third most important obstacle to conducting formal evaluation (valid cases 66, missing 122)
Time 8 (12.1%); Union 6 (9.1%); Criteria 1 (1.5%); Collective agreement 12 (18.2%); Process 8 (12.1%); Teacher acceptance 11 (16.7%); Stress 4 (6.1%); Ministry expectations 2 (3.0%); District expectations 3 (4.5%); Training 3 (4.5%); Unagreed on purpose 1 (1.5%); Other 7 (10.6%)

SR - Pre-evaluation conference is stressful (valid cases 186, missing 2): Strongly disagree 83 (44.6%); Disagree 90 (48.4%); Agree 13 (7.0%)

SR - Pre-evaluation conference is complex (valid cases 185, missing 3): Strongly disagree 52 (28.1%); Disagree 84 (45.4%); Agree 44 (23.8%); Strongly agree 5 (2.7%)

SR - Pre-evaluation conference is time-consuming (valid cases 187, missing 1): Strongly disagree 24 (12.8%); Disagree 47 (25.1%); Agree 95 (50.8%); Strongly agree 21 (11.2%)

SR - Pre-evaluation conference requires more training (valid cases 183, missing 5): Strongly disagree 62 (33.9%); Disagree 83 (45.4%); Agree 33 (18.0%); Strongly agree 5 (2.7%)

SR - Classroom observation is stressful (valid cases 185, missing 3): Strongly disagree 79 (42.7%); Disagree 93 (50.3%); Agree 11 (5.9%); Strongly agree 2 (1.1%)

SR - Classroom observation is complex (valid cases 184, missing 4): Strongly disagree 41 (22.3%); Disagree 48 (26.1%); Agree 69 (37.5%); Strongly agree 26 (14.1%)

SR - Classroom observation is time-consuming (valid cases 187, missing 1): Strongly disagree 11 (5.9%); Disagree 19 (10.2%); Agree 95 (50.8%); Strongly agree 62 (33.2%)

SR - Classroom observation requires more training (valid cases 182, missing 6): Strongly disagree 44 (24.2%); Disagree 76 (41.8%); Agree 55 (30.2%); Strongly agree 7 (3.8%)
SR - Post observation phase is stressful (valid cases 185, missing 3): Strongly disagree 50 (27.0%); Disagree 87 (47.0%); Agree 46 (24.9%); Strongly agree 2 (1.1%)

SR - Post observation phase is complex (valid cases 185, missing 3): Strongly disagree 32 (17.3%); Disagree 50 (27.0%); Agree 90 (48.6%); Strongly agree 13 (7.0%)

SR - Post observation phase is time-consuming (valid cases 186, missing 2): Strongly disagree 16 (8.6%); Disagree 34 (18.3%); Agree 98 (52.7%); Strongly agree 38 (20.4%)

SR - Post observation phase requires more training (valid cases 180, missing 8): Strongly disagree 42 (23.3%); Disagree 74 (41.1%); Agree 55 (30.6%); Strongly agree 9 (5.0%)

SR - Final report writing phase is stressful (valid cases 185, missing 3): Strongly disagree 35 (18.9%); Disagree 74 (40.0%); Agree 61 (33.0%); Strongly agree 15 (8.1%)

SR - Final report writing phase is complex (valid cases 185, missing 3): Strongly disagree 20 (10.8%); Disagree 25 (13.5%); Agree 106 (57.3%); Strongly agree 34 (18.4%)

SR - Final report writing phase is time-consuming (valid cases 187, missing 1): Strongly disagree 5 (2.7%); Disagree 6 (3.2%); Agree 83 (44.4%); Strongly agree 93 (49.7%)

SR - Final report writing phase requires more training (valid cases 183, missing 5): Strongly disagree 35 (19.1%); Disagree 80 (43.7%); Agree 57 (31.1%); Strongly agree 11 (6.0%)

LTSR - Pre-evaluation conference is stressful (valid cases 86, missing 102): Strongly disagree 11 (12.8%); Disagree 28 (32.6%); Agree 33 (38.4%); Strongly agree 14 (16.3%)

LTSR - Pre-evaluation conference is complex (valid cases 86, missing 102): Strongly disagree 9 (10.5%); Disagree 20 (23.3%); Agree 33 (38.4%); Strongly agree 24 (27.9%)

LTSR - Pre-evaluation conference is time-consuming (valid cases 86, missing 102): Strongly disagree 7 (8.1%); Disagree 11 (12.8%); Agree 39 (45.3%); Strongly agree 29 (33.7%)
LTSR - Pre-evaluation conference requires more training (valid cases 82, missing 106): Strongly disagree 19 (23.2%); Disagree 34 (41.5%); Agree 18 (22.0%); Strongly agree 11 (13.4%)

LTSR - Classroom observation is stressful (valid cases 86, missing 102): Strongly disagree 8 (9.3%); Disagree 24 (27.9%); Agree 37 (43.0%); Strongly agree 17 (19.8%)

LTSR - Classroom observation is complex (valid cases 86, missing 102): Strongly disagree 5 (5.8%); Disagree 15 (17.4%); Agree 42 (48.8%); Strongly agree 24 (27.9%)

LTSR - Classroom observation is time-consuming (valid cases 86, missing 102): Strongly disagree 3 (3.5%); Disagree 6 (7.0%); Agree 37 (43.0%); Strongly agree 40 (46.5%)

LTSR - Classroom observation requires more training (valid cases 82, missing 106): Strongly disagree 18 (22.0%); Disagree 28 (34.1%); Agree 28 (34.1%); Strongly agree 8 (9.8%)

LTSR - Post observation phase is stressful (valid cases 86, missing 102): Strongly disagree 4 (4.7%); Disagree 4 (4.7%); Agree 34 (39.5%); Strongly agree 44 (51.2%)

LTSR - Post observation phase is complex (valid cases 86, missing 102): Strongly disagree 3 (3.5%); Disagree 4 (4.7%); Agree 29 (33.7%); Strongly agree 50 (58.1%)

LTSR - Post observation phase is time-consuming (valid cases 86, missing 102): Strongly disagree 4 (4.7%); Disagree 7 (8.1%); Agree 29 (33.7%); Strongly agree 46 (53.5%)

LTSR - Post observation phase requires more training (valid cases 82, missing 106): Strongly disagree 16 (19.5%); Disagree 19 (23.2%); Agree 32 (39.0%); Strongly agree 15 (18.3%)

LTSR - Final report writing phase is stressful (valid cases 86, missing 102): Strongly disagree 1 (1.2%); Disagree 6 (7.0%); Agree 36 (41.9%); Strongly agree 43 (50.0%)

LTSR - Final report writing phase is complex (valid cases 86, missing 102): Strongly disagree 1 (1.2%); Disagree 4 (4.7%); Agree 26 (30.2%); Strongly agree 55 (64.0%)
LTSR - Final report writing phase is time-consuming (valid cases 86, missing 102): Strongly disagree 1 (1.2%); Agree 28 (32.6%); Strongly agree 57 (66.3%)

LTSR - Final report writing phase requires more training (valid cases 83, missing 105): Strongly disagree 15 (18.1%); Disagree 19 (22.9%); Agree 23 (27.7%); Strongly agree 26 (31.3%)
