UBC Theses and Dissertations
Interpreting the Fraser Institute ranking of secondary schools in British Columbia : a critical discourse… Simmonds, Michael John 2012



Full Text

INTERPRETING THE FRASER INSTITUTE RANKING OF SECONDARY SCHOOLS IN BRITISH COLUMBIA: A CRITICAL DISCOURSE ANALYSIS OF HOW THE MECHANICS OF SYMBOLIC CAPITAL MOBILIZATION SHAPES, MANAGES, AND AMPLIFIES VISIBILITY ASYMMETRIES BETWEEN SCHOOLS AND SCHOOL SYSTEMS

by

Michael John Simmonds

M.Ed., Columbia University, 1998
M.A., McGill University, 1991
Diploma in Secondary Science Education, McGill University, 1989
B.P.E., University of New Brunswick, 1985

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF EDUCATION in The Faculty of Graduate Studies (Educational Leadership and Policy)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

February 2012

© Michael John Simmonds, 2012

Abstract

In the discourse on how to improve British Columbia’s secondary schools, a prevailing epistemological tension exists between two competing rationalities: (1) an instrumental rationality that privileges sense-making born out of data-gathering, and (2) a values-rationality that is discernibly more context-dependent. The seeds for public discord are sown when a particular kind of logic for capturing the complexity of any problematic is privileged over a competing (counter) logic attempting to do the same thing. The Fraser Institute proposes to the public a particular vision of how to improve secondary schools by manufacturing annual school report cards that are published in newspapers and online. Proponents of school report cards believe that school improvement is predicated on measurement, competition, market-driven reform initiatives, and choice. They support the strategies and techniques used by the Fraser Institute to demarcate the limits and boundaries of exemplary educational practice. Critics of school report cards object to the way ranking rubrics highlight and amplify differences that exist between schools.
They believe that the rankings devised by the Fraser Institute reward certain kinds of schools while statistically sanctioning others. Drawing principally on published media accounts and the Fraser Institute’s own documents, this project shows how the Fraser Institute has mounted an effective public critique on the state of public secondary schools. It describes how statistical revisions made to the ranking matrix from 1998-2010 resulted in a marked redistribution of top-ranked schools in British Columbia that privileged certain kinds of private schools over public schools. School rankings designed to locate and fix their respective subjects in this way call on agents to compete for, acquire, and leverage different kinds of symbolic capital on the field of power, which they use to promote their respective political agendas. When the kinds of stories that can be told about schools become narrated through a statistical régime of truth they may negate capital disparities that exist between schools and the populations of students they serve. At stake is the emancipatory belief that different kinds of schools operate to serve the diverse educational needs of different kinds of students in different kinds of ways.

Table of Contents

Abstract ..... ii
Table of Contents ..... v
List of Tables ..... vi
List of Figures ..... vii
Acknowledgements ..... viii
CHAPTER 1: School Rankings Contextualized ..... 1
The Problem and its Significance ..... 7
What’s at Stake? ..... 11
Problematic ..... 13
Polemical Models ..... 16
Tout Court—“The Only World” ..... 18
Dissertation Roadmap ..... 25
CHAPTER 2: Theoretical Framework ..... 29
Introduction ..... 29
Foucault and School Rankings ..... 33
The Panopticon ..... 39
Systems of Accountability ..... 41
Disclosure ..... 42
Transparency ..... 42
Redress ..... 44
Bourdieu and School Rankings ..... 45
Limitations of Foucault and Bourdieu in Explaining School Rankings ..... 53
An Integrative Approach of Foucault and Bourdieu ..... 55
Research Questions ..... 61
CHAPTER 3: Methodology ..... 63
Research Design—Case Study ..... 63
Triangulation ..... 64
Data Gathering ..... 65
Critical Discourse Analysis ..... 66
Data Sources Contextualized ..... 73
Limitations of Critical Discourse Analysis ..... 76
CHAPTER 4: A Changing School Ranking Rubric ..... 78
Introduction ..... 78
The Epistemology of Seeing ..... 80
Iteration #1 (1998-2000): Five Key Performance Indicators ..... 84
Descriptive Indicators: Enrolment Data, Trend Indicators, and Parents‘ Education ..... 95
Iteration #2 (2001-2002): Gender Matters ..... 100
Descriptive Indicator: Dropout Rate ..... 116
Iteration #3 (2003-2006): Refining the Student Cohort ..... 118
Descriptive Indicator: Extracurricular Activities ..... 124
Iteration #4 (2007): A Revised Graduation Program ..... 126
Iteration #5 (2008-2010): Revised University Admission Policy Changes ..... 128
Conclusion ..... 135
CHAPTER 5: Discursive Practices and the Mechanics of Capital Mobilization ..... 138
Introduction ..... 138
Knowledge Discourses ..... 139
Common Sense Discourses ..... 152
Expanding the Surveillance Gaze ..... 154
Alberta School Rankings ..... 154
Elementary School Rankings ..... 156
New Brunswick School Rankings ..... 160
Aboriginal Report Card ..... 161
School Improvement Discourses ..... 164
Economic Discourses ..... 169
Coded Discourses ..... 172
Action Discourses and the Politics of Space ..... 173
Local Capital Acquisition ..... 181
International Capital Acquisition ..... 183
The Beacon Schools Initiative ..... 189
Conclusion ..... 190
CHAPTER 6: Manufacturing Dissent ..... 195
Dissertation Synopsis ..... 195
A Critical Overview of Major Findings ..... 199
Disciplinary Power is Exercised Through Published School Report Cards ..... 199
Technologies of Representation Have Surveillance and Visibility at its Core ..... 203
Agents Deploy Language in Ways that Mediate and Reposition Relationships of Power ..... 207
Capital is Acquired Through Storytelling, Coalition Building, and Policy Borrowing ..... 210
Devolution and Choice in England, New Zealand, and the United States ..... 216
Reflections on Case Study Research ..... 220
Personal Reflections ..... 224
Bibliography ..... 236
APPENDICES ..... 253
Appendix A: Decile Range Table ..... 253
Appendix B: Timeline of Fraser Institute publications and changing political landscape in British Columbia (Spring 1998-Summer 2009) ..... 254
Appendix C: Percentile rankings for British Columbia schools (Iterations #1 & 2) ..... 257
Appendix D: Percentile rankings for British Columbia schools (Iterations #3-5) ..... 258
Appendix E: Alphabetized list of school acronyms identified in Appendix F ..... 259
Appendix F: Top BC schools ranked by Fraser Institute (1998-2010) ..... 260
Appendix G: Member schools of ISABC ..... 263

List of Tables

Table 1: Documents used for critical discourse analysis ..... 72
Table 2: Changing iterations of the Fraser Institute ranking rubric (1998-2010) ..... 81
Table 3: Relative percentage weights of KPIs for iteration #1 ..... 85
Table 4: First school ranking table published for YHS (1998) ..... 86
Table 5: Second school ranking table published for YHS (1999) ..... 100
Table 6: Relative percentage weights of KPIs for iteration #2 ..... 102
Table 7: Relative percentage weights of KPIs from iteration #1 to iteration #2 ..... 105
Table 8: Percentage distribution of public (PU) and private (PV) schools for iterations #1 and #2 ..... 109
Table 9: Schools attaining a score of ten on the Fraser Institute ranking (1998-2010) ..... 114
Table 10: Percentage of schools ranked in the top two decile ranges for iterations #2 and #3 ..... 122
Table 11: Relative percentage weights of KPIs for iteration #3 ..... 124
Table 12: Relative percentage weights of KPIs for iteration #4 ..... 127
Table 13: Relative percentage weights of KPIs for iteration #5 ..... 131

List of Figures

Figure 1: Discursive practices (DP) emerge from the conflation of knowledge (K), truth (T), and language (L) ..... 56
Figure 2: Discursive practices (DP) are shaped by habitus ..... 57
Figure 3: Agents acquire capital (C) on the field of power ..... 58
Figure 4: Analyzing the school accountability game through an approach that integrates Foucault and Bourdieu ..... 60
Figure 5: Number of 'top' ranked public and private schools for iterations #1 and #2 ..... 110
Figure 6: Kitsilano Secondary's overall school ranking for iterations #1 and #2 ..... 111
Figure 7: York House School's overall school ranking for iterations #1 and #2 ..... 112
Figure 8: Percent annual growth in student enrolment for private schools (1997-2010) ..... 175
Figure 9: Percent annual growth in student enrolment for public schools (1997-2010) ..... 176

Acknowledgements

I have my good friend and former York House School colleague, Carolyn Levy, to thank for insisting that I apply to the doctoral studies program in the spring of 2004. I didn’t know it at the time, but her persistent encouragement resulted in my undertaking—what has turned out to be—one of the most challenging, frustrating, and rewarding experiences of my adult life. I didn’t expect the journey would take this long to complete. Thank you to members of the (2004) EDD Cohort who began this extended journey of scholarship and self-discovery with me. They are: Lyn Daniels, Peter Froese, Bill Koty, Stephen Kozey, Judith McGillivray, Val Peachy, Jeanette Robertson, Trish Rosborough, and Marilynne Waithman. Their collective good-natured and nurturing spirit helped make the first two years of this odyssey fun. Thanks especially to Gwen Dueck and Kathryn MacLeod, whose unwavering support and friendship helped me navigate the setbacks and disappointments that punctuated my experience as a doctoral student. I’d like to congratulate my colleagues within this group who have completed their work and extend my best wishes for a successful outcome to those who are still engaged in the adventure. I would also like to thank my professors for their excellent teaching. They are: Dr. Shauna Butterwick, Dr. David Coulter, Dr. Mary Daniels, Dr. Jo-Anne Dillabough, Dr. Mona Gleason (my pro tem adviser), Dr. Garnet Grosjean, Dr. Dan Pratt, Dr. Tom Sork, and Dr. Daniel Vokey.
Without exception, every one of them contributed to my completing this project in some way, and they all helped me understand the importance of thinking critically and reflexively throughout the two years that defined the course work part of the program. Thanks especially to Dr. Michelle Stack and Dr. Taylor Webb who, in addition to teaching me policy and accountability courses respectively, also served on my advisory committee. Their focused and critical insights helped me immeasurably to complete this project. I also need to acknowledge the work of Peter Cowley and Dr. Stephen Easton of the Fraser Institute, whose contributions to the discourse surrounding education in 1998 sparked a national debate that continues to this day. Their perspective on what matters in education has made my project possible. Of course I am grateful for the enduring love, support, and interest my family has consistently demonstrated in my personal and professional lives. They very kindly and skillfully struck an appropriate balance of knowing when to ask about the progress I was making on my “paper”, and when not to. Thank you, Bob, Mary, Paul, Linda, Kelli, Jamie, Tara, Clark, Andrea, and Faye. My partner, Steve Wilson, has also been extraordinarily supportive and patient throughout this endeavor. Not only has he turned a blind eye to the columns of paper that have engulfed our collective workspace over the years, but he has feigned interest in wanting to learn more about what Michel Foucault and Pierre Bourdieu thought about anything. Finally, and most importantly, I would like to acknowledge the efforts of my principal advisor, Dr. André Mazawi. Throughout this project André was uncompromisingly discerning, rigorous, and—above all—consistently fair-minded in his appraisal of my work. André challenged me to think, reflect, and write in ways I never imagined possible. At the same time, however, André honoured the direction and course I wanted my project to take.
I will always appreciate André’s commitment, exactitude, humour, kindness, patience, compassion, and razor-sharp intellect. He is an exceptional person and a master teacher. Thank you for everything!

CHAPTER 1: School Rankings Contextualized

In the spring of 2006, Peter Cowley—the Director of School Performance Studies at the Fraser Institute—addressed a room full of teachers and administrators at Vancouver’s Arbutus Club. As one of two principal authors of the ‘Report Card on Secondary Schools in British Columbia and the Yukon’, he described the public’s thirst for holding schools accountable (Cowley & Easton, 2006). Cowley explained the genesis of the controversial Fraser Institute ranking of public and private schools that he coauthored with Dr. Stephen Easton by relating a personal story. In the mid-1990s, Mr. Cowley wanted to get information about how students in his daughter’s public high school had performed on standardized provincial examinations. He described how the principal of his daughter’s school refused to share the information, and the frustration he felt as a concerned parent being denied access to it. (The principal explained that school exam results were confidential and not for public consumption.) Cowley talked about how a sympathetic teacher secretly provided him with the information he wanted, and how he felt empowered as a result—Peter could finally make a personal assessment about the overall quality of his daughter’s public high school. He talked about how he was approached by the Fraser Institute to develop a school report card that helped British Columbia parents do precisely what Cowley had difficulty doing for himself—assess the educational experience of students attending British Columbia’s high schools. Peter talked about accepting that challenge and how a concerned parent suddenly had access to the intellectual, financial, and human capital of an advocacy think tank with political clout.
He talked about the first ‘Report Card’ being published in ‘The Province’ newspaper in the spring of 1998, and the maelstrom of controversy that accompanied it—everyone, it seemed, had an opinion (Cowley, Easton, & Walker, 1998; Proctor, 1998a, 1998b, 1998c, 1998d, 1998e). He talked about how the five “Key Performance Indicators” (hereafter: KPIs) that informed the first published ranking in 1998 had grown to eight in 2006, and why vocal critics of the report should perceive today’s ranking as a more nuanced, sensitive, and statistically relevant incarnation. He talked about the ‘Garfield Weston Awards for Excellence in Education’ luncheon hosted by the Fraser Institute every spring, at which deserving principals were honored for having achieved marked improvements in their school’s overall ranking from the previous year, and the pride each of them felt for being recognized in this way. Peter Cowley talked passionately about education and I was left with the impression that he was committed to improving the educational experience of students in British Columbia. My observation, however, was that in talking about education that evening, Mr. Cowley’s position was anchored in a particular kind of instrumental rationality that troubled his critics because it reduced schools to measurable outcomes—exclusively. Critics of the ranking challenged Cowley to consider the obstacles educators faced in meeting the diverse needs of a diverse student population, and they objected to the complexity of school systems being reduced to KPIs. Schools were not homogeneous places, and they reasoned that it made no sense for the Fraser Institute to compare different kinds of schools that served different kinds of students. Apparently sympathetic to the concerns being expressed by hardworking teachers that evening, Cowley defended the logic of the ranking and explained why it was useful—it helped parents make informed choices about where to send their children to school.
He described how each of the KPIs included in the report had been taken from data the Ministry collected on students and schools, and he noted the role teachers played in setting the provincial exams. The evening concluded with Peter challenging his critics assembled in the audience to provide an alternative to the ranking, or to suggest ways that it could be improved. His closing remark, however, made it clear why the ranking would continue to generate controversy: “If people think we have a narrow focus, give us more data.”1 Cowley’s comment strikes at the core of the ranking debate because it is predicated on the presumption that school ranking instruments (like the one developed by the Fraser Institute) are key to improving schools. It also invites school ranking critics to engage the Fraser Institute on its own epistemological terrain—a terrain defined by standardization, performativity, and the use of measurement. It occurred to me then that if the voices of British Columbia’s students were heard at all, the reactions to Peter’s address at the dinner were evidence they were being articulated within a polemical discourse. It seemed that two giant tectonic plates of truth collided that evening and the controversy surrounding the ranking was the end result. And while Cowley was open to the possibility of expanding the focus of his ranking by including more data, his critics discounted the stories the Fraser Institute told about schools as they were narrated by statistical discourses that did not include a “misery index” (Kozol, 2005, p. 51).

1 Personal notes made by Michael Simmonds while attending a PDK-UBC Chapter dinner meeting at the Arbutus Club, Vancouver, April 19, 2006.
They argued the voices of students should be heard outside the boundaries imposed by the Fraser Institute, and their position had been well documented in published media accounts printed on the pages of British Columbia’s regional and provincial newspapers (Beyer, 2000; Editorial, 1999; Hughes, 2005; Johal, 2001a, 2001b; Knox, 2005; Masleck, 2000; McDonnell, 2005; Proctor, 1998a; Steffenhagen, 2002b, 2003c, 2004a). As the evening unfolded, I was struck by an interesting paradox that I believed was taking place in the room. It occurred to me that were it not for the publication of the Fraser Institute ranking in a provincial newspaper every spring, the dinner (and the heated conversations that informed the evening) would not be happening in this very public way. If nothing else, Cowley’s work sparked a dialogue within the field of education about the purpose and nature of schooling, and the filled-to-capacity room was evidence that it was a conversation worth having. It was within this space that student voices were articulated through a story told by high school rankings. And it was within the same space that student voices were articulated through the stories told by teachers working in low-, medium-, and high-ranked schools. I wondered if these stories had ever been told before, and if they had, from whose perspective. Whose voices were included, silenced, marginalized, discounted, hidden, and amplified? I wondered what kinds of stories ranking discourses told about students, teachers, and schools. But mostly, I wondered what (or whose) purpose they served.

When I left the dinner I began to reflect on an important question that framed the conversations about the school accountability movement that emerged at the Arbutus Club dinner—by what techniques are truths about student achievement told in British Columbia, and why do prevailing truths seem to be anchored in a particular kind of rationality that Peter Cowley represented that evening?
My questions at the time were triggered by my fourteen-year professional practice working at York House School (YHS), an independent, all-girls K-12 school founded in 1932 in Shaughnessy—one of Vancouver’s wealthiest neighbourhoods. YHS belongs to a number of organizations that include: the Independent School Association of British Columbia (ISABC); the Federation of Independent Schools (FISA); Canadian Accredited Independent Schools (CAIS); the National Association of Independent Schools (NAIS); and the National Coalition of Girls’ Schools (NCGS). York House is classified as a Group 2 school in the Independent School Act of British Columbia.

“Group 2 schools receive 35% of their local school district’s per student operating grant on a per full-time equivalent (FTE) student basis. They employ BC certified teachers, have educational programs consistent with ministerial orders, provide a program that meets the learning outcomes of the British Columbia curriculum, meet various administrative requirements, maintain adequate educational facilities, and comply with municipal and regional district codes.” (British Columbia Ministry of Education of Independent Schools, 2010)

York House School (YHS) is a college-preparatory day school that has a selective admissions policy for all prospective students. It has a population of six hundred students, and every one of its (approximately fifty-five) graduating students gets accepted to highly competitive post-secondary institutions throughout the world. On average, parents spend about $14,000 (after tax) on annual tuition, and it is not uncommon for students to have spent twelve or thirteen years at the school. YHS employs a team of full-time fundraising development professionals whose sole task is to raise money for the school to assist in meeting (and exceeding) the operating and capital costs of running a mission-driven, independent school.
Annual giving campaigns from parents bring in approximately $500,000 per year. Its annual budget is approximately $7.5 million. YHS is one of three all-girls private schools some families living in Vancouver might consider as being a best-fit school for their daughter. There are also a number of other independent (K-12) co-ed schools in the city from which some families may choose. What is relevant to note at this juncture, however, is that YHS is endowed with high levels of financial, human, social, and cultural capital that—taken together—make it a relatively resource-rich school in comparison to most other British Columbian schools. As well, it is important to note that YHS is enmeshed within a network of power relations that spans provincial, national, and international boundaries. The ISABC, FISA, CAIS, NAIS, and NCGS are different kinds of organizations that have in common the belief that mission-driven independent (and private) schools best serve the diverse educational needs of students. When the Fraser Institute published its second school ranking in 1999, York House School scored eight-point-six out of a possible ten-point-zero (Cowley, Easton, & Walker, 1999, p. 43). For the past eight years, however, the school has been identified with other independent and private2 schools in Vancouver as being a ‘top’-ranked (ten-out-of-ten) school. By that singular measure, York House lays claim to its status as being one of the “best” schools in British Columbia (Chung, 2006). Students complete all requirements of the Ministry’s revised graduation program in Grades 10-12, and may self-select from a number of challenging, college-level Advanced Placement (AP) courses that are offered in science, humanities, languages, English, and the fine arts. The teachers of York House School belong to the British Columbia Government and Service Union (BCGEU) and pay annual union dues.
A full-time teaching load in the senior school (Grades 7-12) is six courses and the maximum class size is 20 students. (Some AP classes are taught with as few as six students, but whether small classes run at all is left to the discretion of the Head.) An elected Board of Governors works at arm’s-length with the Head of School to assist in directing the school’s activities. York House is inspected periodically by the Independent School Branch of the Ministry of Education to ensure that it complies with all aspects of the Independent School Act of British Columbia. Collectively these factors created a highly structured web of support for students aspiring to continue their educational pursuits at highly selective post-secondary institutions throughout the world. As importantly, they helped contribute to the school moving up the ranking one-point-four points to achieve a perfect ten-point-zero on the Fraser Institute’s school ranking scale of performativity (Cowley & Easton, 2001). In that incremental step York House made an enormous leap by joining the extremely small cluster of ‘elite’ independent and private schools in Vancouver that had also earned perfect scores. That was twelve years ago, and with the most recent published ranking of the Fraser Institute report card on secondary schools in May of 2010, only two Vancouver public secondary schools have ever attained a ‘perfect’ ten-point-zero—Prince of Wales Secondary (Cowley et al., 1999) and University Hill (Cowley & Easton, 2003).

2 The important distinction between a private and an independent school has to do with governance. An independent school is governed by an elected board of trustees and operates outside the boundaries imposed by public school boards. A private school can be part of another entity such as a for-profit corporation or a not-for-profit organization such as a church or synagogue. All religious schools are private schools. Most secular schools are independent schools.
How was this possible? What information about school culture was missing from the Fraser Institute's report card and why is it important to consider in the first place? Are private schools really 'better' than public schools, and if so why? That is when I began to seriously consider the possibility that school rankings measured organizational capacity—foremost—and had very little to say about 'school goodness'. I surmised that top-ranked schools like York House had the human, financial, and cultural capital necessary to achieve top-ranked scores in ways that most other public and private schools throughout the province may have lacked. These reflections have led me to explore three aspects of the 'Fraser Institute Report Card on Secondary Schools in British Columbia and the Yukon': (1) how rankings operate discursively to construct the realm of schooling, performance, and accountability; (2) what ideological assumptions underpin the statistical formulae used to construct the rankings in the first place; and (3) why the Fraser Institute has garnered so much traction in the field of education in promoting its school ranking reports. These reflections have informed the way in which I have come to problematize the school ranking accountability phenomenon within the context of British Columbia in the broadest sense. 

The Problem and its Significance 

The Fraser Institute is a non-elected, libertarian think tank that promotes an agenda of improving schools through competition, choice, and market forces. Although it declares on its website that it is "an independent, non-partisan organization" (The Fraser Institute, 2010a), the Fraser Institute is composed of academics, business executives, and former politicians who espouse and promote right-leaning, conservative political agendas and ideologies. Every spring the Fraser Institute publishes a ranking of public and private schools in 'The Province'3 newspaper based on data it gleans from the Ministry's website. 
Entitled 'Report Card on Secondary Schools in British Columbia and Yukon', the rankings represent the collected efforts of statisticians, computer technicians, economists, politicians, and business executives to promote the Institute's mission "to measure, study, and communicate the impact of competitive markets and government intervention on the welfare of individuals" (The Fraser Institute, 2010a). That a single person represents the report's findings says something about how political agendas and ideologies from special interest groups can be both promoted and humanized at the same time. This is an important consideration because in assigning scores to schools, the Fraser Institute effectively names-schools by numbering-schools. The public, however, responds best to messages communicated by human beings, not organizations (or institutes). In this way, Peter Cowley has become the spokesperson associated with school-wide accountability issues in British Columbia and elsewhere. This is relevant given Pierre Bourdieu's assertion that language and power have profound political implications when an authorized someone speaks on behalf of an entire group. At stake is the deployment of power through language. 

3 April of 2007 marked the first time since 1998 that the Fraser Institute's 'Report Card on Secondary Schools' was published in 'The Vancouver Sun' and not 'The Province' newspaper. 'The Vancouver Sun' has a weekly readership of 848,300. 'The Province', by comparison, has a readership of 865,000 (Source: Anne Crassweller, President, Newspaper Audience Databank (NAD) Inc., personal email correspondence, May 22, 2007). 

"The spokesperson, in speaking of a group, on behalf of a group…institutes the group, through the magical operation that is inherent in any act of naming. 
That is why one must perform a critique of political reason, which is intrinsically inclined to abuses of language that are also abuses of power, if one wants to pose the question with which all sociology ought to begin, that of existence and the mode of existence of collectives" (Bourdieu, 1985, p. 741). 

The existence and the mode of existence of the collective to be questioned here is the constellation of power forces that underpin and shape the Fraser Institute, and more specifically, its annual ranking of secondary schools. 

According to the Fraser Institute's 2007 Annual Report entitled, 'Changing the World', 87% of its 12.7 million dollar operating budget came from unnamed organizations, corporations, and foundations (The Fraser Institute, 2007). The remaining 13% came from personal donations. While the majority of the Institute's funding comes from the corporate sector, the 2007 Annual Report indicates that the majority of Fraser Institute supporters are individuals, accounting for 85% of its support base. This is an important consideration because it suggests that there exists strong grassroots support for an advocacy think tank whose work is primarily funded by 'big business' set on 'changing the world' (The Fraser Institute, 2007). The mode of the collective broadens to include a list of researchers who are currently associated with the Fraser Institute. There are now more than three hundred and fifty researchers in twenty-two countries associated with the Fraser Institute, four of whom have been awarded Nobel Prizes4 in Economics (The Fraser Institute, 2007). With offices in Vancouver, Calgary, Montreal, and Toronto, "the Fraser Institute has active research ties with similar independent organizations in more than 70 countries around the world" (The Fraser Institute, 2007). 

4 Friedrich A. von Hayek (1974); Milton Friedman (1976); George J. Stigler (1982); and James M. Buchanan Jr. (1986). 
Again, this is an important consideration because it speaks to the epistemic sources of power from which the Fraser Institute gains its legitimacy—a legitimacy it uses to speak with authority on issues such as education, health care, taxation, and immigration within the public domain. The Fraser Institute occupies an important place, therefore, in a broader constellation of power brokers because it is well funded, well connected, and well placed. 

Today, the Fraser Institute is associated with: The Hudson Institute, C.D. Howe Institute, Free The World, Atlantic Institute for Market Studies, Montreal Economic Institute, State Policy Network, Institute of Public Affairs (Australia), EKOME5 (Society for Social and Economic Studies), and Frontier Centre for Public Policy, among others (The Fraser Institute, 2010c). What these institutes from around the world have in common is the development—and promotion—of policy platforms that are closely aligned with the Fraser Institute's overall mission. Becoming part of a world coalition of advocacy think tanks, therefore, substantially empowers the Fraser Institute because it allows for discourses to be cast, and recast, in ways that can be universally packaged and disseminated. If league tables could be used to improve the educational condition in the United Kingdom, for example, something approximating them—like school rankings—must be able to do the same in British Columbia. What's essential to understand, however, is the considerable power the Fraser Institute wields in its own right—independent of a worldwide coalition of like-minded think tanks. If numbers tell the whole story, as the Institute's annual report likes to promote, consider the following statistics that were published at the end of their 2007 Annual Report (The Fraser Institute, 2007). 

5 EKOME is headquartered in Greece. 
Numbers Tell the Whole Story 

214,000,000 combined circulation & listenership of Canadian media coverage 
3,807,728 files, including podcasts & videos, downloaded from all Fraser Institute web sites 
3,000,000 students attend 6,300 schools rated in Fraser Institute School Report Cards 
1,331,549 visits to Fraser Institute web sites 
59,000 copies of monthly magazine Fraser Forum mailed to subscribers 
24,884 inquiries from around the world handled by Fraser Institute staff 
6,243 news stories in print, on line, and broadcast around the world 
4,012 subscribers to Fraser Institute e-mail updates 
3,656 Fraser Institute supporters from 12 countries 
1,058 mentions on external web sites and blogs 
350 authors from 22 countries have contributed to Institute research 
282 commentaries published in newspapers across North America 
225 news releases & media advisories issued 
188 presentations given around the world by Fraser Institute staff 
117 Fraser Forum articles on wide variety of public policy issues 
98 requests from around the world to reprint Fraser Institute material 
24 languages in which Fraser Institute books have been published 
5 Fraser Institute office locations to best influence the North America policy debate 
1 of the most influential think tanks in the world 

What is most striking about these statistics is the extent to which the Fraser Institute occupies entirely different fields that reach entirely different populations—physicians respond to its surveys; principals are honored at its award luncheons; Fraser Institute publications are translated into different languages; and offices have been strategically positioned in North America to "best influence the North America policy debate" (The Fraser Institute, 2007, p. 52). 

What's at Stake? 

The Fraser Institute's school report card operates within a specific discursive practice that endeavors to hold educational professionals accountable for what goes on within schools. 
In one sense, the invisible work of educational professionals is rendered visible in that parents 'see' a small part of what goes on inside classrooms every time the ranking is published. The assumption is that good teaching occurs in the classrooms of top-ranked schools and that problems exist in the classrooms of low-ranked ones. The ranking, therefore, provides parents with an instrument by which they can choose good schools for their children and avoid bad ones. In this way, the Fraser Institute's annual report card is perceived by some to provide a public service (Bierman, 2007; Cowley, 2005b, 2007; Editorial, 2001, 2002c, 2003; McMartin, 2010b; Raham, 1999). But in reducing the pedagogical, social, cultural, and economic complexities of public and private secondary schools to key performance indicators (KPIs), the Fraser Institute forces a consciousness on the public about what it deems to matter in schools, effectively promoting a culture of performativity, competition, and consumption. As importantly, the Fraser Institute's message about education is well articulated in media accounts that include print, radio, the internet, and television (Abelson, 2002; The Fraser Institute, 2006, 2007, 2008a, 2009, 2010b). By this measure, the Fraser Institute commands significant attention on the public media stage, being cited six times more than its left-leaning policy institute counterpart—the Canadian Centre for Policy Alternatives (CCPA) (Abelson, 2002, pp. 98-99). As such it is important to critically examine the assumptions made by the Fraser Institute in publishing its annual school ranking for five principal reasons: (1) the Fraser Institute ranking is fast becoming a national fixture in Canada as it begins to publish school rankings in Alberta, Ontario, Quebec, New Brunswick, and the Yukon Territory. 
That the Fraser Institute has established scoreboard-like school rankings in provinces and territories that are culturally, politically, and economically disparate speaks to the inroads it has gained in the minds of Canadians everywhere. This is important because, like published school rankings in other parts of the world, they are used by British Columbians, Albertans, Quebecers, and Maritimers alike to make inferences about schools that, in turn, have "predictable consequences" on the educational system that may not always be in the best interest of students (West & Pennell, 2000, p. 434); (2) as a non-elected entity the Fraser Institute influences public educational policy and has, as its prevailing goal, the promotion of neoliberal market forces to improve both public and independent/private schools. This speaks to the imposition of a particular ideology that has both political and pedagogical implications that need to be problematized; (3) the data-driven educational reforms supported by the Fraser Institute are steeped in neoconservative standardization movements and accountability systems that are—by their very nature—limiting, reductive, and potentially harmful to schools and students (Rowe, 2000). This belief is reflected in published media accounts that have framed the ranking debate within a polemical discourse that juxtaposes two competing core rationalities, a framing that will be explored in this project; (4) the Fraser Institute ranking of schools impacts leadership practices within schools. Increasingly school leaders seem to be developing strategies for playing the ranking game that make between-school comparisons highly problematic (Wilson, 2004). This kind of complicit school-accountability game-playing can have deleterious consequences. In their book, Collateral Damage: How High-Stakes Testing Corrupts America's Schools, Nichols and Berliner (2007) explain how 'Campbell's Law' can shape human behavior. 
When single measures (or indicators) of success and failure in a profession take on too much value, Campbell's Law asserts that exaggerated reliance on the measure can create conditions that promote corruption and distortion; and (5) a final reason for critically examining the Fraser Institute report card on secondary schools is that rankings derived from a statistical language privilege a particular kind of instrumental rationality that has profound sociological and pedagogical implications (Apple, 2000; Goldstein & Spiegelhalter, 1996; Norris, 2011; Whitty & Edwards, 1998). The end result may be the division of schools and school districts into 'winners' and 'losers'. This is an important consideration because in ranking schools the Fraser Institute effectively rewards and punishes schools, if rewards and punishments are reflected by choices parents make about where to send their children to school. Winning schools attract the brightest students. Losing schools take what's left over. This is made possible to the extent the public continues to view the published school ranking as a legitimate authoritative document. This is highly problematic because authoritative documents are constructed to give the impression the author(s) are representing the truth of the matter. "This is achieved through the use of specific syntactical, grammatical…devices which empower the authors and disempower the reader" (Ozga, 2000, p. 19). 

"[T]he reader is not just presented with an argument and then asked to make up their mind about its merits and demerits, but is positioned within a discourse—a way of understanding relations within the world—which, if successful, restricts and constrains the reader from understanding the world in any other way. This discourse is characterized as common sense, whereas in fact it is merely one way of viewing the world and is therefore ideological" (Ozga, 2000, p. 19). 
Problematic 

The emergence of school rankings and their impact on shaping educational discourse spans at least three continents (North America, Europe, and Australia) and has been ongoing for at least three decades (Cowley & Easton, 2006; Dwyer, 2006; Goldstein & Spiegelhalter, 1996; Rowe, 2000; Tight, 2000; West & Pennell, 2000). The United Kingdom's League Tables that summarize the performance of schools and universities have been well established since the mid-1980s (West & Pennell, 2000). League tables in the UK consist of ranking schools "computed from students' average achievement scores (raw and unadjusted) on national curriculum test results at ages 7, 11 and 14 years, together with similar scores for the General Certificate of School Education (16 year-olds) and A-levels (18 year-olds)" (Rowe, 2000, p. 75). In the United States, "detailed public accountability of schools and State education systems on the basis of students' test scores is well established, despite vigorous debate about the consequences of basing performance indicator and accountability arrangements solely on the outcomes of system-wide, standardized testing/assessment programs" (Rowe, 2000, p. 74). Closer to home, Canadian colleges and universities have been ranked by Maclean's magazine since 1991 with the goal of establishing Canada's 'best' post-secondary institutions (Fillion, 2006; Hunter, 1999; Kong & Veall, 2005; Stevenson & Kopvillem, 2006). 
Despite the geographic expanse over which ranking debates occur nationally and internationally, they have at their core the expression of common concerns about the impact school rankings have on teaching and learning on many levels that include: teacher morale, teacher effectiveness, socioeconomic disparity, selective admission procedures, and the erosion of professionalism in an educational system that values standardized testing and market-driven reforms (Ball, 1997; Gaskell & Vogel, 2000; Lucey & Reay, 2002; Masleck, 2000; Rist, 2000; Shaker, 2007; Webb, 2005, 2006, 2007). Although the literature is replete with studies that examine the impact school and university rankings have on the life-world of students, teachers, parents, professors, and administrators, the focus of this project is to examine the effect a local ranking has on shaping how the public perceives secondary schools in British Columbia. Since publishing its first secondary school ranking the Fraser Institute has continued to impress upon the public its understanding of what constitutes a quality educational experience for students in British Columbia (Cowley, 2005a; Cowley, Easton, & Walker, 1999; Rocky, 2003; Schmidt, 2005). It does this by first selecting some of the data that is collected by the Ministry about students and schools, and then interpreting that information in a statistical analysis that measures 'school goodness'. The seeds for public discord are sown when a particular kind of logic for capturing the complexity of any problematic is privileged over a competing (counter) logic attempting to do the same thing. In this debate, when the Fraser Institute compiles its published ranking of best-to-worst performing schools, some quantitative data counts while all qualitative data does not. 
This is extremely problematic on many levels that will be addressed in this project; indeed, some scholars have argued that it may be considered a form of epistemic assault on teachers and schools alike (Apple, 2000; Webb, 2007; Whitty & Power, 2002b). Despite the data-centric focus on school-wide accountability, the Fraser Institute promotes itself in a way McHoul and Grace (1993) describe as "having adequately captured its 'object' by a series of techniques which 'stitch up' the imperfections in its representation of other" (McHoul & Grace, 1993, p. 23). I am interested in unstitching the techniques adopted by the Fraser Institute in promoting a particular logic; a logic that follows—what Foucault called—a régime of truth (Foucault, 1977). This logic aims to reduce the socially complex world of schools into an overall mark out of ten, and for the purpose of this study is considered as being a statistical régime of truth. This is especially important given the influence the Fraser Institute has managed to exert on public opinion about school accountability and the school choice movement, not only in British Columbia, but throughout Canada as well. A central argument I will make is that the capitalist rhetoric used by the Fraser Institute to promote its free-market agenda for school reform overshadows the constellation of deeper (hidden) forces that operate at the nexus of discourse, representation, and power to shape educational policy-making. Additionally, I am interested in examining how statistical discourses used by the Fraser Institute to construct secondary school rankings have changed over time, and how those changes have reconfigured the school ranking landscape. I am also interested in understanding how information is packaged and disseminated by the Fraser Institute for public consumption such that school rankings (and the KPIs that comprise them) become the accountability litmus test for school goodness in British Columbia. 
This is an important consideration because secondary schools, and the people working within them, are effectively rewarded and punished in the public's mind based on a school's relative position on the ranking: top-ranked schools are perceived as being better than mid-ranked schools, and mid-ranked schools are perceived as being better than low-ranked schools. Herein lies my entry point into a research project that has, as its principal focus, a discursive analysis of the Fraser Institute ranking of secondary schools: that a statistical language is used to promote and legitimize how schools have come to be represented in the educational field. I argue this mode of representation is made more palatable to the public because it is presented as being objective, fair-minded, and steeped in a particular kind of instrumental rationality that allows for it to be universally applied to all schools. My study will explore how discourse is contained within language and how "language…possesses new powers, and powers peculiar to it alone" (Foucault, 2006, p. 53). I am interested in problematizing the legitimacy of school rankings in general by unpacking some of the underlying assumptions the Fraser Institute makes about schools in particular. The Arbutus Club dinner was evidence that the accountability issue was not only controversial but also polarizing. Both sides seemed trapped by their own perspective, unwilling to hear the 'other' side's point of view. Why did there not appear to be a middle ground in the debate? Is it possible to delineate a new discursive terrain that transcends polemical discourse when talking about educational reform in the context of school rankings? Or, as was the case in advancing civil rights in the 1960s, are there some issues for which polemical debate is warranted—indeed necessary? 
That is, are there some social justice related issues within education for which no middle ground should ever be established, and should the debate on school ranking be considered one of them? These kinds of questions not only inform the school ranking debate but they are—at their core—political in nature. 

Polemical Models 

Michel Foucault recognized "very schematically…the presence in polemics of three models: the religious model, the judiciary model, and the political model" (Foucault, 1997, p. 112). In religion, polemical debates are dogmatic in nature and deal principally with human moral failing. The judiciary model examines, prosecutes, and sentences the case. But it was the political model that Foucault believed was most powerful because it could establish 'other' as enemy. 

"Polemics defines alliances, recruits partisans, unites interests or opinions, represents a party; it establishes the other as enemy, an upholder of opposed interests against which one must fight until the moment this enemy is defeated and either surrenders or disappears. Has anyone ever seen a new idea come out of a polemic?" (Foucault, 1997, pp. 112-113). 

This was an interesting insight and one I felt had tremendous implications for my work. The more I reflected on the emotional responses expressed by Cowley's critics at the dinner to the position he took as an advocate for educational reform, the more I began to understand that the polemical debate was born out of Foucault's political model. And how could it be otherwise? The Fraser Institute is, by definition, a political entity. Born out of the policy institute movement of the mid-1970s and 1980s, advocacy think tanks like the Fraser Institute marketed their ideas to target audiences. They sought to accomplish specific political agendas and worked hard to see their sociopolitical visions realized above all others. 

"Founders of advocacy think tanks understood the importance of immersing themselves in the political arena. 
Ideas in hand, they began to think strategically about how to most effectively influence policy makers, the public, and the media. It also stressed the importance of marketing its ideas to the media" (Abelson, 2002, p. 31). 

It was during this era that the Vancouver-based Fraser Institute was founded to counter the left-leaning politic of then Prime Minister Pierre Elliott Trudeau. Increasingly concerned by the federal government's economic policies (and the election of the first NDP government in BC in 1972), Patrick Boyle, a senior industrial executive for MacMillan Bloedel, began considering how best to inform Canadians about the crucial role markets play in economic development (Abelson, 2002). His dream became a reality on October 21, 1974, when the federal government granted the Fraser Institute a charter. The new institute was named for the mighty Fraser River; it was deemed politically prudent to give it a geographical, rather than ideological, reference point. And if Boyle and his supporters had any doubt about why the institute needed to be established in the first place, they were assuaged of their uncertainty in December of 1974 when Trudeau told the nation in his annual Christmas message, "the marketplace was not a reliable economic institution and would increasingly have to be replaced by government action in order to sustain the economic well-being of Canadians" (Abelson, 2002, p. 44). The seeds for a new hegemonic alliance were planted and the crop yielded a potent hybrid of political action that pushed "education and social policy in conservative directions" (Apple, 2004b, p. 174). In this way alliances were formed between right-leaning and seemingly disparate groups united in their goal to shift "the educational debate onto their own terrain—the terrain of traditionalism, standardization, productivity, marketization and industrial needs" (Apple, 1998, p. 5). 
Tout Court—"The Only World" 

Apple (2004) identifies four distinct groups that have emerged as 21st century forces that he feels profoundly shape the educational policy landscape: neoliberals, neoconservatives, authoritarian populists (fundamentalists), and "experts for hire" (Apple, 2004b, p. 176). Each group exerts power on the educational field to varying degrees, but according to Apple (1998), two dominant groups have emerged in this period of modern conservative restoration—neoliberals and neoconservatives. While both groups promote educational reform agendas that are geared toward improving the overall educational condition for students in British Columbia, they approach the issue from different ideological perspectives. 

Neoliberals are characterized as being "economic modernizers who want educational policy to be centered around the economy [and] around performance objectives" (Apple, 2004b, p. 174). Economic modernizers "see schools themselves as in need of being transformed and made more competitive by placing them into marketplaces through voucher plans, tax credits, and other similar marketizing strategies" (Apple, 2004b, p. 175). By comparison, neoconservatives are "deeply committed to establishing tighter mechanisms of control over knowledge…through national or state curricula and national or state-mandated…testing" (Apple, 2004b, p. 175). Both groups promote socially conservative beliefs that "saturate our very consciousness, so that the educational, economic and social world we see and interact with, and the commonsense interpretations we put on it, becomes the tout court, the only world" (Apple, 2004b, p. 4). Herein lies the potential power (and I argue the potential danger) of the new hegemonic alliance. 
Although neoconservatives and neoliberals make different assumptions about schools and how best to improve them, they are similar in that both ideologies promote their respective agendas through discursive techniques that intersect at the nexus of educational reform. The economic deregulation agenda of neoliberals like the Fraser Institute, for example, shapes every policy reform initiative proposed by that particular advocacy think tank, not only in education but in health care, taxation, immigration, and global warming as well. 

"Our vision is a free and prosperous world where individuals benefit from greater choice, competitive markets, and personal responsibility. Our mission is to measure, study, and communicate the impact of competitive markets and government interventions on the welfare of individuals" (The Fraser Institute, 2010a). 

The social regulation agenda of neoconservatives like, for example, the British Columbia Ministry of Education, shapes educational reform initiatives in a myriad of ways, some of which include: prescribing curriculum, setting standardized exams for high school students, and establishing compulsory skills-based assessments for elementary students. In fact, the parameters that help the Ministry of Education establish the social regulation agenda for the entire province are entrenched in law. One need only consider the preamble to The School Act. 
It reads: 

"WHEREAS it is the goal of a democratic society to ensure that all its members receive an education that enables them to become literate, personally fulfilled and publicly useful, thereby increasing the strength and contributions to the health and stability of that society; AND WHEREAS the purpose of the British Columbia school system is to enable all learners to become literate, to develop their individual potential and to acquire the knowledge, skills and attitudes needed to contribute to a healthy, democratic and pluralistic society and a prosperous and sustainable economy; THEREFORE HER MAJESTY, by and with the advice and consent of the Legislative Assembly of the Province of British Columbia, enacts as follows…[insert The (Revised) School Act]" (The Legislative Assembly of British Columbia, 1996). 

There clearly exists an ideological tension embedded within the School Act itself about "educational goals and their ordering" (Godwin & Kemerer, 2002, p. 65). On the one hand schools enable students to become engaged members of a democratic and pluralistic society, while on the other hand schools help students acquire the skills and attitudes they need to contribute to a prosperous and sustainable economy. This ideological tension exists regardless of what political party holds office—Liberals, Conservatives, NDP, or the Green Party—because the tension resides in the intertextuality between The School Act and the political agendas set by power-wielding brokers whose interests are affected by the legislation. And while an uncontestable outcome of education is that students become literate in ways that enable them to actively participate in a democracy, what is highly contestable is how this outcome is best achieved. In their book, School Choice Tradeoffs: Liberty, Equity, and Diversity, Godwin and Kemerer (2002) describe five approaches to the state's role in education. 
At one end of the political spectrum is classical liberalism—an approach that limits the role of government to protecting life, liberty, and property. Classical liberalism places a "heavy burden on the state to justify its intervention in the private sphere" (Godwin & Kemerer, 2002, p. 67). In this paradigm the state funds education, but it does not provide it. Classical liberals view the state as something to fear and forbid the state to "control the socialization of children" (Godwin & Kemerer, 2002, p. 72). At the other end of the political spectrum is communitarianism—an approach that advocates for the state's monopoly on education. Communitarianism "places a greater emphasis on public rather than private goals" (Godwin & Kemerer, 2002, p. 85). Here the goal of education is to develop participatory citizens who share common values. Communitarians do not support private schools in any form. Positioned between the poles of the political spectrum are political liberalism, comprehensive liberalism, and progressive liberalism. At their core these approaches are differentiated by the degree to which parental and state rights drive educational outcomes. Political liberals—like, for example, the Fraser Institute—expect the state to protect constitutional and charter rights that promote pluralism and autonomy. When the state's expectations of human behaviour conflict with religious rights entrenched in the law, political liberals would advocate for the rights of the individual to supersede the rights of the state. As such, political liberals expect the state to fund and regulate schools, but not as a monopoly. Political liberals support the role that private schools can play in educating students and advocate for minimal state regulation of them. Comprehensive liberalism is an approach to education that allows highly state-regulated private schools to co-exist with their public school counterparts. 
Comprehensive liberals expect the state to protect constitutional and charter rights that promote autonomy and develop participatory citizens. "They would like the state to…eliminate discrimination and to guarantee that parental prejudices do not go unchallenged" (Godwin & Kemerer, 2002, p. 90). This approach elevates reason over faith and assigns publicly funded schools the role of socializing students to that idea. Progressive liberalism "demands shared values, a common culture, and the elimination of all illiberal aspects of communities and subcultures" (Godwin & Kemerer, 2002, p. 93). Progressive liberals "give the state the broadest educational goals, and they give it almost total control over schooling" (Godwin & Kemerer, 2002, p. 92). This approach to education is aimed at creating a deliberative and egalitarian democracy with multiple shared values. The state's role is not only to fund education, but also to provide experiences for students in ways that promote emancipatory outcomes.

These approaches to education (political liberalism, comprehensive liberalism, and progressive liberalism) are different because they balance the rights of the individual and the rights of the state in different ways, but they are similar insomuch as they challenge society to ask of itself "what kind of education develops the best human beings?" (Godwin & Kemerer, 2002, p. 65). This is an important consideration and one that lies at the heart of the Fraser Institute school ranking debate because the Fraser Institute's political agenda is concomitantly aligned, and at odds, with the School Act. It aligns with the parts of the legislation that promote free-market approaches to building a prosperous and sustainable economy through, for example, consumerism, competition, and choice.
But it resists those parts of the School Act that make room for plurality, diversity, and egalitarianism: until the School Act itself was amended in 2003, students were required by law to attend local public schools situated within designated catchment areas. With the amendment came the possibility that—for the first time in British Columbia—students could apply for admission to public schools beyond the limits imposed by state-designated catchment areas called school districts. When this occurred the contours of the educational landscape shifted away from a terrain defined by the politic of comprehensive liberalism towards a terrain defined by the politic of political liberalism—a politic more in keeping with the mission-driven agenda of the Fraser Institute itself. The effect of the amendment amplified the relevance of school rankings for some parents as much as it renewed criticism of the Fraser Institute's school report card.

I will show how voices expressing dissent about school rankings (and the choice-based reforms that underpin them) promote counter agendas that are anchored in social justice issues and the emancipatory hope the educational encounter can provide. Such position-taking is not new. Nor is it born exclusively out of the school accountability and choice movements. To describe the Fraser Institute's effect on the educational field through discourses that have accountability as their most dominant discursive feature, therefore, is to discount the influence of overlapping social justice discourses that are positioned in relation to critical theory and political action.

"Questions of justice and education have [always] been a part of educational thinking as long as there has been a formal schooling system. The introduction of mass schooling itself arose in the broader context of a struggle for social improvement and transformation, to provide opportunities for the 'poorer classes'" (Taylor, Rizvi, Lingard, & Henry, 1997, p. 126).
Education brings with it potential opportunities that can lead to an overall improved quality of life in the struggle for upward social mobility. But opening doors of opportunity for students comes at a substantial cost. Education is an expensive undertaking. In 2007 the Ministry of Education spent $5.07 billion on the (K-12) educational system alone (British Columbia Ministry of Education, 2010). Given the great expenditure of public funds on one of "the best educational systems in the world" (Bond, 2007),6 it is not surprising that some stakeholders endeavor to hold schools accountable in ways that can be objectively measured. So when Ministry documents declare the government's mandate is to "make British Columbia the best-educated, most literate jurisdiction on the continent" (Bond, 2007), questions about performance outcomes and student achievement are concomitantly raised. Questions like: How does a government measure grade-specific educational success? What standards should be used to measure student (and teacher) performance? What cost-effective initiatives can be implemented in schools without compromising educational quality? What does it mean to be an educated person living in the 21st century? Should all schools look and feel the same?

For many, these questions are best answered when decisions about spending are informed by quantitative assessments of the educational system. It is no longer sufficient to say that literacy rates have significantly improved in Grade 4 students, for example. A discerning tax-paying electorate, and the politicians beholden to voters everywhere, want to know by how much literacy rates have improved, and at what cost. These questions beg data-driven responses through which entire régimes of truth are manufactured about the educational system. For how else can Shirley Bond make the following 'indisputable' claim in her web-published educational report?
"One important and indisputable factor contributing to the fact that B.C. students rank among the best in the world is the dedication of B.C.'s many skilled teachers" (Bond, October 10, 2007).

In ranking students against an international standard, therefore, the Ministry of Education implements a discursive practice that is not unlike the one adopted by the Fraser Institute. They are similar because they are both anchored in measurement and performativity.

6 Shirley Bond was British Columbia's Minister of Education from June 2005 to June 2009.

Notwithstanding the increasing hold the Ministry of Education and the Fraser Institute rankings have managed to exert on the public's consciousness about the school accountability movement, a study that challenges prevailing statistical régimes of truth-telling contributes to the accountability dialogue by providing alternative rationalities by which schools are measured and compared. Herein lies the emancipatory potential of my study: Not only do I problematize the Fraser Institute's articulation of school ranking discourses in ways that account for the statistical and contextual assumptions it makes about secondary schools in British Columbia, but I will also illustrate how debates and controversies over accountability and policy making are not confined to the formal seat of power, as power is defined in a legislative (democratic) sense. More specifically, I will show how discursive practices are used to demarcate the limits and boundaries of exemplary educational practice in a way which "presupposes a play of prescriptions that govern exclusion and selections" (Foucault, 1997, p. 11). When a particular kind of school is consistently held up by the Fraser Institute as an exemplar for all schools to emulate, a new "policy technology" has been deployed that Ball (2003) considers integral to the new mode of state regulation.
"In various guises the key elements of the education reform 'package'—and it is applied with equal vigor to schools, colleges, and universities—are embedded in three interrelated policy technologies: the market, managerialism and performativity" (Ball, 2003, p. 215).

It is interesting (and I believe relevant) in a study that has as its principal focus the critical examination of school rankings to consider performativity in the way Ball suggests—as a kind of "technology; a culture and mode of regulation that employs judgments, comparisons and displays as means of incentive, control, attrition and change—based on rewards and sanctions" (Ball, 2003, p. 216). When institutional performances begin to serve as measures of productivity, output, and/or displays of quality they do so within a field of judgment. "The issue of who controls the field of judgment is crucial" (Ball, 2003, p. 216). Part of my project explores how different agents with different visions for school reform compete for capital to dominate the field of judgment. When schools become complicit in their own subjugation because they subscribe to policy technologies that promote régimes of performativity, it leaves open the possibility that individuals and organizations fabricate the educational experience for students in ways that align with the metrics of accountability (Ball, 2003; Webb, 2006). This dynamic creates tremendous institutional tension; a tension that manifests in media accounts between competing agents.

Dissertation Roadmap

The Fraser Institute ranking of schools is regarded by some to be an objective measure of the overall quality of high schools in British Columbia (Editorial, 2003; Foot & Benzie, 2001; Raham, 1999).
It has sparked tremendous public controversy since it was first published in The Province newspaper in the spring of 1998 (Bierman, 2007; Cowley & Easton, 2000; Cowley, et al., 1999; Derworiz, 2002; Fine, 2001; Foot & Benzie, 2001; Johal, 2001a; McMartin, 2010a; O'Connell, 2002; Proctor, 1998a; Royce, 2010; Sokoloff, 2005; Steffenhagen, 2000, 2002b, 2008). This study questions, problematizes, and unpacks many of the assumptions made by the Fraser Institute about secondary schools and what it takes to improve them.

Chapter 1 has contextualized for the reader the impact the Fraser Institute ranking of secondary schools had on my professional practice working in one of Vancouver's 'best' schools. It established the rationale underpinning the polemical debate that emerged in British Columbia as a result of the Fraser Institute publishing its annual secondary school ranking. At issue is the collision of two competing epistemic approaches to determining what counts as an overall 'good' school—one anchored in a particular kind of instrumental rationality, whereby schools (and the students attending them) are reduced to a set of measurable key performance indicators (KPIs); the other anchored in a belief that schools are complex organizations that provide opportunities serving the diverse educational needs of all students—an understanding that transcends measurement on KPIs. What's at stake is the erosion of school cultures that value and serve different kinds of students in different kinds of ways.

Chapter 2 establishes the theoretical grounding for my project. My intent is to show how modern disciplinary power operates on the fields of accountability and judgment through the Fraser Institute's school-ranking rubric.
A principal argument I make is anchored in Foucault's (1977) understanding of how power is operationalized in prisons—that statistical rankings cast their omnipresent gaze on secondary schools from published tables in the same way prison guards cast their omnipresent gaze on incarcerated prisoners from panoptic prison towers. I draw on Bourdieu's work to show how capital imbalances between agents playing the school accountability game result in power imbalances between agents on what Brighenti (2007) calls the field of visibility. Here I am interested in exploring the possibility that political formations compete across fields in ways that seek to force a particular vision on schools. Five research questions are posed at the end of Chapter 2 as they relate to discourse, political praxis, capital acquisition, and governmentality.

Chapter 3 outlines the methodological approach that I bring to bear on this project. Specifically, I will be analyzing the Fraser Institute ranking of secondary schools—the case under investigation—through a critical discourse analysis (CDA). CDA focuses on the role of discursive activity in constituting and sustaining unequal power relations (Phillips & Hardy, 2002). It "examines how language constructs phenomena" (Phillips & Hardy, 2002, p. 6). CDA also focuses on how particular kinds of discourse privilege some agents at the expense of others. More specifically, CDA examines how "discourse structures determine specific mental processes, or facilitate the formation of specific social representations" (van Dijk, 1993, p. 259). Yin's work on how to conduct case study research when investigating a "contemporary phenomenon within its real-life context" (Yin, 2003, p. 13) is something I draw on to help frame my study.

Chapter 4 unpacks and problematizes the development and use of the Fraser Institute ranking rubric, with a particular emphasis placed on demonstrating how semiotic ranking discourses have shifted and mutated from 1998 to 2010.
My interest and focus here is not in critiquing the myriad complex statistical equations the Fraser Institute has developed to measure the overall quality of secondary schools in British Columbia as a mathematician, statistician, or actuary might, but rather in exploring how the language of statistical rankings has been used by the Fraser Institute as a discursive strategy to tell particular kinds of stories about schools. Here I argue that embedded in the polemical debate around school rankings is what Foucault calls the "principle of rarefaction" (Foucault, 1984, p. 116). The principle of rarefaction describes the relationship between epistemologies and discursive practices whereby one position supplants another. I am interested in analyzing how a particular—impenetrable—statistical gaze has been manufactured by the Fraser Institute to highlight and amplify the differences between schools with the goal of promoting its privatization agenda through choice and market-based reform initiatives in this province and elsewhere. As importantly, I will be arguing that the principle of rarefaction operates within school ranking discourses to supplant counter discourses made by teachers and the political organizations to which they belong. Here I will demonstrate how the market, managerialism, and performativity—what Ball (2003) calls interrelated policy technologies—are strategically deployed by the Fraser Institute on the field of visibility to shape how the public perceives (and judges) schools and school systems.

Chapter 5 explores the polemical nature of the school ranking debate. Specifically the chapter focuses on how particular régimes of truth are manufactured by political agents about secondary schools and how they are presented in the media to construct a reality effect in the public's mind about the state of secondary school education in British Columbia.
Here I will be analyzing the mechanics underpinning the discursive strategies deployed by agents invested in the school ranking debate. I argue that discourse is a form of symbolic capital that is used in strategic ways by the Fraser Institute to manufacture public support for an educational reform movement that is principally rooted in privatization and choice. I also focus on how different political agents that include (but are not limited to) the Fraser Institute and the BCTF acquire, consolidate, and leverage symbolic capital on the field of power to promote divergent visions about the role school rankings should play in holding teachers accountable for their work. Finally, I describe how the Fraser Institute expands its presence on the educational field by devising school-ranking rubrics for elementary schools and Aboriginal students both within, and outside of, British Columbia.

Chapter 6 begins with a synopsis of the study. It has been written with the goal of repositioning the reader within the confines defined by the original problematic. This is followed by a description of the major findings that emerged in response to the research questions posed. Very generally these findings—which are presented as empirical assertions—relate to: (1) how disciplinary power is exercised through published school report cards; (2) how technologies of representation inform, shape, and manage the field of visibility through surveillance; (3) how competing agents use language to mediate relationships of power; and (4) how symbolic capital is acquired, mobilized, and leveraged through storytelling, coalition building, and policy borrowing. My goal here is to review how each finding resonates with the literature on discourse, surveillance, accountability, the acquisition of symbolic capital, and policy theory. These points are contextualized not only in relation to British Columbia, but also in relation to England, New Zealand, and the United States.
This is relevant because those countries have also emphasized the benefits gained when market-based reform initiatives are paired with standardization movements that promote the development (and publication) of school ranking tables. The chapter continues with a critical analysis of the single case study approach that was used to conduct this investigation. Here a focus is placed on the extent to which this particular methodological approach could be deemed successful in this case. The chapter concludes with my personal reflections on how the Fraser Institute ranking of schools has shaped my thinking about teaching, leadership, and accountability.

CHAPTER 2: Theoretical Framework

Introduction

Chapter 2 establishes the theoretical framework for my project. It focuses on how asymmetries of power are established and promoted by examining the relationship between knowledge, discourse, and truth as that relationship is informed by a particular kind of representation—the statistical ranking of schools. A central argument I make is that in using statistical rubrics to describe the experiences of students in secondary schools, the Fraser Institute employs a particular kind of logic that not only limits the kinds of stories that can be told about schools, but as importantly shapes how they are 'seen' by the public. This is especially relevant because the Fraser Institute has become a significant force in determining how educational issues are discussed in the public realm since its school rankings were first published in 1998.

A study that questions how language is used by one group to describe the experiences of another must also have as its focus questions that relate to agency. This is an important consideration because language, knowledge, and truth are enmeshed in discourse, power, and agency. Where there is knowledge there is language. Where there is language there is discourse. Where there is discourse there are truth claims.
Where there are truth claims there is difference. Where there is difference there is power. And where there is power there is the potential for agency that can take the form of a political struggle. Without exception, political struggles are situated within a socio-political context, and to some the Fraser Institute acts as a proxy for conflict within the British Columbian context because its mission-driven agenda to "measure, study, and communicate the impact of competitive markets and government interventions on the welfare of individuals" is steeped in the controversial educational reform of privatization (The Fraser Institute, 2010a). I argue this reform initiative is communicated to the public through a ranking discourse that highlights visibility asymmetries between schools and school systems. How different schools are represented on the report card, therefore, is at the core of the Fraser Institute ranking because published accounts of 'school goodness' reflect discursive practices that are both familiar and strategic. Report cards are familiar because teachers, and schools, have traditionally communicated student achievement through reporting documents that cast student performance against a backdrop of achievement possibility.7 Report cards take on a strategic role because they are tied to the Fraser Institute's choice agenda when they are published as rankings in newspapers and online. In sorting schools according to how well students perform on compulsory standardized provincial examinations, the Fraser Institute has devised an accounting tool that has an "extraordinary impact on the life world of educators [by] establishing what is normal and what is not [and] what is necessary and what is peripheral" in the operation of schools (Pignatelli, 2002, p. 172). In this regard I agree that the Fraser Institute's published school rankings reconfigure what Andrea Brighenti calls "the epistemology of seeing" (Brighenti, 2007, p. 323).
According to Brighenti (2007) the epistemology of seeing defines fields of visibility on which human activity is perceived and judged—contextually. For many people it is through school rankings that we come to know, evaluate, and recognize what 'good' schools are according to a particular epistemology of seeing—an epistemology presented by the Fraser Institute through its school ranking discourse. By implication the rankings also highlight 'bad' schools while, at the same time, rendering invisible alternative ways that schools can be presented in the public realm. A central focus of my study will be to investigate and clarify how this representation takes place in the public domain through published school report cards.

In understanding how the field of visibility is constructed, Ball (2003) focuses on how technologies of governance can identify certain performances as being exemplary, thereby instituting a "new mode of state regulation which makes it possible to govern in an 'advanced liberal way'" (Ball, 2003, p. 215). By this logic, the ranking of school performance is associated with a culture that defines relative quality, net worth, and the value of individuals and organizations. When viewed in this context, rankings can be thought of as a governing technology through which schools can be regulated (Ball, 2003; Rowe, 2000; West & Pennell, 2000).

7 In British Columbia students receive letter grades and/or percentages in compulsory and elective subjects that comprise the Ministry of Education's Graduation Program. Additionally, compulsory Foundation Skills Assessment (FSA) results for students in grades 4 and 7 are reported to parents in three categories—numeracy, reading comprehension, and writing—as: not yet meeting (NWM), meeting (M), or exceeding (E) expectations. In many respects student report cards are perceived by the general public as being the traditional and normative way for educators to document student progress at every grade.
Governing technologies (like school rankings) exercise disciplinary power when they are introduced across the social space and insert themselves into an ever-changing web of power relations under the guise of accountability régimes. I argue that within the British Columbian context the school report card serves as a kind of governing technology that is used by the Fraser Institute, the Ministry of Education, political organizations that represent teachers, and other professional and parental groups to play the 'school accountability game' on the field of visibility. The debates about the local report card are reflected in the discursive practices through which each agent (or player) constructs their respective vision of schools. Here it would be important to understand how each contestant's strategy in playing the accountability game unfolds relative to other agents playing the same game and, moreover, how it unfolds within the broader debates and controversies that stem from the Fraser Institute publishing its annual school report card. Clarifying this problematic would help explain why the discourse that surrounds the Fraser Institute ranking of schools is polemical in nature. This is why I believe there is something essential at stake that underpins the ranking phenomenon and that warrants further investigation: To be successful in the school ranking game—and to be a successful school by making a positive difference in the diverse lives of students—are not the same thing because, generally speaking, "educators do not see students as objects, but as a potentiality that triggers the oppressed, silenced, and marginalized to come forth and be emancipated".8

Herein lies my point of entry into an analysis of the Fraser Institute ranking of secondary schools as a case study. So far the polemical debate has focused on the impact school rankings have on teacher and student morale ("BCTF responds," 2006; McDonnell, 2005; O'Connell, 2002; Proctor, 1998a).
However, very little scholarship has been devoted toward understanding how rankings operate discursively to create self-disciplining dynamics that co-opt professionals working within schools into playing by rules imposed by the Fraser Institute. This shortcoming reflects a major gap in the literature and brings to light an important element that is not discussed in the public realm—the relationship between power and discourse in the production of knowledge and truth claims about schools and school systems.

8 Dr. André Mazawi. Personal notes made during Dr. Mazawi's EDST 565 lecture, UBC, August 4, 2005.

I also argue that ranking discourses promote the Fraser Institute's position that market forces will lead to an overall improvement in the educational experience of students when two criteria are met: (1) when an interested public perceives the ranking instrument to be a legitimate way to measure the overall quality of schools, and (2) when the public perceives top-ranked private schools as model schools to be emulated by their public school counterparts. By this logic, the Fraser Institute becomes influential in driving an educational reform initiative that is principally anchored in visibility, school performativity, market forces, and school choice. This is relevant because it implies that private organizations—like the Fraser Institute—acquire and mobilize symbolic capital in ways that can influence public educational policy within the broader field of political power. It also implies that the Fraser Institute promotes its privatization agenda by using discursive practices in strategic ways to shape and manage the public field of visibility and—by extension—the public field of judgment.

Michel Foucault (1926-1984) and Pierre Bourdieu (1930-2002) have written extensively about how the state, individuals, and groups procure and leverage different forms of capital in ways that I believe are relevant to the school-ranking phenomenon.
I draw on their work to support my argument that different agents use different strategies on the field of power to shape and manage how the general public perceives secondary schools. I use some of Foucault's theoretical testimony to understand how instruments of disciplinary power operate within a ranking discourse to produce 'winning' and 'losing' schools. I argue that discourse can be thought of as a form of capital that is used by the Fraser Institute to promote its political agenda. I draw on Bourdieu's understanding of how class-based struggles are steeped in the acquisition and mobilization of different kinds of capital by competing agents playing the 'accountability game' on the field of education. Class-based distinctions, therefore, are at the heart of Bourdieu's work, and it is the division of schools by the Fraser Institute into high-, mid-, and low-ranked institutions in a journalistically mediated space that is of principal interest to me. What follows is a description of how I apply some of their work to my analysis of the school ranking phenomenon.

Foucault and School Rankings

Foucault's preoccupation was to deconstruct premeditated claims to régimes of truth. He believed that truth was born out of "multiple forms of constraint" (Foucault, 1980, p. 131). He also believed that each society had its general politics of truth. Foucault understood that politics made possible,

"the types of discourse [society] accepts and makes function as true; the mechanisms and instances that enable one to distinguish true and false statements; the means by which each is sanctioned; the techniques and procedures accorded value in the acquisition of truth; the status of those who are charged with saying what counts as true" (Foucault, 1994, p. 131).

Foucault understood that the deployment of discourse was an area where truth is manifested, expressed, sanctioned, and seized.
As such, it becomes important to problematize the epistemic and semiotic foundations of the statistical language operating in school ranking discourses through which the 'truth' is told about schools. Like any 'foreign' language, statistical discourses need first to be translated if the stories told about schools are to be critically interpreted. Part of my study will focus on this enterprise.

Like Foucault, I am also interested in problematizing truth-claims made about a matter—in this instance the ones made by the Fraser Institute about schools. And like Foucault, I am less interested in understanding what power is per se than I am in explaining how power operates within a published ranking system that identifies the 'best' and 'worst' performing schools in British Columbia. Principally, I argue that school rankings function as a particular kind of knowledge discourse that exercises disciplinary power on the public field of accountability. Foucault's thinking about discourse is relevant in this regard because he concerned himself with the relationship between speech, language, and text—core aspects of the rankings' representation—in the production of régimes of truth. He was instrumental in awakening scholars to the limitations imposed by discourse analyses relating to the structural, linguistic, and hermeneutical dimensions associated with language (Foucault, 2006). Before Foucault, intellectuals thought about taxonomic discourses that named, classified, and organized knowledge in "the theory of representation" (Foucault, 2006, p. xxv). By drawing on Foucault's work I intend to problematize how the Fraser Institute uses school rankings to name, classify, and organize schools in the practice of representation to legitimize its position that report cards, tables, and numbers speak for themselves—categorically. Foucault's work is relevant in this regard because he deconstructed claims to truth embedded in the production of knowledge, language, and discourse.
He understood "that we should not speak of the author but more definitively of the author function" (Rabinow, 1984, p. 113) in constructing an instrument like, for example, the Fraser Institute school ranking. Furthermore, Foucault believed that when authors write, "much of what they say is a product, not of their distinctive insight or ability, but the result of the language they are employing" (Gutting, 2005, p. 13). By this logic, power is operationalized through discourse as opposed to the kind of sovereign power that a single group or institute wields over another.

In his essay, The Order of Discourse, Foucault (1984) argued that "in every society the production of discourse is at once controlled, selected, organized and redistributed by a certain number of procedures whose role is to ward off its powers and dangers, to gain mastery over its chance events, to evade its ponderous, formidable materiality" (Foucault, 1984, p. 109). In many respects Foucault is describing the mechanics underpinning polemical debates because he recognizes that opposing sides develop discursive strategies to gain mastery on the field of power. In the same essay Foucault describes the principle of rarefaction whereby "none shall enter the order of discourse if he does not satisfy certain requirements or if he is not, from the outset, qualified to do so" (Foucault, 1984, p. 120). This underscores the epistemic and ontological divide between the Fraser Institute's approach and that of teachers that has so far characterized the school ranking debate, because both sides feel uniquely qualified to speak from positions of authority about what goes on within secondary schools.
I argue that the polemical debate that plays itself out in the media over the Fraser Institute's ranking of schools results from the same guiding question posed by Foucault in his preface to the English edition of The Order of Things: "What is the ground on which we are able to establish the validity of [considered] classifications with complete certainty?" (Foucault, 2006, p. xxi). This is an important consideration and it underscores why I am drawn to Foucault as one of two principal theoretical anchors for this project: He exposed the historical specificity of discourse by suggesting that discourses always functioned in relation to power; that power was everywhere; and that power was inextricably connected to truth-telling.9 Herein lies Foucault's intellectual contribution. He challenged scholars to problematize the political and social conditions necessary for the production of truth claims and related that production to knowledge itself (Foucault, 1980). In this way Foucault argued that power and knowledge directly implied one another and that—unlike universal laws of gravitation—there were no absolute truths in the social domain. This is an important insight, and one that is especially relevant in a study that problematizes school rankings devised by non-government organizations—like the Fraser Institute—because it underscores the ambiguity associated with the construction of knowledge in the production of truth. It also points to how socially constructed régimes of truth gain their legitimacy when they take the form of privileged epistemologies—ones that are steeped in data-centric notions of reality like, for example, school rankings. To focus on the ranking from Foucault's perspective, therefore, is to focus on how non-government organizations construct régimes of truth about what schools should, and should not, be. In part this project will focus on this insight and question the networks of power relations that are established by the Fraser Institute to promote school rankings in British Columbia and elsewhere. Foucault's understanding that régimes of truth were manufactured in the social realm to promote political agendas—and that not every citizen was equally served in the process—is an insight that still resonates today. He believed the key to political agency was in problematizing the relationship between knowledge, power, and discourse (Foucault, 1994, 1997, 2006; Rabinow & Rose, 2003). Foucault located power at the extremities—the place where official discourses over-asserted their authority.

9 Foucault recognized that truth, power, and knowledge were related and that—in the absence of absolute truth—knowledge and power colluded to promote prevailing truth claims that he termed régimes. Absolute truth transcended the influence of religion and politics when it was expressed through the physical sciences because such truth had a predictive element: humans could predict, for example, the rise and fall of tides or the time of the next solar eclipse. These absolute and irrefutable truths, steeped in rational epistemologies, allowed Galileo, for example, to challenge the church. Consequently, predictive epistemologies became privileged epistemologies because they were premised on irrefutable truths grounded in the collection and interpretation of data. The data spoke for itself and, in Galileo's case, the data enabled him to question the infallibility of a 17th-century pope. Foucault, on the other hand, concerned himself with the social world and perceived it as distinct from the physical world and the universal laws that governed it.
Furthermore, Foucault understood that power extended well beyond state-imposed limits when he said, "for all [its] omnipotence [the state's] apparatuses are far from being able to occupy the whole field of actual power relations…because the state can only operate on the basis of other, already-existing power relations" (Foucault, 1994, p. 77). Foucault is saying here that non-state agents can—and do—operate within the broader field of power to exert influence. He is saying that "[t]he exercise of power is not simply a relationship between 'partners,' individual or collective; it is a way in which some act on others" (Rabinow & Rose, 2003, p. 137). This implies that power is not a matter of consent, and that power is exercised in relation to existing power dynamics and enmeshed networks of connectivity between multiple agents. Herein lies a principal interest I have in studying how a non-state agent (like the Fraser Institute) positions itself in relation to already-existing, state-sanctioned power relations between the Ministry of Education, schools, teachers, and unions. I am especially interested in exploring Foucault's understanding of disciplinary power (power that disciplines) and its relationship to knowledge and discourse because I think it operates throughout the emerging power relationships that have developed since the Fraser Institute first published its secondary school report card to punish entire populations of schools. As importantly, Foucault's conceptualization of disciplinary power and its relationship to surveillance theory is something I believe warrants further consideration in thinking about schools, in general, and published school rankings, in particular. This is especially true given how the Fraser Institute uses standardized examination results to compile its annual ranking.
To that end, I intend to draw on Foucault's (1977) seminal work, Discipline and Punish: The Birth of the Prison, to show how ranking discourses reward and punish different kinds of schools. A principal argument I will make is that school rankings operationalize disciplinary power through techniques similar to those Foucault described at play in relation to surveillance in prisons. While schools and prisons are designed with decidedly different purposes in mind, I will show how school rankings can elicit institutional compliance in the same way panoptic prisons were designed to elicit prisoner compliance. At this point it is worth noting that for disciplinary power to operate within any human field, techniques for observing subjects within the field have to be established and ritualized by an authority figure. Foucault (1977) identified three distinctively modern techniques for observing subjects within a field: (1) hierarchical observations, (2) normalizing judgment, and (3) the examination. The art of discipline presupposes the exercise of discipline, and Foucault was clear on the means by which he felt disciplinary power operated:

"The exercise of discipline presupposes a mechanism that coerces by means of observation; an apparatus in which the techniques that make it possible to see induce effects of power and in which, conversely, the means of coercion make those on whom they are applied clearly visible" (Foucault, 1977, pp. 170-171).

This is an important point, and one I draw on to argue that, in British Columbia, the Fraser Institute's school ranking represents a modern technique for observing subjects because it incorporates hierarchical observations that are made about student performance. As such, it becomes possible for the Fraser Institute to manage how the public 'sees' schools because the Fraser Institute manages how student achievement data is made visible to the general public.
A second feature of modern disciplinary power is concerned with normalizing judgment. Foucault described five ways in which the normalization process operated within the régime of disciplinary power: "(1) it referred individual actions to a whole that is at once a field of comparison, (2) it differentiated individuals in terms of following an optimum toward which one must move, (3) it was measured in quantitative terms, (4) it introduced the constraint of a conformity that must be achieved, and (5) it traced the limit that would define difference in relation to all other differences" (Foucault, 1977, pp. 182-183). I argue that this normalization process is at play every time the Fraser Institute publishes its annual school ranking because school improvement is always measured in relation to an arbitrarily defined optimum score of ten—a normalizing judgment rendered by a non-elected institute that exercises disciplinary power.

As for the examination, it combines hierarchical observations with normative judgment. If the examination is to work as a technique of disciplinary power, there has to be associated with it an artifact of the exam process: a document, a paper, a product that is held up for scrutiny by someone in a position to judge based on some criteria. The exam, therefore, not only situates students in relation to a Ministry-prescribed curriculum but situates them in relation to other students and agents in the broader field of power. Foucault made specific claims about the examination and its mediation of knowledge and power. In Discipline and Punish: The Birth of the Prison he argued that examinations "transform the economy of visibility into the exercise of power"; "introduce individuality into the field of documentation"; and finally, that "examinations (surrounded by all its documentary techniques) reduce an individual to a case" (Foucault, 1977, pp. 187-191).
If disciplinary power has a functional dimension, Foucault recognized that it also has a structural one. In writing about the Panopticon, Foucault had this to say about the effect of disciplinary power: "Inspection functions ceaselessly. The gaze is everywhere" (Foucault, 1977, p. 195). I believe this conceptualization of power is especially relevant to a study that has as its focus the institutional practice of school rankings because the effect of school surveillance is made permanent by students writing compulsory, Ministry-set, standardized exams, the results of which are published in newspapers and online. "The art of punishing, then, must rest on a whole technology of representation" (Foucault, 1977, p. 104). The technology associated with exam setting and exam marking, therefore, transforms the theory of representation into the practice of representation—a concern that stands at the centre of my project.

The Panopticon

It is a bleak comparison, but a principal argument I make in my analysis of the Fraser Institute's school ranking is that it functions very much like Jeremy Bentham's 18th-century proposal for a model prison. The all-seeing Panopticon was designed to surveil inmates 24 hours a day in a cost-efficient way. An essential architectural function of Bentham's prison was that a few "overseers" could effectively monitor and scrutinize the behaviour of prisoners (Foucault, 1980, p. 155). This omnipresent functionality was achieved through structural means: an all-seeing tower was positioned in the central courtyard of the panoptic prison. Consequently, inmates would never know when guards stationed within the tower were observing them. This speaks to the powerful relationship between structure and agency because, as Foucault noted in Discipline and Punish: The Birth of the Prison, "visibility is a trap that assumes the automatic functioning of power" (Foucault, 1995, p. 200).
Foucault described the effect panoptic architecture had on human behaviour:

"Hence the major effect of the Panopticon: to induce in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power. That this architectural apparatus should be a machine for creating and sustaining a power relation independent of the person who exercises it; in short, that the inmates should be caught up in a power situation of which they are themselves the bearers" (Foucault, 1977, p. 201).

In his 21st-century extension of the panoptic prison, Thomas Mathiesen (2006) writes about the need for panoptic surveillance mechanisms to be augmented by the recognition of synoptic surveillance mechanisms. "Synopticism involves the ability of a large group of people to scrutinize the actions of a few individuals" (Haggerty & Ericson, 2006, p. 28). It is the opposite of panopticism, whereby a few prison guards scrutinize the actions of many prisoners. Synopticism, therefore, is a function of contemporary mass media because the detailed actions of groups are made public through newspapers, television, and Internet accounts in the new politic of visibility. Foucault's central argument—that panopticism was an essential component of disciplinary power because it contributed significantly to its production as a mechanism (or instrument) of power—is something that I feel may be operating within published school rankings, synoptically. Earl (1999) describes the economic relationship between test-taking, surveillance, and scarce resources:

"External tests and examinations have always existed in schools, with a clear and singular purpose: making decisions about the educational status of individual students. They have been seen as a fair way to identify the best candidates for scarce resources, and they have been the vehicle for directing students into various programs or into the world of work" (Earl, 1999, p. 4).
I maintain that compulsory provincial exams and school ranking systems that are advocated by neoconservatives as essential ways to improve schools serve the same principal function as prison towers, because both instruments operate as cost-effective technologies for governing scarce resources that make possible the surveillance of prisoners and teachers alike. In the same way, therefore, that the state is able to reduce the operating costs associated with prison reform by employing a few guards to view many inmates, so too does the ranking of schools have the potential to reduce the costs of, and resistance to, educational reform by employing technologies of governance that "narrowly calculate and assess student learning, teacher work, [and] school effectiveness" (Pignatelli, 2002, p. 157).

Systems of Accountability

Pignatelli (2002) suggests that educators are increasingly subject to educational reform initiatives that are "marked by evermore centrally designed and monitored systems of accountability" (Pignatelli, 2002, p. 157). And just as the Panopticon created a power relation between the 'object' and the 'subject' in ways that kept the 'subject' in a permanent state of visibility, so too can published school rankings create an analogous power relationship—a relationship in which the subjects become complicit in their own subjugation. For example, Webb (2006) identified how teachers can "generate performances of their work in order to satisfy accountability demands," which he termed "choreographed performances" (Webb, 2006, pp. 201-202).
I argue that it is not just educators who are targeted by accountability systems; other agents who participate in the educational project—students, parents, school trustees, politicians, and political organizations like, for example, the British Columbia Teachers' Federation (BCTF)—are also targeted, in ways that underscore and highlight a network of power relationships I am interested in problematizing. In an unpublished doctoral study, Kuchapski (2001) identifies three key principles that accountability systems seem to have in common: (1) disclosure, (2) transparency, and (3) redress. My intent is to link these accountability markers with Foucault's description of how disciplinary power operates through distinctly modern techniques to "monitor and scrutinize individuals at every location in the social hierarchy in the new politic of surveillance and governmentality" (Haggerty & Ericson, 2006, p. 6). In large measure this politic is made possible because of the ubiquitous presence of the media, which has the capacity not only to shape public opinion but, as importantly, to create a reality effect in the public's mind.

Disclosure

Although fundamental to the idea of accountability, disclosure is a problematic concept because it must be balanced with a respect for an individual's right to privacy (Kuchapski, 2001). For the purposes of this analysis disclosure pertains to the sharing of information about student achievement by the Ministry of Education. The Ministry uses two large-scale assessments to gather information about student achievement at five different grades: (1) Foundation Skills Assessments (FSAs) in Grades 4 and 7, and (2) standardized provincial exams in Grades 10, 11, and 12.10 The collection and analysis of information about people in this way in order to govern their activities is fundamental to the new politic of visibility.
Consider what Haggerty and Ericson (2006) have to say about data-gathering as an important dimension of surveillance technology:

"Surveillance technologies…operate through processes of disassembling and reassembling. People are broken down into a series of discrete informational flows, which are stabilized and captured according to pre-established classification criteria. They are then transported to centralized locations to be reassembled and combined in ways that serve institutional agendas" (Haggerty & Ericson, 2006, p. 4).

I argue that this understanding of surveillance theory is fundamentally no different from how data is gathered about students in British Columbia. The Ministry discloses individual student results to parents and school administrators with one hand (disassembling data) while it reassembles (repackages) the collective experience of entire groups of students for publication in provincial newspapers and online, within the broader field of power.

Transparency

If corruption is symbolized by darkness and secrecy, transparency serves as its polar-opposite accountability marker. Visibility seems to be synonymous with transparency, as evidenced by multiple dictionary definitions that reference light, clarity, and openness in defining the term. When school rankings are made transparent to the public, agents understand how information is used by the Fraser Institute to compile its annual report card. Transparency limits the possibility that people, organizations, and corporations can misrepresent their respective positions on the field of power. Enron is an example of what can go wrong when corporate accounting practices are non-transparent. Enron collapsed because its senior accounting team consistently overstated revenues and underreported liabilities in a cloak of secrecy.

10 The compulsory standardized provincial exams that all British Columbia students have written since 2005 are: English 10, Math 10, Science 10, Humanities 11, and English 12.
Transparent due process reduces the possibility that these kinds of willful misdeeds occur within organizations. It is important that I contextualize transparency within a business discourse because it speaks to a particular accounting phenomenon called the audit, which I believe has become entrenched within school culture. Auditing is made possible to the extent that complex human behaviour within organizations is reduced to objective measures that, in turn, become entries on a spreadsheet. Auditing, therefore, may be thought of as a particular kind of surveillance tool (technology) that promotes a particular régime of truth. In the realm of business, spreadsheets can be thought of as a kind of numerical text that, like any hermeneutic, is subject to interpretation. As importantly, however, the audit as a surveillance tool has found a home in schools:

"Originally focused on financial criteria, auditing now encompasses various efforts to render institutions more transparent and more accountable. This quest for visibility through surveillance has come at a cost. Auditing disproportionately values criteria that are amenable to being audited, often to the detriment of other outcomes that are less easy to measure. For example, standardized test scores in education are prominent auditing criteria that are only loosely connected to the diverse goals and accomplishments of schools" (Haggerty & Ericson, 2006, p. 7).

In this context, transparency can be considered a powerful condensation symbol within educational policy because it serves an anti-corruptive (or counter-corruptive) function in the public's mind. Furthermore, the very real limitations embedded within auditing as a surveillance tool are effectively diminished because auditing criteria may serve to "distort organizational mandates, as the phenomena being measured is maximized at the expense of other ends" (Haggerty & Ericson, 2006, p. 7).
It is precisely for this reason that critics object to school ranking instruments that negate, devalue, and/or ignore important aspects of school culture that matter in the lives of students, teachers, and parents—things like: How approachable do students find their teachers? To what extent are parents engaged in their respective school communities? And what kinds of curricular and extra-curricular opportunities, which help connect students to each other and to their teachers beyond the classroom, do students have at their schools? These aspects of school life are most commonly 'felt' by members of a school's community, but they are equally important because they contribute to the overall ethos of any 'good' school.

Redress

Redress operates where disclosure and transparency intersect. It represents the accountability marker that functions to remedy, or set right, an undesirable or unfair situation. Redress may be thought of within the greater context of emancipatory discourses that percolate within the public space. Redress raises the question: What should we do and how should we do it? In determining what is "good," redress underscores a fundamental belief that underpins critical social theory at its core—not only that society can be engineered and arranged, but that society should be engineered and rearranged. But engineered and arranged for whom, and with what purpose in mind? The American civil rights movement—within which the United States Supreme Court ruled in 1954 that it was unconstitutional for public schools to provide 'separate but equal' educational opportunities for black students—speaks to the power of redress in reengineering an aspect of American society. The Supreme Court's decision was made possible, in part, because the educational experience of black students was demonstrated to be significantly compromised compared to that of their white peers.
In disclosing the inequities of a two-tiered public educational system made transparent in a court of law, redress was made possible—at least in principle. Herein lies the potential 'good' that can result from surveillance in the new politic of visibility. In making visible the diminished educational opportunities that black students experienced systemically decades ago, the political climate and will of a nation were altered in a way that opened doors of opportunity for students of colour where they might otherwise have remained closed. In this example disclosure, transparency, and redress changed the way black students came to think about themselves. It also changed the way white students came to think about black students. This warrants serious consideration in the context of school rankings because, I argue, there exists within them an opportunity for different kinds of stories to be told about different kinds of schools.

While Foucault's focus helps explain how discourses shape and define social relations, it is important to note that he is concerned much less with the material, non-material, and symbolic distribution of capital that makes playing the accountability and ranking game easier for some schools than others (Bourdieu, 1985; Callewaert, 2006; Foucault, 1977). Herein lies the limitation of applying Foucault's approach—exclusively—to an analysis of the Fraser Institute ranking of schools: The social field of schooling is defined not just by discourse but also by the material and symbolic aspects of politics. Moreover, discourse does not cause or explain human agency in isolation from the material context of political action. Discourse holds meaning in relation to the cultural and social fields that it encounters. So while Foucault's principal interest was in understanding the relationship of individuals to society as forms of discourse, Bourdieu's focus was in understanding this relationship in terms of social practice.
Bourdieu and School Rankings

Bourdieu was also interested in deconstructing social realities, but he approached the problematic differently. He believed that the nexus of power resided not so much in discursive practices per se as in the amount of social, political, cultural, and symbolic capital that agents had inherited, accumulated, and/or mobilized in playing a game on the field of power (Bourdieu, 1985). I am drawing on Bourdieu's work because, in proposing the concepts of habitus and field, he brings to bear on this project an epistemological bridge between discourse and power, structure and agency. This is an essential theoretical component of my study because I argue that habitus gets expressed through the discursive practices employed by the various agents active in the field of power. I also argue that the Fraser Institute ranking reflects class-based distinctions between schools that have always existed. Classes, understood from the objectivist point of view, are categories of people who occupy similar positions within a field because they are similar to each other: "The closer the positions are, the more likely is the participation of their occupants in a shared habitus, the possibility of their constitution as a social group through political struggle, and their collective recognition of their identity as distinct from other groups or classes" (Jenkins, 2002, p. 88).

Habitus is the product of individual and collective history; a history steeped in traditions and internalized social conditioning. "When habitus encounters a social world of which it is the product, it finds itself 'as a fish in water'; it does not feel the weight of the water and takes the world around itself for granted" (Brown & Szeman, 2000, p. 14). Habitus "characterizes the reoccurring patterns of class outlook—the beliefs, values, conduct, speech, dress, and manners—which are inculcated by everyday experiences within the family, the peer group and the school.
Implying habit, or unthinking-ness in actions, the habitus operates below the level of calculation and consciousness, underlying and conditioning and orienting practices by providing individuals with a sense of how to act and respond in the course of their daily lives" (Mills & Gale, 2007, p. 436). Habitus brings into focus the subjective dimension of Bourdieu's social theory because it transcends "determinism and freedom, conditioning and creativity, consciousness and unconscious, or the individual and society" (Bourdieu, 1990, p. 54). Habitus shapes understandings, behaviour, and outlooks, but it does not define them. Habitus becomes active in relation to fields because "it provides the connection between agents and practices" (Rawolle & Lingard, 2008, p. 731).

Fields are socially constructed areas defined by human activity. They are a configuration of [objective] relations that include political organizations, public schools, private schools, boarding schools, day schools, single-sex schools, provincial statutes, and think tanks—to name a few. Bourdieu theorized that different fields had their own structures, interests, and preferences according to which the 'rules of the game' were played. In this way, fields can be thought of as a kind of social arena within which struggles take place between agents steeped in different habitus. And just as there are winners and losers on the soccer field, so too are there winners and losers in the ranking field of school-wide accountability. In many respects fields are defined by what is at stake within them. For example, in the field of education, Bourdieu would say that intellectual distinction, economic prosperity, self-esteem, and the emancipatory potential for redress are at stake. In the field of politics, power is at stake; in the acting (theatre) field it might be fame, and so on.
Social spaces consist of a number of overlapping, autonomous, and interconnected fields that operate interdependently but with their own logics of practice. Bourdieu recognized that relational power between agents competing for limited resources on various fields resulted in strategies being adopted by the agents themselves. These strategies help tip the balance of power in ways that promote the interests of some agents while simultaneously disadvantaging the interests of others. Bourdieu's description of relational power playing itself out in the field of journalism is especially relevant in a study about school rankings that are published in provincial newspapers:

"Constant, permanent relationships of inequality operate inside this [journalistic] space, which at the same time becomes a space in which the various actors struggle for the transformation and preservation of the field. All the individuals in the universe bring to the competition all the (relative) power at their disposal. It is this power that defines their position in the field and, as a result, their strategies" (Bourdieu, 1998, p. 40).

This is an important point for my study because in emphasizing the relationship between power and how power is operationalized, Bourdieu clarifies how agents adopt strategies to win the game being played on any given field. Bourdieu realized what was at stake in the struggle of the disempowered when the media was involved in telling their stories. In writing specifically about televised imagery, Bourdieu stated: "At stake today in local as well as global political struggles is the capacity to impose a way of seeing the world, of making people wear 'glasses' that force them to see the world divided up in certain ways" (Bourdieu, 1998, p. 22). This is an important consideration and one that I intend to explore in depth throughout my analysis of school rankings in British Columbia.
A central argument I make is that by imposing its way of seeing schools through the media, the Fraser Institute effectively makes class-based distinctions between schools that are disconnected from what really goes on inside them and that primarily reflect the ways symbolic capital is unequally distributed throughout the educational system. Class-based distinctions, therefore, are at the heart of Bourdieu's work, and it is the division of schools by the Fraser Institute into high-, mid-, and low-ranked institutions in a journalistic, mediatised space that is of principal interest to me. I argue that media (principally newspapers) play an important role in this regard because they shape not only how we understand-self in relation to the plurality of other, but how we experience-self in relation to the plurality of other. I refract these theoretical insights through Bourdieu's lens to examine how different agents—for example, the Fraser Institute, the BCTF, the Ministry of Education, teachers, parents, and journalists—mobilize various resources to advance their relative positions over the field of education within the context of an ever-changing field of power. The strategic positioning and repositioning of agents in this way shapes how school communities are viewed because the struggle takes place—in part—as a public spectacle in the media. Bourdieu (1998) also talked about the relationship between journalism and the capacity it has to create a reality effect in the representation of the 'Other'. He writes: "the simple report, the very fact of reporting, of putting on record as a reporter, always implies a social construction of reality that can mobilize (or demobilize) individuals and groups" (Bourdieu, 1998, p. 22). I argue that different groups compete for different media representations that promote different reality effects.
In my analysis of the school-ranking rubric I explore how different agents engage the media about its characterization of schools in the practice of representation. While divisions make possible the mobilization of groups that “can exert pressure to obtain privileges, and so forth” (Bourdieu, 1998, p. 22), divisions also make possible the imposition of socially constructed dominant views by one power-wielding group over another—for example, the Fraser Institute imposing its view on how secondary schools should be ‘seen’. By this logic the media’s reality effect becomes a form of capital that agents engaged in the school accountability debate mobilize to promote their political agendas. How groups mobilize the media to leverage reality effects on the field of judgment becomes an important determinant of their location on the field of power.

Another way to think about fields is as a social space composed of multi-faceted, interdependent, context-dependent fields of human activity in which political struggles play themselves out between teams. For Bourdieu, it is necessary to understand the relationship of the field in question to the field of power because it speaks to the issue of legitimizing the game being played and the efforts exerted to that end. Furthermore, Bourdieu argued that it was not only essential to analyze the field-play of the various teams involved in the game itself; as importantly, his methodological approach made possible the analysis of how team strategies affected the game’s outcome. In other words, Bourdieu was interested not only in who played the game, but also in how they played it: Was every team equally adept at playing? Did everybody understand the rules? Were all teams equally prepared to play? Who was benched for the duration of the game, and why? And were the officials refereeing unbiased in calling the game?
Team players could be considered ‘agents’ who collectively implement game-specific strategies concerned with preserving (or improving) their relative positions within the field of power according to logics of practice. For Bourdieu there were two principal logics under which agents negotiated fields and engaged in practice. Bourdieu termed these logics ‘practical’ and ‘reflexive’ (Schirato & Webb, 2002). Practical logic refers to a feel for how the game is played. Agents engaged on the field must know how the game is played “with respect to the various discourses, genres, capital, written and unwritten rules, values and imperatives that inform and determine agents’ practices” (Schirato & Webb, 2002, p. 256). By comparison, reflexive knowledge “is an extension and development of this practical sense away from habituated practice to a more aware and evaluative relation to one’s self and one’s context” (Schirato & Webb, 2002, p. 255). Embedded within reflexive knowledge is the concept of strategy: agents can learn from the game in ways that allow them to develop and implement new strategies that affect the outcome of the game. Simply put, agents play the game to the extent they understand and abide by the rules of the game (practical logic of practice) and to the extent they can change the rules by which the game itself is played in the deployment of strategy (reflexive logic of practice).

As importantly, however, Bourdieu was interested in knowing what makes the game go on, and why players take the game so seriously. When agents come to “accept the game of the field on its own terms unquestioningly” simply because they are caught up in, and by, the game itself, they are enmeshed in a condition he called ‘illusio’—a term that describes a tacit acceptance by agents on the field that “playing is worth the effort” (Schirato & Webb, 2002, p. 256). ‘Illusio’ is like an ontological spell that is cast on agents engaged in any game on any field.
‘Illusio’ gives agents the motivation to play the game and ‘illusio’ drives their actions. Consider what Bourdieu said about how competing agents acquire social, cultural, and political power when cast under the spell of ‘illusio’:

“It becomes clear why one of the elementary forms of political power…consisted in the quasi-magical power to name and to make-exist by virtue of naming. The capacity to make entities exist in the explicit state, to publish, make public (i.e., render objectified, visible, and even official) what had not previously attained objective and collective existence and had therefore remained in the state of individual or serial existence…represents a formidable social power, the power to make groups by making the common sense, the explicit consensus, of the whole group. In fact, this categorization, i.e., of making-explicit and of classification, is performed incessantly, at every moment of ordinary existence, in the struggles in which agents clash over the meaning of the social world and their position within it, the meaning of their social identity, through all the forms of benediction or malediction, eulogy, praise, congratulations, compliments, or insults, reproaches, criticisms, accusations, slanders, etc. It is no accident that the verb kategoresthai, which gives us our ‘categories’ and ‘categoremes’, means to accuse publicly” (Bourdieu, 1985, p. 729).

What is at stake in the struggle for competing agents, therefore, is to establish a prevailing logic of practice through which the school accountability game is played. I argue that the Fraser Institute accumulates capital in the field of power because it has changed the rules of the accountability game by importing and consolidating its own reflexive logic of practice.
To change the rules of the game, therefore, agents accumulate, mobilize, and leverage capital in ways that promote their respective (reflexive) logic of practice, which—in the habitus of the Fraser Institute—is anchored in the discourse bounded by school rankings, privatization, and market-driven reform initiatives. Understanding Bourdieu’s theoretical approach to the reproduction of schools by showing how both social and political forces get imported into the educational system is an important aspect of my project. Additionally, I show how the Fraser Institute engages in building networks and coalitions over the field of power by aligning itself with other institutions that share the Fraser Institute’s base habitus. In this regard I am interested in unpacking the strategies different agents use on the field of power to build networks of influence with the goal of mobilizing capital and shaping the educational field. Bourdieu recognized that capital imbalances between groups resulted in power imbalances between groups. He felt that “differences in scholastic outcomes could be explained by understanding differences in social provenance—especially when the culture of pupils and their backgrounds meshed or clashed with the dominant culture of educational institutions” (Grenfell, 2004, p. 58). It is, therefore, important that Bourdieu’s approach to power be brought to bear on this project because it strikes a balance between an analysis that focuses exclusively on discourses and an analysis that focuses exclusively on resources, as those forces play themselves out over the larger field of power. Simply put, an analysis that problematizes the Fraser Institute’s ranking solely from a discursive perspective sees schools (and the communities that inhabit them as social institutions) as text and language.
Bourdieu’s approach contributes to my analysis by focusing on the contextual dynamics relating to capital imbalances that clearly exist between individual students and ordered groups of schools as they move over the field of power politics. Bourdieu’s project attempts to articulate a methodological approach that is able to capture the ability of agents to mobilize diverse forms of capital to further their position on the field of power. In the case under consideration I draw on Bourdieu’s work to understand how school rankings operate within the wider realm of power and the dynamics associated with the mobilization of capital among contending groups. I am interested in exploring the possibility that political formations—advocacy think tanks, research centres, professional organizations, and the like—compete across the field of power in ways that seek to force a particular vision on schools. Their vision is imposed on the public by mobilizing various forms of cultural, symbolic, and political capital, which are leveraged to either promote and legitimize or discount and undermine the introduction (and continuance) of school rankings into the educational system. In sum, the ongoing and successful reproduction of relationships of domination and subjugation lies at the heart of Bourdieu's social theory. These relationships are informed by how the ‘accountability game’ in British Columbia is played out between competing agents involved in political struggles over the institution of schooling. I argue that the debates over the Fraser Institute’s ranking are part of a larger struggle over the role and function of schools in particular, and rankings in general. In that sense, how different social and political groups in British Columbia (and beyond) coalesce in the current struggle over education and schooling is conceived as part of the broader reconfiguration of the field of politics currently taking place in British Columbia and other jurisdictions.
An examination of the school ranking phenomenon through Foucault and Bourdieu makes possible not only an analysis of power that focuses on the intersection of shifting discourses within a shifting accountability field, but also the unpacking of the strategies through which various actors/agents mobilize different forms of capital on the field of education to enhance their respective positions.

Limitations of Foucault and Bourdieu in Explaining School Rankings

While Foucault investigated the discursive techniques through which power operates to name, blame, and shame ‘Other’, Bourdieu’s focus was on describing the sociocultural mechanisms by which power produces (and reproduces) class-based struggles. Clearly both scholars have something relevant to contribute in problematizing prevailing truth claims that surround the secondary school ranking debate; however, their respective epistemological approaches also invite critique. Many scholars have taken issue with Bourdieu’s theory. On the one hand Bourdieu is saying there exists a possibility for agents to acquire new skills and apply different (winning) strategies that could result in different (emancipatory) outcomes. On the other hand, he is saying that “habitus excludes ideas like ‘self’, ‘choice’, and ‘action’ by virtue of its emphasis on practices arising from the group’s relation to the collective and to its own repeating history” (Brown & Szeman, 2000, p. 20). Jenkins (2002) is especially critical. He characterizes Bourdieu’s élan vital as persistently producing “deterministic models of social process” (Jenkins, 2002, p. 175).

I argue that there exists within Bourdieu’s “conceptual triad [of] practice, habitus, and field” the emancipatory possibility of redress (Rawolle & Lingard, 2008, p. 3). I believe that Bourdieu’s approach leaves open the door to human agency because he understood that it made no sense to speak of a highly structured, deterministic, social space in absolute and definitive terms.
His theoretical accountings factor prominently in understanding how individuals play any sociopolitical game, and Bourdieu did not accept that social practice could be understood solely in terms of individual decision-making. Nor did he believe that group life could be understood as the aggregate of individual behaviour. For Bourdieu, marginalized agents could enhance their position when they acquired enough capital to implement winning strategies on political fields. My intent is to show how competing agents involved in the school-wide accountability struggle debate, accumulate, and leverage capital over the course of time. By analyzing these strategies my aim is to show that story-telling and coalition building through capital mobilization stand at the core of the school ranking debate, thus helping me bring the works of both Foucault and Bourdieu to bear on framing this study. What differs between agents is how they mobilize different forms of capital across the field of power in ways that leverage their respective discursive practices to promote their respective agendas.

Finally, it is not surprising that published rankings that cast schools as ‘best’ and ‘worst’ according to an imposed reflexive logic of practice invite agents involved in the ranking game to respond critically. The epistemology of seeing schools through a statistical lens that is manufactured in this way engages agents on the broader field of power because they are made visible. Here, then, is an opportunity for the voices of marginalized students to be heard in relation to discursive practice. Embedded within Bourdieu’s sociological theory, therefore, is what Mills and Gale (2007) describe as the “revolutionary potential of agents” (Mills & Gale, 2007, p. 437).
The revolutionary potential of agents is made possible in the new politics of visibility because it highlights the uneven playing field on which different kinds of schools compete for, and leverage, capital. This is something I feel warrants serious consideration because an uneven playing field defines what strategies agents employ on the field of power to promote their respective agendas. Foucault’s epistemic grounding—that discourse and power are enmeshed—fails to recognize that discursive practices become active in relation to multi-layered, complex, intersecting social fields. Callewaert (2006) described the essential difference between Foucault (the philosopher) and Bourdieu (the sociologist) when he noted:

“Although [Foucault] wrote thousands of very innovative pages on power, he never wrote about power as a social activity in action. He wrote only very marginally about forms of exercise of power, or about power as an aspect of discourse” (Callewaert, 2006, pp. 90-91).

Callewaert’s position was well reflected in the way Bourdieu himself critiqued Foucault:

“[Foucault] explicitly refuses to search outside the ‘field of discourse’ for the principle, which would elucidate discourses within it. He rejects…the endeavour to find in the ‘field of polemics’ or in the ‘divergences of interests or mental habits of individuals’…the explanatory principle of what happens in the ‘field of strategic possibilities’” (Bourdieu, 1992, pp. 195-206).

I argue that an analytical approach that weaves together Bourdieu and Foucault would redress some of the limitations expressed above. The approach explains how shifting discourses may be leveraged as forms of capital mobilization on a shifting field of power that promotes political agendas through context-specific action strategies. The next section outlines the guiding principles of an integrative approach that I have proposed, which builds on the theoretical insights provided by Foucault and Bourdieu.
I use this approach to explain how the Fraser Institute has effectively managed to promote its school ranking agenda, not only within British Columbia, but throughout Canada as well.

An Integrative Approach of Foucault and Bourdieu

To illustrate the shifting configurations of complex alliances, political forces, and strategies that are at play between the Fraser Institute and other competing agents in the broader field of power, I propose an approach that links Foucault’s theoretical testimony that régimes of truth are manufactured to promote political agendas through discursive practices with Bourdieu’s conceptualization of habitus, field, and capital. In the approach, different agents compete for (and/or inherit) cultural, social, symbolic, and political capital that is used to promote different agendas. This integrative approach is depicted schematically in Figure 4, but it has been conceived in three interdependent parts.

Figure 1 illustrates how Foucault thinks about knowledge, language, truth, and discourse. In this representation three overlapping circles (knowledge, language, and truth) intersect at the nexus of discursive practice—the place “where truth is both manifested and expressed” (Foucault, 2006, p. 41). The Fraser Institute uses the semiotic language of statistical rubrics, for example, to promote a particular régime of truth—a régime that is principally anchored in standardization, measurement, and performativity. So when Peter Cowley says, “If people think we have a narrow focus, give us more data”11, he is really emphasizing the value the Fraser Institute places on how the ‘objective’ language of data can be used to know something in particular about schools. Embedded in Cowley’s comment is an epistemic positioning by the Fraser Institute—for the Fraser Institute—which, in turn, informs its discursive practice.
The discursive practices used by competing agents opposed to this kind of stance—the BCTF, for example—are anchored in different truth claims because that truth is born out of a different experience, an experience that is discernibly more contextual by comparison. What the BCTF ‘knows’ about schools is different from what the Fraser Institute ‘knows’ about schools because teachers operate from different epistemic and ontological vantage points.

Figure 1: Discursive practices (DP) emerge from the conflation of knowledge (K), truth (T), and language (L)

11 Personal notes made by Michael Simmonds while attending a PDK – UBC Chapter dinner meeting at the Arbutus Club, Vancouver, April 19, 2006.

Figure 2 represents how knowledge (K), language (L), truth (T), and discursive practices (DP) are shaped by habitus, which is represented in this approach as a box containing three overlapping circles. Habitus explains why different agents experience the same accountability game in different ways. The Fraser Institute perceives school rankings as a way to promote educational reform initiatives that are principally rooted in privatization and choice (Cowley, 2003b, 2005b). The BCTF perceives school rankings as undermining the work of teachers (Clarke, 2004; Kuehn, 2002). These disparate perspectives are shaped by disparate habitus.

Figure 2: Discursive practices (DP) are shaped by habitus

Figure 3 depicts the net-capital acquisition of different kinds of symbolic capital by agents. Capital acquisition takes place in strategic ways over time. The acquisition and mobilization of capital by competing agents on the field of power is essential to winning the school-wide accountability game. The discursive practices used by agents on the field of judgment can be seen as a form of capital that is leveraged by competing sides to win the school accountability game. At stake is the public’s perception of secondary schools in British Columbia.
The figure reflects an increase in agents’ acquired net capital as an increase in the height and base of the triangle.

Figure 3: Agents acquire capital (C) on the field of power

Figure 4 illustrates how the school accountability game is played. Competing agents develop strategies they believe will result in the acquisition and mobilization of capital. In the figure the Fraser Institute is represented as ‘Agent A’, and the competing agent—the BCTF—is represented as ‘Agent B’. The sizes of the arrows pointing towards the lever are intended to reflect the relative effectiveness of the agents in developing game-winning strategies. What is key to understanding this integrative approach is that the acquisition and mobilization of symbolic capital is a complex, ongoing exercise that occurs over time. Here, capital reflects all the political, symbolic, social, and cultural capital that agents acquire (and mobilize) while playing the accountability game. I argue that discourse can be thought of as a form of capital that is leveraged by competing agents to sway public opinion about the value of school rankings. When the discursive practices used by the Fraser Institute (Agent A) prevail in promoting its agenda, the capital fulcrum shifts to the right. When the BCTF and the constellation of other like-minded political forces (Agent B) prevail in promoting their competing agenda, the capital fulcrum shifts to the left. The net effect of a shifting fulcrum (which represents the mobilization of capital in this approach) is that one agent gains ground on the field of power at the expense of the competing agent. This tips the balance of power.

Figure 4: Analyzing the school accountability game through an approach that integrates Foucault and Bourdieu

Research Questions

It is possible to formulate the following research questions in relation to the theoretical testimony presented throughout this chapter.
These questions emerge as well from the debates, struggles, and controversies underpinning the introduction and use of the Fraser Institute’s secondary school report card in British Columbia and will be addressed in Chapters 4 and 5 respectively.

Chapter 4

1. How have the statistical components of the Fraser Institute’s secondary school ranking in British Columbia changed over time in terms of their modes of statistical representation?

2. What implications do these statistical changes have for the way secondary schools come to be known by the public, and how do they shape the field of visibility through which secondary schools are viewed?

Chapter 5

1. How can agents use language to mediate relationships of power and privilege in social interactions, institutions, and bodies of knowledge? How does the naturalization of ideologies come about?

2. What particular régimes of truth are manufactured by the media about secondary schools to construct a reality effect in the public’s mind about the state of secondary school education in British Columbia?

3. How do different agents involved in the ranking debate mobilize different forms of capital on the field of power to promote their respective agendas with respect to schools?

These questions will be examined through a critical discourse analysis (CDA) of the Fraser Institute. CDA “should deal primarily with the discourse dimensions of power abuse in ways which make manifest the injustice and inequality that result from it” (van Dijk, 1993, p. 252). CDA “tries to explore how socially produced ideas and objects that populate the world were created in the first place and how they are maintained and held in place over time” (Phillips & Hardy, 2002, p. 6). It describes and explains how power abuse is enacted, reproduced, or legitimized by the talk of dominant groups and institutions (van Dijk, 1993).

Critical theory “is characterized by…a pronounced interest in critically disputing actual social realities.
Its guiding principle is an emancipatory interest in knowledge” (Alvesson & Skoldberg, 2003, p. 110). A critical theory approach to the Fraser Institute ranking of secondary schools “presupposes the idea that societal conditions are historically created and heavily influenced by the asymmetries of power and special interests, and that they can be made the subject of radical change” (Alvesson & Skoldberg, 2003, p. 110).

I use CDA and critical theory as the epistemological lenses through which to view the Fraser Institute’s ranking of secondary schools for three principal reasons: (1) they make possible the unpacking of discursive practices and technologies of governance that underpin the ranking phenomenon within, and beyond, British Columbia’s borders; (2) they bring to light the politics of power associated with agents promoting different visions for secondary schooling within the fields of visibility, judgment, and power; and (3) they provide an epistemic framework on which to build an integrative theoretical approach that explains how shifting discourses can be used as instruments of disciplinary power to acquire and mobilize capital within a shifting accountability field of judgment to create a reality effect that the private and independent school system is ‘better’ than the public school system.

CHAPTER 3: Methodology

Research Design—Case Study

In the broadest sense, a central question that underpins the methodological approach used in this study is: What is the school ranking phenomenon a case of? Given the thirteen-year monopoly the Fraser Institute has on ranking schools in British Columbia, a more nuanced and relevant central question becomes: What is the Fraser Institute ranking of schools phenomenon a case of?
This is an important distinction that I believe warrants consideration because it shifts the focus of my research away from a study about the Fraser Institute per se—an institutional case study—towards an investigation that has as its focus the case of school rankings as they are conceived, published, and promoted by the Fraser Institute. In essence, the case study at issue here is secondary school rankings (the phenomenon under investigation) and not the Fraser Institute—an advocacy think tank. This important distinction has clear methodological implications because it requires that I problematize the school ranking issue through a case study approach that accounts for the discursive, contextual, and statistical elements that frame the ranking phenomenon being studied. It also informs the kinds of research questions that will establish the methodological trajectory of this study.

Yin (2003) suggests that ‘how’ and ‘why’ questions are most appropriate for case study research when they are “being asked about a contemporary series of events, over which the investigator has little or no control” (Yin, 2003, p. 9). This kind of investigation demands that multiple sources of data be used because the phenomenon under investigation is highly contextual.

“Case study research is particularly appropriate for situations in which the examination and understanding of context is important. Multiple sources of evidence are used and the data collection techniques include document and text analysis” (Darke & Shanks, 2002, p. 113).

Like other research strategies, the case study is a way of investigating an empirical topic that not only relies on multiple sources of evidence (see Table 1), but also “investigates a contemporary phenomenon within its real-life context” (Yin, 2003, p. 13). The phenomenon of interest in this case study is secondary school rankings—the primary unit of analysis.
Yin (2003) indicates that four tests are commonly used to establish the overall quality of any empirical social research design. A good case study is strong in construct validity, internal validity, external validity, and reliability. These quality control research markers will be demonstrated in this investigation through what Yin (2003) describes as a Type 1, single-case, holistic case study design. This particular design matrix is appropriate when a single case represents the critical case in testing a well-formulated theory and when the single case is studied at two or more different points in time (Yin, 2003, pp. 39-42). Given this study’s principal focus of analyzing the Fraser Institute’s published school ranking over the past thirteen years, it is clear that this ‘critical case’ is also a longitudinal case because it is considered at two or more different points in time.

Triangulation

Yin (2003) describes the important need for case study researchers to use different sources of information as a way to ensure the investigation is valid. He metaphorically calls this ‘listening’, but Yin clearly establishes the rationale for using multiple sources of evidence in conducting robust case studies. Multiple sources of evidence develop “converging lines of inquiry, a process of triangulation” (Yin, 2003, p. 98). Triangulation is usually defined as using two or more methodologies to look at the same broad research topic (Olsen, 2002). It is generally regarded as a methodological approach that strengthens the validity of the findings obtained through a single qualitative method. “When you have really triangulated the data, the events or facts of the case study have been supported by more than a single source of evidence” (Yin, 2003, p. 99). Using both qualitative and quantitative evidence helps establish the validity of claims made in response to the research questions posed. This kind of convergence is called data triangulation.
Theoretical triangulation combines two or more different theoretical perspectives to examine the same phenomenon. They converge in this study through my use of Foucault’s and Bourdieu’s respective theoretical testimonies in the ways I described earlier.

Data Gathering

Yin (2003) indicates that evidence for case studies may come—indeed, it must come—from a variety of sources if the investigation is to satisfy the validity and reliability tests described earlier. Documents are relevant to every case study topic and include memoranda and communiqués, written reports, and newspaper clippings and other articles appearing in the mass media (including the internet). These kinds of documents are important to collect because they help establish “explicit links between the questions asked, the data collected, and the conclusions drawn” (Yin, 2003, p. 83). Documents are stable, unobtrusive, exact, and broadly elucidate the questions under investigation. Their weakness lies in an obvious reporting bias that is author-specific.

The documents used in this study come from three principal sources: the Fraser Institute, the Ministry of Education, and published print and online media reports, articles, and accounts. These documents may be considered what Smith (2001) calls “organizing texts” (Smith, 2001, p. 174). In her paper, ‘Texts and the Ontology of Organizations and Institutions’, Smith (2001) described how organizing texts mediated people’s daily lives and activities within organizations. Moreover, she described the conditions necessary for ‘texts in action’ to “co-ordinate multiple sites of people’s everyday activities” when she noted that “organizing texts must be readable as the same even though they are taken up and interpreted differently in the different settings in which it is read to the organizing system of texts that co-ordinates multiple sites of such reading and writing” (Smith, 2001, p. 174).
I will be drawing on Fraser Institute-produced school reports that describe in detail how successive ranking iterations are manufactured. This is important because the rationale is given for why some key performance indicators (KPIs) are included in the ranking rubric while others are neutralized and/or excluded. If the Fraser Institute’s school reports ‘coordinate multiple sites of people’s everyday activities’, as Smith (2001) suggests, then it must be possible to demonstrate why, when, and how people’s activity is coordinated by a school ranking instrument that exerts some degree of control.

Direct observation can run the gamut from casual to formal data collection and includes observations made at meetings and other public gatherings. Direct observations are useful because they cover events in real time and are highly contextual; however, issues of selectivity (what is remembered) and reflexivity (how the event proceeds because it is being observed) both factor into the data collection process. The direct observations that I have conducted are defined by the twenty-one years of professional experience that characterize my time as an educator.

Yin (2003) describes artifacts as being a “technological device, a tool or instrument, a work of art, or some other physical evidence” (Yin, 2003, p. 96). Artifacts are relevant in case study research when they assume an important component in the overall case. In this study the artifact is the essential component of the case because the physical evidence of school rankings is the Fraser Institute’s report card and associated online reports. I am arguing here that the Fraser Institute uses particular discursive strategies to promote its privatization agenda by publishing a ranking artifact called a ‘school report card’ with which readers engage.
Having established the rationale for adopting a case study approach to investigating the Fraser Institute ranking of secondary schools in British Columbia, it is important to elucidate in more specific terms how the case under investigation will be analyzed. At its core this project brings together critical social theory and critical discourse analysis to "describe, interpret, and explain the ways in which discourse constructs, becomes constructed by, represents, and becomes represented by the social world" (Rogers, Malancharuvil-Berkes, Mosley, Hui, & O'Garro Joseph, 2005, p. 366).

Critical Discourse Analysis

I need to say at the outset that I am not approaching the statistical dimension of the Fraser Institute ranking of schools as a statistician might—focused on a critique of the kinds of multivariate regression formulae used by the Fraser Institute in the construction of its school ranking rubric. Rather, I am saying that statistical rankings constitute a particular kind of discourse that is grounded in a sociopolitical context. A critical discourse analysis (CDA) of the Fraser Institute's report card on secondary schools, therefore, not only examines "the nature of social power and dominance" (van Dijk, 1993, p. 254), but also "focuses on how language as a cultural tool mediates relationships of power and privilege in social interactions, institutions, and bodies of knowledge" (Rogers, et al., 2005). Social power is based on privileged access to socially valued resources like income, position, status, group membership, education, and/or knowledge. van Dijk (1993) notes that modern power "is mostly cognitive, and enacted by persuasion, dissimulation or manipulation, among other strategic ways to change the mind of others in one's own interests" (van Dijk, 1993, p. 254). CDA is specifically interested in the deployment of power in discourse, which van Dijk (1993) calls dominance.
Dominance is seldom total, and as van Dijk (1993) points out in his paper, 'Principles of Critical Discourse Analysis', dominance may be restricted to specific domains. He very clearly establishes when dominance crosses into the domain of hegemony when he says, "if the minds of the dominated can be influenced in such a way that they accept dominance, and act in the interest of the powerful out of their own free will, we use the term hegemony. One major function of dominant discourse is precisely to manufacture such consensus, acceptance and legitimacy of dominance" (van Dijk, 1993, p. 255).

van Dijk (1993) argues that power and dominance can be institutionalized to enhance their effectiveness and can be sustained and reproduced by the media. This is an important insight because it highlights a principal argument that I intend to make through a CDA of the Fraser Institute's published ranking of secondary schools—that dominant discourses shape public opinion and "facilitate the formation of social representations" (van Dijk, 1993, p. 259). In other words, a CDA reveals how agents "enact, or otherwise 'exhibit' their power in discourse" (van Dijk, 1993, p. 259). Moreover, the actions of a "powerful group may limit the freedom and actions of others, but also influence their minds" (van Dijk, 1993, p. 254).

In his paper, 'Critical and Descriptive Goals in Discourse Analysis', Fairclough (1985) suggested that "there is a one-to-one relationship between ideological formations and discursive formations" (Fairclough, 1985, p. 751). He referred to institutions as a "speech community" (Fairclough, 1985). Speech communities determine "what can and should be said" (Fairclough, 1985, p. 751). He characterized the inseparability of 'ways of talking' and 'ways of seeing' as "ideological discursive formations (IDFs)" and indicated that IDFs were "ordered in dominance" (Fairclough, 1985, p. 751).
A feature of a "dominant IDF is the capacity to 'naturalize' ideologies by winning acceptance for them as non-ideological 'common sense'" (Fairclough, 1985, p. 752). "To 'denaturalize' them is the objective of a discourse analysis which adopts 'critical' goals. I suggest that denaturalization involves showing how social structures determine properties of discourse, and how discourse in turn determines social structures" (Fairclough, 1985, p. 739).

I am interested in denaturalizing how language is used to construct meaning within a field of judgment that has as its central feature the culture of performativity. As such, my focus is on disrupting and destabilizing the epistemic and statistical assumptions that underpin the Fraser Institute's ranking of secondary schools and which construct a one-size-fits-all school-ranking rubric. It is essential that a CDA of the statistical aspect of the Fraser Institute ranking be carried out in this way because the Fraser Institute report card is compiled entirely from quantitative data provided by different ministerial branches of provincial government. A CDA not only "involves examining the production, consumption, and reproduction of the texts [but] the analysis of sociocultural practice [as well], which includes an exploration of what is happening in a particular sociocultural framework" (Rogers, et al., 2005, p. 371). The interdependency of CDA and critical social theory is not difficult to appreciate given that "[t]he word 'discourse' comes from the Latin discursus, which means, 'to run to and fro.' That is, discourse moves back and forth between reflecting and constructing the social world. Seen in this way, language cannot be considered neutral because it is caught up in political, social, economic, religious, and cultural formations" (Rogers, et al., 2005, p. 369).
By definition a CDA of school rankings also focuses on the sociopolitical dimensions at play when a de facto policy document is produced by an advocacy think tank with clout. It will be important, therefore, to analyze policy documents that are published by the Fraser Institute with the goal of establishing the prevailing ideological stances this particular advocacy think tank promotes. This is an important consideration because in promoting school rankings the Fraser Institute also promotes its ideological position about how best to improve schools. In exploring how rankings have changed and evolved over time, a CDA makes possible an examination of how published school rankings have overexerted their authority on the accountability field by promoting neo-liberal ideologies that privilege certain kinds of schools.

Bourdieu's central theme in his analysis of education was that the "system consecrates privilege by ignoring it, by treating everybody as if they were equal when, in fact, the competitors all begin with different handicaps based on cultural endowment" (Jenkins, 2002, p. 113). School rankings, therefore, may be thought of as schemes of construction that—by their very nature—include and exclude certain kinds of schools that serve certain kinds of students. For if Bourdieu's assertion that social groups occupy similar positions within a field because they share a common habitus is operational within a school ranking discourse, then an analysis of statistical data should reveal contextual similarities and differences between schools that obtain similar overall scores. By this logic it is entirely possible that a particular kind of independent school is more likely to achieve the highest possible ranking.

I am also interested not only in analyzing how the ranking rubric has shifted and mutated over time, but in looking at how the general public engaged with published school reports through published media accounts from 1998 to 2010.
Smith (2001) describes "text-reader conversations in which, unlike real life conversations, one side of the conversation is fixed and unresponsive to the other's response" (Smith, 2001, p. 175). "In face-to-face conversations among people, the utterance-response sequence is one in which each next utterance is modified as a response to the utterance that preceded it. In text-reader conversations, one side is obstinately unmovable. However the reader takes it up, the text remains as a constant point of reference against which any particular interpretation can be checked" (Smith, 2001, p. 175).

I will be using CDA to show that people make sense of school rankings from the (private) text-reader conversations that people engage in when reading published annual rankings, and from the (public) face-to-face conversations that take place in the media and online. Furthermore, rankings could not be created without the help of technological devices (like computers) and the technologies of governance that make possible the data from which school rankings are derived in the first place. By this logic, published school rankings become physical artifacts and serve as a primary source of data for this project. School ranking tables and documents are relevant here because they exemplify what Smith (2001) calls "the textual mediation of people's activities through standardized genres" (Smith, 2001, p. 173). Textual mediation, therefore, creates artifacts that stem from "the coordinating machinery of organization and institution" (Smith, 2001, p. 174). For the purpose of this investigation textual mediation principally takes the form of school report cards that place an emphasis on key performance indicators (KPIs) and their relationship to the phenomenon of school performativity. In large measure the Fraser Institute compiles its annual secondary school report card from the average exam results that students achieve on standardized (compulsory) Ministry examinations.
These subject examinations are based on a Ministry prescribed curriculum and are carried out within schools across the province. Foucault believed that "the age of the 'examining' school marked the beginnings of a pedagogy that function[ed] as a science" (Rabinow, 1984, p. 198). He "sought to understand the history and evolution of constructs that were considered natural…and how such constructs are a product of power/knowledge relationships" (Rogers, et al., 2005, p. 370). It is essential, therefore, that a CDA of school rankings be made with the goal of problematizing the statistical techniques used by the Fraser Institute to manipulate the climate of public opinion because they are perceived by many to be 'normal' constructs that operate within the domain of school performativity. Table 1 lists the documents that will be analyzed in this study. Each document may be considered a discursive event that has three dimensions: (1) it is a spoken or written text; (2) it is an instance of discourse practice involving the production and interpretation of texts; and (3) it is a part of a broader sociopolitical context (Rogers, et al., 2005).
Table 1: Documents used for critical discourse analysis

Sources of Data — Collecting the Evidence

Primary Sources
- Fraser Institute Produced Documents
  - Fraser Institute Report Cards on British Columbia's Secondary Schools (1998-2010)
  - Fraser Institute Annual Reports (1998-2010)
  - Fraser Forum Magazine Articles
  - Information published on the Fraser Institute's website
- Newspaper & Magazine Articles, Editorials, & Letters to the Editor
  - As published in The Province, The Vancouver Sun, Globe & Mail, The National Post, & other regional newspapers (1998-2010)
  - BCTF Newsletters, Maclean's, & other printed news-related sources

Secondary Sources
- BC Ministry of Education Produced Documents & Reports
  - School & District Reports
- Federation of Independent Schools (FISA) generated data
- Garfield Weston Awards for Excellence in Education

Tertiary Sources
- Interviews
  - Webcasts
  - Radio and print interviews
- Personal Observations
  - Arbutus Club Dinner: Peter Cowley Guest Speaker
  - Twenty-one years' experience working as a teacher, administrator, and leader in the independent school system of British Columbia (September 1991-December 2010)

Data Sources Contextualized

It is not an overstatement to say that the most important document source for a case study that focuses on the Fraser Institute's secondary school rankings comes from the Fraser Institute itself. By analyzing thirteen years of secondary school report cards I document and explain how the Fraser Institute has effectively managed to 'naturalize' its ideological stance that it is possible to objectively determine the province's 'best' and 'worst' performing schools. By specifically "focusing attention upon the 'social institution' and upon discourses which are clearly associable with particular institutions, rather than on casual conversations", it becomes possible to answer the question: "How does the naturalization of ideologies come about?" (Fairclough, 1985, p. 747).
Fairclough (1985) believed, as I do, that social institutions were "an apparatus of verbal interaction, or an 'order of discourse'" (Fairclough, 1985, p. 749). I will be analyzing the various iterations of the Fraser Institute's secondary school report cards to show how they operate as a disciplinary 'order of discourse' to reward and penalize different kinds of schools on the field of visibility. Moreover, the analysis of the report cards in this way will document how, and when, the ranking rubric has changed over time. This is important because I argue there exists a circumstantial relationship between the province's changing political context and the Fraser Institute's changing ranking rubric. As well, the analysis will show how changing the ranking over time has significantly reduced the likelihood that public secondary schools in British Columbia could achieve top-ten-school status. Finally, secondary school report cards that have been devised (and published) by the Fraser Institute on its website since 1999 contain important information that is not otherwise published in the newspapers featuring the Fraser Institute school report cards.12

12 The kind of information that is presented in the Fraser Institute generated reports will be unpacked and problematized in sufficient detail in Chapters 4 and 5, but in general terms it relates to providing more detailed accounts of how key performance indicators are calculated; the kinds of schools not included in the ranking; relationships the Fraser Institute has developed with other like-minded organizations throughout the world; as well as key insights made by the authors of the report card that are relevant to the analysis.

Another important textual source of data includes the 'Fraser Institute Annual Reports' that have been published online from 1998-2010.
They are relevant to an investigation about school rankings because they serve to contextualize the Fraser Institute's scope of influence in shaping public health, environmental, and economic policies. As well, the annual reports identify the Board of Trustees and the Executive Advisory Board by name. This is helpful information because it documents and situates people and groups that are associated with the Fraser Institute within broader networks of power relations. The reports also contain additional information about the Fraser Institute and its membership that does not usually get reported.

The Federation of Independent Schools Association represents a cohort of private and independent schools in British Columbia that includes: the Association of Christian Schools International in British Columbia (ACSIBC), the Associate Member Group (AMG), Catholic Independent Schools (CIS),13 the Independent Schools Association of British Columbia (ISABC), and the Society of Christian Schools in British Columbia (SCBC)14 (The Federation of Independent Schools, 2010a). FISA is not unlike the Fraser Institute in that it also believes that "numbers supply quantitative evidence of a reality which exists" (The Federation of Independent Schools, 2010b). I will be drawing on FISA generated data that provide an accurate historical accounting of student enrolment in different kinds of independent schools since the Fraser Institute published its first school report card in British Columbia. This information is relevant because it will document any relationship that may exist between student enrolment patterns and the place independent schools consistently occupy as 'top' ranked schools by the Fraser Institute.

Another important source of data for this project comes from published newspaper and magazine articles, newsletters, and editorials.
I will be using them to highlight and explain how and why the polemical debate around school rankings is ongoing and highly contestable. In a paper published in Discourse & Society, van Dijk (1993) notes that "one crucial presupposition of adequate critical discourse analysis is understanding the nature of social power and dominance" (van Dijk, 1993, p. 254). He indicates that "power involves control, namely by (members of) one group over (those of) other groups" (van Dijk, 1993, p. 254). Given that the Fraser Institute ranking exerts disciplinary power over schools through a statistical discourse, it is not surprising that a counter discourse has emerged in response. The polemical debate that has defined the school ranking initiative is one that has emerged between 'the institution' and 'the client' (Fairclough, 1985, p. 749). The "client is an outsider rather than a member [of the institution] who nevertheless takes part in certain institutional interactions in accordance with norms laid down by the institution" (Fairclough, 1985, p. 749). By way of example, Fairclough (1985) identifies the physician/patient relationship as being analogous to the institution/client relationship. This pairing is not unlike the relationship school rankings have with secondary schools because—like the patient—secondary schools are complicit in their own (institutional) examination by the Fraser Institute.15 Clearly, schools cannot respond to the Fraser Institute ranking per se, but the people working within them can. An analysis of published media accounts of the school ranking phenomenon by the people who work closely with students is an essential part of this project.

Reports and documents produced by the British Columbia Ministry of Education about secondary schools constitute another important source of textual data.

13 Formerly called Catholic Public Schools (CPS).
14 Formerly called National Union of Christian Schools - District 12.
My intent is to highlight aspects of school and district reports that the Fraser Institute ignores in generating its school-ranking rubric. This is relevant because it underscores the statistical bias inherent in a ranking rubric that excludes what is arguably important data.

Radio interviews, podcasts, and published online debates make up another, tertiary, source of data. These recordings will be analyzed critically in much the same way as the textual data described above, but the fact that this information is delivered verbally and not textually serves to broaden the "discursive event[s]" (Rogers, et al., 2005, p. 371) that constitute the case being analyzed.

15 I do not mean to suggest here that secondary schools in British Columbia willingly submit to the Fraser Institute's ranking of them in the same way that willing patients submit to be examined by a physician. I am saying that secondary schools in British Columbia cannot opt out of being included in the Fraser Institute ranking and are therefore drafted into a process that many school leaders say they object to.

Finally, the personal observations I have made as an educator and leader working within the independent school system for the past twenty-one years serve as an important source of data as well. This potentially rich source of data that has inspired and motivated me to study the Fraser Institute ranking of secondary schools in the first place is also a potential liability because it illustrates the "classic tension that exists between distance and closeness in the research setting [that] is often blurred in education research" (Rogers, et al., 2005, p. 382). This point will be taken up in the next section.

Limitations of Critical Discourse Analysis

Reflexive intentions endeavor to account for the interpretative dimension of empirical research (Alvesson & Skoldberg, 2003).
That is to say, it is impossible for the researcher to remove him or herself from the phenomenon under investigation completely, and it is the responsibility of the researcher to recognize and acknowledge any positional bias (s)he may bring to the investigation. Rogers et al. (2005) note that "reflexivity is crucial in research agendas involving CDA in education research" because "education researchers are often researchers of familiar education settings…and as such, we bring with us (often successful) histories of participation in those institutions as students, teachers, and parents" (Rogers et al., 2005, p. 382). In other words, the perspective of educators who work directly with students in different educational settings informs the understandings they have about teaching and learning. Rogers et al. (2005) emphasize the need, therefore, for researchers to situate themselves within the research project.

A second limitation of my conducting a CDA of the Fraser Institute school ranking is that all of my data is limited to publicly available sources. Without exception, every document, article, report, and interview that is part of this project is also a part of the public domain. I did not conduct a single interview or collect any original data. In fact, this project unfolded without the need for an Ethics Committee to be struck at the University of British Columbia; given the very public nature of the polemical debates that surround the school ranking issue, limiting the CDA to public data in this way is, I believe, warranted.

Finally, it is important to note here that my experience as a teacher, administrator, and school leader is defined by my work in the British Columbia independent school system. As I have already mentioned, my interest in understanding the Fraser Institute ranking of secondary schools was born out of my experience working in one of Vancouver's 'elite' independent schools that—initially—did not perform well on the ranking.
An important part of my job then (as the Director of the Senior School) was to understand a ranking instrument that made York House School look 'bad' to parents, alumni, and—most importantly—prospective parents. This was especially important given that a number of (free) local public high schools outperformed York House. It is not a stretch to say that the long-term future of the school was potentially at risk if York House could not significantly improve its ranking score. I understood what it felt like to have an excellent school be reduced to a single measure, and it was—in part—my job to understand and implement whatever strategic changes were necessary to play (and win) the Fraser Institute's school ranking game of accountability. Furthermore, my professional practice as an educator is (mostly) informed by the relationships I have established and cultivated with other heads of independent schools that belong to the Independent Schools Association of British Columbia (ISABC)—a cohort defined by a group of schools that have consistently achieved 'top' ranked scores on the Fraser Institute's school report card from the beginning (see Appendix G). It could be argued that the longstanding academic, financial, and institutional success these kinds of schools enjoy can be attributed to their effectively leveraging the very neoconservative and neoliberal forces that are the subject of critique in this project.

CHAPTER 4: A Changing School Ranking Rubric

Introduction

The purpose of Chapter 4 is not only to show how the ranking rubric devised by the Fraser Institute has changed over time, but to examine how these changes have shaped the field of visibility through which secondary schools are perceived in the public domain.
Initially, the changes made to the ranking rubric reflected new key performance indicators (KPIs) the Fraser Institute felt were important to introduce to its ranking, such as subject-specific gender gap measures that compared the achievement results of boys to girls in mathematics and English (Cowley & Easton, 2001). Other changes were introduced because the ranking rubric was not immune to modifications the Ministry of Education made to its own secondary school graduation program (British Columbia Ministry of Education, 2004). As well, the recruitment of foreign ESL students to British Columbia's public school system, and changes to admission policy requirements by Canadian universities that de-emphasized the importance placed on Grade 12 examination results, altered how the Fraser Institute devised its school-ranking rubric (Cowley & Easton, 2003; McGill University, 2010; The University of British Columbia, 2009). The impact these (and other) changes had on the ranking rubric is depicted in Table 2, which documents how, and when, changes made by the Fraser Institute were incorporated into the statistical rankings to say something 'objective' about schools from 1998-2010. The table also highlights the descriptive data used by the Fraser Institute to say something 'contextual' about schools during the same period. My goal is to use the data presented in the table throughout this chapter to show how the Fraser Institute leveraged disciplinary power within its ranking discourse to tell particular kinds of stories about particular kinds of schools. It is important to note that the chapter has been organized around five key iterations16 that I believe characterize important modifications that were made by the Fraser Institute to its school-ranking rubric over its thirteen-year history.

16 Iteration #1 (1998-2000); Iteration #2 (2001-2002); Iteration #3 (2003-2006); Iteration #4 (2007); Iteration #5 (2008-2010).
These changes delineate the data points used in the analysis to address the following research questions:

1. How have the statistical components of the Fraser Institute's secondary school ranking in British Columbia changed over time in terms of their modes of statistical representation?

2. What implications do these statistical changes have for the way secondary schools come to be known by the public, and how do they shape the field of visibility through which secondary schools are viewed?

An analysis of the historical data presented will show that modifications made to the ranking's statistical formulae by the Fraser Institute rewarded certain kinds of schools in British Columbia while statistically sanctioning others. Moreover, the data interpretation demonstrates that a revised method of calculating a school's overall rating introduced by the Fraser Institute in 2001 (Iteration #2) resulted in a significant decline in the number of public schools achieving an overall ranking between 9.0 and 10.0—the highest decile score possible. This marked redistribution of British Columbia's 'best' schools has remained consistent since the first revision was made, and it has not been well documented in the mainstream press, if it has been documented at all. Finally, by drawing principally on published media accounts and the Fraser Institute's own documents, this chapter will illustrate how—in devising a statistical narrative about the state of the secondary school system in British Columbia—the Fraser Institute has leveraged discursive power to control how schools are perceived in the public realm. In so doing, I argue that the Fraser Institute has effectively managed to cast the public school system as being inferior to the private school system, which operates on competition, market forces, and parental choice. What follows is an analysis of how the Fraser Institute established the terms by which secondary schools in British Columbia were 'seen' within the public space.
The Epistemology of Seeing

Initially, the Fraser Institute was motivated to devise and publish a secondary school report card because there was "no uniform system for evaluating the performance of schools in the province" of British Columbia (Cowley, et al., 1998, p. 4). Moreover, the authors noted that no evaluative procedure was contemplated by the British Columbia Ministry of Education to determine how well the school system worked. "The only way to find out whether our schools are doing their job satisfactorily", the authors of the first school report card noted, was "to measure results in an objective and quantifiable way" (Cowley, et al., 1998, p. 4). The data-driven initiative of a school-ranking rubric resonated with the Fraser Institute's emphasis on measurability given its institutional motto, "If it matters, measure it" (Levant, 2005, p. A19). Additionally, the Fraser Institute's position that school performance could be measured with the goal of improving British Columbia's high schools was clearly articulated by its then Executive Director, Michael Walker,17 when he said, "Let's get past the notion that school performance cannot be measured. The process of continuous improvement, to which we all aspire, consists of measuring performance, making corrections to what we are doing and measuring once again to discover the next set of corrections. And so on" (Proctor, 1998d, p. A3).

It is important to note here that Walker's understanding of educational reform through continuous improvement is positioned within a specific epistemology of seeing called positivism—"a theory of knowledge which contends that what should count as knowledge can only be validated through methods of observation which are derived

17 Michael Walker was also a principal author of the Fraser Institute's first published school report card (1998) in British Columbia.
Table 2: Changing iterations of the Fraser Institute ranking rubric (1998-2010)

Iterations: #1 = 1998-2000; #2 = 2001-2002; #3 = 2003-2006; #4 = 2007; #5 = 2008-2010. Legend from the original table: sss = single-sex schools; D = descriptive; n/a = not applicable.

Weighted key performance indicators (KPIs), with their weights in the overall rating:

1. Average exam mark: 20% (Grade 12) 1998-2006; 15% (Grade 12) plus 5% (Grade 10) in 2007; 25% (Grades 10-12) 2008-2010.
2. Percentage of exams failed: 20% (Grade 12) 1998-2006; 20% (Grades 10 & 12) in 2007; 25% (Grades 10-12) 2008-2010.
3. School vs. exam mark difference: 20% 1998-2000; 10% 2001-2007 (20% for sss); 13% 2008-2010 (25% for sss).
4. Graduation rate: 20% 1998-2002; 10% 2003-2007 (20% if the composite dropout rate, later the delayed advancement rate, is 0%); 12.5% 2008-2010 (25% if the delayed advancement rate is 0%).
5. Number of exams taken per student: 20% 1998-2007; discontinued with the revised Graduation Program, 2008-2010.
6. MATH 12 gender gap: descriptive 1999-2000; 5% 2001-2007 (n/a for sss).
7. ENGLISH 12 gender gap: descriptive 1999-2000; 5% 2001-2007 (n/a for sss).
8. Composite dropout rate / delayed advancement rate (doesn't count for elites): descriptive in 2002; 10% 2003-2007; 12.5% 2008-2010 (0% if there is no composite dropout/delayed advancement rate).
9. MATH 10 gender gap: 6% 2008-2010 (n/a for sss).
10. ENGLISH 10 gender gap: 6% 2008-2010 (n/a for sss).

Descriptive (unweighted) indicators:

11. Average income: descriptive in 1998 and 2009-2010; not reported otherwise.
12. Parents' average education in years: descriptive from 1999.
13. Kind of school (public/private): descriptive from 2000.
14. Socio-economic indicator (actual vs. predicted): descriptive from 1999.
15. Grade 12 enrolment: descriptive from 1999.
16. Trend/progress indicator: descriptive 1999-2007; removed 2008-2010.
17. Subject-specific exam averages & student participation rate: descriptive 1998-1999 only.
18. % ESL students & % special needs: descriptive from 2004.
19. Sports participation rate: descriptive 2005-2006 only.
20. % French immersion: descriptive from 2007.

Table compiled from the following sources: (Cowley & Easton, 2000, 2001, 2002, 2003, 2004b, 2005, 2006, 2007, 2008, 2009; Cowley, Easton, & Thomas, 2010; Cowley, et al., 1998; Cowley, et al., 1999)

Notes to Table 2:
18 Gender report published by Cowley indicating girls outperform boys on school-issued marks but that boys outperform girls on provincial exams.
19 Value-added trend indicator for SES; gender gap reported but not counted; subject-specific exam averages reported; subject-specific participation rate reported but not counted.
20 The BIG statistical switch (new method of calculating the overall rating); gender counts; the Fraser Institute recalculated the rankings given the new gender indicator.
21 Introduction of the 'Composite Drop Out' KPI as a description. This KPI was first used in France.
22 Student cohort is "refined" to exclude international students from the ranking. This results in a re-calculation of previous ranking scores with the "revised data".
23 First time the Yukon is included in the British Columbia report card.
24 First ranking published with Grade 10 exam data.
25 Reported for public school parents only in 1998. Median income of parents sending their children to independent/private schools not included.
26 Initially reported for public school parents only.
27 Reported for all schools included in the Fraser Institute ranking. A larger positive difference would suggest that the school is effective in enabling its students to succeed regardless of their socio-economic background.
28 ENGLISH 12 (provincial exam averages compared with school).

from the example set by the physical sciences" (Sedgwick & Edgar, 2003, p. 290). This position privileges sense-making born out of data-gathering, and it serves to highlight the prevailing ideological discursive formation (IDF) the Fraser Institute used to promote its school-ranking rubric from the beginning. Moreover, Walker clearly established the initial boundaries of the accountability field when he proclaimed, "[f]or the first time, a variety of relevant and publically available data were combined to produce academic rating of public and independent schools" (Cowley, et al., 1999, p. 3). In this way the Fraser Institute staked out the accountability playing field.
They were: Ministry-collected data about public, private, and independent secondary schools throughout British Columbia, compiled according to a statistical rubric developed by the Fraser Institute. Given the five key performance indicators (KPIs) initially chosen by the Fraser Institute to rank secondary schools in British Columbia, it is essential to highlight the source of their data. In their first published 'Occasional Paper', entitled 'A Secondary Schools Report Card for British Columbia', Cowley, Easton, and Walker (1998) explain how—in the interest of transparency—the statistical manipulation of the Ministry's raw data was kept to the "very minimum" (Cowley, et al., 1998, p. 6). As well, they described how the KPIs used in the first school ranking, Iteration #1 (1998-2000), were derived from publicly accessible databases maintained by two different Ministry of Education organizational branches: (1) the School Finance and Data Management Branch, and (2) the Evaluation and Accountability Branch (Cowley, et al., 1998). The Ministry of Education used some of the information obtained from these ministerial branches to quantify student enrolment numbers, as well as to provide information to school districts about annual per-student operating grants. The Fraser Institute used some of the data it extracted from the same ministerial databases to develop its first five KPIs. It is important to problematize the sources of data used by the Fraser Institute to develop its school-ranking rubric at this juncture for two principal reasons: (1) it highlights the nature of selective data mining in the construction of statistical storytelling, and (2) it illustrates how the Fraser Institute begins to exert control on the field of power by exercising—what Foucault (1977) calls—a distinctly modern technique for observing subjects. What follows is my analysis of these two points.
In selecting the data it wanted to use to construct its school-ranking rubric the Fraser Institute made a decision about what information to include, and what information to disregard. It did not, for example, use data available from the Ministry of Social Services to construct its school-ranking rubric. Nor did it use all of the data provided by the School Finance and Data Management Branch, and the Evaluation and Accountability Branch. It did, however, recognize the statistical limitation of extracting—and using—some of the data provided from two different ministerial branches to construct its first school-ranking rubric when the authors of the first report card noted, "[b]ecause these databases were not created by the Ministry of Education for the purpose of evaluating the performance of schools, they are not entirely suited to the purpose and the indicators derived from them are far from perfect. Nevertheless, the databases include valuable information from which we have been able to extract five statistics for the initial 'Secondary Schools Report Card for BC'" (Cowley, et al., 1998, pp. 5-6). What is relevant to note is the value the Fraser Institute places on its ability to extract information it deems useful from an available source. Here, we have an example of how the Fraser Institute mines raw data that—like any raw material taken from the Earth's crust—is first processed before it becomes valuable. As well, the authors acknowledged the inherent bias contained within the KPIs when they noted in their first published report "[t]he only built-in bias is in the selection of the data itself" (Cowley, et al., 1998, p. 6). Secondly, we see in the Fraser Institute's strategy to avail itself of Ministry-acquired data the exercise of disciplinary power. Here is an example of how hierarchical observations made about student achievement are leveraged by the Fraser Institute to construct a school-ranking rubric that operates like Bentham's Panoptic prison tower.
They are similar because they both have structural dimensions that are designed to locate, fix, and observe their respective subjects. This is made possible because school rankings and panoptic prisons have surveillance at their functional core. And just as panoptic prisons have at their centre a single imposing tower from which guards could cast their omnipresent gaze on incarcerated prisoners without being seen, so too do school rankings have at their centre statistical rubrics that cast their omnipresent gaze on secondary schools and—by implication—the teachers working within them. Published school rankings, however, are different from prison towers because they have a multiplying effect. Hundreds of thousands of newspaper copies are printed daily for the public to read and—every spring—a provincial newspaper publishes the Fraser Institute's school report card. As such, every single published newspaper that contains the school ranking tables acts like a single panoptic prison tower because the public's gaze is cast on the object of scrutiny—secondary schools. The collection and analysis of data by the Fraser Institute in this way is tied directly to disclosure and the (new) politic of visibility because—like panoptic prison towers—the school accountability system is also predicated on surveillance. Notwithstanding the inherent limitations embedded—not only within the Ministry databases used to manufacture the school-ranking rubric, but also within the KPIs derived from them—the Fraser Institute published its first school report card in The Province newspaper (Cowley, et al., 1998). What follows is a descriptive and critical analysis of each of the five key iterations developed by the Fraser Institute from 1998-2010.
A number of tables and graphs appear throughout this chapter that reflect—not only how entire populations of secondary schools were impacted by the Fraser Institute's ranking rubric over time—but as importantly, how a single school was impacted by the statistical mechanics underpinning the ranking rubric.

Iteration #1 (1998-2000): Five Key Performance Indicators

There were initially five KPIs identified by the Fraser Institute to construct its inaugural school-ranking rubric. They were: (1) average exam mark, (2) percentage of exams failed, (3) school vs. exam mark difference, (4) exams taken per student, and (5) graduation rate. These KPIs are noted in Table 3 along with their relative percentage weights for both co-educational and single sex schools.

Table 3: Relative percentage weights of KPIs for iteration #1 (1998, 1999, and 2000)

Key Performance Indicator (KPI)             Co-Educational and Single Sex Schools
1. Average Exam Mark                        20%
2. Percentage of Exams Failed               20%
3. School vs. Exam Mark Difference          20%
4. Exams Taken per Student                  20%
5. Graduation Rate                          20%
TOTAL                                       100%

Descriptive Measures
6. Parents' Average Education in Years      n/a (1998); descriptive (1999, 2000)
7. Kind of School (Public or Private)       descriptive (1998-2000)
8. Grade 12 Enrolment                       n/a (1998); descriptive (1999, 2000)
9. Semiotic Trend Progress Indicators       n/a (1998); descriptive (1999, 2000)
10. Subject-specific exam averages (29)     n/a (1998, 1999); descriptive (2000)
11. Student Participation Rate              n/a (1998); descriptive (1999, 2000)
12. Gender Gap Indicator                    n/a (1998, 1999); descriptive (2000)

Table compiled from the following sources: (Cowley et al., 1998, 1999; Cowley & Easton, 2000)

What is important to note here is that descriptive measures were absent in the first year (1998) the ranking was published. As well, it is essential to bear in mind that during the first three years that the Fraser Institute published its school report card the same five KPIs were uniformly applied to all of the schools they ranked.
This meant that secondary school report cards that were published in British Columbia in 1998, 1999, and 2000 did not statistically distinguish between public, private, independent, co-educational, and single sex schools.30 In this way the statistical logic embedded within the ranking rubric itself was uniformly discerning because it was uniformly applied to all schools captured in the Fraser Institute's secondary school report card throughout the first three years that defined Iteration #1. Table 4 shows how a single Vancouver school—York House School—was depicted in the Fraser Institute's first published school ranking, as that table appeared in the Fraser Institute published document, 'A Secondary Schools Report Card for British Columbia' (Cowley, et al., 1998). KPIs 1-3 were devised by the Fraser Institute to reflect effective teaching practices (Teaching) within schools, while KPIs 4 and 5 were devised to reflect effective student counseling practices (Advising). It is evident from the table that York House School achieved an overall ranking of nine-point-zero (9.0) on the first secondary school report card published by the Fraser Institute in British Columbia.

29 The following Grade 12 provincially examinable subjects were noted in the 2000 version of the Fraser Institute school report card: English, math, biology, chemistry, geography, history, physics, and French. The participation rates associated with each subject were also noted.
30 Single sex schools are comprised of all boys or all girls. Single sex schools are de facto private and independent schools because there are no public schools in British Columbia that are also single sex schools.
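The overall ranking is simply the arithmetic mean of the school's five KPI scores, each out of 10 (the value after each slash in the published tables). A minimal Python sketch of this arithmetic (the function name is my own) reproduces the published YHS figures:

```python
def overall_rating(kpi_scores):
    """Average the five Fraser Institute KPI scores (each out of 10),
    rounded to one decimal place as in the published tables."""
    return round(sum(kpi_scores) / len(kpi_scores), 1)

# YHS 1996/97: the five KPI scores awarded were 10, 10, 5, 10, 10
print(overall_rating([10, 10, 5, 10, 10]))  # 9.0, as published
# YHS 1995/96: scores of 10, 10, 8, 9, 10
print(overall_rating([10, 10, 8, 9, 10]))   # 9.4, as published
```

Note that the raw values (an exam average of 77.9%, a failure rate of 2.4%) enter the overall rating only through their converted 10-point scores, so the conversion scale, not the raw data, does the ranking work.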
Table 4: First school ranking table published for YHS (1998)

York House School (YHS)
                 Teaching                          Advising
Year       KPI 1       KPI 2       KPI 3      KPI 4        KPI 5      Overall
1996/97    2.4 / 10    77.9 / 10   5.9 / 5    100.0 / 10   5.8 / 10   9.0
1995/96    1.2 / 10    79.5 / 10   4.3 / 8    96.2 / 9     4.8 / 10   9.4
1994/95    1.0 / 10    78.2 / 10   5.9 / 5    100.0 / 10   5.1 / 10   9.0
1993/94    1.5 / 10    79.7 / 10   4.9 / 7    100.0 / 10   5.1 / 10   9.4
1992/93    1.4 / 10    76.6 / 10   4.7 / 8    100.0 / 10   5.0 / 10   9.6

Table compiled from the following source: (Cowley, et al., 1998, p. 41).
Legend:
KPI 1 = Exams Failed / Fraser Institute Ranking
KPI 2 = Average Exam Mark / Fraser Institute Ranking
KPI 3 = Exam vs. School Mark / Fraser Institute Ranking
KPI 4 = Graduation Rate / Fraser Institute Ranking
KPI 5 = # Courses Taken / Fraser Institute Ranking
Overall = Average score of five KPIs

What follows is a descriptive and critical analysis of each of the five KPIs used during this time. As well, the descriptive measures introduced by the Fraser Institute to its school-ranking rubric will also be described and analyzed.

KPI #1: Average Exam Mark31
"For each school, the indicator is the average of the mean scores achieved by the school's students in each of the provincial examinations at all sittings during the year, weighted by the relative number of students who wrote the examination" (Cowley & Easton, 2000; Cowley, et al., 1998, 1999).

It is not uncommon for mean scores to be included in a statistical analysis of any kind, and their measure can say something meaningful about any given data set. However, average examination scores in a school setting have been shown in the literature to be directly impacted by two variables: (1) class size, and (2) the amount of out-of-class support that students get from private tutoring.
In an article about class size, student achievement, and the policy implications associated with their relationship, Odden (1990) reported on meta-analysis investigations and concluded "there was a clear and strong relationship between class size and student achievement" (Odden, 1990, p. 213). In the same paper he also reported that "research is rather consistent in showing that smaller classes have a positive impact on teachers' classroom attitude and behavior" (Odden, 1990, p. 218). Not only were teachers able to develop their lessons in more depth and move through the curriculum more quickly, but the study noted that "teachers were better able to manage their classes" (Odden, 1990, p. 218). Studies also indicated that small classes function more smoothly; that less time gets spent on discipline; and that student absences are proportionately lower (Odden, 1990). Boozer and Rouse's (2001) study on the patterns and implications of intra-school class size variation indicated that "lower class sizes appears to lead to larger test score gains" (Boozer & Rouse, 2001, p. 187).

31 Cowley & Easton (2000) reported the examination averages achieved by students on the most "popular" provincially examinable courses "so that comparisons could be made between different department's teaching effectiveness" (Cowley & Easton, 2000, p. 11). They also reported on the participation rate—the ratio for a school between the number of students writing the provincial examination in a particular subject and the number of students in grade 12. This data served a descriptive purpose only because the results did not affect a school's overall ranking. This subject-specific data did not appear in later iterations of the Fraser Institute's school report card, as it was published in British Columbia.
This kind of research is relevant to a school ranking system that compares student achievement results between public and private schools because—in general—private schools have smaller class sizes than do their public school counterparts. This distinction is promoted by many fee-paying schools as an important difference between private and public schools, and many private and independent school personnel spend considerable energy highlighting the difference to prospective parents. At York House School, for example, the largest class size was twenty students,32 and it was not uncommon for senior classes to have between twelve and sixteen students. As well, some Advanced Placement (AP) courses were offered at York House School to classes of seven to ten students. In addressing prospective families, admissions personnel working in many of Vancouver's 'top' ranked schools emphasize how difficult it is for students to fall between the cracks in schools that offer small class sizes. Another factor that has been shown to correlate positively with student achievement is the amount of out-of-school support students obtain. My experience working at York House School helped me understand that it was not uncommon for some of the school's 'top' students to receive additional (out-of-school) tutoring support in mathematics, English, and French—a phenomenon called 'shadow education' in the literature (Ireson, 2004). Students engaged in this kind of after-school, subject-specific support have been shown to outperform control students on examinations (Cohen, Kulik, & Kulik, 1982). As well, the authors' findings on the educational outcomes of tutoring indicated that tutored students developed more positive attitudes toward the subject matter covered in the tutorial program (Cohen, et al., 1982). More recently, Mischo and Haag (2002) conducted an empirical study to determine the effectiveness of private tutoring in a pre-post control-group design.
They compared the results of a group of one hundred and twenty-two students that received private tutoring over a period of nine months to a same-sized group that did not receive private tutoring. Their results showed that "[p]upils receiving paid tutoring as remedial instruction showed an improvement in school marks significantly higher than pupils without tutoring" (Mischo, 2002, p. 270).

32 In British Columbia, class sizes have been enshrined in law since 2002. There can be a maximum of 32 students in any regular class.

These findings are relevant to consider for a school-ranking rubric that includes student examination results as a KPI because they demonstrate the positive impact that small class sizes and additional after-school tutoring can have on student achievement. The presumption made by the authors of the ranking is that good examination results reflect good classroom teaching. The KPI does not make room—or account—for conditions that exist outside the classroom that positively, and negatively, affect the level of student achievement inside the classroom.

KPI #2: Percentage of Examinations Failed
"This indicator provides the rate of failure (as a percentage) in the provincial examinations.33 It was derived by dividing the sum, for each school, of all provincial examinations written where a failing grade was awarded by the total number of such examinations written by the students of that school" (Cowley & Easton, 2000; Cowley, et al., 1998, 1999).

While this index may approximate a fair measure of exam performance within schools, it is important to note that 'best' schools received top marks from 1998-2000 when less than 6.2% of the class failed. This is problematic when one considers the adverse effect very small increases in the failure rate above 6.2% have on a school's over-all ranking.
The implication here is that schools (in which 2% of the class fail) receive the same top score as schools in which 6.2% of the class fail. This scaling creates a relatively wide margin for accountability in 'best' performing schools (0-6.2%) when other schools are significantly penalized for the slightest increase in the rate of "failures" (1.37-2.61%) in the percentage of provincial exams failed (see Appendix A).

33 Provincially examinable courses during 1992-1997 included: Biology 12, Chemistry 12, Communications 12, English 12, English Literature 12, French 12, Français Langue 12, Geography 12, Geology 12, German 12, History 12, Japanese 12, Latin 12, Mandarin 12, Mathematics 12, Physics 12, Spanish 12. In the 1997/98 school year, three new courses—Technical and Professional Communications 12, Applications of Mathematics 12, and Punjabi 12—were added to the list of examinable subjects and Latin 12 was eliminated.

Finally, students who fail provincial exams could end up passing the course because a student's final mark in any provincially examinable subject was achieved in 1998-2000 by blending the school-issued mark (40%) with the provincial exam mark (60%). There is no KPI that reflects the percentage of courses failed by students, which—arguably—presents a more holistic and comprehensive picture of the student's experience in any given provincially examinable course. The Fraser Institute instead places its emphasis on the percentage of exams failed. In addition, it is not uncommon for students who may want to pursue a post-secondary discipline like Architecture, for example, to take a subject like Principles of Math 12 because their post-secondary admission requires that students successfully complete that compulsory course.
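Because the final course mark in this period blended the school-issued mark (40%) with the provincial exam mark (60%), a student could fail the exam outright yet still pass the course, which is why the distinction between exams failed and courses failed matters. A minimal sketch of that blending arithmetic, using hypothetical marks:

```python
def final_course_mark(school_mark, exam_mark):
    """Blended final mark used for provincially examinable courses,
    1998-2000: 40% school-issued mark + 60% provincial exam mark."""
    return round(0.4 * school_mark + 0.6 * exam_mark, 1)

# Hypothetical student: fails the provincial exam (45%)
# but holds a 72% school-issued mark
blended = final_course_mark(school_mark=72, exam_mark=45)
print(blended)  # 55.8 -- a failed exam, but a passed course
```

Under the Fraser Institute's KPI #2, this student still counts as a failure, even though the course outcome was a pass.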
As such, the assumption made by the Fraser Institute that Grade 12 students have the "freedom" to choose subjects they enjoy and/or are genuinely interested in does not square with students who are compelled to take required courses based on their university and post-secondary aspirations (Cowley, et al., 1999, p. 78). This is especially true for students pursuing undergraduate engineering, science, and computer science programs. As well, the assumption that all schools require students to complete prerequisite courses before taking provincially examinable subjects is not always true. Some courses, like Biology 12, for example, did not require students to take a prerequisite Biology 11 course because the two course syllabi were very different from one another. This made it possible for Grade 11 (or Grade 12) students to take Biology 12 without having taken Biology 11. Geography 12 and History 12 were two other provincially examinable courses that did not require prerequisites.

KPI #3: School vs. Exam Mark Difference
"This indicator gives the average of the absolute value of the difference between the average mark contained on the provincial examinations and the average final "school" mark—the accumulation of all the results from tests, essays, quizzes, and so on given in class—for all the provincially examinable grade 12 courses. Top marks are awarded to schools that predict how closely students' final exam marks correlate with their school-issued mark in provincially examinable subjects" (Cowley & Easton, 2000; Cowley, et al., 1998, 1999).

The Fraser Institute's rationale for including this particular KPI is that marks assigned by the school should be roughly the same as the mark achieved by the student on the provincial examination. "Thus, if a school has accurately assessed a student as consistently working at a C+ level, the student's examination result will be at a similar level" (Cowley, et al., 1998, p. 74).
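The arithmetic behind this indicator can be sketched as follows; the course marks are hypothetical, and smaller average gaps earn a school a higher Fraser Institute score:

```python
def school_vs_exam_gap(mark_pairs):
    """Mean absolute difference between school-issued and provincial
    exam marks, averaged across provincially examinable courses."""
    return sum(abs(school - exam) for school, exam in mark_pairs) / len(mark_pairs)

# Hypothetical (school mark, exam mark) pairs for three courses
pairs = [(86, 93), (75, 70), (68, 68)]
print(school_vs_exam_gap(pairs))  # 4.0
```

Because the measure uses the absolute value, a school whose marks run below the exam results (a hard-marking 86 against a 93 exam) is treated exactly the same as one whose marks run above the exam results by the same margin.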
Expecting such a direct correlation to exist between school-issued and provincial examination marks reflects a particular logic that is embraced by the Fraser Institute while concurrently discounting the possibility that different schools have different visions for how to measure student achievement. The assumption made by the Fraser Institute is that students perform on time-limited, standardized, provincial examinations in the same way they perform throughout the year under the guidance of their respective subject teachers, who are trained (and expected) to assess student progress and understanding in ways that expand the limits imposed by pen-and-paper testing. The assumption also overlooks some of the subject-specific prescribed learning outcomes (PLOs) that can't be assessed by teachers using traditional pen-and-paper tests and exams like, for example, the ability of students to work effectively in groups.34 Embedded within this KPI, therefore, is the tacit implication by the Fraser Institute that teachers artificially inflate school-issued marks. Cowley et al. (1999) noted that "in 1997/98, for instance, almost 78% of reported average school marks were higher than the corresponding average examination marks" (Cowley, et al., 1999, p. 6). This statistical observation does not make room for the fact that the vast majority of classroom teachers throughout the province know their students in ways standardized examinations cannot. It is a statistical construction that does not account for the lived experiences of teachers working with students in ways that allow them to authentically gauge and assess a student's subject-specific strengths and limitations.
As well, it can be argued that some teachers are especially discerning in awarding marks to their students in an attempt to raise the intellectual standard in the classroom. If, for example, a student receives 86% from a hard-marking teacher and 93% on the provincial exam, the school (and teacher) is penalized for preparing the student to write a stellar final exam with confidence. What else would be at stake if schools adopted the Fraser Institute's policy of assessment? Science teachers might well decide not to devote considerable class time to developing students' lab skills because lab skills are never assessed in an exam setting. The province's 'best' schools would be the ones in which educators taught to the provincial exam. Student achievement would best be measured by a series of tests, quizzes, and mid-term exams that reflected the types of questions on final examinations. These assessment strategies are not only limiting in scope but, used to exclusion, promote a particular kind of knowledge.

34 The prescribed learning outcomes (PLOs) set the learning standards for the provincial K-12 education system and form the prescribed curriculum for British Columbia. They are statements of what students are expected to know and do at the end of an indicated grade or course. Schools have the responsibility to ensure that all subject-specific PLOs are met.

KPI #4: Exams Taken per Student
"This performance indicator measures the average number of provincially examinable courses taken by students at any given school and is derived by first summing the number of students at each school who wrote provincially examinable subjects (∑x) and then dividing by the number of grade 12 students enrolled in the school (n). Average # of exams taken per student = ∑x ÷ n" (Cowley & Easton, 2000; Cowley, et al., 1998, 1999).
The assumption made by the Fraser Institute is twofold: (1) that most high school students are bound for post-secondary institutions; and (2) that the more provincially examinable subjects students take, the more opportunity they will have once they graduate. These assumptions do not apply to all secondary school students, but they are made by the Fraser Institute to gauge how effectively schools counsel their students to make good course selection choices. Many graduating students pursue careers in the trades, for which—in 1998—they were required to take a single (compulsory) Language Arts examination in Grade 12 (British Columbia Ministry of Education, 1997). Here we have an example of how the Fraser Institute's ranking rubric underestimates the strategies that some high school students leverage to plan for their future because it does not make room for continuing their educational trajectories along unconventional post-secondary paths in the arts, science, business, nursing, engineering, and education (to name but a few). Another limitation of this KPI is that it doesn't account for Grade 11 students who take Grade 12 provincially examinable subjects in their Grade 11 year. It is not uncommon for Grade 11 students attending Vancouver's 'top' ranked independent schools, for example, to take French 12, Geography 12, History 12, and/or English 12 in their Grade 11 year. This very real possibility artificially inflates the total number of provincially examinable courses taken at any given school because the total number of provincial exams taken (the numerator in the statistical equation) is really the sum of exams taken by both Grade 11 and Grade 12 students—an inflated measure. When this measure is divided by the total number of Grade 12 students enrolled at the school, the 'average number of exams taken per student' KPI becomes inflated.
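The inflation described above follows directly from the formula ∑x ÷ n: Grade 11 students who write Grade 12 exams early are counted in the numerator, while only Grade 12 enrolment appears in the denominator. A sketch with hypothetical counts:

```python
def exams_per_student(total_exams_written, grade_12_enrolment):
    """KPI #4: average number of provincial exams per student,
    computed as (all exams written) / (Grade 12 enrolment)."""
    return total_exams_written / grade_12_enrolment

# Hypothetical school: 100 Grade 12 students write 300 exams in total
print(exams_per_student(300, 100))       # 3.0
# Add 20 exams written early by Grade 11 students: the KPI rises
# even though no Grade 12 student wrote a single additional exam
print(exams_per_student(300 + 20, 100))  # 3.2
```

A denominator that counted every exam writer, rather than Grade 12 enrolment alone, would remove this distortion; the Fraser Institute's formula does not.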
Ken Denike, the then chairman of the Vancouver School Board, noted another reason why the average number of exams taken per student KPI was problematic. He questioned how the Fraser Institute made meaning of the exam data it used to rank secondary schools when he said, "excellence in some public school programs may actually result in a lower ranking. For instance, excellent International Baccalaureate, fine arts, trades and athletics programs may drag a school's ranking down because students interested in such programs take fewer courses that are tested using provincial exams, and are also less focused on those courses" (Chung, 2006, p. B1). As well, schools that have developed a specific program focus, like, for example, Langley Fine Arts School (LFAS), which provides "a comprehensive education for students, while focusing on the development of aesthetic intelligence through programmes in the Visual Arts, Literary Arts, Dance, Drama, and Music" (Langley Fine Arts School, 2010), operate from a different epistemic foundation than does the Fraser Institute with its focus on school examination data. The Fraser Institute's assumption that 'top' ranked schools have students writing (on average) at least 3.49 provincial exams35 to achieve a top-ranked score underscores a prevailing limitation of the school report card because it ignores the cultural, epistemic, and contextual dimensions of mission-driven schools that make possible different kinds of post-secondary opportunities beyond traditional college and university admission tracks. The Fraser Institute's 2000 report card indicates that students from LFAS took 2.63 provincial examinations—on average—between 1994 and 1999 (Cowley & Easton, 2000). By comparison, students from 'top' ranked Crofton House School (CHS) took—on average—5.52 provincial examinations over the same time period (Cowley & Easton, 2000).
CHS is also a mission-driven school but, unlike LFAS, it prepares its graduates for university admission into some of the most selective post-secondary institutions in North America. The possibility, of course, exists for LFAS graduates to be accepted into equally selective fine arts schools in North America like, for example, New York City's Juilliard School for the performing arts, Vancouver's Emily Carr University, or Toronto's Ryerson University. However, the criteria for admission into post-secondary fine arts programs are not the same as they would be for science, engineering, and arts degrees because student applicants submit portfolios and/or are required to audition in order to be accepted. The overall number of provincial examinations taken by students applying to these kinds of post-secondary programs is not necessarily consequential.

35 See Appendix A: Decile Range Table.

KPI #5: Graduation Rate
"This indicator compares the number of "potential" graduates enrolled in the school on September 30 with the number of students who actually graduate by the end of the same school year. Only those enrollees who are capable of graduating with their class within the current school year are included in the count of potential graduates" (Cowley & Easton, 2000; Cowley, et al., 1998, 1999).

The Fraser Institute maintains that—for the majority of students in British Columbia—the "minimum requirements for graduation are not onerous" (Cowley, et al., 1999, pp. 77-78). The authors of the ranking believe the likelihood that students will not graduate solely because they are unable to meet the intellectual demands of the curriculum is small. "Nevertheless, the graduation rate varies quite widely from school to school throughout the province…[and] there is no reason to expect these factors to influence particular schools systematically" (Cowley, et al., 1999, p. 78). Accordingly, Cowley et al.
(1999) perceive variations in the graduation rate to be an indicator of the extent to which students are being 'well counseled' in their educational choices. While having students complete the entire graduation program is the goal for most secondary schools in British Columbia, the literature is replete with studies that document the positive correlation that exists between poverty, parental education (especially the mother's) and the failure of students to complete high school (Desimone, 1999; Ensminger & Slusarcick, 1992; Haveman, Wolfe, & Spaulding, 1991). In a paper entitled, 'Childhood Events and Circumstances Influencing High School Completion', Haveman and colleagues found that "growing up in a family with more children (who compete for resources), being persistently poor and on welfare, and moving one's residence as a child have significant negative impacts on high school completion" (Haveman, Wolfe, & Spaulding, 1991, p. 133). The Fraser Institute does not statistically factor the impact these socioeconomic influences have on student retention rates into a single ranking iteration—ever.

Descriptive Indicators: Enrolment Data, Trend Indicators, and Parents' Education

Cowley, Easton, and Walker recognized the public controversy generated by their first report card when, in the introduction to the second report card, they acknowledged the "frustration, confusion, and antagonism among parents, teachers, and administrators" (Cowley et al., 1999, p. 3). This was countered by their observation that "others accepted the Report Card 1998's overall ratings as the only evidence they needed that public schools in the province were failing miserably" (Cowley et al., 1999, p. 3). Both of these remarks are important to note at this juncture because they highlight divergent epistemic positions embraced by professionals working in schools (on one hand) and the Fraser Institute and its supporters (on the other).
Embedded in the language of statistical representation is the “principle of rarefaction” (Foucault, 1984, p. 8). The principle of rarefaction describes the relationship between epistemologies and discursive practices whereby one position supplants another. Here we have an example of how the principle of rarefaction operates within school ranking discourses to supplant counter discourses made by teachers and the political organizations to which they belong. While some critics objected to the school ranking, ‘others’ accepted the ‘overall ratings’ as being the ‘only’ evidence they needed that ‘public’ schools were ‘failing’. The tacit implication here is that private schools were not failing students, because private schools outperformed public schools. Moreover, the data used by the Fraser Institute to determine a school’s overall rating was all that some people needed to see. The data spoke for itself. Notwithstanding, the authors did take into consideration the opinions voiced by critics after the 1998 report card was published and responded to seven key points that emerged from the debate about how the Fraser Institute could improve their report card. These points related to: (1) an expansion of the KPIs to include “other aspects of school performance” (Cowley et al., 1999, p. 4); (2) the focus on Grade 12 examinations;36 (3) including public and private schools in the same ranking; (4) schools being ranked on different provincial examination data;37 (5) accounting for discrepancies between school-issued marks awarded by teachers and marks obtained by students on provincial exams;38 (6) the exclusion of “certain socioeconomic characteristics of the student body” (Cowley et al., 1999, p. 6); and (7) the exclusion of “school-level changes over the study period” (Cowley et al., 1999, p. 7).

36  The Fraser Institute wanted to incorporate school-performance data derived from Grade 10 results, which at the time could only be obtained from Grade 10 Foundation Skills Assessment (FSA) data obtained from the Ministry of Education. This data was not made available to the Fraser Institute.

37  The only compulsory, provincially examinable, Grade 12 course that every British Columbia student had to take was Language Arts 12. Students could satisfy this requirement by taking English 12 or Technical and Professional Communications 12. The latter course was considered to be easier by teachers, who objected to the Communications 12 exam results being treated as statistically equal to the English 12 exam results for the purpose of school rankings. (University-bound students had to take English 12—the more challenging course.)

38  Cowley et al. (1999) expected there to be a fairly normal distribution of the difference between school- and exam-based assessments, but there was not. Their analysis of the data indicated that 78% of school-issued marks were higher than the exam marks achieved by students. This implied that teachers were inflating their school-issued grades.

As a result of the criticisms expressed after the 1998 report was published, three new variables appeared in the school ranking tables published by the Fraser Institute. These variables, however, served to provide additional (contextual) information to The Province’s readers about the schools being ranked. They were descriptive in nature, but they were not factored into the school ranking formula. They included: Grade 12 enrolment data, semiotic trend/progress indicators, and the average years of education achieved by parents. The inclusion of Grade 12 enrolment data allowed parents to gauge the overall size of the school being ranked. Large public schools, for example, like Alberni District Secondary (ranked 7.2/10), had five hundred and six (506) students enrolled in its Grade 12 class, while small independent schools, like York House (ranked 8.6/10), only had forty-three (43) Grade 12 students by comparison (Cowley, et al., 1999).
Cowley et al. (1999) reminded The Province’s readers that “the smaller the school the more caution should be used in interpreting these results” (Cowley & Easton, 1999b, p. A21). The inclusion of this kind of additional information helped parents contextualize school rankings in ways that were not possible before. The acknowledgement, however, that factors outside the school environment also had an impact on student achievement was an important one. In the introduction to its second report card, Cowley et al. (1999) recognized the impact socioeconomic factors had on student achievement when they noted:

“Research by the Fraser Institute has shown that the level of parents’ education is more closely associated with school performance than parents’ income. So, for each public school, the average years of education of the female parent (or lone parent in a single family) is reported. This statistic was derived by matching 1996 Census data from Statistics Canada with postal code enrolment data for each school. Researchers found higher levels of parental education were more closely associated with better school performance. When schools with similar parent education values record different results, it suggests that one school is more successful in taking the student’s home life into account in its teaching and counseling practices” (Cowley & Easton, 1999b, p. A21).

This acknowledgement by the Fraser Institute not only supported extensive research that suggested the same relational effect, but signaled to school ranking detractors an acquiescence, of sorts, that factors outside the teacher’s control played an important determinant role in the success of students at school. Notwithstanding, however relevant this statistic was deemed to be, it served only a descriptive purpose: The measure did not then—nor does it now—statistically factor into the school’s overall average.
Moreover, this measure was not included for independent and private school parents, but there was no explanation why.39 In their 2000 report card, Cowley and Easton (2000) used parents’ average education “as an indication of the socio-economic background of the student body” (Cowley & Easton, 2000, p. 15). This descriptive measure was reported in the school report card tables as ‘Actual rating vs. predicted (based on parents’ education)’. “A positive difference suggests that the school is effective in enabling its students to succeed regardless of their socio-economic background” (Cowley & Easton, 2000, p. 15). Here is an example of two competing ideological discursive formations (IDFs) overlapping in a space that moves beyond polemical discourse because we see the Fraser Institute attempting to quantitatively account for a qualitative measure that teachers say matters—a student’s home experience. And yet, this variable does not factor into a school’s overall rating. The spirit underpinning its calculation and inclusion as a descriptive measure, however, was encouraging. Cowley (2001) hoped that,

“instead of using socio-economic status as an excuse for poor school performance, let’s identify where the kids have overcome disadvantages and succeed. Then let’s find out what these schools are doing right” (Canada NewsWire, 2000, p. 1).

39  The Parents’ average education (yrs.) did appear in the next iteration of the report card, and every report thereafter. This statistic is used to calculate the difference between the school’s actual overall rating of academic performance and the rating that one might expect when the parents’ level of education is taken into account. A larger positive difference would suggest that the school is effective in enabling its students to succeed regardless of their socio-economic background. The Parents’ average education (yrs.) can also be used to identify other schools whose students have similar socio-economic backgrounds.
A comparison of the results of these similar schools can identify those schools that are particularly effective in taking socio-economic conditions into account in their teaching and counselling practice (Cowley & Easton, 2001).

This perspective points to the possibility of redress embedded within a school ranking system that has not gained much traction in the press. Although individual schools have been held up by the Fraser Institute as being exemplar schools for making significant gains in their overall ranking, there is no common theme that can be cited as being the cause for a school’s improved ranking score—an observation that will be thoroughly addressed in Chapter 5. However, despite the inherent limitations and assumptions made by the Fraser Institute in extracting and using Ministry data to compile its first iteration of British Columbia’s secondary school ranking report card, it must be noted that the same five KPIs were uniformly applied to every co-educational, single-sex, public, private, and independent school included in its first three reports. The statistical leveling of schools in this way would be disrupted in the next—and every other—successive school ranking iteration. Table 5 shows how York House School was depicted in the Fraser Institute’s published school report card in 1999. Descriptive KPIs were integrated throughout the top of the table. In this (second) edition of the ranking York House School achieved an overall score of 8.6—a drop of 0.4 points when compared to the previous year.

Table 5: Second school ranking table published for YHS (1999)

York House School (YHS)
D1 = Parents’ Avg. Education: n/a
D2 = Grade 12 Enrolment: 43
D3 = Private

Year      1993    1994    1995    1996    1997    1998    S1
KPI 1     76.6    79.7    78.2    79.5    77.9    77.1    ↔
KPI 2      1.4     1.5     1.0     1.2     2.4     3.3    ↓
KPI 3      4.7     4.9     5.9     4.3     5.9     7.3    ↔
KPI 4    100.0   100.0   100.0    96.2   100.0   100.0    ↔
KPI 5      5.0     5.1     5.1     4.8     5.8     6.0    ↑
Overall    9.6     9.4     9.0     9.4     9.0     8.6    ↓

Source taken from (Cowley, et al., 1999, p. 43).

Legend
KPI 1 = Average provincial exam mark
KPI 2 = Percentage of provincial exams failed
KPI 3 = Difference between exam mark and school mark
KPI 4 = Graduation rate
KPI 5 = Provincial exams taken per student
D1 = Descriptive indicator: Parents’ average education in years
D2 = Descriptive indicator: Grade 12 enrolment
D3 = Descriptive indicator: Kind of school (public or private)
S1 = Semiotic progress trend indicators: improvement (↑); decline (↓); no change (↔)

Iteration #2 (2001-2002): Gender Matters

In May of 1999, Cowley and Easton co-authored a ‘Fraser Institute Occasional Paper’ entitled ‘Boys, Girls, and Grades: Academic Gender Balance in British Columbia’s Secondary Schools’. The authors suggested in their report that “it was boys who were getting short-changed” in British Columbia’s classrooms (Cowley & Easton, 1999, p. 3). Additionally, Cowley and Easton (1999) indicated in the Executive Summary portion of their paper that “no conclusive evidence could be found that boys and girls were destined to achieve at different levels in any aspect of the academic program” (Cowley & Easton, 1999, p. 3). Their analysis of the eight most popular provincially examinable courses taken by students in British Columbia revealed that “girls received higher grades on school-based assessments in all subjects regardless of their relative performance on the provincial examinations” (Cowley & Easton, 1999, p. 9).
The data marshaled by Cowley and Easton (1999) in their report was used to promote the idea that classroom teachers were treating boys and girls differently, and they pointed to the discrepancy between school-issued and provincial examination results as evidence of teacher bias in the classroom—girls were being favoured over boys in British Columbian classrooms. The data presented in the Fraser Institute’s gender study showed that girls consistently outperformed boys on the school-issued marks they received in all eight of the most popular provincially examinable subjects taken by students in British Columbia, but that boys outperformed girls on the provincial examination marks they received in five of eight subjects. The results prompted Cowley and Easton to pose the following question: “Are girls actually learning more or are school-based assessments systematically biased against boys?” (Cowley & Easton, 1999b, p. 3). Because the Fraser Institute deemed the gender gap issue “vitally important” (Cowley & Easton, 2000, p. 4), the authors included it as a descriptive measure for the first time in their 2000 report card, which meant that gender was not weighted in the ranking as a KPI. Table 6 depicts the relative percentage weights of the seven KPIs that were used by the Fraser Institute to rank secondary schools in its second report card iteration. It is followed by a description of the two additional gender gap KPIs that were included in the ranking for co-educational schools. It is important to note how the relative percentage weight assigned to KPI #3 was changed for co-educational schools for the first time. Note as well how two new KPIs (#6 and #7) were introduced to the sample of co-educational schools included in the Fraser Institute’s school report card. These indices were excluded for single-sex schools.
Table 6: Relative percentage weights of KPIs for iteration #2

Iteration #2 (2001 and 2002)

Key Performance Indicator (KPI)           Co-Educational Schools    Single-Sex Schools
1. Average Exam Mark                      20%                       20%
2. Percentage of Exams Failed             20%                       20%
3. School vs. Exam Mark Difference        10%                       20%
4. Exams Taken per Student                20%                       20%
5. Graduation Rate                        20%                       20%
6. English 12 Gender Gap                  5%                        n/a
7. Math 12 Gender Gap                     5%                        n/a
TOTAL                                     100%                      100%
8. Composite Dropout Indicator40          Descriptive               Descriptive
9. Kind of School (Public or Private)     Descriptive               Descriptive
10. Grade 12 Enrolment                    Descriptive               Descriptive
11. Semiotic Trend Progress Indicators    Descriptive               Descriptive
12. Parents’ Average Education            Descriptive               Descriptive
13. Actual vs. Predicted Rating           Descriptive               Descriptive

Table compiled from the following sources: (Cowley & Easton, 2001; Cowley & Easton, 2002)

What follows is a descriptive and critical analysis of each KPI used during this time.

KPI #6: English 12 Gender Gap

“This indicator measures the difference (in percentage points) between boys and girls in the extent to which their school marks in English 12 are different from their examination marks. The indicator reports which sex received the highest average school mark in English 12 as well as the actual difference in percentage points between the two results. It shows how effective the school has been in minimizing the differences in results between the sexes. Where the difference favours girls, the value is preceded by an F; where the difference favours boys, the value is preceded by an M. An E means there is no difference between the boys and girls on this measure” (Cowley & Easton, 2000, 2001; Cowley & Easton, 2002; Cowley & Easton, 2003, 2004b; Cowley & Easton, 2005; Cowley & Easton, 2006; Cowley & Easton, 2007).

40  This measure was first introduced in the 2002 report card edition. It served a descriptive purpose initially, but it was included by the Fraser Institute as a KPI in the iteration that was to follow in 2003.
KPI #7: Math 12 Gender Gap

“This indicator measures the difference (in percentage points) between boys and girls in the extent to which their school marks in Math 12 are different from their examination marks. The indicator reports which sex received the highest average school mark in Math 12 as well as the actual difference in percentage points between the two results. It shows how effective the school has been in minimizing the differences in results between the sexes. Where the difference favours girls, the value is preceded by an F; where the difference favours boys, the value is preceded by an M. An E means there is no difference between the boys and girls on this measure” (Cowley & Easton, 2000, 2001; Cowley & Easton, 2002; Cowley & Easton, 2003, 2004b; Cowley & Easton, 2005; Cowley & Easton, 2006; Cowley & Easton, 2007).

The Fraser Institute’s rationale for including gender gap KPIs is captured by the comment “that every student has special needs” (Cowley & Easton, 1999, p. 6). This statement was made in relation to the “concept of accounting for differences among students in the teaching process—teaching in context—[a concept] routinely touted as a critical component of the school system’s mission and as an achievable goal of effective teaching and counseling” (Cowley & Easton, 1999, p. 5). Cowley and Easton (1999) suggested in their gender report that the BCTF’s commitment to “establish strong guarantees that children with special needs have those needs met” (Cowley & Easton, 1999, p. 5) extend beyond students with “specific physical, mental, or social challenges” (Cowley & Easton, 1999, p. 6) to include boys. In the same report the authors noted that subject-specific gender data was collected by the Ministry that could be used by the Fraser Institute to take “into account one aspect of context: student gender” (Cowley & Easton, 1999, p. 6).
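The prose definitions of KPIs #6 and #7 describe a small computation. The sketch below is one plausible reading of that computation, not the Fraser Institute's published method (no formula is quoted in the report cards excerpted here): the function name, its arguments, and the decision to compare each sex's school-versus-exam mark difference are assumptions introduced for illustration, while the F/M/E prefix convention is taken from the definitions quoted above.

```python
def gender_gap_indicator(boys_school, boys_exam, girls_school, girls_exam):
    """One plausible reading of the English 12 / Math 12 gender gap KPI.

    Each sex's 'gap' is the amount by which its average school mark exceeds
    its average exam mark; the indicator is the difference (in percentage
    points) between the two gaps, prefixed F, M, or E per the report card.
    """
    boys_gap = boys_school - boys_exam      # e.g. 70 - 68 = 2 points
    girls_gap = girls_school - girls_exam   # e.g. 75 - 70 = 5 points
    diff = girls_gap - boys_gap
    if diff > 0:
        return f"F{diff:.1f}"   # difference favours girls
    if diff < 0:
        return f"M{-diff:.1f}"  # difference favours boys
    return "E"                  # no difference between the sexes

print(gender_gap_indicator(70.0, 68.0, 75.0, 70.0))  # F3.0
```

On this reading, a co-educational school is scored not on how well either sex performs, but on how closely the two sexes' mark inflation tracks each other, which is precisely the quantity a single-sex school cannot have.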
To be meaningful, this statement needs to be examined in relation to other contextual data the Fraser Institute had access to—and used—for descriptive purposes when it published its first iteration of school rankings from 1998-2000. Establishing a KPI in 2001 that accounts for gender differences in mathematics 12 and English 12 while—at the same time—choosing not to establish a KPI that statistically accounts for measured socioeconomic disparities is problematic. This is especially true given the Fraser Institute’s demonstrated ability to quantitatively account for the impact parents’ average educational experience has on student achievement in all schools (Cowley & Easton, 2001). When new ‘contextual’ KPIs that account for gender-related differences are included in the ranking rubric in this way, the Fraser Institute is deploying a modern technique for observing its subjects. Foucault (1977) describes how disciplinary power is exercised through a normalization process that is not only anchored in judgment, but that compares individual actions to a whole. In this case, one ‘individual’ may be considered the entire population of boys while the other ‘individual’ may be considered the entire population of girls. When the examination results of each ‘individual’ can be introduced to the broader field of visibility for comparison, the “constraint of conformity” has been achieved (Foucault, 1977, p. 183). The Fraser Institute’s expectation is that boys and girls should achieve similar school-issued grades in provincially examinable subjects. Although the gender gap indicator would not factor into a school’s overall ranking in a material way until 2001, the gender report authored by Cowley and Easton (1999) demonstrates a strategic and focused attempt by the Fraser Institute to disrupt the public’s confidence in the status quo secondary school system because all the single-sex schools in British Columbia were de facto private and independent schools.
As such, it was impossible for the public to judge the educational experience of students attending single-sex schools in the same way because there was no basis for the comparison to be made. The implication of assigning subject-specific gender gap indicators in “the two most popular provincially examinable courses—Mathematics 12 and English 12” (Cowley & Easton, 2000, p. 13) proved to be statistically consequential for both coeducational (Co-Ed) and single-sex schools (SSS) because their inclusion changed the relative percentage weightings of the KPIs used to rank single-sex and coeducational schools. Table 7 depicts the changes in the KPIs from Iteration #1 (1998-2000) to Iteration #2 (2001-2002) for coeducational and single-sex schools in British Columbia. Note how the subject-specific gender gap indicators in English 12 and mathematics 12 do not apply to single-sex schools, which consequently resulted in corresponding shifts in the relative weightings of KPIs between co-educational and single-sex schools. Whereas single-sex schools were subject to five KPIs, co-educational schools were now subject to seven. The implication was statistically consequential inasmuch as subject-specific gender gap indicators accounted for 10% of the variation between public and private schools. As well, the Fraser Institute could no longer say about its school-ranking rubric that it was uniformly applied to all public, private, and independent schools in British Columbia. Iteration #2, therefore, marks the first time the Fraser Institute’s school report card begins to exert discretionary disciplinary power on the field of accountability because we see in the ranking rubric statistical differences between how public and private/independent schools are treated. This is important to note at this juncture because it illustrates how the Fraser Institute leverages its objective ranking rubric matrix to emphasize differences between public and private school systems.
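The arithmetic of the bifurcated rubric can be made concrete with a minimal sketch. It assumes each KPI has already been standardized to a 0-10 subscore (the Fraser Institute's actual standardization procedure is not reproduced here); the weights are those reported for iteration #2. It shows how a school with identical subscores on the five shared academic KPIs receives a lower overall rating under the co-educational weights whenever its gender-gap subscores lag.

```python
# Iteration #2 KPI weights (co-educational vs. single-sex schools).
# Assumption for illustration: every KPI has been standardized to a 0-10
# subscore before weighting; the standardization itself is not shown.
COED_WEIGHTS = {
    "avg_exam_mark": 0.20,
    "pct_exams_failed": 0.20,
    "school_vs_exam_diff": 0.10,   # reduced from 20% for co-ed schools
    "exams_per_student": 0.20,
    "graduation_rate": 0.20,
    "english12_gender_gap": 0.05,
    "math12_gender_gap": 0.05,
}
SINGLE_SEX_WEIGHTS = {
    "avg_exam_mark": 0.20,
    "pct_exams_failed": 0.20,
    "school_vs_exam_diff": 0.20,
    "exams_per_student": 0.20,
    "graduation_rate": 0.20,
}

def overall_rating(subscores, weights):
    """Weighted mean of 0-10 KPI subscores; the weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[kpi] * subscores[kpi] for kpi in weights)

# Identical academic subscores (8.0) but weak gender-gap subscores (4.0):
subscores = {kpi: 8.0 for kpi in SINGLE_SEX_WEIGHTS}
subscores.update({"english12_gender_gap": 4.0, "math12_gender_gap": 4.0})

print(round(overall_rating(subscores, SINGLE_SEX_WEIGHTS), 2))  # 8.0
print(round(overall_rating(subscores, COED_WEIGHTS), 2))        # 7.6
```

The point is not the particular numbers but the structural asymmetry: the single-sex rubric simply has no term through which a gender-gap subscore can raise or lower the overall rating.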
Table 7: Relative percentage weights of KPIs from iteration #1 to iteration #2

                                          Iteration #1 (1998-2000)       Iteration #2 (2001-2002)
Key Performance Indicator (KPI)           Co-Ed and Single-Sex Schools   Co-Ed         SSS
1. Average Exam Mark                      20%                            20%           20%
2. Percentage of Exams Failed             20%                            20%           20%
3. School vs. Exam Mark Difference        20%                            10%           20%
4. Exams Taken per Student                20%                            20%           20%
5. Graduation Rate                        20%                            20%           20%
6. English 12 Gender Gap                  -                              5%            n/a
7. Math 12 Gender Gap                     -                              5%            n/a
TOTAL                                     100%                           100%          100%
8. Composite Dropout Indicator            -                              Descriptive   Descriptive
9. Parents’ Average Education in Years    Descriptive                    Descriptive   Descriptive
10. Actual vs. Predicted Rating           Descriptive                    Descriptive   Descriptive
11. Kind of School (Public or Private)    Descriptive                    Descriptive   Descriptive
12. Grade 12 Enrolment                    Descriptive                    Descriptive   Descriptive
13. Semiotic Trend Progress Indicators    Descriptive                    Descriptive   Descriptive
14. Subject-specific exam averages        Descriptive                    -             -

Table compiled from the following sources: (Cowley & Easton, 2001; Cowley & Easton, 2002; Cowley, et al., 1998, 1999; Cowley & Easton, 2000)

The authors of the report card believed the revised rubric (as it was reflected in iteration #2) had been improved from its previous iteration—“For the first time, each school’s overall rating will be affected by the extent to which the school ensures that both boys and girls are able to succeed” (Cowley & Easton, 2001, p. 4). As importantly,

“the recalculation of all previous overall ratings allowed us to reflect the Gender gap in the historical results. The introduction of this new indicator will change some schools’ past overall ratings” (Cowley & Easton, 2001, p. 4).

This statement is relevant because it illustrates how the Fraser Institute exerts discretionary disciplinary power on the school-wide accountability field. Here, we have an example of the Fraser Institute changing the ranking rubric in ways that make sense to the Fraser Institute.
Gender-related issues were not part of the broader school ranking debate before 2001, but the Fraser Institute leveraged Ministry data to show that boys and girls attending co-educational schools were not performing equally on school-based and exam-based assessments. They did not show that the same statistical trend was true (or false) for students attending single-sex schools. Such discrepant statistical approaches to how co-educational (public) schools were treated in comparison to their (private/independent) school counterparts illustrate how the Fraser Institute exercised discretionary disciplinary power on the field of visibility. This bifurcation in the ranking rubric is relevant to consider because it shows how the Fraser Institute imports and expands a discourse of difference between schools and school systems. Cowley and Easton’s statement is important for a second reason. The expressed logic of including gender-gap KPIs to ensure that boys and girls are able to achieve equally in the classroom once again speaks to the emancipatory potential of redress embedded in the ranking rubric. In highlighting the achievement variation between boys and girls, the possibility exists for that variation to be addressed by teachers within the classroom setting—if it can be addressed at all. However, the Fraser Institute is selectively discerning about what data-driven differences it highlights. For example, although it can statistically measure a socioeconomic index that ranking critics say accounts for significant between-school variation, the Fraser Institute chooses not to include the index in its ranking in a material way. The combined effect of adding the gender gap indicator to the second iteration of the Fraser Institute school report card and recalculating previously published school rankings changed the distribution of ‘top’ ranked schools in British Columbia appreciably.
However, the resulting discourse appearing in newspapers had nothing to do with the Fraser Institute implementing a different, “well accepted statistical method…to make differing data sets more comparable” (Cowley & Easton, 2001, p. 4). Furthermore, and perhaps most importantly, the resulting discourse in newspapers did not highlight an essential change to the Fraser Institute’s school-ranking rubric, which no longer reduced schools to identical—common—performance indicators for the basis of comparison. Whereas the previous iteration made possible the same kinds of statistical assumptions for all public and private schools throughout the province, the inclusion of gender gap differences as a performance indicator resulted in single-sex schools being treated differently than co-educational ones—at least statistically. It was impossible, therefore, for single-sex schools (all of which were independent and private schools) to gain or lose points in the gender gap category because that performance indicator measured how closely boys’ and girls’ results aligned on provincial exams and school-issued marks in English 12 and mathematics 12 respectively. Given that all-girl schools didn’t have any boys enrolled in their sample populations, and given that all-boy schools didn’t have any girls enrolled in their sample populations, it was impossible for the Fraser Institute to include the gender gap measure in the same way it was able to for co-ed public (and co-ed private) schools. Gender mattered, therefore, because there were no gender gap differences to measure in single-sex schools, which were all private and independent schools. That important demographic gender disparity resulted in a redistribution of top ranked schools in the province such that perfect-scoring (10/10) schools were all single-sex, private schools in 2001 (Cowley & Easton, 2001). Table 8 depicts five years of school ranking data as published by the Fraser Institute during its first two iterations.
It shows how public and private41 schools were distributed, and re-distributed, across decile ranges for Iterations #1 and #2 respectively. What is relevant to note here is the percentage of public (PU) schools that occupied the ‘top’42 decile range in the Fraser Institute’s ranking during the first iteration (1998-2000) compared to the percentage of public schools that occupied the same ‘top’ position during the second iteration. Before gender gap indices were included in the ranking rubric, approximately 5% of all the public schools then ranked by the Fraser Institute achieved ‘top’ scores. After gender gap indices were introduced by the Fraser Institute, the percentage of ‘top’ ranked public schools occupying the same decile range dropped to 0.4%. This represents a 92% decline in the proportion of public schools that achieved scores within the 9.0-10.0 range. By way of comparison, before gender gap indices were included in the ranking rubric, approximately 31% of all private/independent (PV) schools then ranked by the Fraser Institute achieved ‘top’ scores. After gender gap indices were introduced, the percentage of ‘top’ ranked private/independent schools occupying the same ‘top’ decile range dropped to approximately 21%. This represents a 34% decline in the number of ‘top’ ranked private/independent schools. So while public and private school systems were both adversely affected by the introduction of a new ranking rubric that included gender gap indices during iteration #2, public schools fared significantly worse as a result.

41  In this case ‘Private’ (PV) schools represent all non-public schools (Independent and Private) as those terms have been defined previously.

42  In this analysis ‘top’ ranked schools occupy the highest decile range possible as determined by the Fraser Institute’s ranking rubric (i.e., 9.0-10.0).
Table 8: Percentage distribution of public (PU) and private (PV) schools for iterations #1 and #2

          Iteration #143                               Iteration #244
          1998           1999           2000           2001           2002
Rank      PU     PV      PU     PV      PU     PV      PU     PV      PU     PV
9-10      6.3    34.0    4.4    32.4    4.3    27.5    0.4    20.9    0.4    20.0
8-8.9     16.6   12.5    15.0   29.4    14.3   32.5    2.5    16.3    1.7    17.5
7-7.9     16.6   37.5    19.8   11.8    21.7   12.5    15.1   16.3    13.4   30.0
6-6.9     20.6   6.3     19.8   14.7    21.2   17.5    34.9   23.3    37.4   20.0
5-5.9     17.9   6.3     18.1   5.9     13.4   2.5     27.3   11.6    27.3   5.0
4-4.9     9.4    3.1     9.7    8.8     11.7   7.5     12.6   9.3     10.1   2.3
3-3.9     8.1    0       9.7    0       9.1    0       2.1    0       4.6    5.0
2-2.9     2.7    0       2.2    0       3.0    0       1.7    2.3     2.5    0
1-1.9     1.4    0       1.3    0       1.3    0       1.2    0       0.01   0
0-0.9     0.4    0       0      0       0      0       2.1    0       0.01   0
N=        223    32      227    34      231    40      238    43      238    40

Table compiled from the following sources: (Cowley & Easton, 2000; Cowley & Easton, 2002; Cowley, et al., 1998, 1999; Cowley & Easton, 2001)

Figure 5 shows the number of public (PU) and private45 (PV) schools that achieved an overall school rating between 9.0 and 10.0 for iteration #1 and iteration #2. It shows that before the Fraser Institute introduced its revised method of calculating a school’s overall ranking, forty-six percent (46%) of British Columbia’s ‘best’ schools were identified as being public schools. After the Fraser Institute revised its method of calculating a school’s overall ranking, the percentage of ‘top’ ranked public schools dropped to ten percent (10%).

43  The ranking rubric was uniformly applied to all public and independent/private schools. Each of the five KPIs is weighted at 20%.

44  Gender gap indicators introduced for English 12 and Math 12 respectively. KPI weightings shift proportionately to reflect the change. (This KPI does not apply to single-sex schools). The Fraser Institute recalculates all previous school rankings published in British Columbia from 1998-2000.

45  Private (PV) in this table conflates independent (IN) and private (faith-based) schools.
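The two decline figures discussed above can be recomputed directly from the 9-10 row of Table 8. The sketch below is an illustrative check, not the Fraser Institute's own calculation: it averages each system's 'top' decile share across the years of each iteration and reports the relative drop (the public figure comes out at 92%, the private figure at roughly 34.7%, which the text rounds to 34%).

```python
# Share (%) of each school type landing in the 9.0-10.0 decile range,
# per ranking year, transcribed from the 9-10 row of Table 8.
top_decile_share = {
    "public":  {"iter1": [6.3, 4.4, 4.3],     # 1998, 1999, 2000
                "iter2": [0.4, 0.4]},         # 2001, 2002
    "private": {"iter1": [34.0, 32.4, 27.5],
                "iter2": [20.9, 20.0]},
}

def mean(values):
    return sum(values) / len(values)

def relative_decline(before, after):
    """Percentage drop from the iteration #1 mean share to the iteration #2 mean."""
    b, a = mean(before), mean(after)
    return 100.0 * (b - a) / b

for system, shares in top_decile_share.items():
    drop = relative_decline(shares["iter1"], shares["iter2"])
    print(f"{system}: {drop:.1f}% decline in 'top' decile share")
```

Note that these are declines in each system's *share* of its own population reaching the top decile, not declines in absolute counts of schools.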
Figure 5: Number of ‘top’ ranked public and private schools for iterations #1 and #2

Compiled from data provided in the following sources: (Cowley & Easton, 2001; Cowley & Easton, 2002; Cowley, et al., 1998, 1999; Cowley & Easton, 2000)

Not only were entire categories of schools (public and private) affected by the revised ranking during the second iteration (2001-2002), but so too were individual schools, in ways that seemed to reward and punish them. Take, for example, the case of Kitsilano Secondary (a public, co-educational, grade 8-12) school, and York House (an independent, single-sex, k-12) school. Figure 6 illustrates how the introduction of gender gap indicators in the report card’s second iteration resulted in an overall (historical) reduction of Kitsilano Secondary’s school rating.

Figure 6: Kitsilano Secondary’s overall school ranking for iterations #1 and #2

Compiled from data obtained in the following sources: (Cowley & Easton, 2001; Cowley & Easton, 2002; Cowley, et al., 1998, 1999; Cowley & Easton, 2000)

Before the revision Kitsilano Secondary achieved overall higher ranking scores (Iteration 1, which is identified in the upper graph in red). After the revision the same school’s results were adjusted, which resulted in consistently lower scores (Iteration 2, which is identified in the lower graph in blue). By comparison, Figure 7 shows how York House School—a school exempt from the imposition of gender gap indicators in the report card’s second iteration—improved its overall (historical) school ranking from iteration #1 (which is identified in the lower graph in red) to iteration #2 (which is identified in the upper graph in blue).
Figure 7: York House School's overall school ranking for iterations #1 and #2

Compiled from data obtained from the following sources: (Cowley & Easton, 2001; Cowley & Easton, 2002; Cowley, et al., 1998, 1999; Cowley & Easton, 2000)

Here, we have an example of how one kind of single-sex, independent, k-12 school was rewarded by the statistical revision imposed by the Fraser Institute in its second iteration (represented by an overall shift upwards in the ranking graph from iteration #1 to 2) and how a different kind of co-ed, public, 8-12 school was punished by the same statistical iteration (represented by an overall shift downwards in the ranking graph from iteration #1 to 2)—if reward and punishment are understood as correlates of corresponding increases and decreases in a school’s overall rating out of ten. While these examples illustrate how the Fraser Institute’s second ranking iteration impacted two specific schools, they do not say anything meaningful about how the greater population of British Columbia’s schools was impacted overall. Appendix F, however, depicts the ‘top’ ranked (9.0-10.0) secondary schools in British Columbia between 1998 and 2010. It shows that single-sex schools would continue to achieve disproportionately ‘perfect’ overall ratings of 10/10 after the Fraser Institute introduced the gender gap indicator in 2001. That the percentage of public (and by implication co-educational) schools achieving school rankings between 9.0 and 10.0 had significantly decreased since the gender gap KPIs were first introduced points to an important relational trend that cannot be ignored—that is, there exists a statistical bias embedded in the KPIs used to rank schools because single-sex schools cannot be penalized for gender-related discrepancies between school-issued and examination results in the same way co-educational schools can.
This bias is evident in the kinds of schools that have achieved perfect scores on the Fraser Institute’s ranking since 1998. Table 9 shows that schools achieving a ‘perfect’ 10/10 score share similar school profile characteristics: they are mostly k-12 schools; they are mostly independent/private; they are mostly day schools; all of the independent and private schools are Group 2 funded; and they all prepare students to pursue highly competitive degree-granting programs at universities and colleges throughout North America. By comparison, of the fourteen schools identified by the Fraser Institute as achieving a perfect score on its ranking over a thirteen-year span, only two public schools are noted—University Hill Secondary and Prince of Wales Secondary.

Table 9: Schools attaining a score of ten on the Fraser Institute ranking (1998-2010)

School Name        PU IN PV  CE SS  K-12 8-12  D B D/B  Year(s)            @
St. George’s          X         X   X          X        2000-2009          10
Little Flower            X      X        X     X        2000-2008          9
York House            X         X   X          X        2001, 2003-2010    9
Crofton House         X         X   X          X        2001-2007          7
Southridge            X      X      X          X        2002, 2005-2009    6
WPGA                  X      X      X          X        2005-2007, 2009    4
St. Margaret’s        X         X   X               X   1998, 2003         2
University Hill    X         X           X     X        2003, 2004         2
Van. College             X      X        X     X        2005, 2008         2
Prince of Wales    X         X           X     X        1999               1
St. Thomas Aq.           X   X           X     X        1999               1
Brentwood             X      X           X       X      2003               1
Saint Michael’s       X      X      X               X   2003               1
GLN                   X      X      X          X        2003               1
Total              2  9  3   8  6   6    6     11 1 2

Table compiled from the following sources: (Cowley, 2005b; Cowley & Easton, 2000, 2001; Cowley & Easton, 2002; Cowley & Easton, 2003, 2004b; Cowley & Easton, 2007, 2008; Cowley & Easton, 2009; Cowley, et al., 2010; Cowley, et al., 1998, 1999; Cowley & Easton, 2006)

LEGEND
Year(s) = documents the calendar year(s) in which a school achieved a ‘perfect’ score of ten
@ = the total number of times that a school achieved a ‘perfect’ score of ten
PU = Public School; IN = Independent School; PV = Private (faith-based) School; CE = Co-Educational; SS = Single Sex; (k-12) or (8-12) = Grades offered at School; D = Day School; B = Boarding School; D/B = Day and Boarding School

A question that begs to be asked at this juncture is: Why did the Fraser Institute redefine its school-ranking rubric to capture gender-related data provided by the Ministry of Education? If nothing else, the introduction of gender-related data by the Fraser Institute alluding to gender-biased teaching in secondary schools effectively expanded the field of visibility on which the school wide accountability game was played. Henceforward, boys and girls could be seen as separate populations where they were otherwise blended together as a single student population in the first iteration of the report card. This was strategically important for the Fraser Institute because in pointing to discrepant educational experiences boys and girls seemed to be having in British Columbia high schools, the Fraser Institute introduced a new visual asymmetry to the greater field of school wide accountability. It is essential this be problematized at this juncture given that fields are—by definition—socially constructed areas of activity where struggles take place between agents in a supply and demand market.
Brighenti (2007) reminds us of this point:

“[W]hen something becomes more visible or less visible than before, we should ask ourselves who is acting on and reacting to the properties of the field, and which specific relationships are being shaped. Shaping and managing visibility is huge work that human beings do tirelessly. As communication technologies enlarge the field of the socially visible, visibility becomes a supply and demand market. At any enlargement of the field, the question arises of what is worth being seen at which price—along with the normative question of what should and what should not be seen. These questions are never simply a technical matter: they are inherently practical and political” (Brighenti, 2007, p. 327).

Whereas the previous (first) iteration of the Fraser Institute ranking reflected and highlighted what critics noted were class-based distinctions that existed between schools (Proctor, 1998a; Steffenhagen, 2002b), the introduction of gender-biased data into the school wide accountability issue reflected and highlighted gender-based distinctions the Fraser Institute wanted the general public to see were operating in secondary schools (Cowley & Easton, 2003; Cowley & Easton, 1999c; Ferry, 2000). Expanding the field of visibility to include gender-related data in this way effectively marked what was previously an unmarked social category. This was an important strategy because as Brighenti (2007) notes in his paper on visibility, “[o]nce a way of marking and dividing people is set up…the resulting classification is a tool that can be applied to every case” (Brighenti, 2007, p. 334). The effect, therefore, of the Fraser Institute reconfiguring whole school populations into gender-constructed sub-populations was to cast a wider statistical net that captured public-private school distinctions, which otherwise remained hidden.
In this way, the Fraser Institute effectively amplified its power of surveillance on the field of visibility by widening its scope of vision. Whereas the previous iteration of the ranking pitted school against school, the second iteration pitted boys against girls—and by implication, public schools against private schools, because all of the single sex schools ranked by the Fraser Institute were de facto independent and private schools. Including gender-related KPIs in the ranking rubric also made it possible for a greater population of parents to become interested in the ranking where they might not have been interested before because—for the first time in the history of the ranking—‘top’ ranked schools could be exposed for not meeting the subject-specific educational needs of boys and girls equally—something every informed (good) parent, teacher, and administrator could know about. In this way, generic school rankings that treated all schools the same became more discerning in nature because the broader population of students under investigation was further delineated along gender lines in the serialization of Ministry-collected data. Expanding the field of visibility by creating two new categories of students (boys and girls), therefore, effectively widened the report card’s sphere of influence in British Columbia. For the first time, parents sending their children to ‘top’ ranked public and co-educational independent/private schools could know if their sons’ and daughters’ academic potential was being met equally in the eight subject areas held up for public scrutiny by the Fraser Institute. As such, more parents were called to action in ways that only a published school ranking could muster because more parents were called to care in ways they had not been in previous ranking iterations.
The gender debate also served to deflect and redirect some of the criticisms levied by school ranking opponents as they pertained to discrepant socioeconomic indicators that, like poverty, the ranking didn’t consider because socioeconomic indicators were contestable. Gender was not.

Descriptive Indicator: Dropout Rate

In the ‘Report Card on British Columbia’s Secondary Schools: 2002 Edition’, Cowley & Easton (2002) described their newest contextual measure of teaching and counseling effectiveness. It was labeled the ‘Dropout Rate’ indicator and it measured “the extent to which schools keeps their students in school and on task” (Cowley & Easton, 2002, p. 4). Interestingly, the Fraser Institute adopted “a technique first used by France’s national ministry of education to calculate the likelihood that a student will graduate from a given school in the normal time” (Cowley & Easton, 2002, p. 4). This point illustrates how the Fraser Institute imported aspects of other school ranking systems that were developed internationally. This is problematic because it implies that British Columbia’s secondary school system is like France’s secondary school system. But schools operate contextually—within cultural, financial, and political boundaries that are as unique to Canadian provinces as they are to France, Germany, Iceland, and Spain. Notwithstanding the contextual differences that quite naturally exist between British Columbia and France, the Fraser Institute determined for itself that the “normal” composite dropout rate for students attending British Columbia’s high schools was 13% [46] based on the authors’ analysis of elementary school data “where students are unlikely to leave the school system for reasons other than emigration or death” (Cowley & Easton, 2002, p. 5).
The Fraser Institute felt it necessary to include the new contextual measure because, as the authors noted in their 2002 school report card,

“it appears that the existing Graduation rate indicator will soon be little use in differentiating among schools. The average value for all schools on this indicator has risen steadily from 82.5% in the 1992/93 school year to nearly 94% in 2000/2001. As a matter of simple mechanics, an indicator that is unvarying is not a useful one in determining differences in effectiveness among schools” (Cowley & Easton, 2002, pp. 5-6).

This statement underscores the prevailing ideological formation at play in the Fraser Institute’s manufacture of a ranking rubric designed to reward and punish schools. It made no sense to the Fraser Institute to include an important contextual measure on which most schools in the province had improved—despite its claim that measuring schools “will determine whether our schools are doing their jobs satisfactorily” (Cowley et al., 1999, p. 4). Here is an example of secondary schools making a positive difference in the lives of students, because we see in the ‘Graduation Rate KPI’ quantitative evidence of more students completing the high school graduation program. By replacing the ‘Graduation Rate’ indicator (an index on which public schools were noted to have improved their standing) with the ‘Dropout’ indicator (an index on which public schools could be statistically penalized), the Fraser Institute changes what it wants the public to see on the field of visibility.

46  Cowley & Easton (2002) determined the average annual rate of disappearance from the system to be roughly 2.75%. Applying that level of disappearance as a benchmark for the five years of secondary school (Grades 8-12), the authors concluded that normative disappearance rates by students in the secondary school system approximated 13%.
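The 13% benchmark in note 46 is easy to verify: compounding an annual disappearance rate of roughly 2.75% over the five secondary grades yields approximately 13%. The sketch below is my own back-of-envelope reconstruction of that arithmetic, not code from the report card.

```python
# A back-of-envelope check (my own reconstruction, not the Fraser Institute's
# published calculation) of the ~13% "normal" dropout benchmark: an annual
# disappearance rate of roughly 2.75% compounded over the five secondary grades.

annual_disappearance = 0.0275  # figure cited by Cowley & Easton (2002)
years = 5                      # Grades 8 through 12

# Probability a student "disappears" at some point over the five years,
# assuming the same independent annual rate each year.
five_year_rate = 1 - (1 - annual_disappearance) ** years
print(f"{five_year_rate:.1%}")  # prints 13.0%, matching the 13% benchmark
```

Note that simple addition (2.75% × 5 = 13.75%) overshoots slightly; the compounded figure lands almost exactly on the 13% the authors report.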
Where there was little variance in the ‘Graduation Rate’ KPI between public and private schools, the Fraser Institute simply eliminated the index from its rubric and replaced it with a ‘Dropout’ index on which there was greater variation between public and private schools. Metaphorically speaking, when public schools could clear the same hurdle that private and independent schools could, the Fraser Institute simply replaced it with a new hurdle that many public schools found difficult to clear. When the Fraser Institute selectively uses KPIs to hide and amplify differences between public and private schools, disciplinary power is being exercised.

Iteration #3 (2003-2006): Refining the Student Cohort

In part, the expansion of descriptive statistical measures introduced by the Fraser Institute to its school report card paralleled an expansion of British Columbia’s graduation program internationally. In an article published in the B.C. Teachers’ Federation (BCTF) magazine, Larry Kuehn (2002) opposed then Premier Gordon Campbell’s political agenda when he said, “[t]he B.C. Liberal government is reshaping public education through privatization and a market approach to education” (Kuehn, 2002, p. 1). The article cited specific policies that supported Kuehn’s position, but one government policy in particular changed the way data was manipulated by the Fraser Institute in compiling its secondary school ranking. That policy was directed at recruiting international students to British Columbia’s secondary school system.

“Many districts have moved quickly to bring in international students, who pay high tuition and top up the district budget. In 2000-01, districts charged an average tuition of $10,000. On average, they spent $5,000 per student, leaving an average profit of $5,000. Lots of businesses would like to work on such a margin. Between 2000-01 and 2001-02, the number of international students jumped from 2,947 to 4,035.
The revenue from international student tuitions totaled $40 million in 2001-02” (Kuehn, 2002, p. 1).

The BCTF, therefore, saw the international expansion of British Columbia’s graduation program into Pacific-Rim countries as a lucrative business venture that would subsidize the high cost of public education, and it objected to the privatization of public education. The (public) school model being sold abroad was not unlike the (private) school model being sold within British Columbia on two fronts: (1) prospective students applied to attend public schools in the same way prospective students applied to private schools, and (2) parents of foreign students accepted into British Columbia’s public schools paid annual tuition in the same way parents of Canadian and landed-immigrant-status students paid annual tuition fees to private schools. And while schools and school districts may have benefited from the added revenue that foreign ESL students brought into the public school system, their resulting public school rankings did not. This problematic situation was resolved in 2003 when the Fraser Institute established a third iteration of its school report card—one that would statistically negate the impact foreign ESL students had on a school’s English 12 examination results. All that was required of the Fraser Institute was to recast its statistical net by “refining the student cohort” on which school rankings would be based (Cowley & Easton, 2003, p. 4). The rationale for incorporating this statistical refinement into the ranking rubric was explained in the introduction to the 2003 report card. The authors explained that,

“Administrators were also concerned that while they were being encouraged by the ministry to recruit international students as a means by which to earn revenue for the operation of their schools, these transient students’ academic results were not necessarily reflective of the quality of teaching at the school.
Administrators encouraged us to explore ways to rate the schools only on the basis of students normally resident in British Columbia. We believe that this is a reasonable refinement of our approach and, using revised data provided by the ministry, have excluded these students’ results from the calculation. The revised data were used to calculate the indicator and rating values for the school years 1997/98 through 2001/2003 that appear in this edition” (Cowley & Easton, 2003, p. 4).

Here, we have evidence of how the Fraser Institute continues to exert discretionary power on the accountability field by rendering invisible an entire population of ‘transient’ students that serve an economic purpose. The attraction of foreign ESL students to public schools brings additional revenue streams to a public educational system that the Fraser Institute critiques. Embedded within a model for schooling that seeks to increase revenue streams in this way is an alignment of public policy initiatives with the Fraser Institute’s mission of privatizing public education through choice-based reforms. The off-shore interest of foreign students choosing British Columbian schools can be seen through a business lens as a lucrative niche market to be developed by the government. However, an unintended consequence of attracting the same population of foreign ESL students to British Columbian secondary schools is that their collective school-wide presence adversely affects a school’s overall ranking. The Fraser Institute effectively managed the situation by removing the statistical impact foreign students had on a school’s ranking position. Furthermore, the Fraser Institute leveraged the groundswell of support from school administrators who called on the Fraser Institute to address this issue because they felt their overall school ranking scores were being unfairly compromised by the presence of high populations of ESL students.
Here, we have an example of how the Fraser Institute co-opts school administrators into accepting its manufactured régime of performativity, because school administrators accept the presence of the Fraser Institute’s ranking rubric as a permanent fixture in shaping a school’s operational habitus. When individuals, schools, and school districts are co-opted into performing a particular way for the sake of being publicly rewarded with higher school ranking scores, Ball (2003) suggests that a new policy technology has been deployed. In this case, public school principals are rewarded for recruiting foreign ESL students to their institutions because their annual school budgets increase in proportion to the number of foreign students they attract, but their schools are not penalized on the ranking as a result. This outcome can be viewed as a win-win for both school administrators and the Fraser Institute.

Table 10 depicts the redistribution of public and private schools identified by the Fraser Institute in its 2003 report card (Report Card #6, Iteration #3). What is noteworthy is the increase in the percentage of public schools identified by the Fraser Institute that occupied the top-two decile ranges [47] from 2002 (iteration #2) to 2003 (iteration #3). The table shows that when English 12 examination results from foreign ESL students were included in the 2002 ranking rubric, 0.52% of all public schools included by the Fraser Institute in its annual report achieved an overall school rating between 8.0 and 10.0. After the Fraser Institute ‘refined’ the cohort by excluding English 12 examination results achieved by ‘transient’ students from its ranking calculations, the percentage of public schools included by the Fraser Institute in its annual report occupying the top-two decile ranges increased to 5.42%. This reflects a ten-fold increase in the percentage of public schools occupying the top-two decile scores.
Excluding the ESL examination data from independent/private schools occupying the same top-two decile ranges in iteration #3 did not affect their overall distribution in the same marked way. Approximately 37.5% of all private/independent schools ranked by the Fraser Institute achieved scores in the top-two decile ranges when ESL students were included in the ranking, as compared to 43.8% when the same population of students was statistically removed. These discrepant shifts suggest that a greater number of public schools in British Columbia serve a population of students whose first language is not English.

47  The top-two decile ranges are defined by schools achieving overall ratings between 8.0 and 10.0.

Table 10: Percentage of schools ranked in the top two decile ranges for iterations #2 and #3

         Iteration #2 (2002)      Iteration #3 (2003)
Rank     Public      Private      Public      Private
9-10     0.4         20.0         0.42        31.6
8-8.9    0.12        17.5         5.0         13.2
7-7.9    13.5        30.0         20.0        31.6
6-6.9    37.4        20.0         32.0        7.9
5-5.9    27.3        5.0          26.3        10.5
4-4.9    10.1        2.5          10.4        0
3-3.9    4.6         5.0          2.1         2.6
2-2.9    2.5         0            1.67        0
1-1.9    0.84        0            0.42        0
0-0.9    1.7         0            2.08        0
Total %  100         100          100         100
N        238         40           240         38

Table compiled from the following sources: (Cowley & Easton, 2002; Cowley & Easton, 2003)

While promoting the graduation program abroad was seen by the BCTF as being an integral part of Premier Campbell’s strategic plan to increase government coffers, ranking opponents began characterizing the Fraser Institute’s school ranking report card as “predictable” (Steffenhagen, 2002b, p. B3). They argued that independent and public schools that catered to privileged families and were located in wealthy neighbourhoods consistently (and predictably) ranked high, while schools in disadvantaged areas (predictably) fell to the bottom. By the spring of 2003, however, the Fraser Institute’s secondary school-ranking rubric had undergone its third iteration.
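The top-two-decile percentages discussed above follow directly from Table 10. The short sketch below re-derives them; the distribution values are copied from the table’s public-school columns, while the helper function itself is only an illustration.

```python
# A small sketch reproducing the top-two-decile percentages discussed above,
# using the public-school columns of Table 10. The distribution values come
# from the table; the function is my own illustration, not the report card's code.

def top_two_decile_share(distribution):
    """Sum the percentage shares in the 9-10 and 8-8.9 rating bands."""
    return round(distribution["9-10"] + distribution["8-8.9"], 2)

public_2002 = {"9-10": 0.4, "8-8.9": 0.12}   # iteration #2, ESL results included
public_2003 = {"9-10": 0.42, "8-8.9": 5.0}   # iteration #3, ESL results excluded

print(top_two_decile_share(public_2002))  # prints 0.52
print(top_two_decile_share(public_2003))  # prints 5.42, roughly a ten-fold increase
```

The same sum over the private-school columns gives 37.5 for 2002, matching the figure cited in the text; the 2003 private-school figure cited in the text (43.8%) can be checked against the table the same way.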
The original five key performance indicators had grown to eight with the inclusion of the composite dropout indicator in the 2003 report card.

KPI #8: Composite Dropout Indicator [48]

“This indicator measures the extent to which schools keep their students in school and progressing in a timely manner toward completion of their diploma program” (Cowley & Easton, 2003, p. 8).

48  Where no Composite dropout rate could be calculated, the Graduation rate was weighted at 20%.

What is relevant to note about this KPI is how the authors of the report card cite its inclusion as being sensitive to the concerns expressed by report card critics. The authors explained their rationale for including the eighth KPI in the ‘Report Card on British Columbia’s Secondary Schools: 2003 Edition’.

“Many administrators felt that because the Report Card was based almost entirely on events and results that occurred in grade 12, no weight was given to the efforts made by schools to ensure their students’ success in the junior grades. The composite dropout rate is a first step in addressing the imbalance” (Cowley & Easton, 2003, p. 4).

What is striking about the composite dropout key performance indicator is the extent to which schools were not uniformly subjected to the statistical assumptions underpinning it.

“Where a school does not enroll grade-8 students, the net dropout rate is calculated using the three-year average grade-8 dropout rate for the school district in which the school is located. Where a school does not enroll grade 10 or grade 11 students, no Composite dropout rate can be calculated” (Cowley & Easton, 2003, p. 9).

Here, the Fraser Institute acknowledges that imbalances exist within its school-ranking rubric because not every secondary school is composed of students in Grades 8 through 12.
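The conditional weighting the quoted passages describe—shifting the composite dropout weight onto the graduation rate when the former cannot be calculated, and redistributing the gender-gap weight for single-sex schools—can be sketched as follows. The weight values come from the 2003 rubric and its notes; the function itself is an illustration, not the Fraser Institute’s code.

```python
# A sketch of the conditional reweighting described in the 2003 rubric:
# when no Composite dropout rate can be calculated, its 10% weight shifts
# onto the Graduation rate (10% -> 20%); for single-sex schools, the two
# 5% gender-gap weights shift onto the School vs. exam mark difference
# (10% -> 20%). Weight values are from the report card; the code is illustrative.

def iteration3_weights(has_dropout_rate, single_sex):
    w = {
        "avg_exam_mark": 0.20,
        "pct_exams_failed": 0.20,
        "school_vs_exam_diff": 0.20 if single_sex else 0.10,
        "exams_per_student": 0.20,
        "graduation_rate": 0.10 if has_dropout_rate else 0.20,
        "composite_dropout": 0.10 if has_dropout_rate else 0.0,
    }
    if not single_sex:  # gender-gap KPIs do not apply to single-sex schools
        w["english12_gender_gap"] = 0.05
        w["math12_gender_gap"] = 0.05
    return w

# Every combination of school type and data availability still sums to 100%.
for sss in (False, True):
    for dropout in (False, True):
        assert abs(sum(iteration3_weights(dropout, sss).values()) - 1.0) < 1e-9
print("all four weighting scenarios sum to 100%")
```

The sketch makes the structural point concrete: the rubric always totals 100%, but which indicators carry the weight depends on the kind of school being ranked.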
This serves as more evidence of how the Fraser Institute uses its accountability ranking tool on the field of power in different ways to establish what is relevant and what is not; what is normative and what is not. Embedded, therefore, within the selected KPIs are disparate approaches to how KPIs are used to tell stories about schools. Table 11 depicts the relative weightings of the KPIs included in the Fraser Institute’s third iteration of its secondary school-ranking rubric, that is, from 2003-2006. Note that the ‘Composite Dropout’ indicator was renamed the ‘Delayed Advancement Rate’ in 2005. Although this KPI was calculated in the same way, the Fraser Institute did not explain why the measure was renamed. On the surface, however, it can be argued that ‘Delayed Advancement Rate’ is a more euphemistic way to account for dropouts in the coded discourse of competence. As well, it is important to note in Table 11 that for schools in which no dropout rates were noted, the percentage weighting for a school’s graduation rate (KPI #5) increased from 10% to 20%.

Table 11: Relative percentage weights of KPIs for iteration #3 (2003-2006)

                                              2003-2004        2005-2006
Key Performance Indicator (KPI)               Co-Ed   SSS      Co-Ed   SSS
1. Avg. Exam Mark                             20%
2. Percentage of Exams Failed                 20%
3. School vs. Exam Mark Difference [49]       10%     20%      10%     20%
4. Exams Taken per Student                    20%
5. Graduation Rate [50]                       10% or 20%
6. English 12 Gender Gap                      5%      n/a      5%      n/a
7. Math 12 Gender Gap                         5%      n/a      5%      n/a
8. Composite Dropout Rate / Delayed Advancement Rate          10% or 0%
TOTAL                                         100%
9. Parents’ Education (SES) Indicator         Descriptive
10. Kind of School (Public or Private)        Descriptive
11. Grade 12 Enrolment                        Descriptive
12. Semiotic Trend Progress Indicators        Descriptive
13. % ESL Students                            Descriptive
14. % Special Needs Students                  Descriptive
15. Sports Participation Rate                 Descriptive

Table compiled from the following sources: (Cowley & Easton, 2003, 2004b; Cowley & Easton, 2005; Cowley & Easton, 2006)

49  “For schools for which there were no gender-gap results because only boys or girls were enroled, the School vs exam mark difference was weighted at 20%” (Cowley & Easton, 2003, p. 57).

50  “Where no Composite dropout rate could be calculated, the Graduation rate was weighted at 20%” (Cowley & Easton, 2003, p. 57).

Descriptive Indicator: Extracurricular Activities

In 2005 and 2006 the Fraser Institute included a new descriptive performance indicator. It was called the ‘Sports Participation Rate’ and its inclusion signaled the Fraser Institute’s “desire to broaden the focus of the Report Card” (Cowley & Easton, 2005, p. 4) beyond academic results.

Sports Participation Rate

“The indicator provides a measure of the extent to which each school encourages its students to adopt and maintain a healthy and active lifestyle. The indicator reports the proportion of the students at each school who were registered members of at least one interschool sports team during the school year” (Cowley & Easton, 2005, p. 4).

Although the Fraser Institute hoped that the sports participation rate would become a KPI that factored into a school’s overall rating, it ceased being included in the school report card—even as a descriptive measure—in 2006 for a number of reasons. To begin with, smaller schools would not have the resources available to run the myriad of extracurricular sport teams that larger schools in the province would have. As well, the rationale given by the Fraser Institute for including the measure that “interschool sports teams encourage students to participate in an active and healthy life style, to engage in positive competition, and to build teamwork and leadership skills” (Cowley & Easton, 2005, p.
4) discounts the very real possibility that students playing non-competitive, ‘fun’, intramural sports during lunch and after school are acquiring the same skills and/or making the same active, healthy lifestyle choices as student athletes. Finally, it is entirely possible that students develop teamwork and leadership skills by engaging in positive competition beyond the habitus of sports. Public speaking, debating, participating in band ensembles, and school drama productions also promote important skill sets in students that the Fraser Institute identifies as being associated with competitive sports. Here, then, we see another example of how the Fraser Institute makes visible dimensions of school culture that define, limit, and homogenize the experience of students because the Fraser Institute had “access to data” (Cowley & Easton, 2005, p. 4) that make such comparisons possible. [51] Such access was short-lived, however, when “the Board of Directors of British Columbia School Sports Association decided that these data would no longer be shared with us. For this reason, this valuable indicator of a non-academic aspect of school performance is no longer included in the Report Card” (Cowley & Easton, 2007, p. 4). It is relevant to note here the extent to which the Fraser Institute is limited in developing its ranking rubric when it is denied access to data that it deems important.

51  “The data used to calculate this indicator only represent those students actually registered on school teams sanctioned by BC School Sports and regulated by its 18 Sports Commissions. There are other popular sports such as Hockey, Lacrosse, and Girl’s Rugby, that are not sanctioned by BCSS and are, therefore, not included in these data. In addition, some schools may not have registered their grades 7, 8 or 9 teams, even though it is a requirement of BCSS” (Cowley & Easton, 2006, p. 14).

Iteration #4 (2007): A Revised Graduation Program

The Fraser Institute’s school-ranking rubric increased from eight KPIs to nine in the spring of 2007. This change reflected Ministry-imposed changes that revised the previous 52-credit graduation program completed over two years (Grades 11 and 12) into an 80-credit graduation program completed over three years (Grades 10, 11, and 12). The 2007 edition of the Fraser Institute’s school report card in British Columbia reflected this change, and the results of compulsory Grade 10 exam data were included for the first time (Cowley & Easton, 2007). Table 12 shows the composition and relative weighting of KPIs used by the Fraser Institute during its fourth iteration to rank secondary schools in British Columbia for co-educational (Co-Ed) and single-sex schools (SSS).

Table 12: Relative percentage weights of KPIs for iteration #4 (2007)

Key Performance Indicator (KPI)                           Co-Ed      SSS
1. Avg. Exam Mark (Grade 12) [52]                         15%
2. Avg. Exam Mark (Grade 10) [53]                         5%
3. Percentage of Exams Failed (Grade 10 & 12 Exams) [54]  20%
4. School vs. Exam Mark Difference (English 12) [55]      10%        20%
5. Exams Taken per Student                                20%
6. Graduation Rate [56]                                   10% or 20%
7. English 12 Gender Gap                                  5%         n/a
8. Math 12 Gender Gap                                     5%         n/a
9. Delayed Advancement Rate                               10% or 0%
Total                                                     100%
10. Parents’ Average Income                               Descriptive
11. Kind of School (Public vs. Private)                   Descriptive
12. Socioeconomic Indicator (SES)                         Descriptive
13. Grade 12 Enrolment                                    Descriptive
14. Semiotic Trend Progress Indicators                    Descriptive
15. % ESL Students                                        Descriptive
16. % Special Needs Students                              Descriptive
17.
% French Immersion Students                               Descriptive

Table compiled from the following sources: (Cowley & Easton, 2007)

Not only were Grade 10 examination results in math, science, and English included for the first time in a school’s overall rating, but as importantly the Fraser Institute changed how the average examination mark was calculated for each school as well.

52  Applications of Mathematics 12; BC First Nations Studies 12; Biology 12; Chemistry 12; Communications 12; English 12; English Literature 12; Français Langue Premiere 12; Français Langue Seconde-Immersion 12; French 12; Geography 12; Geology 12; German 12; History 12; Japanese 12; Mandarin Chinese 12; Physics 12; Principles of Mathematics 12; Punjabi 12; Spanish 12; and Technical Professional Communications 12.

53  Applications of Mathematics 10; Essentials of Mathematics 10; Principles of Mathematics 10; English 10; Science 10. (Students enrol in one of the three mathematics courses: Applications, Essentials, or Principles.)

54  This KPI reflects the percentage of grade-10 and grade-12 provincial examinations failed.

55  The weighting of this KPI is markedly different in Co-Ed and SSS schools because subject-specific gender gap indicators are not applicable to SSSs. In 2007 the gender gap KPI reflected English 12 results only.

56  “Where no Composite dropout rate could be calculated, the Graduation rate was weighted at 20%” (Cowley & Easton, 2007, p. 49).
In 2007 the Fraser Institute also expanded the categories of students it made visible within schools by including the percentage of French Immersion students, the percentage of special needs students, and the percentage of ESL students registered at each school. “When you want to compare academic results, these statistics can be used to find other schools where the student body has similar characteristics” (Cowley & Easton, 2007, p. 13). This is an important statement because it underscores that one of the ranking’s principal authors recognizes the influence contextual factors have on a school’s overall achievement. It also suggests that the Fraser Institute could change the way it presents schools to the public. Instead of comparing all schools to each other, the Fraser Institute could group schools by common organizational capacity characteristics. That is to say, instead of making invisible entire populations of ESL students, the Fraser Institute could choose to include them and group schools that share similar student profile characteristics. In this way, parents could identify schools the Fraser Institute deems effective in helping ESL students achieve success in the classroom, as opposed to negating their statistical presence as described earlier.

Iteration #5 (2008-2010): Revised University Admission Policies

The fifth—and most current—iteration of the Fraser Institute’s British Columbia report card reflected not only changes in the Ministry of Education’s compulsory examination assessment policy, but also revised Canadian university admission policies that no longer required Grade 12 students to write Grade 12 provincial examinations (McGill University, 2010).
The implication of this policy shift by some Canadian universities was that British Columbian students could be accepted into post-secondary, degree-granting programs based on their school-issued (year-end) grades without having to write compulsory provincial examinations.57 The Ministry’s revised graduation program also meant that students would have to write a total of five compulsory exams over the three years that defined the 2004 Graduation Program. They were: math 10, science 10, English 10, socials 11, and English 12. The Fraser Institute conflated the average examination data obtained from students in grades 10, 11, and 12 into a single measure. These changes affected how KPIs were devised and used by the Fraser Institute to rank secondary schools in British Columbia. Table 13 identifies the KPIs devised by the Fraser Institute to construct its school-ranking rubric in its latest iteration—Iteration #5. It also shows the relative percentage weights of each KPI in 2008, 2009, and 2010. It is important to note here that gender gap KPIs in math and English reflected data obtained from Grade 10 students only. “This change was made because the provincial examination in Principles of Mathematics 12—the results from which were previously used in the calculation of the mathematics gender gap—is no longer mandatory” (Cowley & Easton, 2008, p. 4). This was also true of every other provincially examinable Grade 12 course. That is to say, students enrolled in provincially examinable Grade 12 courses no longer had to write compulsory Grade 12 subject exams in order to receive credit for the course. This significantly diminished the importance of Grade 12 examination results to a school’s overall ranking. As such, the revised graduation program refocused the examination spotlight on the results achieved by Grade 10 students.
There were some other notable changes to the Fraser Institute’s latest iteration. As a result of the changes to the Ministry’s testing policy noted above and “possible further changes to the admissions policies of the province’s universities” (Cowley & Easton, 2008, p. 4), the Fraser Institute removed the participation rate KPI from its report card because it no longer served as a way to differentiate between schools. Under the Ministry’s revised graduation program all students were required to write five provincial examinations over three years. As well, the gender gap index accounted for 12% of the variation between co-educational (public) schools and single sex (private and Independent) schools—up 2% from the previous iteration. Finally, “where no Composite dropout rate could be calculated, the Graduation rate was weighted at 25%” (Cowley & Easton, 2008, p. 42). We see embedded in the fifth iteration of the ranking, therefore, a revised rubric that makes it more difficult to capture the statistical variability between schools in a uniform way because the relative weightings of the KPIs used to tell stories about schools are applied in different ways. This is especially problematic given the logic underpinning standardized formulae that—by 2008—makes room for increasing states of exception between schools.

57  English 12 is still a compulsory course every university-bound student is required to take. Final English 12 grades in British Columbia continued to reflect a student’s blended mark between their school-issued mark and their compulsory English 12 examination mark.

Table 13: Relative percentage weights of KPIs for iteration #5

                                          Iteration #5 (200858, 2009, & 2010)
Key Performance Indicator (KPI)            Co-Educational   Single Sex School
1. Avg. Exam Mark                               25%               25%
2. Percentage of Exams Failed                   25%               25%
3. School vs. Exam Mark Difference59            13%               25%
4. English 10 Gender Gap60                       6%               n/a
5. Math 10 Gender Gap                            6%               n/a
6. Graduation Rate61                       12.5% or 25%      12.5% or 25%
7. Delayed Advancement Rate62              12.5% or 0%       12.5% or 0%
Total                                          100%              100%
8. Parents’ Average Education               Descriptive
9. Kind of School (Public vs. Private)      Descriptive
10. Socioeconomic Indicator (SES)           Descriptive
11. Grade 12 Enrolment                      Descriptive
12. % ESL Students                          Descriptive
13. % Special Needs Students                Descriptive
14. % French Immersion Students             Descriptive

Table compiled from the following sources: (Cowley & Easton, 2008; Cowley & Easton, 2009; Cowley, et al., 2010)

58  Mandatory provincial examinations were administered in the following grade-10, grade-11, and grade-12 subjects: Applications of Mathematics 10; Principles of Mathematics 10; Essentials of Mathematics 10; Science 10; English 10; Social Studies 11; Civic Studies 11; BC First Nations Studies 12; Communications 12; English 12; Français Langue Première 10; Français Langue Première 12; and Technical Professional Communications 12.
59  The school vs. exam mark indicator was redesigned for the 2009 and 2010 report cards. “For each school, this indicator (in the tables School vs. exam mark difference) gives the average amount (for all grade-10, grade-11, and grade-12 courses with a mandatory provincial exam) by which the “school” mark—the assessment of each student’s learning that is made by the school—exceeds the exam mark in that course” (Cowley, et al., 2010, p. 6).
60  “For schools for which there were no gender-gap results because only boys or girls were enroled, the School vs exam mark difference was weighted at 25%” (Cowley & Easton, 2008, p. 42).
61  For schools in which every student graduates this KPI counts for 25%. For schools in which not every student graduates this KPI counts for 12.5%.
62  For schools in which every student graduates there is no “delayed advancement rate” and this KPI counts as 0%. For schools in which some students do not graduate this KPI counts for 12.5%.
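The conditional weightings described in Table 13 and its footnotes can be made explicit in a short sketch. This is an illustrative reconstruction, not the Fraser Institute’s own code: the function and key names are hypothetical, and a single “all students graduate” flag stands in for the conditions footnotes 60–62 describe.

```python
# Hypothetical sketch of the conditional KPI weightings in Table 13
# (iteration #5). Weights are expressed as fractions summing to 1.0.

def iteration5_weights(single_sex, all_students_graduate):
    """Return a KPI-weight dictionary for one school."""
    w = {
        "avg_exam_mark": 0.25,
        "pct_exams_failed": 0.25,
        # Single sex schools have no gender-gap KPIs, so the school vs.
        # exam mark difference absorbs their 12% (13% becomes 25%).
        "school_vs_exam_diff": 0.25 if single_sex else 0.13,
    }
    if not single_sex:
        w["english10_gender_gap"] = 0.06
        w["math10_gender_gap"] = 0.06
    if all_students_graduate:
        # No delayed advancement rate exists: graduation rate is
        # weighted at 25% and the DAR at 0% (footnotes 61 and 62).
        w["graduation_rate"] = 0.25
    else:
        w["graduation_rate"] = 0.125
        w["delayed_advancement_rate"] = 0.125
    return w

# Every combination of conditions still totals 100%, but the number of
# KPIs doing the work ranges from four to seven.
for ss in (True, False):
    for grad in (True, False):
        assert abs(sum(iteration5_weights(ss, grad).values()) - 1.0) < 1e-9
```

The sketch makes the chapter’s point concrete: a single sex school with no delayed advancement rate is rated on four KPIs of 25% each, while a co-educational school with incomplete graduation is rated on seven, even though both totals read “100%.”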
KPI #6: Graduation Rate (2008-2010)
“This indicator, related to the Delayed advancement rate, compares the number of students eligible to graduate enrolled in the school on September 30 with the number of students who actually graduate by the end of the same school year. Only those enrollees who are capable of graduating with their class within the current school year are included in the count of eligible graduates” (Cowley & Easton, 2008, p. 9).

KPI #7: Delayed Advancement Rate (2008-2010)
“This indicator measures the extent to which schools keep their students in school and progressing in a timely manner toward completion of their diploma program. It uses data that report the educational status of students one year after they have enrolled in a given grade at a school in British Columbia” (Cowley & Easton, 2008, p. 8).

In 2008 the Fraser Institute acknowledged something that many of its critics had been saying since the first time the school report card was published ten years earlier: “When a school had higher income parents, the Overall rating at the school was likely to be higher” (Cowley & Easton, 2009, p. 10). What is interesting to note about this admission is Cowley and Easton’s (2009) footnote that accompanies the statement. It points the reader to a related—but different—finding in their 2000 report whereby the authors “identified one characteristic that was significantly associated with the Overall rating: the average number of years of education of the most educated parent in a two-parent family (or of the lone parent in a single-parent family)” (Cowley & Easton, 2000, p. 12). The same footnote also points the reader to Appendix 2 in the ‘Third Annual Report Card on British Columbia’s Secondary Schools’ entitled, “Measuring socio-economic context” (Cowley & Easton, 2000, p. 119). Here a number of ‘independent variable names’ describe the socio-economic familial context the Fraser Institute is able to quantify.
They include: average parental income, average parental government transfer payment income, average parental other income, percentage of target families in which the principal parent claims no knowledge of either official language, average age of the principal parent in the target families, percentage of target families in which there is only one parent residing in the home, and the average number of years of education of the most educated parent. The coefficients assigned to each of these variables were also noted. It is clear from the data presented that the parents’ average level of education was shown to significantly impact a school’s overall rating at the 99% confidence level (Cowley & Easton, 2000, p. 119). These detailed statistical disclosures, however, did not appear in the appendices of successive Fraser Institute reports. What is important to note here is the demonstrated ability of the Fraser Institute to capture a myriad of socio-economic factors it believes contribute to a school’s overall rating. This statistical recognition highlights an inherent limitation embedded in the school ranking reports in every one of their iterations—that conditions exist for students outside the classroom that (positively and negatively) affect their levels of achievement inside the classroom, and over which teachers have absolutely no control. This fact disrupts the legitimacy of a school ranking system that has been manufactured by the Fraser Institute to measure the effectiveness of teachers in secondary school classrooms throughout British Columbia. When the Fraser Institute acknowledges that (at least) two socioeconomic factors—parents’ income and parents’ average education—are determinant factors in how schools rank on its annual school report card but renders them statistically invisible in its ranking rubric, disciplinary power is being exercised by one group on another.
What is relevant here is not only how the Fraser Institute manages to capture and quantify socioeconomic contextual measures that are known to affect how students perform at school, but also the way such indices are used by the Fraser Institute as a “measure of the success of schools to account for the socioeconomic characteristics of the student body” (Cowley & Easton, 2009, p. 10). The authors illustrate the potential their data has to tell different kinds of stories about schools by way of an example.

“[D]uring the 2007/2008 school year, Pinetree Secondary, a public school in Coquitlam, achieved an Overall rating of 7.6 and yet, when the average parental income of the student body is taken into account, the school was expected to achieve a rating of only about 5.9. The difference of 1.7 is reported in the tables. On the other hand, the actual Overall rating of H. D. Stafford Secondary in Langley was 4.6, although its predicted rating was 6.2. The reported difference for H. D. Stafford is –1.6. This measurement suggests that Pinetree is more successful than H. D. Stafford in enabling all of its students to reach their potential” (Cowley & Easton, 2009, p. 10).

What is striking about this example is how the Fraser Institute’s prevailing ideological formation (IDF)—its ‘way of seeing’ schools—statistically recognizes the challenges classroom teachers face. However sensitive the Fraser Institute may be to external conditions that have been shown to impact student performance in this way, it chooses not to include them as KPIs in its ranking. That is, socioeconomic indices serve a descriptive purpose, and their inclusion in the report does not impact the overall rating of schools in a material way.
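The “difference” reported in the Pinetree / H. D. Stafford example is simply the actual overall rating minus the income-predicted rating. A minimal sketch, using only the published figures (the income-based prediction model itself belongs to the Fraser Institute and is not reproduced here):

```python
# Sketch of the residual reported in the Pinetree / H. D. Stafford
# example: actual overall rating minus the rating predicted from
# average parental income. Function name is illustrative.

def income_adjusted_difference(actual_rating, predicted_rating):
    """Positive: the school outperforms its income-based prediction."""
    return round(actual_rating - predicted_rating, 1)

print(income_adjusted_difference(7.6, 5.9))  # Pinetree: 1.7
print(income_adjusted_difference(4.6, 6.2))  # H. D. Stafford: -1.6
```

The arithmetic is trivial by design; the significant point is that this residual, which explicitly adjusts for parental income, is published only as a descriptive aside and never enters the weighted rating itself.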
Despite the clear rationale the Fraser Institute provided for changing how the report card was manufactured for its fifth iteration,63 it is important to note that single sex schools continued to be treated differently from co-educational schools—at least statistically. As well, it was possible for ‘top’ ranked single sex schools to be ranked according to four KPIs (each worth 25%) if the school’s delayed advancement rate approximated zero. Their public school (co-educational) counterparts, like Prince of Wales Secondary, were ranked according to seven KPIs. This discrepancy is especially noteworthy because it underscores the statistical variation that exists in what has been promoted by the Fraser Institute as an objective measure of school performance. When gender differences that account for 12% of the statistical variation are combined with delayed advancement rates that account for another 12.5% of the variation, there exists a potential for 24.5% of the statistical variation between schools to be unequally accounted for.

63  Gender gap indicators were re-designed in the 2009 and 2010 report cards. Whereas previous iterations had the gender gap indicators reflect the difference between school-issued and provincial examination results, the revised KPI calculated the difference between the marks boys and girls received on the mandatory provincial exams in each of the compulsory courses. The relative percentage weighting of the KPI, however, remained the same.

Conclusion

The Fraser Institute has shaped how secondary schools are perceived in British Columbia by developing a ranking rubric that forces an epistemic consciousness on the public about what it thinks matters in education. Its vision for holding schools accountable is grounded in the belief that complex organizations (like schools) can be understood in objective and discrete terms called key performance indicators (KPIs).
In devising its own accountability system the Fraser Institute has imported its mission-driven logic of practice onto the field of education; a logic that is highly contested given the mandate schools have to serve the diverse educational needs of students. In devising an accountability tool that establishes what is relevant and what is not within the field of education, the Fraser Institute promotes a régime of truth that exerts disciplinary power on schools and school systems. I have shown that a ranking instrument promoted by the Fraser Institute as being objective does not serve all schools in the same way. An analysis of ranking data available in British Columbia from 1998-2010 shows that the percentage of public schools occupying ‘top’ (9-10) ranked decile scores initially equaled, or surpassed, the percentage of independent/private schools occupying the same ‘top’ spots during the first iteration of the report card. However, successive statistical iterations brought with them notable changes in how public schools fared on the Fraser Institute’s report card. ‘Iteration #2’ (2001-2002) resulted in a significant redistribution of ‘top’ ranked schools in British Columbia. Specifically, an analysis of the data shows a marked reduction in the percentage of public schools occupying ‘top’ ranked positions when gender differences were included in the ranking rubric. During the three years that defined ‘Iteration #1’ approximately fifty percent (50%) of the province’s ‘top’ ranked schools were public schools. After gender-related data was introduced into the school-ranking rubric for ‘Iteration #2’, the percentage of ‘top’ ranked public schools dropped to ten percent (10%). Despite another statistical revision to the ranking rubric in 2003 that rendered invisible the impact ESL students had on examination averages, the percentage of public schools occupying ‘top’ positions has not returned to the on-par level of the first iteration.
This suggests an inherent bias in the ranking rubric that rewards and punishes certain kinds of schools. The art and science of rewarding and punishing schools in this way relies on a whole technology of representation that has surveillance at its functional core. School rankings act like 18th-century Panoptic prison towers because they operationalize power in similar ways. They are similar because both constructs serve as instruments of disciplinary power that have been designed to monitor and scrutinize human activity. As instruments, however, panoptic prisons and school-ranking rubrics limit what can be ‘seen’ on the field of visibility. The Fraser Institute limits its field of visibility by reducing schools to a number of discrete measures called KPIs while at the same time disregarding the impact descriptive measures have on student achievement patterns. The selective use and manipulation of data in this way not only limits the kinds of stories that can be told about schools but, as importantly, may be viewed as an act of discretionary power in and of itself: the Fraser Institute chooses what KPIs it uses to construct its ranking while schools have no say in the matter. Herein lies one of the principal objections that critics of school rankings have to the Fraser Institute’s epistemic vision in the accountability debate: it is considered by most teachers to be contextually void of meaning because it discounts the out-of-class experiences that students quite naturally bring with them to school. Moreover, the emancipatory possibility of redress—the place where surveillance, disclosure, transparency, knowledge, language, and régimes of truth collide on the broader field of judgment—is diminished by a ranking discourse that perceives redress through the single lens of performativity.
This one-dimensional perspective has very real implications because it limits how an instrument of power can be used to address social justice related issues that have always existed—and will continue to exist—in schools of every imaginable type. Measurement is not the enemy. Establishing data-driven achievement patterns in students from different backgrounds is, in fact, a key first step to improving the educational experience for all students. If gender-gap KPIs, for example, suggest that boys and girls have significantly different achievement patterns within the classroom context, then redress is possible to the extent that this variation is determined to be a function of discrepant pedagogical practices that discriminate based on gender, and to the extent that those discriminating practices change to address the learning needs of boys and girls equally. But when the same possibility exists to statistically demarcate student achievement patterns that are more closely aligned to socioeconomic condition, the Fraser Institute ranking falls short in a material way. The disciplinary power embedded within the ranking rubric, therefore, is a power born out of a régime of truth that identifies KPIs in the first place and assigns relative percentage weights to each of them in the second place. Descriptive measures do not exercise disciplinary power in the same way because they are—metaphorically—silenced. They are ‘seen’ but not heard. This privileging of KPIs over descriptive measures stands at the ranking rubric’s functional core. It has been manufactured this way. Moreover, it demonstrates how power is operationalized through the principle of rarefaction because we see in the elevated KPI status (these measures are ‘seen’ and heard) the statistical subjugation of their descriptive-measure counterparts.
Were the Fraser Institute to redefine the KPIs used in its ranking rubric to include descriptive measures, the instrument could be used to exert a different kind of disciplinary power on the field of education—a power that addressed contextual differences that continue to exist between students beyond the limits imposed by gender. A deliberate attempt by the Fraser Institute to include a more nuanced and balanced portrayal of schools in this way would result in a radically different picture of what success looks like for secondary students in British Columbia’s classrooms.

CHAPTER 5: Discursive Practices and the Mechanics of Capital Mobilization

Introduction

The purpose of Chapter 5 is two-fold: (1) to show how competing agents use language to mediate relationships of power; and (2) to show how competing agents acquire, consolidate, and leverage capital on the field of power to promote divergent visions about the role school rankings should play in holding teachers accountable for their work. Drawing chiefly on published media accounts and the Fraser Institute’s own documents, the analysis shows how a reality effect was created in the public’s mind over time that private schools were better than public schools. This impression moved beyond published ranking scores and was bolstered by articles, letters, and editorials that appeared in newspapers, which highlighted the differences between public and private school systems and the teachers working within them. The chapter has been organized to show how knowledge discourses (that initially characterized the school-wide accountability debate) shifted over time to become action discourses (that focused the public’s attention on the relationship between school improvement and school choice).
Problematizing the impact discourse had on shaping public perception in this way is key to understanding why student enrolment patterns in private and public schools have changed appreciably since the ranking was first published in 1998. This is an important consideration because once choice is successfully admitted as a regulative rhetorical device on the field of power in which public, private, and independent schools compete for limited resources, new forces emerge that can alter the educational landscape. The chapter also explores the methods by which the Fraser Institute inserted itself into the lives of elementary and Aboriginal students—both within and beyond the borders of British Columbia—because, I believe, this says something about the techniques and instruments of power used by the Fraser Institute to gain political and symbolic capital on an expanding field of accountability. To this end I focus on how the Fraser Institute established relationships with other political agents that share a similar habitus to expand and promote its privatization agenda in British Columbia and elsewhere.

The chapter is organized around the following research questions:

1. How can agents use language to mediate relationships of power and privilege in social interactions, institutions, and bodies of knowledge? How does the naturalization of ideologies come about?

2. What particular régimes of truth are manufactured by the media about secondary schools to construct a reality effect in the public’s mind about the state of secondary school education in British Columbia?

3. How do different agents involved in the ranking debate mobilize different forms of capital on the field of power to promote their respective agendas with respect to schools?

These questions will be addressed by drawing on elements of the schematic that I presented in Figure 4.
It is important to note that when I talk about Bourdieu’s notion of ‘fields’ throughout this chapter, I think of them as a myriad of different fields: the field of visibility; the field of accountability; the field of education; the field of judgment; the field of politics; and the field of power. All of these fields occupy the social space; a space inhabited by different agents who compete for, acquire, and leverage capital on different but interdependent fields at the same time. As well, it is essential to note here that I perceive discourse to be a form of Bourdieu-ian capital that is used by competing agents to naturalize their ideological perspectives and régimes of truth. What follows is an analysis of how discourse is leveraged on multiple fields by different agents to effect an outcome—an outcome that has at its core the public’s perception of secondary schools in British Columbia.

Knowledge Discourses

Delineating the accountability field with the goal of identifying the ‘best’ and ‘worst’ performing schools in British Columbia proved controversial from the beginning. The then president of the British Columbia Teachers’ Federation (BCTF), Kit Krieger, described the high school report card as being an “unfair and spurious grading system” (Proctor, 1998d, p. A37). He went on to characterize the Fraser Institute as being “right-wing” and directly challenged, not only the relevance of the report, but also the credibility of Peter Cowley when he said, “the grading was done by someone with a background in marketing” (Proctor, 1998d, p. A3). Krieger further questioned the journalistic integrity of The Province by attacking the newspaper for publishing the ranking in the first place when he said, “[O]bjectivity, credibility, balance, fairness—surely The Province expects to be held accountable to these basic journalistic standards” (Proctor, 1998d, p. A3).
In marked contrast, Michael Walker, the then Executive Director of the Fraser Institute, disclosed the think tank’s motives for producing the ranking when he said,

“The primary [reason] is to provide the consumers of education with the equivalent of a “consumer’s report”. A second reason is to inform parents and children who have no choice how their school performs relative to schools in other areas. We also wish to inform the producers of education” (Proctor, 1998d, p. A37).

The school-wide accountability framework, therefore, was originally positioned within a broader knowledge discourse that: (1) provided information to consumers of education, (2) made comparisons between public, private, and independent high schools, and (3) informed educators, administrators, and school trustees about how well they were doing their jobs. With one broad-sweeping accountability stroke the published report card on secondary schools rendered judgment on an educational collective that cut through the vertical slice of the entire educational system. In creating a report whereby schools were pitted against schools under the guise of a parent’s right to know, neighbourhood, district, regional, and socio-economic boundaries were obliterated in a ranking that focused exclusively on provincial exam results. In Walker’s own words, “the process of making more precise distinctions between schools” had begun (Proctor, 1998d, p. A37). This is an important statement because it underscores the distinct epistemic framework from which the Fraser Institute approached school-wide accountability from the beginning—the conviction that it was possible to measure school performance and to rank schools accordingly. Not surprisingly, Walker’s remarks did not go unchallenged. Other agents appeared on the playing field of school-wide accountability. They positioned themselves in direct opposition to Walker’s rhetoric and the logic presented by the school ranking rubric.
Their agenda was to redefine and expand the boundaries of play by changing the discourse about how schools were characterized by the Fraser Institute’s school report card. These agents included thousands of teachers and administrators who had been engaged in the school-wide accountability game long before the Fraser Institute stepped onto their field of play. They were more interested in having conversations about what the Fraser Institute could not measure in the life world of students attending British Columbia’s high schools, and not surprisingly, Krieger rallied to their defense. He objected to the bias he believed was inherent in the Fraser Institute’s published school ranking. Krieger spoke for thousands of hard-working, committed high school teachers throughout the province when he said the Fraser Institute was not measuring what really mattered in schools: “Poverty and parents education are the greatest predictors for a high-ranked school” (Proctor, 1998d, p. A3). The Fraser Institute was not deterred by oppositional voices and sought to expand its readership base by publishing its own material about schools. In the spring of 1998 the Fraser Institute published a policy paper called ‘A Secondary Schools Report Card for British Columbia’ for the first time (Cowley et al., 1998). This policy document—A Fraser Institute Occasional Paper—has been published every year since and can be downloaded from the Fraser Institute’s website. These documents describe aspects of the Fraser Institute’s school ranking in ways that are not covered in the mainstream press. They are important because they serve as the scaffolding from which the Fraser Institute initially builds (and later promotes) its commanding presence on the field of school-wide accountability.
The second report published in this series was especially relevant because it describes four salient points that contextualize and outline the Fraser Institute’s strategy in promoting school choice, as well as the tactics by which that strategy is orchestrated. The first point makes clear that Fraser Institute school rankings are “based on student results data provided by the B.C. Ministry of Education” (Cowley et al., 1999, p. 3). Strategically, it was imperative for the Fraser Institute to align itself with the data provided by the Ministry of Education in the beginning because the ranking garnered a degree of legitimacy as a result—the Ministry was not in a position to devalue the data used by the Fraser Institute to compile its ranking because the Ministry had collected the data in the first place. This put the Ministry in an awkward position because the Fraser Institute was able to use data that had been collected by the Ministry about schools and student achievement in any way it deemed necessary. This effectively buffered the Fraser Institute in a way that was very important because the data used by the Fraser Institute to compile its first secondary school report card could not be challenged on its epistemic foundation as being invalid and/or unreliable. This was the principal strategic foundation upon which the Fraser Institute ranking of schools was built. It also served as the principal foundation on which to legitimize school rankings from the beginning. The second point speaks to a mounting public critique by the Fraser Institute, not only of the state of secondary schools in British Columbia but, as importantly, of the state of an ineffective government. The rationale for establishing the ranking within a broader discourse of critique helped position the Fraser Institute within the context of the political forces at play at the time.
Specifically, there were two reasons cited for why the Fraser Institute felt it necessary to rank British Columbia’s secondary schools. The first was to improve the overall performance of schools operating within a broken (and expensive) educational system. “[A]lthough it is responsible for the $4 billion spent each year educating students from kindergarten to grade 12, the British Columbia Ministry of Education makes no systematic effort to determine whether each school is effective in the discharge of its duties” (Cowley et al., 1999, p. 4). The inference made here was that the left-leaning NDP government was not being responsible to the electorate. Even though billions of tax dollars were being directed toward the K-12 educational system, the Fraser Institute pointed out that there was no systematic effort by the government to determine the overall effectiveness of schools. If nothing else, the Fraser Institute ranking of schools made the NDP seem ineffectual because there was no discernible system in place by the NDP government to hold itself accountable for how taxpayer dollars were being spent within the Ministry of Education. The second reason given by the Fraser Institute to rank British Columbian high schools was to promote consumer awareness. In highlighting the ability of some64 parents and students “in many parts of the province…to choose among education providers”, the Fraser Institute once again begins to shift public discourse away from ‘parental knowledge discourses’ toward ‘parental choice discourses’ (Cowley, et al., 1999, p. 4). The extent to which parental choice lessens the financial burden on British Columbian taxpayers is something the Fraser Institute values given its belief that “individuals benefit from greater choice, competitive markets, and personal responsibility” (The Fraser Institute, 2010d).
The third point addressed key criticisms levied by a chorus of vocal critics in response to the first report card published in 1998. Of the seven criticisms addressed by the Fraser Institute in its policy document, all but one are related to the statistical aspects of the ranking, which have been discussed in Chapter 4. The remaining criticism pertained to the Institute’s practice of comparing public with private schools in the same report. Many critics believed that “it would be better to have two leagues, one for public schools, which, it was maintained, do not select their students in any way and another—a sort of Premier league—for the independent schools that are selective in their admission policies and therefore can create a student body of excellent, motivated, and supported students” (Cowley, et al., 1999, p. 5). The authors of the report categorically rejected this criticism given their ideological stance that parental “awareness of the success (or failure) of alternative education delivery systems provides useful information for the effort to improve all schools, public and private” (Cowley, et al., 1999, p. 5). The fourth and final point describes the Fraser Institute’s plans to develop an expanded network of data gathering with the goal of making the ranking more statistically relevant in future report cards.

⁶⁴ At the time, parents and students throughout the province were given some ability to choose among education providers. They could, for example, choose between neighbourhood public schools, magnet schools, independent and private schools, and even home schooling. The number of choices that parents and students had, however, was informed by the geographical and socio-economic factors the Fraser Institute deemphasized.
Consider the following plans the Fraser Institute had for its future reports: “[W]e shall investigate the possibility of adding to our database school performance measures that are only available at the district and school level” (Cowley, et al., 1999, p. 4). This kind of documented, long-range planning demonstrates how the Fraser Institute intended to strengthen its statistical report from the beginning. It also established the kinds of relationships the Fraser Institute hoped to cultivate in order to access the data it needed to produce a more statistically nuanced school ranking. At issue for the Fraser Institute was developing a methodology for determining the value-added measure of school effectiveness in the marketplace of schools.  “In next year’s report card we will measure each school’s success in developing its students over the secondary school years with more accuracy. We will incorporate into the report card newly available school performance data derived from certain Grade 10 results. By doing so, we hope to provide a measure of the value added by the school over time” (Cowley, et al., 1999, p. 5).  What is relevant to note at this juncture is the extent to which the Fraser Institute relies not only on the Ministry of Education to expand its database of school performance measures, but also on districts and individual schools. In this way the breaking down (or serialization) of the larger system by the Fraser Institute makes possible inferences at other levels as well: (1) the school level (secondary schools); (2) the system level (public versus private); and (3) the professional level (teachers and administrators). The presentation of data in this way gives the Fraser Institute more totalizing power because it changes the field of visibility on which the school accountability game is played. There is no escape from the Fraser Institute’s gaze.
The same document described the relationship between the Fraser Institute’s reporting of school performance and school improvement when it pronounced, “easily accessible reporting of school performance is a necessary element of an effective program of continuous improvement in the delivery of education. With such a régime in place parents and students can make rational choices when considering educational alternatives” (Cowley, et al., 1999, p. 74).  The foreshadowing, then, of the Fraser Institute’s shifting of the school-wide accountability framework away from knowledge discourses and towards choice discourses is made possible to the extent that statistical gathering régimes are in place to support the initiative. The media also played an important role in managing the discourse around published school rankings from the beginning. In an editorial that appeared on the front page of The Province’s March 24, 1999, issue, the Fraser Institute’s rationale for publishing the first school ranking the previous spring was clearly articulated and positioned within a discursive strategy that privileged a parent’s right to know. In fact, the fourteen-paragraph, front-page article contained the sentence—“parents have a right to know”—at three different places in the copy that, to some, may have resembled a political speech (Editorial, 1999, p. A1). Regardless of how readers interpreted the text, however, it was clear that the Fraser Institute was committed to its goal of making more precise the distinctions between schools, districts, and the students populating them. The Province articulated where it stood in the ranking debate in an “exclusive report” when it said, “By referring to the report card, which was prepared by the prestigious Vancouver-based Fraser Institute, parents will have information they need to decide if their school’s doing a good job. And they can do something about it” (Editorial, 1999, p. A1).
Establishing a relationship with a provincial newspaper was critical for the Fraser Institute to gain a strong hold on shaping the discourse on educational matters because it provided the think tank with direct access to a significant population within British Columbia of already loyal Province readers. The newspaper publication also provided its readership with an artifact of the ranking itself because the tables generated by the report could be saved, examined, and scrutinized. In this relationship the Fraser Institute devised different iterations of the school ranking rubrics over time while media outlets published the Fraser Institute’s findings in their newspapers. This was true for British Columbia as much as it was for other iterations of the Fraser Institute report card that would be published in other provinces. In a statement published in the Fraser Institute’s 2002 Annual Report, Board Chair Mr. Ray Addington qualified the nature of the relationship between the Fraser Institute and the media when he noted,  “The distribution of the report card has been a critical factor, since we want to ensure that every educator, parent, and child in the province has access to the results. Accordingly, in each province we have chosen to partner with a widely distributed newspaper or magazine. In British Columbia, we chose The Province, the newspaper with the largest circulation in BC, and a demographic appropriate to our goal” (The Fraser Institute, 2002, p. 2).  In mobilizing the media in this way the Fraser Institute effectively managed to direct parents’ attention to what mattered most to the Fraser Institute—measuring specific aspects of school performance.⁶⁵ The Fraser Institute’s policy think tank counterpart, the Canadian Centre for Policy Alternatives (CCPA), entered the school ranking debate in 2000—two years after the first school report card was published.
This is an important development because it shows how local school rankings (and the debates they generate) transcend the normal field of schools to become a national policy issue of interest to the CCPA. Like the Fraser Institute, the CCPA is an independent, non-partisan research institute. Unlike the Vancouver-based Fraser Institute, the CCPA is headquartered in Ottawa and has four other Canadian offices, in Vancouver, Winnipeg, Halifax, and Regina.⁶⁶ The CCPA and the Fraser Institute have conflicting institutional ideologies. Whereas the CCPA concerns itself with “issues of social and economic justice” (Canadian Centre for Policy Alternatives, 2010), the Fraser Institute concerns itself with “the impact competitive markets and government interventions [have] on the welfare of individuals” (The Fraser Institute, 2008b). Given the ideological clash between the left-leaning social justice perspective of the CCPA and the right-leaning market-driven perspective of the Fraser Institute, it was not surprising when the CCPA spoke out against school rankings for the first time in March of 2000 when it said,  “[m]ost parents want their children to have an excellent education. The Fraser Institute (FI) taps into this concern with their much-ballyhooed “Report Card.” This manipulation of public opinion is meant to undermine confidence in public education. It doesn’t help schools serve our kids” (Gaskell & Vogel, 2000).  This position prompted a strong reaction from one individual in particular.

⁶⁵ The coverage of the school ranking report changed (most notably) in the past two years as it now appears in the Vancouver Sun. The ‘stand alone’ school ranking section that was central to the depiction of schools in The Province newspaper no longer exists.

In an article published in Fort St. John’s Alaska Highway News, a Mr.
Hubert Beyer noted,  “in the absence of other methods to assess the effectiveness of our high schools, the Fraser Institute’s annual report card will do just fine. And while we’re at it, I think testing teachers every so often is a swell idea” (Beyer, 2000, p. 4).  This comment illustrates how school rankings that were developed to focus the public’s attention on student performance could be used to focus the public’s attention on teacher performance. Beyer (2000) suggests that it is the teachers who should be assessed and not the students. In this way school rankings begin to impose on the public’s consciousness a new gaze that seeks to secure its hold—not only on teachers working within individual schools—but on the field of public education. Evidence for shifting the focus away from students and towards classroom teachers can also be found in the Fraser Institute’s 2001 Report Card itself. The authors dismiss arguments made by educators that schools exist for purposes other than those deemed important by the Fraser Institute. As well, Cowley and Easton (2001) were highly skeptical of teachers and the roles they played in the lives of students when they said,  “They point to their schools’ mission statements as evidence of their breadth of purpose. These statements suggest that taxpayers are paying the schools to provide far more than academic training. Schools have taken upon themselves the responsibility of teaching the fine arts to their students. They promise to instill in the students an understanding of sport as an important aspect of a well-rounded life. They declare their graduates will fully appreciate their rights and responsibilities as citizens of Canada. Are educators delivering on their promises? Do they have the slightest idea of their degree of success?” (Cowley & Easton, 2001, p. 3).

⁶⁶ The Fraser Institute has offices in Vancouver, Calgary, Toronto, Ottawa, and Montréal.
The rhetoric used by Cowley and Easton challenges assumptions about how schools operate and what purpose they serve. Notwithstanding the myriad perspectives that students, parents, and teachers bring to the question, the position taken by the Fraser Institute is that teachers might not be doing their jobs. The issue here is no longer about school rankings but about regulating the work of teachers—‘Do they have the slightest idea of their degree of success?’. The position taken by the Fraser Institute here is also relevant because it describes an expanding political configuration of alliances that includes all ‘taxpayers’—including those in the province who do not have school-aged children. This serves an important strategic function because the Fraser Institute’s published ranking makes it possible for all taxpayers in the province to see how well their neighbourhood school is doing in relation to all schools, and in so doing, invites them to participate in the accountability debate. Whereas previous iterations of the Fraser Institute’s report emphasized a parent’s right to know how well their child’s school was doing relative to others, the Fraser Institute now emphasized the taxpayers’ right to know as well, because taxpayers were paying for school program perks that fell beyond the Fraser Institute’s measurement mandate; things like band, debating, drama, leadership, citizenship, fine arts, and athletic programs (to name but a few). The fact that British Columbia high schools were not measuring how successful they were in these mission-relevant activities frustrated Cowley and Easton:  “they have not provided us with any data that records their results in non-academic activities. Nor have they established their own annual reporting mechanisms so that parents, taxpayers, and other interested parties can compare and judge the schools in the areas. Why not?
The results of teaching students fine arts, physical education, leadership, and citizenship can be measured. Yet it appears that schools only report results that they are required to report” (Cowley & Easton, 2001, p. 3).  The position taken by the authors here dramatically underscores the inherent epistemic tension that exists between teachers and the Fraser Institute at its core—a tension shaped and defined by measurement as a precursor to holding schools accountable. For however clear Cowley and Easton may be about the possibility that successful citizenship and leadership programs, for example, can somehow be measured in high schools throughout British Columbia, they are silent on how such measures could be obtained in the first place. And while secondary schools report only what the Ministry requests of them, it is important to note that they also report on measures the Fraser Institute does not include in its annual ranking, like, for example, student award and scholarship data. What clearly emerges in the fourth school report card is the strategic importance the Fraser Institute places on casting public school teachers and public school administrators unfavourably. Consider what Cowley and Easton had to say about the embedded accountability that existed at mission-driven independent schools:  “If individual parents were paying for their student’s education and if each could choose [my emphasis - MJS] from a variety of education providers, then some might be willing to credit the promises made in school mission statements. Other parents would require objective evidence of past success and expect regular report cards that measure school effectiveness against a variety of measures in much the same way that the Consumers’ Union organization measures the performance of a wide variety of goods and services” (Cowley & Easton, 2001, p. 3).
This is an important statement by the authors of the school report cards because it signals to parents how school rankings could be used to replace teacher unions with parent unions that exist to consume an industrial good—education. The inference made here is that public school teachers are not to be trusted in the way private school teachers can be, because inherent in fee-paying schools is a level of accountability, exercised by fee-paying parents, over the teachers working within them. What Cowley and Easton (2001) are proposing is an economic model of schooling that sees education as an industrial good to be valued in the marketplace; a model that aligns with the ideology espoused by political liberalism. This position stands in marked contrast to the stance taken by the BCTF, which sees education as a democratic right; a model that aligns more with the ideology espoused by progressive liberalism. This marks the first time the Fraser Institute shifts the school-wide accountability issue away from parental knowledge discourses towards parental choice discourses that overtly challenge the authority of a teachers’ union. The Fraser Institute also casts public school administrators as being uncooperative in its quest to improve the educational condition for secondary students in British Columbia. Consider what Cowley and Easton say about them en masse:  “Since the province’s public school districts refused to provide us with data on student attendance, we requested the information under the Freedom of Information Act. By fall of 2000, we had received historical data from almost all of the districts and are currently analyzing these data to determine its value” (Cowley & Easton, 2001, p. 5).  What is noteworthy about this statement is the way the Fraser Institute effectively casts public school districts in a negative light because they ‘refuse’ to comply with the Fraser Institute’s request to provide it with particular data, as if they had something to hide.
The districts are portrayed as being contrary and inflexible, to the point that the Fraser Institute gained access to the data it sought through other means. In this way the authors effectively leverage the Institute’s political capital by demonstrating its ability to target (and obtain) any data it deems worthy. It is also telling that the Fraser Institute authorizes itself to determine what value the data will have in the next version of the school ranking report card. What is unquestionably valuable to the Fraser Institute, however, is widening its potential base of support by appealing to a broader target audience—the British Columbian taxpayer:  “it is taxpayers and not only parents of today’s students who foot the bill for the education of the next generation. As long as this is the case, taxpayers should have easy access to reports about the effectiveness of every school in all areas for which funding is provided. The Ministry of Education should insist: “No results reporting. No funds. Period”” (Cowley & Easton, 2001, p. 4).  In actual fact, schools and school districts have always provided the Ministry of Education with the data it has requested, reporting on a myriad of factors from class size to graduation rates (British Columbia Ministry of Education, 2011). The Fraser Institute, however, implies otherwise and calls on taxpayers to hold the Ministry accountable for collecting, and sharing, data the Fraser Institute deems important and relevant in ranking secondary schools throughout British Columbia. This was an especially important tactic given that 2001 was an election year in British Columbia. It would mark the end of one political party’s influence in the province and the beginning of another. The change of political parties at the legislative level meant that new kinds of relationships could be formed between the Fraser Institute and the newly elected Liberal government.
Common Sense Discourses  What is noteworthy about the discourse surrounding many of the report cards is the extent to which The Province newspaper normalized school rankings in the construction of common sense. It did so by invoking discursive phrases associated with regular school-issued report cards in its reporting of the ranking itself. Most parents were used to reading and, by extension, interpreting a school-issued report card. It was possible, therefore, for The Province to appeal to the emotions and anxieties parents of school-aged children commonly associated with the reporting process itself. For example, a published headline could erode the confidence parents may have in their children’s high schools by asking the question, “Does your school make the grade?” (Anonymous, 1999a). The question was answered in the form of a list of British Columbia high schools ranked from number one to number two hundred and sixty-two. Implicit in the published ranking was the understanding that top-ranked high schools at the time (Prince of Wales and St. Thomas Aquinas) had achieved top-ranked grades (10.0) while bottom-ranked schools, like Salmo Secondary (1.2), had achieved failing grades (Anonymous, 1999a). Another article in the special educational report asked the question, “How is your high school performing?” (Anonymous, 1999b). Once again, the ranking discourse was normalized because it was predicated on traditional reporting practices that have always been implemented by teachers everywhere; that is to say, teachers have always commented on student achievement through a document intended for parents called a school report card: What parent doesn’t want to know how well their son or daughter is performing in math, science, or English in relation to the other students?
The same question could also be discursively framed through the market-driven business culture associated with stock markets, whereby performance is closely aligned with profitability: How is your portfolio performing in this market? Still another headline promoted the ranking as being the solution to poor teaching and/or poor administrative planning when it proclaimed, “School rankings offer useful lessons to teachers and planners” (Raham, 1999, p. A46). This article was written by Helen Raham, then Executive Director of the Society for the Advancement of Excellence in Education (SAEE). (The SAEE had a connection with the Fraser Institute: Stephen Easton⁶⁷ sat on its Board of Directors in 2004/05.) In 1999 Helen Raham had this to say about the Fraser Institute’s second published ranking:  “The Fraser Institute’s annual school report card helps consumers of the system assess individual school performance. As educators, we ought to be encouraged by [the ranking’s] numbers to look for answers that will help all our students grow and learn. But it seems in some corners the opposite has happened—as a result of a direct request from the B.C. Teachers’ Federation, the deputy minister recently announced that he would not be releasing the school-level data to districts and schools anymore, thus removing an important aid in improving B.C. school performances. All the more reason for continuing the Fraser Institute’s report card initiatives” (Raham, 1999, p. A46).  Raham’s op-ed piece is significant for three reasons: (1) she casts parents as consumers of education, (2) she casts the Fraser Institute as a consumers’ union that produces economic goods for public consumption called report cards, and (3) she criticizes the NDP Ministry of Education by making public its reluctance to release data to schools and districts.
This is especially important in the context of régime change because in portraying the Ministry, in general, and Paul Ramsey⁶⁸, in particular, as being insensitive to the needs of discerning consumers of education, Raham effectively positioned government officials as also being insensitive to the needs of parents, and by extension, taxpayers. Also, by referring to the report card as an annual publication she effectively normalizes the school ranking phenomenon in British Columbia. She does so within the context of ecological fallacy—a situation that occurs when data collected at one level are used to draw misleading inferences at another level. In this instance, Raham infers that school ranking scores (which are comprised of aggregate data from an entire population of students attending any school in the province) can be used to say something definitive about individual students attending the schools being ranked. “Assumptions made about individuals based on aggregate data are vulnerable to the ecological fallacy” (Ratcliff, 2011). While the inferences made about the school ranking data presented by the Fraser Institute may be called into question, the focus of the 1999 school report card was to position the school ranking system within the discursive context of a normalized school reporting system. This served to make the ranking discursively palatable to both public and private school parents because it was discursively familiar—every parent in the province understood what distinguished an ‘A’ from an ‘F’ because they had been the product of a school system that, like the Fraser Institute, focused on ranking students.

⁶⁷ Stephen T. Easton is professor of Economics at Simon Fraser University. He is a senior research fellow of The Fraser Institute and co-authors the annual secondary school report card with Peter Cowley.

⁶⁸ NDP Minister of Education (February 1998-September 1999).
The difference between the Fraser Institute’s ranking of schools and classroom teachers’ ranking of their students (through letter grades or percentages reported to parents in the form of take-home report cards) had everything to do with scale. Whereas individual student report cards had always been a matter of private concern, the Fraser Institute’s published school report card was—by the spring of 1999—a matter of public concern in British Columbia.  Expanding the Surveillance Gaze  Alberta School Rankings  School rankings became a matter of public concern to Albertans as well, because June 1999 marked the first time the Fraser Institute published its secondary school ranking beyond British Columbia’s borders. Interestingly, the Calgary Herald used the same discursive technique The Province had used in drawing the public’s attention to the rankings—British Columbian and Albertan high schools were “under the microscope” (Editorial, 1999; Heyman, 1999, p. A1). The copy appearing in The Province newspaper ran alongside an image of a student placing a slide on the stage of a compound microscope. She is about to examine the biological specimen the reader assumes she has prepared. The ocular and objective lenses of the microscope figure prominently in the image, filling one-third of the page. The microscope serves as a potent metaphor for how disciplinary power operates on the field of visibility. The Fraser Institute and The Province newspaper provided the general public with a school-ranking apparatus that, like the girl’s microscope, focused its magnified gaze on British Columbia’s secondary schools according to its own logic of practice—a logic that has been shown in Chapter 4 to privilege an epistemic vision steeped in surveillance, standardization, and performativity.
This is a remarkable achievement because it underscores how the logic underpinning the Fraser Institute’s ranking rubric can be exported to other educational fields beyond the limits imposed by the geographical boundaries of British Columbia. Furthermore, school-ranking results published in Alberta pitted Calgary area schools against their Edmonton counterparts as if the Calgary Herald were reporting on the final outcome of a hockey match between the Edmonton Oilers and the Calgary Flames. The same held true for religious and non-sectarian schools:  “Calgary’s high schools, with an average of 5.94, came out ahead of those in Edmonton, which had a 5.48 rating. The Catholic district’s six schools, with a 6.18 average, beat out the public system’s 18 schools, which had a 5.86 rating” (Heyman, 1999, p. A1).  Alberta’s educational establishment did not embrace the published report card. “Both the Calgary Herald and the Edmonton Journal swiped at it, and the Alberta Teachers’ Association (ATA)—the counterpart of British Columbia’s BCTF—complained that the ranking system did not account for economic factors, unfairly comparing wealthy students with poor” (Steel, 1999, p. 50). The Fraser Institute’s ranking did, however, find a champion in David King, Alberta’s Minister of Education from 1979-1986, and “the man responsible for bringing departmental [Grade 12] exams back to the public school system” (Steel, 1999, p. 50).  “Mr. King says the Fraser Institute report is a necessary spur toward excellence in education. He has no sympathy for educational establishment complaints” (Steel, 1999, p. 51).  Like its British Columbia counterpart, the inaugural Albertan report ranked schools on the same five performance indicators, but unlike reports published in British Columbia, the Albertan report excluded private schools from the mix:  “For this inaugural edition, we were only able to obtain results from public and separate schools from Alberta Education.
We hope that next year the results from the province’s private schools will be added since private schools are a choice that will be considered by some parents. More importantly, an awareness of the success (or failure) of alternative education delivery systems provides useful information for the effort to improve all schools—public, separate, and private” (Cowley & Easton, 1999a, p. 5).  While the focus of this project is on the Fraser Institute’s ranking of secondary schools in British Columbia, it is important at this juncture to underscore not only the expanding presence the Fraser Institute was beginning to have in the school-wide accountability movement outside its home province by 1999, but also the relationship the Fraser Institute had to establish with government itself if it were to have access to the data it needed to produce its report: “We hope that next year the results from the province’s private schools will be added” (Cowley & Easton, 1999a, p. 5). The conciliatory, hopeful tone of the authors’ rhetoric regarding the withholding of data from Alberta’s private schools stands in marked contrast to the tone underpinning Raham’s (1999) rhetoric calling for the Fraser Institute to persist in publishing its school report. As well, it is clear that the Fraser Institute continues to frame the success and failure of schools in Alberta through performance indicators it deems relevant in the first place. In this way, the Fraser Institute continues to be active on the educational terrain of British Columbia and Alberta in similar ways.
Elementary School Rankings  When the Fraser Institute widened its circle of influence from the secondary school system into the elementary school system by creating an elementary school report card, it marked a significant achievement for three principal reasons: (1) it made visible a whole population of students, teachers, and schools not previously subjected to school ranking metrics, by focusing on schools it had not accounted for previously; (2) it significantly expanded the target audience of parents to whom its school ranking reports might appeal; and (3) it bolstered the public’s perception that the Fraser Institute had something definitive to say about the state of education in British Columbia and elsewhere. The Fraser Institute’s expanding presence on the school-wide accountability field in British Columbia seemed to parallel a significant shift in the political landscape in that province with the onset of a new millennium. Appendix B documents the sequence of events that define the Fraser Institute’s school ranking initiative from 1998-2009. It also highlights British Columbia’s shifting political landscape from the winter of 1998 to the spring of 2001, showing that three different premiers from the same political party held office within three successive years. New Democratic Party (NDP) Premier Glen Clark resigned in August 1999 because of a conflict-of-interest political scandal in which he was implicated (Hunter, 1999) and was replaced by interim NDP Premier Dan Miller. In February 2000 Miller was in turn replaced by the former NDP Attorney General of British Columbia, Ujjal Dosanjh, who went on to become British Columbia’s 33rd premier.
In April—just weeks after NDP Premier Dosanjh had been sworn in—the Ministry of Education announced its decision to “release individual [Foundation Skills Assessment] results for the first time since the assessment began 25 years ago” (Steffenhagen, 2000, p. A4). The decision was perceived as weakening the BCTF’s authority insomuch as it served to “undermine their autonomy in assessing student performance” (Steffenhagen, 2000, p. A4). This position stood in marked contrast to the position taken by the British Columbia Confederation of Parent Advisory Councils, a group that had “lobbied for the release of individual [student] results” (Steffenhagen, 2000). The Parent Advisory Council believed that parents and students should have “access to any and all data that help[ed] them gauge performance” (Steffenhagen, 2000, p. A4). The BCTF, however, voiced its concern that the Ministry’s disclosure of Foundation Skills Assessment (FSA) data providing school-by-school results would “encourage comparisons similar to the Fraser Institute’s ranking of schools” (Steffenhagen, 2000, p. A4). The concern proved to be a legitimate one. In May 2001 Liberal candidate Gordon Campbell defeated NDP Premier Ujjal Dosanjh in a provincial election. The political landscape in British Columbia had changed. By June of 2003, the Fraser Institute had published its first elementary school ranking in British Columbia, based on the results of the Foundation Skills Assessment (FSA)—standardized assessments of reading comprehension, writing, and numeracy administered in Grades 4 and 7 respectively. Not surprisingly, the elementary school report card was met with a maelstrom of controversy. The then President of the British Columbia School Trustees Association, Gordon Comeau, worried that parents would “yank their kids from schools that ranked poorly even when the ranking is based on one year’s test results” (Steffenhagen, 2003b, p. A19).
One independent school Head, Hugh Burke,69 whose elementary school was ranked number one, rejected the Fraser Institute's elementary report card as "nonsensical and meaningless" (Burke, 2003, p. A19). Mr. Burke had this to say about his school's top-placed ranking in a letter published in The Vancouver Sun:

"We reject our ranking, as any good school will do. I would be most suspicious of any school that actually boasted about such results. Real results do not reside in three tests, composed by a few people working for the government, scored by people who never met the kids, generating data that are highly dependent on testing circumstances, used inappropriately in statistical terms, for ideological purposes" (Burke, 2003, p. A19).

69 Hugh Burke is the Head of Maple Ridge's Meadowridge School and was the Independent School Association of British Columbia (ISABC) president from 2008-2011.

Mr. Burke's stance on his elementary school's ranking is relevant to a study focused primarily on secondary school report cards because it marks the first time that an educational leader from a top-ranked, independent school publicly discounted the Fraser Institute ranking of schools for all the same reasons that ranking opponents had articulated in the past. This time, however, the Headmaster's voice is imbued with the social, political, and cultural capital acquired by an educational leader from a top-ranked school. When a school-wide-accountability-game 'winner' denounces the school-wide-accountability-game itself as being 'nonsensical and meaningless', the criticism must be perceived in a new light. A private school Headmaster has nothing to gain by denouncing a ranking that serves as a de facto endorsement of his school. In discounting the Fraser Institute 'honour' of being a top-ranked school, therefore, Mr.
Burke casts doubt on the relevancy of the ranking itself in ways that school leaders from low-ranked schools were not in a position to do, simply because their schools were low-ranked to begin with. Despite the controversy surrounding the elementary school ranking report, the Fraser Institute's ability to manage the public's perception of the state of the educational system now spanned the entire educational spectrum—from kindergarten through to Grade 12. In this way the "high-stakes gaze of surveillance" (Pignatelli, 2002, p. 158) could be cast and recast, not only on a broader population of schools, but also on the teachers working within them. The Fraser Institute had effectively increased its client base by repackaging its secondary-school-ranking-report-card-product into a similar product that appealed to another niche market of educational consumers; namely, the parents of elementary school-aged children. In this way, the Fraser Institute very strategically focused the public's gaze on what it deemed important about elementary schooling because, as Pignatelli (2002) pointed out in her paper on surveillance, "sorting and marking children, schools, and staff…based on norm-referenced, high-stakes tests reduces the notion of school effectiveness to something akin to a military campaign" (p. 172). Clearly, the Fraser Institute did not mount a military campaign when it published its first elementary school ranking in British Columbia, but it is possible to think of the publishing of elementary (and secondary) school rankings—symbolically—as an assault by the Fraser Institute on the state of public school education, because private and independent schools (and the systems within which they operated) were consistently held up as model schools.
The year 2004 was significant to the Fraser Institute for three principal reasons: (1) the think-tank turned thirty, (2) the secondary school report card was published in New Brunswick for the first time, which meant that the Fraser Institute had a coast-to-coast influence on how parents across the country perceived schools,70 and (3) the Fraser Institute ventured into the life world of Aboriginal students by publishing a secondary 'Report Card on Aboriginal Education'. Any single one of these achievements could be viewed as a milestone in the life of the Fraser Institute, but taken collectively they speak to the expanded presence the institute was mounting on the school-wide accountability field, not only geographically, but culturally as well.

New Brunswick School Rankings

The Fraser Institute used the same techniques it had developed in British Columbia from 1998-2003 to promote its Maritime school report card in 2004. A posting on Canada NewsWire emphasized how the ranking used a variety of publicly available, objectively relevant school performance indicators to answer, in general, the question: How is this school doing academically? New Brunswickers were drawn to the overall structure and ease with which "parents, school administrators, teachers, students, and taxpayers [could] analyze and compare the performance of individual schools" (Anonymous, 2004, p. 1). The news report advanced the Fraser Institute's school choice agenda when it noted, "the Report Card alerts parents to those nearby schools that appear to have more effective academic programs" (Anonymous, 2004, p. 1). Most importantly, however, the Canada NewsWire report identified the kind of school leader that would accept the results presented in the Fraser Institute's ranking as being anything but arbitrary. The report card, the article indicated, was useful to those "[s]chool administrators who are dedicated to improvement" (Anonymous, 2004, p. 1).
70 The Fraser Institute ranked only Anglophone schools in New Brunswick. Their Francophone and bilingual counterparts were not included.

This statement is relevant to note because it underscores how the Fraser Institute casts school leaders in one of two lights. Principals were either: (1) caring and effectual because they valued the results of a school report card made for them by the Fraser Institute, or (2) uncaring and ineffectual because they devalued the relevance and legitimacy of the school report itself. What is noteworthy about the New Brunswick school ranking, therefore, is not the statistical nuances that make it different from its British Columbian counterpart, but the parallel capital acquisition and discursive strategies the Fraser Institute used to present its secondary school report card to a Maritime audience. At their core, the discursive techniques used were identical to the ones described previously: (1) they were anchored in a parent's right to know and choose; (2) they emphasized visible asymmetries that made possible distinctions between schools and school systems; (3) they were hermeneutically packaged to discount important contextual interpretations that were relevant in the life world of students; and (4) the report card was promoted as being objective. Together these report card elements were leveraged by the Fraser Institute to gain political capital on the broader field of power in New Brunswick.

Aboriginal Report Card

These same discursive techniques were at play in the Fraser Institute's 'Report Card on Aboriginal Education in British Columbia'—a report that established what Aboriginal leaders, educators, and provincial and federal government officials already knew: "British Columbia's education system is failing the province's Aboriginal students" (Cowley & Easton, 2004a, p. 3).
Cowley and Easton were surprised to learn this was true of Aboriginal students attending "even the highest ranked schools" in the province (Cowley & Easton, 2004a). While it is beyond the scope of this project to analyze a corollary school ranking that focuses on the performance of Aboriginal students attending British Columbian high schools, it is relevant to note that the Fraser Institute's proposal to the "Ministry of Education, local school boards, and Aboriginal education authorities [who have collectively] intended to remedy this long standing failure" was for these agents to implement two key strategies: (1) to allow Aboriginal parents to enroll their children in any school they chose, and (2) to provide interested parties with access to "easy-to-understand, school-by-school reports of student achievement" (Cowley & Easton, 2004a, p. 3). The Fraser Institute, therefore, not only felt authorized to promote its school ranking report as a means to improve the educational experience of Aboriginal students, but, as importantly, it called on members of government and the Aboriginal community to adopt the Fraser Institute's mission of free-market-driven educational reform. The Fraser Institute's Aboriginal report card was met with considerable opposition in many circles. In an article appearing in Teacher Newsmagazine, the Director of the BCTF's Professional and Social Issues division, Pat Clarke, had this to say about the 'Report Card on Aboriginal Education in British Columbia':

"[t]he approach the Fraser Institute is using … is the research equivalent of the big-lie strategy in public relations—repeat often enough, and belief begins to set in…. We know that an obsession with counting tends to focus our attention only on what is counted. For students who come to school with a complex array of issues from poverty to cultural dislocation, factory-model approaches to learning are too often exactly the wrong thing to do.
A lock-step devotion to testing for example, is a good way to keep Aboriginal students away from schools, not in them" (Clarke, 2004).

Clarke's position is relevant because it highlights a conflicting epistemic understanding of how to improve the educational experience of Aboriginal students attending British Columbia's secondary schools. It stands in complete opposition to the Fraser Institute's position that market-driven educational reforms lead to an overall improvement in student achievement. Here is an example of how two different political agents (the BCTF on one side, and the Fraser Institute on the other) compete for capital acquisition on the field of power by making visible different aspects of students' experiences. Whereas the BCTF (vis-à-vis Clarke) focuses on socioeconomic and cultural aspects of the Aboriginal student's experience that can adversely impact student achievement patterns in secondary schools, the Fraser Institute (vis-à-vis Cowley) focuses on provincial examination data; Clarke speaks for the Professional and Social Issues division of the BCTF, while Cowley speaks for the School Performance Studies department of the Fraser Institute. Their respective positions are at epistemic and ontological odds. With the publication of elementary school, secondary school, and Aboriginal report cards within, and beyond, British Columbia's borders, the Fraser Institute declared in its 2004 Annual Report that "rankings [were] changing the educational debate" (The Fraser Institute, 2004, p. 6). A record nine school report cards were published by Peter Cowley's School Performance Studies department that "ranked approximately 3,100,000 students in almost 5,900 schools in British Columbia, Alberta, Ontario, Quebec, and New Brunswick" (The Fraser Institute, 2004, p. 13).
Within British Columbia, a "dearth of data" regarding other aspects of school performance would allow the Fraser Institute "to broaden the focus of the Report Card" beyond academic achievement measures (Cowley & Easton, 2004, p. 5). While this statistical information appealed to the data-centric nature of the Fraser Institute, one B.C. educator, David Denyer, believed the province's schools were turning into "work camps where children [are] simply compliant human capital to be equipped with marketable skills" (Steffenhagen, 2004a, p. A1). Denyer had been involved in developing curriculum for B.C. schools and was reported in The Vancouver Sun to say, "[w]e are witnessing a war on childhood" (Steffenhagen, 2004a, p. A1). Denyer's criticism also focused on the language of achievement that had come to dominate the public discourse around accountability. He noted,

"achievement doesn't include mention of enjoyment, pleasure, or recreation. Instead there are accountability, data collection and, most recently, supervision of teachers, conducted by principals who are themselves supposedly supervised by directors and superintendents, who are in turn supervised by the deputy minister and the minister. It's a top-down, paternalistic model of continuous surveillance, ostensibly aimed at improving instruction and of course achievement as measured by tests" (Steffenhagen, 2004a, p. A1).

This statement is relevant because it illustrates an underlying frustration that many educators had about the relational role between data-gathering and sense-making as that relationship was articulated against an expanding school accountability backdrop. Here is an example of how hierarchical observation (data-gathering) combines with sense-making and is used as a technique of disciplinary power by the Fraser Institute.
When individual cases (schools) are introduced by the Fraser Institute to the field of accountability through documentation, disciplinary power is exercised. Embedded in the technology of representation, therefore, is the politics of representation. Political agents on the school-wide accountability field, therefore, play the accountability game according to visible asymmetries that are rendered by the Fraser Institute through a technology of representation whose principal focus is the ranking of individual schools.

School Improvement Discourses

With the publication of the third ranking in 2000, the Fraser Institute had analyzed enough information collected about secondary schools from 1993-1999 that it felt confident to publicly acknowledge eleven of the province's most improved high schools in its 'Third Annual Report Card on British Columbia's Secondary Schools'. The schools had "recorded a significant improvement in at least four of the Report Card's five academic performance indicators" (Cowley & Easton, 2000, p. 3). The Province featured an article about one of the Fraser Institute-endorsed schools, Chatelech Secondary, located in the Sunshine Coast District. The school had moved up 4.8 points in the ranking, from a low of 3.8 to 8.6. The article attributed the school's significant improvement to the leadership stability provided when principal Bruce Janssen signed a five-year contract at the school—the first in a long line of principals to commit to the school in that way. Before Janssen became principal there, Chatelech Secondary had experienced "fifteen administrators in eighteen years" (Austin, 2000, p. A9). This revelation has tremendous implications in the context of a secondary school ranking that focuses primarily on examination results because it shifts the emphasis away from effective teaching practices in classrooms towards effective leadership practices in schools.
In this way the public's gaze is redirected away from teachers and towards principals—the people to whom teachers report. It is important to note as well that a number of other articles about the ranking appeared in smaller regional newspapers throughout the province. A Nelson Daily Ne